0 votes
6k views
in Java FTP by (300 points)
Hi,

I was wondering why you disabled the RESUME mode in uploadStream.

I see no particular reason for this not to work. If you change your FileTransferClient implementation from

        public synchronized FileTransferOutputStream uploadStream(String remoteFileName, WriteMode writeMode)
                throws FTPException, IOException {
            checkTransferSettings();
            if (WriteMode.RESUME.equals(writeMode))
                throw new FTPException("Resume not supported for stream uploads");
            boolean append = WriteMode.APPEND.equals(writeMode);
            return new FTPOutputStream(ftpClient, remoteFileName, append);
        }



to

        public synchronized FileTransferOutputStream uploadStream(String remoteFileName, WriteMode writeMode)
                throws FTPException, IOException {
            checkTransferSettings();
            boolean append = false;
            if (WriteMode.RESUME.equals(writeMode)) {
                ftpClient.resume();
            }
            else if (WriteMode.APPEND.equals(writeMode)) {
                append = true;
            }
            return new FTPOutputStream(ftpClient, remoteFileName, append);
        }


we should be almost set. The only thing missing is a getResumeMarker() method on FTPOutputStream, so that I can skip the right number of bytes in the input stream before starting the copy.
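
For illustration, something as simple as this would do; getResumeMarker() and the field it returns are just my proposed names, not existing edtFTPj API:

        // Hypothetical accessor on FTPOutputStream (proposed, not in the current API).
        // "resumeOffset" stands in for whatever internal field ends up holding the
        // restart offset determined when the upload is resumed via REST.
        public long getResumeMarker() {
            return resumeOffset;
        }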

Can this be added?

Thanks

5 Answers

0 votes
by (162k points)
This might be a good way to do it - we'll take a look.
0 votes
by (300 points)
Cool, I prototyped that locally and it is working just fine.

My changed TransferUsingStreams sample now looks like:

         String s1 = "Hello world";
         InputStream inputStream = new ByteArrayInputStream(s1.getBytes());

         log.info("Putting s1");
         FTPOutputStream out = (FTPOutputStream) ftp.uploadStream(
               "Hello.txt", WriteMode.RESUME);
         // TODO check we skipped the right number of bytes...
         inputStream.skip(out.getResumeMarker());
         copyAndCloseStreams(inputStream, out);

and
   public static void copyAndCloseStreams(final InputStream _in,
         final OutputStream _out) throws IOException {
      int read;
      final byte[] buf = new byte[1024];
      try {
         while ((read = _in.read(buf)) != -1) {
            _out.write(buf, 0, read);
         }
      } finally {
         _out.close();
         _in.close();
      }
   }
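
Regarding the TODO above: InputStream.skip() is allowed to skip fewer bytes than requested, so a small loop is needed to make the skip reliable. A rough sketch (skipFully is just an illustrative helper, not part of the library):

   // Skip exactly n bytes from the stream; InputStream.skip() may skip
   // fewer bytes than requested, so loop until the full count is consumed.
   public static void skipFully(final InputStream _in, long n) throws IOException {
      while (n > 0) {
         final long skipped = _in.skip(n);
         if (skipped > 0) {
            n -= skipped;
         } else if (_in.read() != -1) {
            // skip() made no progress; consume a single byte instead
            n--;
         } else {
            throw new EOFException("Stream ended with " + n + " bytes left to skip");
         }
      }
   }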


I had to make the change to FTPOutputStream myself since I use the free version, but anyway, it works just fine.

Thomas
0 votes
by (300 points)
Bruce,

any chance that would make it into the next release?

Thanks
Thomas
0 votes
by (162k points)
Hi, we've been giving this one some thought; sorry for the slow response.

The main question I have is, what does resume do for you here? Why not just get the size of the remote file and use append?

Resume is supposed to work automatically, but here you are having to manually skip over bytes in the source stream. So it isn't really working as resume should, if you know what I mean. That's why we don't normally use resume for uploading streams.
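
To spell out what I mean, the append version of your sample would look roughly like this (an untested sketch using FileTransferClient's exists() and getSize(), error handling omitted):

         // Append-based alternative: look up the remote size explicitly,
         // skip that many bytes in the source stream, then append the rest.
         long remoteSize = ftp.exists("Hello.txt") ? ftp.getSize("Hello.txt") : 0;
         FileTransferOutputStream out = ftp.uploadStream("Hello.txt", WriteMode.APPEND);
         inputStream.skip(remoteSize);   // the same skip caveat as above applies
         copyAndCloseStreams(inputStream, out);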
0 votes
by (300 points)
Well, good questions. Here are some reasons I prefer resume over append:

- Whether I use append or resume, I need to skip the input stream forward to the position I want to start writing from. In other words, I have to provide an input stream whose current position reflects the remote size before I actually perform the upload, so I see no advantage to append here.
- The FTP APPE command seems to be less widely supported than the REST command.
- Streams give a unified interface. In our scenarios the upload can come from either memory or disk, so I really just want to hand a ByteArrayInputStream or a FileInputStream to the lower-level upload API.
- Using resume mode, I can skip one API call (for the remote file size), since your library determines that automatically for me.
- The changes work for us and are minimally intrusive :D


I know that none of these reasons is very strong, except maybe the unified API from an end user's perspective.
