0 votes
2.8k views
in Java FTP by (120 points)
We have been using the open-source edtFTPj jar for close to a year now and all works fine, with one key exception. Some of our customers have started to upload files in the 300-600 MB range, and while these are not large by FTP standards, they take a while for a user on a DSL or cable modem to upload. We started to notice that most of these uploads failed.

Our tests seem to show a connection between the failures and FTP uploads that run for a long time (70-90 minutes) with the progress interval set to a short value; initially we used 2048. We have managed to reproduce this problem with one of your JUnit test cases, TestBigTransfer.java.

My question is: what is the optimal progress-interval value? Is it OK to set it to a very large value, say 2 or 3 times the 1048575 value you use in your test cases? On a very slow DSL line the upload speed can be in the 60-80 KB/sec range, so I assume the slower the upload speed, the larger the progress-interval value needs to be for files in the 300-600 MB range.

To sum up:
- All FTP upload sessions transfer the complete file, regardless of upload speed or file size.
- Large files on slow uploads, with the progress interval set to a short value, seem to fail to quit properly, causing our app to hang at 100%.
- On fast upload links (T1 or higher), all uploads, whether of large or small files, complete and quit successfully.

3 Answers

0 votes
by (51.6k points)
(copy of e-mail message to gphilpott)

While the value of the interval can affect performance, it should never affect the success or failure of the transfer. Note also that the interval is related to the transfer-buffer size: the progress monitor cannot be called more often than once per transfer buffer. The transfer-buffer size is in fact more likely to affect performance. I don't have any concrete statistics to give you, but on high-bandwidth links a larger transfer buffer should yield higher throughput, at the expense of progress-monitor update frequency.
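As a rough sketch of the trade-off described above: rather than hard-coding a small interval like 2048, you could derive the notify interval from the expected file size so the monitor fires a bounded number of times per transfer. The helper and its name below are my own illustration, not part of the edtFTPj API; the constants (about 100 callbacks per transfer, a 64 KB floor) are arbitrary assumptions.

```java
// Sketch: choose a progress-notify interval so the monitor fires roughly
// a fixed number of times per transfer, regardless of file size.
// Illustrative only - chooseNotifyInterval is not an edtFTPj method.
class ProgressIntervalSketch {

    // Aim for about 100 callbacks per transfer, but never notify more
    // often than every 64 KB (tiny intervals such as 2048 bytes flood
    // the log and can slow down a multi-hundred-MB upload).
    static long chooseNotifyInterval(long expectedBytes) {
        long interval = expectedBytes / 100;
        return Math.max(interval, 64L * 1024);
    }
}
```

With edtFTPj, the resulting value would then be supplied as the interval when registering the progress monitor on the FTPClient, in place of the fixed 2048.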

Now, yours is not just a performance problem in that the connection is actually being lost mid-transfer, right? Could you please e-mail a DEBUG-level log showing the failure?

I also want to call your attention to the fact that interrupted transfers can be resumed. This means that you can detect failed transfers and automatically (or otherwise) resume them after reconnecting.

- Hans (EnterpriseDT)
0 votes
by (51.6k points)
(copy of second reply to gphilpott)

What often happens with very large transfers is that the FTP server times out the connection (many FTP servers have, for example, a 10-minute timeout), and the log might indicate that. If this is the case, the server timeout needs to be increased.

BTW, 2048 is a very small progress-monitor interval for such a large file, and if each notification is logged it will slow the transfer down.

- Bruce (EnterpriseDT)
0 votes
by (51.6k points)
(copy of reply to related message)

I suggest the following:
  1. Have a look at the server log to see if it's reporting any errors.
  2. You could also try an FTP client application with decent logging, such as FileZilla, and see whether the same problem occurs.
  3. Try it in active mode. This uses quite a different connection technique and may not suffer from the same problem.
  4. If you're really stuck, try setting the time-out on the client side. This would allow you to write code that recovers from the error, such as logging back in and checking the remote file size. A time-out of around one minute should be sufficient.
  5. Be aware of the resume feature, which allows you to resume partially complete transfers - see the FTPClient.resume() method.
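The recover-and-retry approach in points 4 and 5 can be sketched as follows. The Transfer interface and retryWithResume helper are my own illustrative inventions; only FTPClient.resume() itself comes from edtFTPj, and the comments indicate where the real reconnect/resume/put calls would go.

```java
import java.io.IOException;

// Sketch of a recover-and-resume loop for large uploads. In edtFTPj the
// body of each attempt would reconnect and log in, call FTPClient.resume(),
// and re-issue the put() so the server continues from the partial file.
class ResumeRetrySketch {

    interface Transfer {
        void attempt() throws IOException;   // one upload attempt
    }

    // Returns the number of attempts used; rethrows the last failure
    // if every attempt fails.
    static int retryWithResume(Transfer t, int maxAttempts) throws IOException {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                t.attempt();   // e.g. reconnect, ftp.resume(), ftp.put(...)
                return attempt;
            } catch (IOException e) {
                last = e;      // timed out or connection dropped; try again
            }
        }
        throw last;
    }
}
```

A client-side time-out (point 4) is what turns a silently hung session into the IOException this loop can catch, which is why the two suggestions work together.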


- Hans (EnterpriseDT)
