Thanks Dominick. I was able to get around the issue of the bucket not showing up correctly by creating the task using S3 credentials, selecting the bucket, and then changing the credentials to Storj_IX. I have contacted support to see whether adding the correct flag to my bucket will make this smoother. I probably didn't use the right link when creating my account, but I don't think that had a negative impact on anything.

IX is specifying 64 MB in rclone (via cloud sync tasks), so you should not see 5 MB multipart uploads. If this is still an issue, please advise which version you are on and I'll investigate.
If you are running rclone manually, see the following example command; scale concurrency to control throughput.
Code:
rclone copy --progress --s3-upload-concurrency 8 --s3-upload-cutoff 64M --s3-chunk-size 64M data.zip rclonename:path
I am pretty sure I am uploading with the correct segment size, since the number of segments in my bucket is not drastically larger than the number of files. The number of objects hasn't been an issue for me. Performance has been, though: the speed is very sporadic. It will upload at 600-800 Mbps for a few seconds, then pause for a few seconds before continuing, making my average upload speed around 300 Mbps (I have 1 Gbps up from my ISP).
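The segment-size sanity check above can be sketched with shell arithmetic: at a 64 MB segment size, each object should produce roughly its size divided by 64 MiB segments, so the segment count stays close to the file count unless uploads are being split far smaller. The file size below is a hypothetical example, not a figure from this thread.

```shell
# Rough sanity check: expected segment count for one object at a
# 64 MiB segment size. The 5 GiB file size is a hypothetical example.
seg_size=$((64 * 1024 * 1024))                         # 64 MiB per segment
file_size=$((5 * 1024 * 1024 * 1024))                  # example object: 5 GiB
segments=$(( (file_size + seg_size - 1) / seg_size ))  # ceiling division
echo "$segments"                                       # 80 segments for a 5 GiB file
```

If the same object were split into 5 MB parts instead, it would produce over a thousand segments, which is why a bucket's segment count ballooning past its file count is a quick tell that the chunk size is wrong.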
Restore speeds are consistent at 800 Mbps though, so that is very nice.
Storj support indicated that IX doesn't provide a GUI option for how many chunks of one file can be transferred in parallel, so as I understand it, when using the built-in integration I'm stuck with the current performance when transferring large files. I have experimented with increasing the number of transfers, but I think this only helps when transferring many files at once.
"Then only remained part is to tune parallelism (how many chunks of one file maybe transferred in parallel), but I did not see a configuration option for that, only number of parallel transfers (how many files transfer in parallel)." - Aleksey at Storj support
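For what it's worth, when rclone is run by hand (outside the GUI) it does expose both knobs: `--s3-upload-concurrency` controls how many chunks of a single file are in flight at once, while `--transfers` controls how many files upload in parallel. A sketch, with a hypothetical remote name and paths and starting values to tune from:

```shell
# Hypothetical manual run; the remote name "storj-s3" and the paths are
# placeholders, and the concurrency values are starting points, not tuned.
# --s3-upload-concurrency 16 : sixteen 64M chunks of ONE file in flight
# --transfers 4              : four FILES uploading in parallel
cmd="rclone copy --progress \
  --s3-chunk-size 64M --s3-upload-cutoff 64M \
  --s3-upload-concurrency 16 \
  --transfers 4 \
  /mnt/tank/dataset storj-s3:bucket/path"
echo "$cmd"
```

Raising chunk concurrency is the knob that should help with the stall-and-burst pattern on large files, since it keeps more of one file's 64 MB parts on the wire while earlier parts are being acknowledged; `--transfers` only helps when many files move at once, which matches what was observed.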