10Gb New Build - Looking for some advice - Horrible RW speeds

Status
Not open for further replies.

joshuahandrich

Dabbler
Joined
Apr 28, 2015
Messages
10
Hardware details in my sig. Looking to get as close as possible to saturating 10Gb (hopefully at least 5-6Gb/s) over AFP while maintaining a safe amount of redundancy. Some of the hardware is still in the mail. Running the Transmission, Sonarr, CouchPotato, and Headphones plugins. Probably going to switch over to Usenet for downloads.

Is this doable?
 

joshuahandrich

Dabbler
Joined
Apr 28, 2015
Messages
10
I do have some experience with freenas, just finally upgrading to a proper server build. Had been using the hardware of the pfsense box as a 4tb server since version 7 (never should have upgraded it to 9 or from UFS to ZFS), but I'm basically overhauling my entire home network and server to the tune of $2000.

Details on the network config - Planning to use the 10gb uplink ports on the switch for the primary user and the freenas box.

Guess my two main questions are how close I should be able to get to saturating the link, and what sort of pool config or configs I should use to do so. Not considering RAIDZ1; wondering if a RAID 10-style array (striped mirrors) would be more suitable or faster than a Z2 or Z3 array, or two mirrored arrays.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
it all depends on your work load. just tell us more about it.
 

joshuahandrich

Dabbler
Joined
Apr 28, 2015
Messages
10
Decided to go with 8x mirrored vdevs (2x2TB drives each) striped to form a pool. I will back up any critical files; the bulk of the storage is used for media, so I am not overly worried about the roughly 1-in-15 (~7%) chance of a second disk failure landing in the same vdev as a first failed disk. I see this as an acceptable and calculated risk, and I think I should be able to come close to my goal of saturating my 10Gb link.

This also lets me migrate my data to the 4 drives I can currently hook up (doing that now), expand the pool with additional mirrored vdevs when the parts arrive in the mail and the other disks are freed up, and in the future grow my storage by swapping the disks in a single mirrored vdev one at a time to larger-capacity drives, with fairly quick rebuild times and without taking the whole pool down for a rebuild.

The quick transfer speeds are part for the cool factor, but also quite useful for editing the large Adobe CS6 files it will store, and for moving things around and onto my SSD quickly when I'd like to.
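For what it's worth, that risk figure can be sanity-checked. With 16 drives in 8 two-way mirrors, after one drive fails, 15 drives survive and exactly 1 of them shares a vdev with the failure, so only a second failure on that one specific drive loses the pool:

```shell
# Chance that a second drive failure lands in the same mirror
# vdev as the first, for 16 drives in 8 two-way mirrors.
# After the first failure: 15 survivors, 1 shares the failed vdev.
awk 'BEGIN {
    drives = 16; per_vdev = 2
    survivors = drives - 1          # 15 drives left
    partners  = per_vdev - 1        # 1 drive shares the degraded vdev
    printf "pool-loss chance on 2nd failure: %.1f%%\n", 100 * partners / survivors
}'
# prints: pool-loss chance on 2nd failure: 6.7%
```

So the exposure is a ~7% chance of pool loss per second failure, not anywhere near 87.5% (that figure is roughly the chance the second failure lands in a *different* vdev, which is the safe case).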

Does it sound like that will all work as planned?
 

joshuahandrich

Dabbler
Joined
Apr 28, 2015
Messages
10
Still on gigabit while awaiting new parts (running a Pentium G2130 at the moment with onboard gigabit Ethernet, only 8 drives, and some of the drives on the onboard SATA, but the rest of the config is the same). However, I am very concerned with the performance: I am currently getting about 20MB/s write and 3MB/s read with the 4x2TB mirrored vdevs in the pool, and file transfers seem to be going to only some of the drives.

Originally built the pool with 4 drives and moved over my data, which went fairly quickly for 2TB (only a couple of hours or so), then expanded the pool with two more pairs of drives as shown. I tried scrubbing the array, but the results are the same, and according to the drive I/O graph in Reporting, the scrub also only seemed to touch 3 of the disks. With a 5-disk RAIDZ2, I was getting 115MB/s write and 96MB/s read. From what I've researched, the more vdevs in a pool, the faster the performance, and simple mirrors should use far fewer resources than a RAIDZ2.
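One way to confirm the uneven distribution from the shell, rather than eyeballing the GUI graphs, is to watch per-vdev I/O while a transfer runs. A sketch, assuming the pool is named `storage` (substitute your pool name):

```shell
# Per-vdev throughput, refreshed every 5 seconds; newly added
# mirrors will show little write activity if ZFS is favoring
# the original (fuller) vdevs.
zpool iostat -v storage 5

# Capacity per vdev shows the imbalance directly.
zpool list -v storage
```

Note that ZFS allocates new writes in proportion to each vdev's free space and never rebalances existing data, so anything copied in before the pool was expanded stays on the original mirrors.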

Any help is greatly appreciated. Updated the thread title appropriately.
 

Attachments

  • Screen Shot 2015-04-30 at 6.32.15 AM.png (852.4 KB)

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
You can't test performance using that GUI tool. You should check locally in the box then test network speed second.
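The usual local baseline is a large sequential `dd` against the pool with compression disabled on the dataset (otherwise the zeros from /dev/zero compress away to nothing and inflate the numbers). A sketch; the `/mnt/storage` mountpoint is an assumption, so point `TARGET` at your own dataset:

```shell
# Sequential write/read baseline against the pool itself.
# TARGET is an assumed mountpoint; adjust it, and disable
# compression on that dataset before testing.
TARGET=/mnt/storage

# Write 10 GiB of zeros (use a size well beyond RAM to defeat caching).
dd if=/dev/zero of="$TARGET/testfile" bs=1048576 count=10240

# Read it back, discarding the data.
dd if="$TARGET/testfile" of=/dev/null bs=1048576

rm "$TARGET/testfile"
```

Only once the local numbers look sane is it worth testing the network path (iperf first, then an actual share).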
 

joshuahandrich

Dabbler
Joined
Apr 28, 2015
Messages
10
Following this: https://forums.freenas.org/index.php?threads/write-performance-issues-mid-level-zfs-setup.13372/ — here are my results. The resulting test file was a hair over 1TB.

Also, swapped in the Xeon, and my SAS SFF cables came, so I'm running with 12x drives in the pool as mirrored vdevs. Just waiting on 4x drives now, plus the network card, switch, fiber cables, MacBook card, and Thunderbolt 2 enclosure.

Second picture: the pool still seems to ignore one of the disks, but SMART checks and scrubs show no errors.
 

Attachments

  • Screen Shot 2015-04-30 at 3.43.33 PM.png (622.4 KB)
  • Screen Shot 2015-04-30 at 3.59.31 PM.png (41.9 KB)

joshuahandrich

Dabbler
Joined
Apr 28, 2015
Messages
10
Is there any way to force FreeNAS to redistribute the existing 2TB or so evenly across all the drives for better performance with the existing data, or would it just be easier to transfer it all out of that pool, recreate the pool, and transfer the data back on?
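There is no in-place rebalance in ZFS; the data has to be rewritten one way or another. Besides recreating the pool, one option is to replicate a dataset to a new name on the same pool, since the rewrite stripes the new copy across all current vdevs. A hypothetical sketch, assuming a pool `storage` with a dataset `media` and enough free space for a second copy:

```shell
# Snapshot, replicate to a new dataset on the same pool
# (the rewrite spreads blocks across all vdevs), then swap names.
zfs snapshot storage/media@rebalance
zfs send storage/media@rebalance | zfs receive storage/media_new
zfs rename storage/media storage/media_old
zfs rename storage/media_new storage/media

# After verifying the new copy, reclaim the space:
# zfs destroy -r storage/media_old
```

Copying everything off, recreating the pool, and copying it back achieves the same result; the send/receive route just keeps it on-box.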

Also, I think I am having issues with my AFP sharing and often have to restart the service before I can connect to the share. I think I saw a post on the forum about that, so I can look into it.


Thanks again for the responses. Can't wait to get this thing running well. I think I may be able to get close to saturating that link.
 

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
Good luck man, I can't get anywhere close to saturating a 10-gig link, and I've got some pretty beefy hardware too: 256GB of RAM, 15K SAS drives in mirrors, etc. I can get close with iperf, but moving actual data is not so good. The only time I see decent speeds is when I'm doing a ZFS send/receive, and even then I'm pushing 3Gb/s if I'm lucky.
 

Donald

Cadet
Joined
Feb 14, 2015
Messages
5
Currently with my setup I get around 400-500Mb/s when transferring across my mechanical hard drives.

If I want to break the 4Gb/s barrier with my 10Gb NIC, I need to transfer my data from my SSD.

The Seagate 5900rpm HDDs, when I tried out a few, just did not like my hardware. I went with a mixture of Seagate 7200rpm and WD 7200rpm drives.

Note: my hardware is not SAS but SATA.
 

joshuahandrich

Dabbler
Joined
Apr 28, 2015
Messages
10
Ok, turned off compression to get more realistic test results (12x2TB in a 6x2 striped mirror):

[root@freenas ~]# cd mnt/storage
bash: cd: mnt/storage: No such file or directory
[root@freenas ~]# cd /mnt/storage
[root@freenas /mnt/storage]# dd if=/dev/zero of=testfile bs=1048576
^C60780+0 records in
60779+0 records out
63731400704 bytes transferred in 101.692433 secs (626707403 bytes/sec)
[root@freenas /mnt/storage]# dd if=testfile of=/dev/null bs=1048576
60780+0 records in
60780+0 records out
63732449280 bytes transferred in 127.226468 secs (500937032 bytes/sec)
[root@freenas /mnt/storage]#

Created the pool from scratch with my data on other disks to move over. All hard drives are being written to evenly with the stripe now.

Not gonna saturate 10Gb, but gonna stick with 12 disks for now and possibly expand later. Fairly happy with the results, though kind of surprised that the write speed on what is essentially a RAID 10 array would be significantly faster than the read speed; neither is disappointing, though. Will play around with network options and sharing protocols on Monday when my fiber cables and switch arrive.
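For comparison against the 10Gb target, the dd figures convert like this:

```shell
# dd reported 626707403 B/s write and 500937032 B/s read;
# convert to MiB/s and to line-rate Gb/s (decimal bits).
awk 'BEGIN {
    write = 626707403; read = 500937032
    printf "write: %.0f MiB/s (%.2f Gb/s)\n", write/1048576, write*8/1e9
    printf "read:  %.0f MiB/s (%.2f Gb/s)\n", read/1048576,  read*8/1e9
}'
# prints:
# write: 598 MiB/s (5.01 Gb/s)
# read:  478 MiB/s (4.01 Gb/s)
```

So the pool is locally good for roughly 4-5Gb/s, which is right at the lower end of the 5-6Gb/s goal before any network overhead.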
 

Attachments

  • Screen Shot 2015-05-02 at 6.57.54 PM.png (19.8 KB)

joshuahandrich

Dabbler
Joined
Apr 28, 2015
Messages
10
Any updates?

Sorry, been busy. Finally got everything set up well. Enabling autotune and enabling flow control on the router seemed to help quite a bit. Getting about 500MB/s read and write. I'll probably mess around with it a bit more and see if there's any more optimizing to do, but I'm quite happy with the results.
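For reference, autotune's gains on 10GbE mostly come from enlarging socket buffers and TCP windows on the FreeBSD side. The sysctls below are the usual suspects; the values are illustrative of what autotune might set on a RAM-rich box, not a recommendation:

```shell
# Typical 10GbE-related FreeBSD sysctls that autotune adjusts
# (values illustrative; autotune sizes them from installed RAM).
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216
```

Flow control on the switch/router side then keeps the NIC from drowning slower gigabit peers in pause-less bursts.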

Thanks for all the advice.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874