Allow me to make a few statements, and please correct me every time I'm off.
Let's assume my main use of FreeNAS is as a file server, via SMB; the clients are Windows workstations and the files are big, for the most part.
(By big I mean 2-10 GB each, e.g. raw photos or videos.)
Now, the statements:
The theoretical transfer rate on a Gigabit LAN is about 125 MB/sec, so no matter how fast the HDDs on the workstation are, or how efficient the FreeNAS box is, the transfer rate will never exceed that number.
(Let's not involve caching, for the sake of simplicity.)
Therefore, if I have a workstation with an SSD (capable of, let's say, 550 MB/sec) and whatever combination of HDDs on the FreeNAS, the fastest I will ever go is 125 MB/sec (of course a little more or a little less, but pretty much around that number, sustained).
All this assuming no one else is using the LAN or the FreeNAS box.
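As a quick sanity check of the 125 MB/sec figure, here is a back-of-the-envelope sketch (the ~7% protocol-overhead figure is my assumption, not a measured number):

```python
# Back-of-the-envelope link throughput, minus an assumed overhead for
# Ethernet/IP/TCP/SMB framing (the 7% default is a guess, not a spec).
def link_throughput_mb_s(link_gbit, overhead=0.07):
    raw = link_gbit * 1000 / 8   # 1 Gbit/s = 125 MB/s raw line rate
    return raw * (1 - overhead)

print(link_throughput_mb_s(1))   # ~116 MB/s -- why real copies land just under 125
print(link_throughput_mb_s(10))  # ~1160 MB/s on 10GbE, same logic
```

This is why sustained SMB copies tend to plateau in the 110-118 MB/sec range on Gigabit rather than at the raw 125.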
The moment two workstations try to pull a file from the FreeNAS, they start competing for the bandwidth, and if we assume the FreeNAS is not the bottleneck, then each user will get a transfer rate of 125 MB/sec divided by 2 (about 62 MB/sec). Please don't get picky with me about 58 MB/sec vs. 73 MB/sec; hopefully the theory is coming across.
Now, let's say I want to make things faster, so I enable Link Aggregation on the FreeNAS box and LACP on the LAN switch. Contrary to what some people think, LACP on the FreeNAS does nothing to improve the speed of one single workstation: first because the workstation doesn't have LACP, and second because LACP doesn't increase the bandwidth of any single link; it just "pools" the physical wires you have and balances connections among them.
Basically, one workstation will go to the LAN switch at Gigabit speed, connect to the FreeNAS on one of the LACP wires, and pull a big file at 125 MB/sec. If a few seconds later a second workstation comes along, LACP will put that traffic on the second wire of the Link Aggregation group, also at 125 MB/sec. So with LACP on two wires you can theoretically have two workstations pulling big files at 125 MB/sec each (again, as long as the FreeNAS is not the bottleneck).
By the way, in reality, in the tests I've done with Cisco LACP, the bandwidth-to-wires relationship is not 1:1 but more like 3:2; basically, to get 125 MB/sec on each of two workstations you have to set up LACP with 3 wires, and so on.
I guess there is some overhead in the balancing algorithm (I tried src-mac, dst-mac, src-ip, etc., all with the same result).
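The link-selection behavior described above can be sketched as a simple hash. This is an illustration, not any vendor's actual algorithm; hashing src+dst MAC is just one common policy, and real switches offer several (src-mac, dst-mac, src-dst-ip, L4 ports, etc.):

```python
# Minimal sketch of why LACP doesn't speed up a single flow: the switch/NIC
# hashes each conversation's headers and pins it to ONE member link for the
# flow's entire lifetime.
import zlib

def pick_member_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    # Deterministic hash of the header fields, modulo the number of wires.
    key = (src_mac + dst_mac).encode()
    return zlib.crc32(key) % n_links

# Two workstations (hypothetical MACs) talking to the same FreeNAS box over
# a 2-wire aggregate: each flow lands on one wire and stays there.
print(pick_member_link("aa:aa:aa:aa:aa:01", "ff:ee:dd:cc:bb:aa", 2))
print(pick_member_link("aa:aa:aa:aa:aa:02", "ff:ee:dd:cc:bb:aa", 2))
```

Because the mapping is deterministic per flow, one client can never exceed a single link's 125 MB/sec, and two clients may happen to hash onto the same wire, which is one reason the scaling is not a clean 1:1 per wire.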
Now, let's say we want to make file transfers really fast, so let's go to 10GbE: we install a nice Mellanox NIC on the FreeNAS and another on one workstation, fiber-optic cables, directly connected to one another (no switch). Voila, we expect to hit a transfer rate around 1,000 MB/sec (10GbE is roughly 1,250 MB/sec theoretical), but it doesn't happen.
Part of the reason is that you need a drive on the workstation that can move at that rate (a brand-new M.2 SSD, let's say), and the FreeNAS storage also needs to be capable of that transfer rate.
Well, I was playing with that today, with seven WD Red 3TB drives, but I can't pass 600 MB/sec (the WD Red is rated at 130 MB/sec, so a stripe should give me about 900 MB/sec, but I'm nowhere near that).
At some point I thought it was my workstation, so I started running read/write tests on the FreeNAS box itself with the dd command; same thing, I can't pass 600 MB/sec.
Then I thought it was FreeNAS, so I installed Solaris 11.3, configured a ZFS pool, and got the same thing (I even enabled SMB and tried from the Windows machine, but no difference).
I also installed OpenIndiana; same thing.
On all of those, if I run dd over a pool with only one HDD in the zpool I get 130 MB/sec, just as the WD Red specs say, but when I put 7 drives together my math doesn't work.
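For what it's worth, here is the naive scaling math next to a derated estimate. The 0.7 efficiency factor is purely my assumption for illustration (checksumming, recordsize, allocator behavior, and a single dd stream all eat into ideal scaling); it is not a ZFS constant:

```python
# Rough estimate of striped-pool sequential throughput from per-disk specs.
# 'efficiency' is an assumed derating factor, not a measured ZFS number.
def stripe_estimate_mb_s(disks: int, per_disk_mb_s: float,
                         efficiency: float = 0.7) -> float:
    ideal = disks * per_disk_mb_s   # naive scaling: N disks, N x throughput
    return ideal * efficiency

print(stripe_estimate_mb_s(7, 130, efficiency=1.0))  # 910 MB/s -- the naive math
print(stripe_estimate_mb_s(7, 130))                  # ~637 MB/s -- near what I measure
```

With an efficiency around 0.65-0.7 the estimate lands right around the ~600 MB/sec I keep hitting, which makes me suspect a pipeline limit (HBA/expander bandwidth, a single dd stream, or ZFS overhead) rather than the disks themselves.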
I'm installing FreeBSD 10 right now.
After that, I will install Ubuntu with an Areca RAID card that I know works excellently, and test performance again (I'm sure I've passed that mark with hardware RAID in similar tests before, but I don't really remember).
Am I making any sense with my assumptions and expectations?
Anyone out there with real-world experience on this?
How can one extrapolate the expected performance of a zpool, based on the HDD specs and the number of disks in the pool?