10GbE Saturation Question

Status
Not open for further replies.

mutation666

Dabbler
Joined
Jul 27, 2016
Messages
11
So I just upgraded my FreeNAS box with a 10GbE link to my primary workstation, and I was wondering if there is any way to guess or estimate how many drives would be needed to come close to saturating reads and writes on a RAIDZ2 array. I would guess saturating writes would be easiest using a ZIL cache (maybe 250GB x2 in RAID 0), but I'm not sure what would be best to saturate reads. I currently get around 180MB/s over the link.
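(For reference, the raw link math in Python; the 85% practical-efficiency figure is an assumption, not a measurement, and real throughput depends on protocol overhead.)

```python
# Quick unit conversion: what "saturating" 10GbE actually means in MB/s.
# The ~85% practical-efficiency figure is an assumption (TCP + SMB/NFS
# overhead), not a measurement.
LINK_GBIT = 10
raw_mb_s = LINK_GBIT * 1000 / 8          # 1250 MB/s on the wire
practical_mb_s = raw_mb_s * 0.85         # ~1060 MB/s after protocol overhead
observed_mb_s = 180                      # what is currently being seen

print(f"raw: {raw_mb_s:.0f} MB/s, practical: ~{practical_mb_s:.0f} MB/s, "
      f"observed: {observed_mb_s} MB/s "
      f"({observed_mb_s / practical_mb_s:.0%} of practical)")
```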

FreeNAS Specs:
CPU: i3-4130T
Mobo: ASRock E3C224
RAM: 32GB ECC
Raid Array 1: 3x 2TB WD Red, RaidZ1
Raid Array 2: 4x 3TB WD Red, RaidZ2
HBA: LSI LSI00244 (9201-16i)
NIC: Intel X540-T2 10G dual RJ45

Workstation:
CPU: i7-5820K
Mobo: Asus X99 Deluxe
RAM: 32GB
GPU: GTX 980
Storage: 950 Pro 512GB (drive being used to saturate the 10GbE link)
850 Pro 256GB x2
NIC: Asus ROG 10G Express (direct connect to the FreeNAS box)
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
I have looked at https://calomel.org/zfs_raid_speed_capacity.html, and it seems like 12 drives would be needed at a rough guess for reads. Is that accurate, or at least in the ballpark?

I would advise taking everything on that site with an appropriate amount of salt, but that'd probably turn you into a pickle. :D

What it takes to "saturate" a 10 gigabit link depends heavily on how you're using your server. It might not even be possible. You should give a lot more details. There isn't going to be a silver-bullet "just buy this and you're golden" answer.

By the way, ZIL isn't a write cache. The size of ZFS transaction groups is based on the amount of RAM in your system. You should probably read the following (as well as various other stickies in the relevant subforums):
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/
https://forums.freenas.org/index.php?threads/10-gig-networking-primer.25749/

Look for posts on the topic by jgreco and cyberjock. You'll want to look for general design tips and information about ZFS rather than specific 10 gigabit tuning. Figure out where you're bottlenecking, then try to fix it.
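(One way to start isolating the bottleneck is to measure the raw network path separately from the pool. iperf3 is the usual tool for that; the Python sketch below is only an illustration, and the port number and run instructions are assumptions.)

```python
# Minimal raw TCP throughput test to separate the network from the pool.
# iperf3 is the usual tool for this; this sketch is just an illustration.
# Assumptions: port 5201 is free and reachable; run "server" on the FreeNAS
# box and "client <server-ip>" on the workstation.
import socket
import sys
import time

PORT = 5201
CHUNK = 1024 * 1024      # 1 MiB per send/recv
TOTAL = 4 * 1024 ** 3    # push 4 GiB total

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            received = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.time() - start
    print(f"received {received / 1e6:.0f} MB in {secs:.1f} s "
          f"= {received / secs / 1e6:.0f} MB/s")

def client(host):
    buf = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        start = time.time()
        while sent < TOTAL:
            conn.sendall(buf)
            sent += len(buf)
        secs = time.time() - start
    print(f"sent {sent / 1e6:.0f} MB in {secs:.1f} s "
          f"= {sent / secs / 1e6:.0f} MB/s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```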
 

mutation666

Dabbler
Joined
Jul 27, 2016
Messages
11
What it takes to "saturate" a 10 gigabit link depends heavily on how you're using your server. It might not even be possible.
How would it depend on how I am using it? If the hardware can support the transfer rate, then I could saturate it (minus overhead, of course). Yeah, you wouldn't saturate it watching a video or SSHing to a jail or anything, but a file transfer is limited by the hardware choice, and that's clearly what I am talking about.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
How would it depend on how I am using it? If the hardware can support the transfer rate, then I could saturate it (minus overhead, of course). Yeah, you wouldn't saturate it watching a video or SSHing to a jail or anything, but a file transfer is limited by the hardware choice, and that's clearly what I am talking about.
What protocol? What types of files? Big files? Small files?
I've absolutely tanked the performance of Samba servers by spewing multiple gigabytes of 16KB files at them. More disks wouldn't help, because I was pegging the CPUs at 100% and getting about 20 Mbit/s for the trouble.
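(That kind of workload is easy to reproduce if you want to see the effect yourself; a minimal sketch below, where the target directory, file count, and total size are placeholders, not values from this thread.)

```python
# Generate a small-file workload like the one described above: lots of
# 16 KB files, which stresses per-file protocol/metadata overhead rather
# than raw disk bandwidth. TARGET is a placeholder - point it at a
# directory on the mounted share.
import os

TARGET = "/mnt/share/smallfile-test"   # placeholder path
COUNT = 200_000                        # ~3.2 GB total
payload = os.urandom(16 * 1024)        # one 16 KB block, reused

os.makedirs(TARGET, exist_ok=True)
for i in range(COUNT):
    with open(os.path.join(TARGET, f"f{i:06d}.bin"), "wb") as f:
        f.write(payload)
```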
 

mutation666

Dabbler
Joined
Jul 27, 2016
Messages
11
What protocol? What types of files? Big files? Small files?
I've absolutely tanked the performance of Samba servers by spewing multiple gigabytes of 16KB files at them. More disks wouldn't help, because I was pegging the CPUs at 100% and getting about 20 Mbit/s for the trouble.
Obviously they would have to be larger files to saturate the link; small files wouldn't work in any easy manner. My question is quite simple, really: how many drives in a RAIDZ2, on average, would it take to saturate the max throughput of a 10gig connection? Drives have rated read/write speeds at different file sizes and block sizes, no shit. I'm asking what you would need, on average and in a perfect world, to do this; I don't get what's so fucking hard to understand.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
How would it depend on how I am using it? If the hardware can support the transfer rate, then I could saturate it (minus overhead, of course). Yeah, you wouldn't saturate it watching a video or SSHing to a jail or anything, but a file transfer is limited by the hardware choice, and that's clearly what I am talking about.
You need to read the 10G primer thread, because just throwing hardware at it won't do it. Currently you should be able to see around 500 MB/s read speeds since you have 5 drives. I don't understand what you mean by array unless you mean pool, in which case figure roughly 100 MB/s per disk for read speeds. If you are well below that, like the ~50% you are at now, then you need to edit your tunables. I think SSDs work best for testing these things, but I may be wrong.


Sent from my iPhone using Tapatalk
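(If you want to put a number on the pool itself before the network gets involved, timing a large local sequential read is the simplest check; dd from the shell is the usual way, and the sketch below is a Python equivalent. The file path is a placeholder, and the file should be large enough that ARC caching doesn't inflate the result.)

```python
# Rough local sequential-read benchmark for a dataset, taking the network
# out of the picture. PATH is a placeholder for a large (multi-GB) file
# that is not already cached in ARC; dd is the more common tool for this.
import os
import time

PATH = "/mnt/tank/testfile"   # placeholder: point at a big file on the pool
CHUNK = 1024 * 1024           # read in 1 MiB chunks

size = os.path.getsize(PATH)
start = time.time()
with open(PATH, "rb", buffering=0) as f:
    while f.read(CHUNK):
        pass
secs = time.time() - start
print(f"read {size / 1e6:.0f} MB in {secs:.1f} s = {size / secs / 1e6:.0f} MB/s")
```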
 

mutation666

Dabbler
Joined
Jul 27, 2016
Messages
11
By the way, ZIL isn't a write cache. The size of ZFS transaction groups is based on the amount of RAM in your system. You should probably read the following (as well as various other stickies in the relevant subforums):
"The ZIL stores data that will need to be written to a zpool later and acts as a “non-
volatile write cache” for the zpool." That doesn't mean its a write cache.
 

mutation666

Dabbler
Joined
Jul 27, 2016
Messages
11
You need to read the 10G primer thread, because just throwing hardware at it won't do it. Currently you should be able to see around 500 MB/s read speeds since you have 5 drives. I don't understand what you mean by array unless you mean pool, in which case figure roughly 100 MB/s per disk for read speeds. If you are well below that, like the ~50% you are at now, then you need to edit your tunables. I think SSDs work best for testing these things, but I may be wrong.


Sent from my iPhone using Tapatalk
Thank you, 100 MB/s per drive clearly wasn't too hard. I didn't think my 10GbE was perfectly set up; I was just wondering what the average per-drive speed is on RAIDZ2. So about 10 drives if the world was perfect.
 

mutation666

Dabbler
Joined
Jul 27, 2016
Messages
11
What was I supposed to get from the primer? I come from a networking background; it was all pretty basic information, mostly "this is what 10GbE is and this is the hardware we recommend." Is there more than just random talk later in the thread? I typically just read the main post, since on most forums that's usually edited to include all the pertinent information.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
What protocol, and are you trying to handle large sequential reads and writes with a single workload?

CIFS will be CPU-limited (it only uses a single core, so clock frequency is important). (CIFS doesn't take advantage of a SLOG; the ZIL is in RAM.)
IOPS and bandwidth are the two different speed considerations (assuming your network is OK).
For a single RAID-Z vdev, you will get the IOPS of the slowest drive and the bandwidth of the sum of the data drives.
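(To put rough numbers on that last rule of thumb: the sketch below estimates the drive count needed for streaming reads. The ~125 MB/s per-drive and ~1000 MB/s link figures are assumptions, not measurements from this thread.)

```python
# Back-of-the-envelope drive-count estimate from the rule of thumb above:
# streaming bandwidth of a RAIDZ vdev ~ sum of its data drives.
# Per-drive and link figures are assumptions, not measurements.
PER_DRIVE_MBPS = 125        # ~rated sequential speed of a WD Red, MB/s
LINK_CEILING_MBPS = 1000    # practical 10GbE throughput after overhead, MB/s
PARITY = 2                  # RAIDZ2

data_drives_needed = -(-LINK_CEILING_MBPS // PER_DRIVE_MBPS)  # ceiling division
total_drives = data_drives_needed + PARITY
print(f"{data_drives_needed} data drives + {PARITY} parity "
      f"= {total_drives} drives in the vdev")
# -> 8 data + 2 parity = 10 drives, which lines up with the
#    "about 10-12 drives" guesses elsewhere in the thread.
```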
 

wtfR6a

Explorer
Joined
Jan 9, 2016
Messages
88
On my old 10-disk Z2 I was able to benchmark reads and writes at around 850MB/s, as I recall, so yes, I would imagine 12 disks would let you pretty much saturate practical 10gig bandwidth. Real-world performance, especially with multiple users, saw that number degrade quickly. I don't have the full results to hand, but they were broadly in line with the results posted by Gea on his napp-it site.
 