Disk Layout Advice


jgreco

Resident Grinch
----

OK, so the three-way mirror sounds expensive, but it may be doable and more reliable.

How would you lay out 33 960GB DC SSD drives? (Assume mostly heavy sequential reads/writes in the file sizes mentioned previously.)

Ten two-way mirrors. Wait six months and then add a third drive to each mirror. Three hot spares.
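For the sake of concreteness, here's roughly what that layout looks like from the command line; the pool name "tank" and the da0-da32 device names are placeholders, and in practice the FreeNAS GUI does this for you:

    # Ten two-way mirrors plus three hot spares (hypothetical device names)
    zpool create tank \
      mirror da0 da1    mirror da2 da3    mirror da4 da5    mirror da6 da7 \
      mirror da8 da9    mirror da10 da11  mirror da12 da13  mirror da14 da15 \
      mirror da16 da17  mirror da18 da19 \
      spare da20 da21 da22

    # Six months later, widen each vdev into a three-way mirror
    zpool attach tank da0 da23
    zpool attach tank da2 da24
    # ...and so on for the remaining eight mirrors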

* Would you just have one pool or multiple?

One pool is usually a lot easier to work with.

* Would you have two Intel 750 400GB SSD PCIe cards for L2ARC, or one 1.2TB 750 card installed?

There's (almost) no value to L2ARC for an SSD-based pool.

* What size SLOG would you suggest and how many?

You can't buy a sufficiently small SLOG, so just get the obvious choice of the 400GB model (or whatever the smallest is) and don't worry about size. Pick one or two devices based on your tolerance for failure. If you're really failure-intolerant, then get three, install them all, and use two in a mirror with one as a warm standby.
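As a sketch (nvd0/nvd1/nvd2 are placeholder NVMe device names), a mirrored SLOG gets added along these lines:

    # Add a mirrored SLOG to the pool
    zpool add tank log mirror nvd0 nvd1
    # The third card just sits installed as a warm standby; if a log device
    # dies, swap it in with something like: zpool replace tank nvd0 nvd2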

* Also what percentage of disk usage would you say is optimal as a ratio of used/free space on the disk to sustain decent performance?

For hard drives, don't go past 50%. This chart tells the general story.

[delphix-small.png: chart of ZFS write performance vs. percentage of pool capacity used]


For SSD, it is less well-defined. Still, ZFS will tend to work better if it has to work less hard at allocating space.
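If you want to keep an eye on this, something along these lines (pool name is a placeholder) shows how full and how fragmented the pool is getting:

    # Capacity and free-space fragmentation for the pool
    zpool list -o name,size,allocated,free,capacity,fragmentation tank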

* Would turning off compression benefit performance when every bit counts?

Probably not. Tuning your compression setting is a good idea, but you generally get more benefit from having some form of compression than from turning it off entirely.
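For example, lz4 is the usual cheap default, and you can check what it's actually buying you; the dataset name here is a placeholder:

    # Enable cheap compression and see the resulting ratio
    zfs set compression=lz4 tank
    zfs get compressratio tank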
 

L3192

Dabbler
Thank you!

Do you typically see any performance penalty using 3-way vs 2-way mirrors?

Also, does FreeNAS create the devices by id or by-path?

Since FreeNAS turns the device into an appliance, is there any way to look under the hood at all?
 

Bidule0hm

Server Electronics Sorcerer
Also, does FreeNAS create the devices by id or by-path?

By GPTID, so you can move the drives from port to port, or even from controller to controller; it doesn't care. ;)
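You can see those labels and how they map to physical devices with, for example (pool name is a placeholder):

    # Map gptid labels to the underlying da/ada devices
    glabel status
    # The pool references the gptid/... labels rather than raw device names
    zpool status tank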

Since FreeNAS turns the device into an appliance, is there any way to look under the hood at all?

Yes, but it's usually a bad idea to modify things under the hood.
 

jgreco

Resident Grinch
Do you typically see any performance penalty using 3-way vs 2-way mirrors?

Also, does FreeNAS create the devices by id or by-path?

Since FreeNAS turns the device into an appliance, is there any way to look under the hood at all?

The performance penalty for using two-way mirrors rather than three-way mirrors is that the two-way mirror may deliver only about 66% of the read speed of the three-way mirror. In a three-way mirror, each component can potentially be reading a separate thing, so read performance can increase as width increases. Write performance, of course, is approximately that of a single underlying device, since writes happen in parallel to all underlying component devices.
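If you want to watch that on a live pool, per-device statistics show reads being spread across the mirror members while every member takes the writes; the pool name here is a placeholder:

    # Per-vdev and per-disk I/O, refreshed every 5 seconds
    zpool iostat -v tank 5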

You can look under the hood all you want. That's why there's the command line and SSH. Modifying things under the hood is a bad idea unless you really understand how the appliance does its stuff.
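A few read-only commands that are safe to run from an SSH session, for example:

    # None of these change anything on the system
    zpool status -v                              # pool layout, device health, errors
    zpool list                                   # capacity and fragmentation
    zfs list -o name,used,avail,compressratio    # datasets and space usage
    gpart show                                   # how each disk is partitioned
    camcontrol devlist                           # what disks the controller sees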
 

L3192

Dabbler

Thanks again!


In my case, the potential read speedup from using the three-way mirror will be really good.

One last question, before I go away and finish my installation.

In general, how would you improve write performance? What types of things would you do, or can you do anything at all?
 

L3192

Dabbler
I don't intend to modify anything; I just want to be able to confirm what the GUI is doing. Thanks.
 

jgreco

Resident Grinch
In general, how would you improve write performance? What types of things would you do, or can you do anything at all?

For ZFS in general, maintaining lots of free space on the pool translates to better write performance. ZFS tends to organize both sequential and random writes into large contiguous chunks when possible. High levels of fragmentation cause ZFS to write more slowly. Having multiple vdevs increases the speed, somewhat linearly. Generally this means use larger devices than you might have originally thought to use. This is ESPECIALLY true with storage applications like VM storage that rewrite small chunks of data. It is true to a lesser extent for other general small file storage purposes.

For RAIDZ, writing large records is very fast (think: large file storage). Often faster than mirrors.

For mirrors, writing all records is fairly fast (but the RAIDZ for large records is often faster).

You should definitely tune your record size to the type of traffic you expect on the pool, and pick RAIDZ for large-record storage and mirrors for block-data storage. That usually gets you the best performance available, though there may be exceptions.
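As a hedged example of that tuning (dataset names are placeholders, and 1M records require the large_blocks feature on reasonably recent versions):

    # Large sequential files: big records amortize per-record overhead
    zfs set recordsize=1M tank/media
    # VM / database style rewrites: smaller records reduce read-modify-write
    zfs set recordsize=16K tank/vms
    # Note: recordsize only affects newly written data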

If you feel that the additional read speed from the three-way mirror is useful, be sure to also pay careful attention to having a sufficiently large ARC and L2ARC, which will make a larger impact if your access patterns have locality considerations.
 

L3192

Dabbler
What would the ideal ratio of RAM to ARC be for SSD drives, and, for that matter, of L2ARC to regular disks? It looks like I'll have a combination of pools of SSD drives and 4TB disks.
 

jgreco

Resident Grinch
It depends on your workload. If you have a petabyte of hard disk in your pool but you only repeatedly access one gigabyte of it, you will never have much more than a gigabyte of truly useful information in your ARC. If you are frequently accessing a hundred gigabytes of it, it would be handy to have at least 128GB of ARC, and probably more (because ZFS needs to be able to store data over a period of time in order to identify what's useful to hold on to).

There are no magic answers. It comes down to understanding what your workload consists of, and whether or not ZFS can be sized to mitigate.
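One way to sanity-check it after the fact is to watch the ARC counters and see whether the cache is actually producing hits at the size you've given it, e.g.:

    # Current ARC size plus hit/miss counters (FreeBSD sysctl names)
    sysctl kstat.zfs.misc.arcstats.size \
           kstat.zfs.misc.arcstats.hits \
           kstat.zfs.misc.arcstats.misses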
 

L3192

Dabbler
Reasonable enough. Just looking for past experiences on how FreeNAS performs with ZFS so I can derive the best possible solution for my own test environment and possibly production.

Thanks again to you and everyone for all the replies, suggestions, and general information; they have been very helpful.
 