X8DTN+-f raidz3 16x6TB raise nfs write speed


John_G

Dabbler
Joined
Jul 27, 2016
Messages
18
Hi guys.

I'm using FreeNAS (Supermicro X8DTN+-F, 2x E5620, 56GB RAM, 2x 10Gbps RJ45, RAIDZ3: 16x 6TB HGST 7.2k on an LSI 9211 in IT mode) primarily for backups via CIFS (Veeam) and NFS (Oracle DB). Most of the time I see very low NFS write speeds. I did some research, and it looks like it's because NFS uses synchronous writes, and RAIDZ2+ doesn't handle those well. Correct me if I'm wrong. So I'm considering some upgrades, because I don't really want to turn sync off, even though the server is in a Tier 3+ DC with two PSUs.

Some posts I've read about that: https://forums.freenas.org/index.ph...my-supermicro-based-lab-freenas-server.58567/
and https://forums.freenas.org/index.ph...n4f-esxi-freenas-aio.57116/page-4#post-403374
According to this note: http://www.freenas.org/blog/zfs-zil-and-slog-demystified/ what I really need is a separate SLOG device, not just the in-pool ZIL.
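Just to illustrate how much sync writes cost, here's a small standalone sketch (plain Python, nothing FreeNAS-specific; the file name, record size, and count are arbitrary) that times buffered writes against fsync'ed writes on whatever disk it runs on:

[CODE]
import os
import time

PATH = "sync_test.bin"    # scratch file, put it on the pool you want to test
BLOCK = b"\0" * 4096      # 4 KiB records, similar to small NFS/database writes
COUNT = 2000

def run(sync: bool) -> float:
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(BLOCK)
            if sync:
                f.flush()
                os.fsync(f.fileno())  # force to stable storage, like an NFS sync write
    os.remove(PATH)
    return time.time() - start

print(f"async: {run(False):.2f}s  sync: {run(True):.2f}s")
[/CODE]

On spinning disks without a fast log device the sync run is typically orders of magnitude slower, which matches what I'm seeing over NFS.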

Because there are no free drive slots left, I'll stick with PCIe. I want to buy two PCIe SSDs, configure them as a mirror, and use that as a SLOG.
First I looked into the new Optane devices; I found the 32GB Optane for under $100 per device. But it's not very fast for writes: https://ark.intel.com/products/9974...-Series-32GB-M_2-80mm-PCIe-3_0-20nm-3D-Xpoint
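That said, capacity-wise 32GB should be plenty for a SLOG, since it only has to absorb a couple of transaction groups of in-flight writes. A rough sizing sketch (the 5-second txg interval and the 2x factor are assumptions, not tuned values):

[CODE]
# Rough SLOG sizing: the log only buffers writes arriving between
# transaction group commits (~5 s by default), not the whole pool.
link_gbps = 2 * 10         # two 10GbE ports, worst case both saturated
txg_seconds = 5            # assumed default transaction group interval
safety_factor = 2          # keep roughly two txgs worth in flight

bytes_per_second = link_gbps * 1e9 / 8
slog_bytes = bytes_per_second * txg_seconds * safety_factor
print(f"~{slog_bytes / 1e9:.0f} GB of SLOG is enough")  # ~25 GB
[/CODE]

So the concern with the 32GB Optane is really its sustained write speed, not its size.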

I haven't seen many devices with native PCIe connectors, mostly M.2, and I'm not sure whether an M.2-to-PCIe adapter will work on the old X8DTN+-F. Does anyone use them with old motherboards? If so, please post the model so I can take a look. :)

The OCZ RVD400-M22280-256G-A looks nice and a PCIe adapter is included. Good speeds (1+ GB/s sequential write and 2+ GB/s sequential read) and it doesn't cost much, just a bit over $100.
I also liked the performance of the M.2 Patriot PH240GPM280SSDR, 2+ GB/s sequential write and 3+ GB/s sequential read, but no adapter is included. Its price is a bit higher than the OCZ, and I'm not sure it wouldn't be overkill for my system.

And regarding pure sequential speed, RAIDZ3 (16x 6TB) vs 2x RAIDZ2 (8x 6TB each): is 2x RAIDZ2 more appropriate for my type of usage? What about using an NVMe M.2-to-PCIe x4 adapter with my motherboard? And which SSD should I choose?
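Back-of-the-envelope usable space for the two layouts (just data disks times raw size, ignoring padding, metadata, and TB-vs-TiB):

[CODE]
drive_tb = 6

# 1 x 16-wide RAIDZ3: 3 parity drives, 13 data drives, 1 vdev
raidz3_16 = (16 - 3) * drive_tb        # 78 TB raw data space

# 2 x 8-wide RAIDZ2: 2 parity drives per vdev, 12 data drives, 2 vdevs
raidz2_2x8 = 2 * (8 - 2) * drive_tb    # 72 TB raw data space

print(raidz3_16, raidz2_2x8)           # 78 vs 72 TB, but twice the vdevs
[/CODE]

So the capacity hit for 2x RAIDZ2 is only one drive's worth, while the vdev count doubles.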

Thanks to everyone involved :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

This is the problem. Reconfigure that storage pool. If you want speed, you might try your own suggestion of "2x RAIDZ2 (8x 6TB each)".
More vdevs give you more IOPS, but there's more fail in your post than I have time to address. Did you read any of the recommendations before you set this up?
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/
https://forums.freenas.org/index.php?resources/

John_G

Dabbler
Joined
Jul 27, 2016
Messages
18
Chris, thanks for the fair criticism. I read the hardware guide a very long time ago. Somewhere I found a performance comparison in which RAIDZ3 was roughly equal to 2x RAIDZ2 in speed, but I didn't consider the NFS specifics. It also looks like I can't use consumer SSDs, because "SLOG devices should be high-end PCIe NVMe SSDs with Power Loss Protection", which is pretty bad, since those cost a LOT more.
 
nightshade00013

Joined
Apr 9, 2015
Messages
1,258
There has been some testing done outside of the forums that will give you some information on the raw speed of various arrays. It is limited to read, write, and read/write tests, but it gives a good idea of how certain configurations should perform, and some results can be extrapolated from the data when a particular scenario is missing.

https://calomel.org/zfs_raid_speed_capacity.html
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
There is a 'width' limit related to performance on RAIDZ sets. The maximum width for a RAIDZ3 set is 11 drives; obviously you can put in more drives, but performance goes down after that. For RAIDZ2 it's 9 if I recall correctly, but I'm not sure.
If you want speed, glorious speed, you need more vdevs, and to get them some people even go all mirrors. With 16 drives that would give you 8 vdevs, which really pushes your IOPS up, but at the sacrifice of storage space. The two RAIDZ2 vdevs you mentioned are a fair compromise. If you could fit a few more drives in the system, you might want to go with 3 vdevs of 6 drives. More drives help, but the real boost in ZFS comes from additional vdevs, since each vdev gives roughly the performance of a single drive.
I don't think a SLOG is really going to help as much as having more drives. I run a server at work that has 60 drives, and a big part of the reason for so many drives is the speed.
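To put rough numbers on the one-drive-of-IOPS-per-vdev rule of thumb (the 150 IOPS per 7.2k drive below is a ballpark assumption, not a benchmark):

[CODE]
iops_per_drive = 150  # ballpark random IOPS for a 7.2k spinner (assumption)

layouts = {
    "1 x 16-drive RAIDZ3": 1,
    "2 x 8-drive RAIDZ2":  2,
    "3 x 6-drive vdevs":   3,   # needs 18 bays
    "8 x 2-drive mirrors": 8,
}

for name, vdevs in layouts.items():
    # random IOPS scale with the number of vdevs, not the number of drives
    print(f"{name:22s} ~{vdevs * iops_per_drive} random IOPS")
[/CODE]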
 

John_G

Dabbler
Joined
Jul 27, 2016
Messages
18
nightshade00013, Chris Moore,
thanks for your input. I will take it into account and use a smarter approach when building the next rigs. I'm still thinking about using SSDs, but first I need to test them (even unreliable consumer ones, just to measure the speed difference). I did some googling and found that not many SSDs have in-flight data power-loss protection (PLP for most vendors, PLI for Intel's). Most of them are SATA; only a few are PCIe NVMe.
Micron M510DC
Toshiba HK4E
Samsung SM863a
seem to be good choices price-wise. I need to take a closer look at the DWPD ratings.
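For comparing endurance it helps to turn DWPD into total terabytes written over the warranty period (the capacity and ratings below are placeholders to show the arithmetic, not looked-up datasheet values):

[CODE]
def tbw(capacity_gb: float, dwpd: float, warranty_years: int = 5) -> float:
    """Total TB written over the warranty implied by a DWPD rating."""
    return capacity_gb * dwpd * 365 * warranty_years / 1000

# placeholder numbers, check the real datasheets
print(tbw(240, 1.0))   # 1 DWPD on a 240 GB drive over 5 years -> ~438 TBW
print(tbw(240, 3.0))   # 3 DWPD on the same drive              -> ~1314 TBW
[/CODE]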
 
