12-drive recommendations (and I made a mistake)

Jason Brunk

Dabbler
Joined
Jan 1, 2016
Messages
28
My FreeNAS system is a Supermicro board with 32 GB of ECC memory and 12 x 2TB drives.

I used to have it configured as 4 x 3-disk RAIDZ vdevs. I was very wary of the chance that 2 disks going down would wreck everything, so I redid my config and made a BIG mistake: I moved everything off and reconfigured to a single 12 x 2TB RAIDZ2 vdev. Performance for my ESXi lab and whatnot went to crap (yep, big mistake on my IOPS).

Soooo, I'm considering making some changes again, but not 100% sure on the best course of action. I definitely need to get my performance back up, so a single vdev doesn't seem ideal for that, but I'd like to maximize my capacity as best I can.

Wondering if anyone has any suggestions. Should I maybe go with 2 mirrored vdevs? Would each of those need to be RAIDZ or RAIDZ2, or just striped?

Or should I look at a different configuration? Not sure yet if a SLOG or ZIL would help in my case, given the smaller size of my setup.

Thanks for any recommendations!
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
What about doing 6 mirror vdevs? You lose some space, but it's the easiest way to get your IOPS back. You could do 2 x RAIDZ2 vdevs, which is 11.2TB of usable space vs. 8.4TB for mirrors (after the 80% threshold is taken into account), and that's the only other layout that would make sense to me. Depends on how much performance you really need, though.

How much of your pool is for VMs and how much is for other data like media?
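A rough back-of-the-envelope version of that capacity comparison (a sketch only: it assumes a "2TB" drive is about 1.82 TiB and applies the 80% fill threshold, but ignores ZFS metadata and padding overhead, so the real numbers come out a bit lower, as in the figures above):

```python
# Usable-capacity estimate for 12 x 2TB drives in two layouts.
# Ignores ZFS metadata/padding overhead; real pools report less.

DRIVE_TIB = 2e12 / 2**40   # a marketing "2TB" drive is ~1.82 TiB
THRESHOLD = 0.8            # recommended max pool fill for performance

# 6 x 2-way mirror vdevs: half of every vdev is redundancy
mirrors = 6 * 1 * DRIVE_TIB * THRESHOLD

# 2 x 6-drive RAIDZ2 vdevs: 4 data drives per vdev, 2 parity
raidz2 = 2 * 4 * DRIVE_TIB * THRESHOLD

print(f"6x mirrors:  {mirrors:.1f} TiB usable")
print(f"2x RAIDZ2:   {raidz2:.1f} TiB usable")
```

The mirror layout gives up roughly a quarter of the usable space in exchange for six vdevs' worth of IOPS instead of two.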
 

Jason Brunk

Dabbler
Joined
Jan 1, 2016
Messages
28

I have only 750 GB dedicated to iSCSI for my VMs. Most of my VMs just do an NFS mount to the NAS for access to other data. In short, only a small portion is for VMs.

I had contemplated leaving the data as is in 1 big vdev for mass storage, but adding a few SSDs to FreeNAS and creating a separate RAIDZ/Z2 volume for fast storage, then putting my iSCSI there along with any other shares/data that need better speeds.

Didn't know if a SLOG or ZIL would help at all either.

Thanks for the feedback!
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
More memory might help, and I would add that before an L2ARC (I think that might be what you are referring to). If you are using NFS and have sync writes set to standard on the dataset, that will slow it down pretty drastically as well unless you have an SSD for a SLOG. There are some safety ramifications to disabling sync writes, though; I think more so with NFS than with iSCSI. I remember seeing something about NFS not writing metadata with sync disabled, but someone else might correct me on that.

If you can add a couple of SSDs in a mirror and use that for VMs, that would be the best solution in my view. That's what I ended up doing for my VMs, so that I could have another pool set up just for storing data.
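Why sync writes hurt so much is easy to see even outside ZFS: a sync write can't be acknowledged until the data is on stable storage, so every small write pays a full flush. A hypothetical Python sketch of the same principle (plain files, not ZFS; this is the per-write latency a SLOG exists to absorb):

```python
import os
import tempfile
import time

def write_records(path, n, sync):
    """Write n 4 KiB records; if sync, flush+fsync after each one,
    mimicking sync=always versus buffered (async) writes."""
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(b"x" * 4096)
            if sync:
                f.flush()
                os.fsync(f.fileno())  # block until on stable storage

with tempfile.TemporaryDirectory() as d:
    for sync in (False, True):
        start = time.perf_counter()
        write_records(os.path.join(d, "test.bin"), 200, sync)
        elapsed = time.perf_counter() - start
        print(f"sync={sync}: {elapsed:.3f}s for 200 writes")
```

On spinning disks the gap is dramatic because each fsync waits on the platter; a fast power-protected SSD log device shrinks that wait without giving up the safety guarantee.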
 

Jason Brunk

Dabbler
Joined
Jan 1, 2016
Messages
28
So a possible recommendation is: keep the system as is, add some SSD storage, and possibly more memory as well. My current RAM usage is attached.

I have read some things about sync writes slowing things down. If I added a small SSD for a SLOG, would that make any difference there? It definitely seems like an L2ARC may not be necessary at this time. Since it seems I have some free memory, should I maybe tweak the ARC size?

(attached screenshot: 1562000860368.png)
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
Interesting: that shows 13GB free, which to me is a lot. Usually the ARC fills pretty fast unless you recently restarted the system. For instance, I have 64GB and show only 1.9GB free right now.

If you are doing sync writes to a wide RAIDZ2 pool like that, it will slow things down massively, and adding a SLOG would help a lot. I've never set up a SLOG, but I think there are special considerations, like the device needing power-loss protection; otherwise you lose the whole point of having sync writes and a ZIL. If you do some research, that information should be available.

You could just disable sync writes temporarily to test the speed impact. If it's still not fast enough, you might look at the all-SSD pool option.

I've been using iSCSI, which doesn't do sync writes unless forced, and I've left my zvol sync at standard. There is some risk with that setup, but I have regular snapshots and backups for my VMs, and it's a home system. I haven't seen any issues in the year I've been running it this way, but it's not really the recommended route for maximum data integrity. If I were in your position wanting to use NFS, I'd research the ramifications of disabling sync writes on NFS, or just get a SLOG.

I found the following helpful in deciding: http://nex7.blogspot.com/2013/04/zfs-intent-log.html. It's mostly risk vs. cost.
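For the "test it first" route, the toggle is a per-dataset (or per-zvol) property. A sketch, with `tank/vms` as a placeholder pool/dataset name and placeholder device names for the log:

```shell
# Check the current setting (standard = honor client sync requests)
zfs get sync tank/vms

# Temporarily disable sync writes to measure the speed difference
zfs set sync=disabled tank/vms

# ...run the benchmark, then restore the safe setting:
zfs set sync=standard tank/vms

# If the gap justifies it, add a power-loss-protected SSD as a SLOG.
# (da10/da11 are placeholders; mirroring the log device is safer.)
zpool add tank log mirror da10 da11
```

The `sync=disabled` state should only ever be a temporary diagnostic: it acknowledges writes that may not survive a power loss.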
 

Jason Brunk

Dabbler
Joined
Jan 1, 2016
Messages
28

Interesting. I rebooted on Saturday; I wonder if there is a setting somewhere that is limiting me. The NFS is really just for stuff like my Plex system accessing media, server downloads, etc., so the NFS is not as important and I can leave the sync settings. I'll look at the SLOG info. I saw some stuff about the SLOG and power backup, etc., so I'll do some digging there.

Time to do some SSD shopping for my fast storage :)
 

Jason Brunk

Dabbler
Joined
Jan 1, 2016
Messages
28


Well, it looks like I have some lingering autotune settings from who knows when. Would you recommend just dropping these values and letting them be default, or maybe re-running autotune?

(attached screenshot: 1562002751433.png)
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
I have never used autotune, but my impression is that it's more for troubleshooting than something that will "tune for best performance." I would remove all the autotune settings and reboot, unless you know specifically why you need them.

Without looking them up individually, it looks like the first one manages ARC size, which ZFS should do automatically; I wouldn't mess with that unless you know exactly what you are doing. The rest are L2ARC tunables, which, if I'm not mistaken, you don't even have an L2ARC device for. If you do, I've tweaked noprefetch and write_boost/write_max to noticeable effect.
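To see what autotune left behind versus the defaults, the sysctl tunables it sets can be inspected directly. A sketch; exact tunable names vary between FreeNAS/FreeBSD versions:

```shell
# Current ARC sizing limits (ZFS normally manages these itself)
sysctl vfs.zfs.arc_max vfs.zfs.arc_min

# L2ARC feed tunables -- only meaningful if a cache device exists
sysctl vfs.zfs.l2arc_noprefetch
sysctl vfs.zfs.l2arc_write_boost vfs.zfs.l2arc_write_max

# Confirm whether a cache (L2ARC) device is actually attached
zpool status -v | grep -A2 cache
```

If `zpool status` shows no cache section, the L2ARC tunables are dead weight and can be removed along with the rest of the autotune entries.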
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
You would be much better off with SSD storage for ESXi VMs; 1TB SSDs are "cheap" these days. If you have at least daily backups (Veeam), you could likely get away with 2 SSDs in a mirror. If you lose a drive, vMotion the VMs to slow storage until you get a replacement.
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
One more thing I'll add: I have a couple of VMs that would be a royal pain to replace, as they took many hours of configuration to set up. For those I specifically have sync=always on my zvol, which for NFS is effectively the same as sync=standard. Since they are on SSDs, and they aren't doing much writing, the performance impact is negligible.
 