Building a new host, after some opinions

Shankage · Explorer · Joined Jun 21, 2017 · Messages: 79
Hi all,

I am moving my current virtualized setup from a Dell R710:
  • 2 × Xeon X5670 (VM has 4 vCPU)
  • Gigabit NIC
  • VM given 48 GB RAM
  • Passthrough of the HBA
  • 6 × 6 TB WD Red 5400 rpm (EFRX)
  • Pool is mirrors, with no SLOG or L2ARC
Moving to an R730xd:
  • 2 × E5-2640 v3
  • 256 GB DDR4 RAM
  • 10 Gbit NIC
  • Thinking like-for-like specs for the VM
  • 2 × 200 GB enterprise SSDs
  • 8 × 1.2 TB 10k SAS HDDs
I am wondering what pool configuration I should go with on TrueNAS Core: should I use the SSDs for metadata, SLOG, or L2ARC?
I am also wondering whether to stick with mirrors or go with RAID-Z1/2. Since these are faster drives, will I be able to saturate 10 Gbit? What would be a good approach for the pool config? I am looking for speed (but would rather avoid mirrors, due to the space loss). I will likely expand the pool with more drives as time goes on, but will have the 8 drives initially.

Open to any input. It will be used for Plex (the Plex VM will sit on the same host) and also as iSCSI targets for itself and 2 other ESXi hosts, running my home lab with a bunch of VMs.

Cheers

I have added some DD results and also CrystalDiskMark results from a remote host; FreeNAS and it are connected by 1 Gbit.
[Attached screenshots: DD and CrystalDiskMark results]

kspare · Guru · Joined Feb 19, 2015 · Messages: 508
Don't get your hopes up about saturating a 10 Gbit link...

On my 2 test boxes, I run dual E5-2697 v2, 256 GB of RAM, 40 Gbit Chelsio cards, a 12 Gb/s LSI 9300 card, 11 mirrored vdevs, a mirrored P3700 SLOG/ZIL, a mirrored P3700 metadata vdev, and one 2 TB P3700 L2ARC.

I've been working on tweaks for years, and I can usually hit around 6 Gbit (60%) of a 10 Gbit link.

I've been playing around with iSCSI tonight and I can saturate a 10 Gbit link (without a metadata vdev). In fact, with sync set to standard I could get about 14 Gbit out of my 40 Gbit link... but I value my data, so that doesn't matter; with sync set to always I can get 10 Gbit.

I'm building a box to handle a massive VM load. Depending on what you are doing, I wouldn't worry about the metadata vdev, and 200 GB isn't much of an L2ARC either. I'd mirror them for your ZIL/SLOG, as you have lots of RAM.

I hope that helps a bit!
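For reference, a mirrored SLOG like kspare describes can be added to an existing pool from the CLI. This is only a sketch: the pool name "tank" and the device names da8/da9 are placeholders, and on TrueNAS Core you would normally do this through the web UI instead.

```shell
# Hypothetical: attach the two 200 GB SSDs as a mirrored SLOG (log vdev).
# "tank", da8 and da9 are placeholder names - check yours first:
camcontrol devlist                 # list disks (FreeBSD/TrueNAS Core)
zpool add tank log mirror da8 da9  # add the mirrored log vdev
zpool status tank                  # the pool should now show a "logs" section
```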
 

Shankage
Thanks, it sure does! I will also be running iSCSI; I should test it in different pool configs, with and without a SLOG, and sync standard vs always.
How are you testing?
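The sync comparison mentioned above can be flipped per dataset between benchmark runs. A sketch, assuming a placeholder zvol name tank/iscsi:

```shell
# Hypothetical dataset/zvol name; sync is a per-dataset ZFS property.
zfs set sync=standard tank/iscsi   # honor only writes the client marks synchronous
zfs set sync=always tank/iscsi     # force every write through the ZIL/SLOG (safest)
zfs set sync=disabled tank/iscsi   # fastest, but risks data loss on power failure
zfs get sync tank/iscsi            # check the active value between runs
```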
 

Heracles · Wizard · Joined Feb 2, 2018 · Messages: 1,401
Hey @Shankage

In your original post, there are 2 things that point you directly to mirrors:

The first is the need for speed. The speed of a pool is directly related to the number of vdevs you have. Since mirrors give you the maximum number of vdevs, they are the way to get maximum speed.

The second thing that points you to mirrors is your desire to expand. Mixing and matching vdevs is not good practice: all vdevs in a pool should be the same type and use the same number of drives. Should you go with a single 8-drive RAID-Z2 vdev, you would have to add 8 drives at once to respect that. By going with mirrors, you can add drives 2 at a time.

Finally, I would not trust RAID-Z1 for a main pool. Here, I forced myself to use it for Thanatos because it holds only a third copy and my 2 other servers are much stronger. Still, when I have the opportunity, I will get rid of it.
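Heracles' two points translate directly into zpool commands. A sketch with placeholder pool/device names (on TrueNAS Core the UI would build the equivalent layout for you):

```shell
# Hypothetical: 8 drives laid out as 4 striped 2-way mirrors.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7

# Expanding later only takes one pair at a time - and each new mirror
# vdev also adds its IOPS to the pool:
zpool add tank mirror da8 da9
```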
 

Shankage

Thank you for your response. Ideally I would love to go mirrors... but I would have to buy more drives! Out of curiosity, if we are talking about raw speed here, how much of a real-world difference would there be between the choices?
 

c77dk · Patron · Joined Nov 27, 2019 · Messages: 468
When you talk about speed, do you mean sequential or random? For random I/O workloads with lots of IOPS, mirrors are the way to go. But for sequential, low-IOPS work, RAID-Z blasts away mirrors.
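As a rough illustration of that random-I/O gap: random write IOPS scale with vdev count, and a RAID-Z vdev behaves roughly like a single drive for random writes. The per-drive figure below is an assumed ballpark for a 10k SAS drive, not a measurement.

```shell
# Back-of-envelope only; PER_DRIVE_IOPS is an assumed figure.
PER_DRIVE_IOPS=150

# 8 drives as 4 x 2-way mirrors: 4 vdevs' worth of random write IOPS
MIRROR_VDEVS=4
echo "mirrors: ~$((PER_DRIVE_IOPS * MIRROR_VDEVS)) random write IOPS"

# 8 drives as a single RAID-Z2 vdev: roughly 1 drive's worth
RAIDZ_VDEVS=1
echo "raidz2:  ~$((PER_DRIVE_IOPS * RAIDZ_VDEVS)) random write IOPS"
```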
 

Shankage
That's a good question. Do running VMs, spinning them up, and Plex use sequential or random I/O, or a mix?
 

c77dk
VMs usually use random I/O, but whether you would benefit from mirrors depends a lot on the use case. I'm running a single VM with Ubuntu and UniFi on my 6-disk RAID-Z2 together with 4 jails at home, and for that use case I can't really feel any difference compared to my mirrored servers at work (the ones still on rotating disks; the SSD ones are a whole other ballgame :P ).
 

Shankage
So I'll be running anywhere from 20-40 VMs, as it's my homelab and used for testing. Plex is secondary.
 

c77dk
With that number of VMs, mirrors seem like the best way to go (unless the VMs do little disk I/O) - but hopefully someone running a lot of VMs can be more helpful than I :smile:
 

Shankage
They definitely aren't all used all the time or always active; at any given time there should be a minimum of 8 VMs on, all the monitoring and management ones.
 

Shankage
So, doing some tests with DD, new host vs old, I interestingly got a slightly better result on the old host. Both are configured like for like now, with mirrors. TrueNAS is the new system, a fresh install.
[Attached screenshots: DD results for both hosts]
 

Shankage
OK, I ran it a few more times on both systems: the older system stayed around 10.4-10.8 GB/s (up to 10,810,026,521 bytes/sec) and the new system was more consistently around 11,591,460,012 bytes/sec. I also upped it to a 40k count.

Adjusting to bs=2048k actually yields better performance on the old system, more consistently than on the new one.

Hmm, thinking about it, they aren't set up identically: the older host has an HBA in passthrough to the VM, whereas the new host has an H730 with drives in passthrough mode that are then added to the VM in its config. Also, the TrueNAS install is on a USB datastore (a temporary setup for testing), but I would think that once it has booted it gets loaded into memory and won't matter anyway?
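For anyone reproducing these numbers, a sketch of the kind of dd run described (the path and count here are placeholders, kept small; note that /dev/zero compresses perfectly, so with lz4 enabled on the dataset this measures ARC/CPU more than the disks unless the file is much larger than RAM):

```shell
# Hypothetical target path; on TrueNAS this would live on the pool,
# e.g. /mnt/tank/ddtest.bin. Use a count like 40k (~80 GiB at bs=2048k)
# to comfortably exceed a 48 GB ARC.
TESTFILE=/tmp/ddtest.bin
dd if=/dev/zero of="$TESTFILE" bs=2048k count=100 2>&1 | tail -n 1
```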
 

Shankage
I did some more reading and tested creating a file larger than the memory cache, and got the results below. The older system with WD Reds is still faster than the 10k SAS drives... I can't think why, either.
[Attached screenshots: DD results with a file larger than RAM]
 

Shankage
I started testing again this morning. First stop was to check the sync status of both pools: on the first host it was disabled and on the second it was enabled, so I disabled it on the new host. Testing revealed no difference.

After some searching around, I found the write cache on the disks in host 1 was set to enabled but disabled on the new one. After enabling it, results were in the 1,325,663,016 bytes/sec range, which equates to about a 267 MB/s increase.
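Converting the quoted figure, assuming decimal megabytes (10^6 bytes):

```shell
# 1,325,663,016 bytes/sec from the dd run above, in decimal MB/s
BYTES_PER_SEC=1325663016
echo "$((BYTES_PER_SEC / 1000000)) MB/s"   # prints "1325 MB/s", i.e. ~1.3 GB/s
```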
 

Shankage
Wow, OK. I downgraded from TrueNAS to FreeNAS 11.2-U8 and performance was much better with a like-for-like setup.
[Attached screenshot: DD results on FreeNAS 11.2-U8]
 

morganL · Captain Morgan · Administrator, Moderator, iXsystems · Joined Mar 10, 2018 · Messages: 2,694
You are getting over 1 GB/s on these tests... that's a lot for 6-8 drives. What's the problem?
 

morganL
Smaller 10k rpm drives are better for IOPS, but not necessarily faster for bandwidth operations. Most ZFS builds don't recommend 10k rpm drives; just get a better/larger cache instead.
 