TrueNAS storage solution for virtualization environment

x007alfa

Cadet
Joined
May 3, 2017
Messages
6
Hello forum,
I have some doubts regarding a new project I’m tackling at work this month.
We are running out of space on our old HP StoreVirtual SAN and I'm trying to figure out a budget-minded solution to the problem in the topic title.
Basically what I have come up with is the following.
For the storage solution I wanted to go with TrueNAS, probably CORE, because I don't need anything exceedingly fancy like scale-out features.
I spec'd out a head server in the form of a Supermicro CSE-829U X10DRU-i+ equipped with 2 x E5-2667 v4 CPUs, because we use Samba shares in the office and I know good single-thread performance is basically required if I want encryption, which I do. I added 256GB of RAM (8 x 32GB DDR4) so I can have a good ARC capacity. The HBA will be either a 9305-16i or a 9300-16i; the first is about 200€ more than the latter, but I heard the silicon is newer and more efficient, so I'm still deciding.
I will connect a JBOD shelf to the head server in the form of an HP M6720 24 x 3.5" enclosure.
In total I will have 36 x 3.5" bays at my disposal.
I wanted to populate them as follows:
32x 4TB enterprise SAS HDDs to form a single pool of 4 x 8-wide RAIDZ1 vdevs
2x 960GB SSD for metadata stuff to make everything a bit more snappy
2x 480GB SSD for L2ARC
I also wanted to add a 16 or 32GB Optane SSD for SLOG.
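For reference, my back-of-envelope capacity math for that layout (just a rough sketch in Python; the ~80% fill ceiling for block storage is a rule of thumb I've picked up, not a hard limit):

```python
# Rough usable capacity for the proposed pool.
# Assumptions: TB used loosely (no TiB conversion), no allowance
# for ZFS metadata/padding overhead.

DISK_TB = 4
VDEVS = 4
WIDTH = 8        # disks per RAIDZ1 vdev
PARITY = 1       # RAIDZ1 = one parity disk's worth per vdev

raw = DISK_TB * VDEVS * WIDTH                      # 128 TB raw
after_parity = DISK_TB * VDEVS * (WIDTH - PARITY)  # 112 TB
comfortable = after_parity * 0.8                   # stay under ~80% full

print(f"raw {raw} TB, after parity {after_parity} TB, "
      f"comfortable ceiling ~{comfortable:.0f} TB")
```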
Everything will be connected to my hosts via a pair of stacked Dell PowerConnect 8024F switches, so all links are 10G.
In the pool I wanted to have two datasets: one shared over NFS for VMware and one over SMB for our Windows file sharing.
This is my plan at the moment.
I was also wondering about the deduplication functionality, but I have no idea what that would mean for the overall responsiveness of the system.
We normally have around 20 to 30 VMs active at any given time.
All the VMs run either Windows 10 or Windows Server 2019 or later.
My question to you experts is would this solution be satisfactory for 10 PLC software developers and maybe 3 or 4 power users?
I want the machines to feel snappy but, as I said, cash is… well… not a friend at the moment.
The proposed solution would come in at a grand total of 6500€ or thereabouts.
Thanks in advance for any suggestions.
 
Joined
Jun 15, 2022
Messages
674
I think the first thing members usually say is RAID-Z1 should be -Z2 with a good 3-2-1 backup plan.

The second solid suggestion might be: Buy some items used in order to save money (one of your goals).

Third: The 9300-16i is pretty fast. Is the 9305-16i that much faster?

Some links to such topics.
Resources List including Detailed Hardware and System Build Notes
 

x007alfa

Cadet
Joined
May 3, 2017
Messages
6
I think the first thing people will say is RAID-Z1 should be -Z2 with a good 3-2-1 backup plan.

The second solid suggestion might be: Buy Used

Resources List including Detailed Hardware and System Build Notes
Hi, I do have a small NAS as a backup target; I just need to expand it.
Moreover, there is no mission-critical stuff on those VMs, so it's not like everything grinds to a halt if something fails.
The head server is refurbished by a German e-tailer I rely on; they provide really great service and are trustworthy.
The disks are all new, for obvious reasons.
I can go Z2 if necessary; I'd be losing 32TB instead of 16, probably not the end of the world...
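Checking my own numbers with a quick sketch (raw capacity only, ignoring any ZFS overhead):

```python
# Raw capacity given up to parity: 4 vdevs x 8-wide, 4TB disks.
disk_tb, vdevs = 4, 4
parity_z1 = vdevs * 1 * disk_tb   # 16 TB lost to parity with RAIDZ1
parity_z2 = vdevs * 2 * disk_tb   # 32 TB lost to parity with RAIDZ2
print(parity_z1, parity_z2)       # 16 32 -> Z2 costs an extra 16 TB raw
```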
 

x007alfa

Cadet
Joined
May 3, 2017
Messages
6
I think the first thing members usually say is RAID-Z1 should be -Z2 with a good 3-2-1 backup plan.

The second solid suggestion might be: Buy some items used in order to save money (one of your goals).

Third: The 9300-16i is pretty fast. Is the 9305-16i that much faster?

Some links to such topics.
Resources List including Detailed Hardware and System Build Notes
It's not that the 9305 is considerably faster, but they say it runs cooler and is more efficient... the 9300 could work, I think; there is AC in the room anyway.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
You're proposing block storage as your primary goal here... how can you be proposing any kind of RAIDZ?

If you want to be in any way satisfied with how your storage performs, read and follow this advice.

 
Joined
Jun 15, 2022
Messages
674
It's not that the 9305 is considerably faster, but they say it runs cooler and is more efficient... the 9300 could work, I think; there is AC in the room anyway.
There is a significant heat difference between the 9208 and the 9300, though I cannot comment on the 9305.

....there is no mission-critical stuff on those VMs, so it's not like everything grinds to a halt if something fails.
I'm not sure why so many new members want to use ZFS for unimportant block storage, given that it doesn't perform well in that role compared to Ubuntu Server with LVM managing ext4. Don't get me wrong, TrueNAS is an excellent set of products; perhaps you could help me understand your reasoning.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I would start with a guess that LVM snapshots work differently and they aren't as nice as with a CoW system like ZFS.

I agree that if you don't want or need to care about your data, then ZFS might not be the droid you're looking for.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I feel the need to stress a few of the points already mentioned:
* The key to block-storage performance boils down to gobs of free space, and mirrors.
* The difference in controller performance will have nowhere near the impact of a poorly chosen vdev layout (rough numbers below).
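To put some very rough numbers on the vdev point (the per-disk figure is purely illustrative, and "a RAIDZ vdev delivers roughly one disk's worth of random IOPS" is the usual approximation, not an exact law):

```python
# Very rough random-IOPS comparison for the same 32 disks.
DISK_IOPS = 175            # assumed for a 7.2k SAS HDD
DISKS = 32

raidz1_vdevs = 4           # 4 x 8-wide RAIDZ1 as proposed
mirror_vdevs = DISKS // 2  # 16 x 2-way mirrors instead

print("RAIDZ1 pool:", raidz1_vdevs * DISK_IOPS, "random IOPS-ish")  # ~700
print("mirror pool:", mirror_vdevs * DISK_IOPS, "random IOPS-ish")  # ~2800
```

That factor of ~4 dwarfs anything an HBA swap will buy you.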

2x 960GB SSD for metadata stuff to make everything a bit more snappy
2x 480GB SSD for L2ARC
I also wanted to add a 16 or 32GB Optane SSD for SLOG.
There is no point in mirroring L2ARC for redundancy. If it is needed for "performance reasons" you'd look into NVMe and forget about regular SSDs. Actually, that's what I would do in this scenario.
The size of the metadata drives may be stupidly overkill to no avail; I expect very little 'small file IO' from that workload. Correct me if I'm wrong there.
I'd also look at triple mirrors for the metadata vdev, partly because SSDs typically die out of nowhere, and your entire pool goes down with that vdev: catastrophic failure, without remedy but probably with remorse. IMO, these do not benefit particularly from 'NVMe performance'.
I'd also put a hot spare into the mirror pool, basically to relieve heart-rate pressure when a drive eventually dies.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
There is no point in mirroring L2ARC for redundancy.
There's not even a WAY to do that... multiple drives assigned to L2ARC will stripe.

In the case of this thread, I think I would generally recommend against L2ARC at all until you can show that your ARC hit rate isn't already above 99%.
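If you want to check that before spending anything, the hit ratio is easy to read out of the ARC counters. A minimal sketch (it assumes the Linux arcstats path used by TrueNAS SCALE and falls back to the FreeBSD sysctls used by CORE; running arc_summary gets you the same answer with less typing):

```python
# Quick ARC hit-ratio check. Counters are cumulative since boot,
# so run it after the box has seen a realistic workload for a while.
import os
import subprocess

def arc_counter(name):
    linux_path = "/proc/spl/kstat/zfs/arcstats"   # TrueNAS SCALE / ZFS on Linux
    if os.path.exists(linux_path):
        with open(linux_path) as f:
            for line in f:
                parts = line.split()
                if parts and parts[0] == name:
                    return int(parts[-1])
    # TrueNAS CORE / FreeBSD exposes the same counters via sysctl
    out = subprocess.run(
        ["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

hits, misses = arc_counter("hits"), arc_counter("misses")
print(f"ARC hit ratio since boot: {100 * hits / (hits + misses):.2f}%")
```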
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
My 2p's worth
"
2x 960GB SSD for metadata stuff to make everything a bit more snappy
2x 480GB SSD for L2ARC
"
This seems a bit redundant: L2ARC will cache metadata anyway and will "make everything a bit more snappy" on its own (assuming no dedupe).

However - VMware/NFS will use sync writes, so unless you are planning on sync=disabled you will want / need a SLOG, and probably a redundant one. It wants to be something special in hardware terms; SLOG failure isn't a disaster UNLESS it happens during an unplanned reboot. The problem with your suggestion is that the 16/32GB Optane devices are (from memory) not much use. You would do better with a better Optane, or better still an RMS300/16G (I am assuming 10Gb networking; the more normal 8G version is, IMHO, a bit tight for 10Gb).
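For sizing, the usual back-of-envelope is a couple of transaction groups' worth of writes at line rate (a sketch; 5 seconds is the default txg flush interval, the rest are my assumptions):

```python
# Rough SLOG sizing: how much sync-write data can land between
# transaction-group flushes at full line rate?
link_gbps = 10         # 10GbE front end (assumed)
txg_seconds = 5        # default ZFS txg flush interval
txgs_of_headroom = 2   # keep a couple of txgs' worth

slog_gb = (link_gbps / 8) * txg_seconds * txgs_of_headroom
print(f"~{slog_gb:.1f} GB covers a single saturated 10GbE link")
# => ~12.5 GB, so even 16GB is enough *capacity*; the issue with the
#    tiny Optanes is sustained write speed and endurance, not size.
```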

If you want to go dedupe, for which you seem to mostly have the right kit, then you will need decent metadata devices to put the dedup tables on. (You could rely on your 256GB of ARC, but sizing here is beyond my experience and into educated-guessing territory.) A few hundred GB of Optane, triply redundant based on your 3*Z2 spec above, would be good.
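For a feel of the numbers, the commonly quoted rule of thumb is around 320 bytes of dedup-table entry per unique block. A sketch with made-up inputs (block size, data stored and dedup ratio are placeholders you would swap for your own figures):

```python
# Back-of-envelope dedup table (DDT) size.
BYTES_PER_ENTRY = 320     # commonly quoted approximation per unique block
blocksize = 64 * 1024     # assumed 64K volblocksize/recordsize
stored_tb = 40            # assumed TB of VM data actually written
dedup_ratio = 2.0         # assumed; only unique blocks need DDT entries

unique_blocks = stored_tb * 1024**4 / blocksize / dedup_ratio
ddt_gb = unique_blocks * BYTES_PER_ENTRY / 1024**3
print(f"~{ddt_gb:.0f} GB of DDT to keep in ARC or on a special vdev")
# Smaller block sizes (VM zvols often sit at 16K) multiply this quickly,
# which is why dedup gets expensive in RAM / metadata space.
```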

Lastly, I would just add: mirrors and block storage are good; RAIDZ and block, not so good.
 
Joined
Jun 15, 2022
Messages
674
I would start with a guess that LVM snapshots work differently and they aren't as nice as with a CoW system like ZFS.

I agree that if you don't want or need to care about your data, then ZFS might not be the droid you're looking for.
I think a CoW filesystem for VM storage has its uses in the context of this thread, such as when building a VM and saving the build stages to TrueNAS for archival purposes. For high-speed, low-cost VM storage my guess would be Ubuntu with ext4 for general use, and for serious users like the Original Poster (20-30 concurrent VMs): Ubuntu iSCSI Storage Server, ESOS, or a hardware iSCSI adapter.

For mission-critical larger-budget projects, TrueNAS is certainly a strong contender.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Disclaimer: I didn't read the entire thread, just the OP.
I was also wondering about the deduplication functionality, but I have no idea what that would mean for the overall responsiveness of the system.
If you're worried about responsiveness, ESPECIALLY given the following quote:
We normally have around 20 to 30 VMs active at any given time.
I would advise you to stay away from dedup. Dedup is a very CPU- and RAM-intensive feature, and seeing that you're already short on cash, you probably won't have the spare capacity for it.

Also, you're not going to like this, but considering that your primary goal is to run snappy, responsive VMs (read: IOPS), you really should be using striped mirrors instead of RAIDZ (I know, that doesn't help your cash and storage issue) and probably SSDs over HDDs. And not cheap QLC SSDs either. Yes, cheap SSDs can actually perform WORSE than high-end HDDs. I learned this the hard way: one intensive IO load in one VM will lock up all the other VMs.
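Rough numbers to show the gap (every figure below is an assumption of mine, not a measurement):

```python
# Very rough demand-vs-supply check for the proposed pool.
vms = 25
iops_per_vm_baseline = 50     # assumed background IO per Windows VM
disk_iops = 175               # assumed random IOPS per 7.2k HDD

demand = vms * iops_per_vm_baseline   # ~1250 IOPS just ticking over
raidz1_supply = 4 * disk_iops         # 4 RAIDZ1 vdevs ~ 4 disks -> ~700
mirror_supply = 16 * disk_iops        # 16 x 2-way HDD mirrors   -> ~2800

print(f"demand ~{demand}, RAIDZ1 ~{raidz1_supply}, mirrors ~{mirror_supply}")
# The RAIDZ1 layout is underwater before anyone gets busy; HDD mirrors
# have some headroom, and decent SSDs are what buy you the rest.
```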

Hate to break this to you, but of cheap, fast, and reliable you can only pick two. There is no utopia where you can have your cake and eat it too.

These are my lessons learned from running 10-12 VMs at any given time (probably half as intensive as your requirement). The system I use to run this is actually in my signature (NAS1). I have a bulk storage array that doesn't really need performance, striped mirrors of 4x 6TB, while the VMs are stored on 2x enterprise Intel MLC SSDs. Prior to that, I used 2x cheap Inland Professional SSDs I got on Black Friday for $20 each. Let's just say it didn't go well: my VMs were CRAWLING on those cheap SSDs as soon as the cache ran out whenever I did an intensive file copy in one VM.
 