I need a FreeNAS server to replace my vSAN

jefftse

Cadet
Joined
Aug 10, 2021
Messages
7
I can no longer trust vSAN and am looking to build a FreeNAS server.

Does RAM speed matter much? DDR3 vs. DDR4? I'm looking at these 36-bay Supermicro barebones with DDR3, and they are a lot cheaper than DDR4.

I'm probably going to start with 10 x 10TB IronWolf drives, 1-2 SSD or NVMe flash drives, and a 10Gb NIC.

Something like this, and add drives and 10Gb NICs.

Thoughts?
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I can no longer trust vSAN and am looking to build a FreeNAS server.

Does RAM speed matter much? DDR3 vs. DDR4? I'm looking at these 36-bay Supermicro barebones with DDR3, and they are a lot cheaper than DDR4.

I'm probably going to start with 10 x 10TB IronWolf drives, 1-2 SSD or NVMe flash drives, and a 10Gb NIC.

Something like this, and add drives and 10Gb NICs.

Thoughts?
You should spend a bit of time reading, then post a detailed plan for review. If you're willing to invest a bit of time, you will end up with a rock-solid block storage server to back your VMware environment.




 

jefftse

Cadet
Joined
Aug 10, 2021
Messages
7
You should spend a bit of time reading, then post a detailed plan for review. If you're willing to invest a bit of time, you will end up with a rock-solid block storage server to back your VMware environment.





Thanks for the links; however, that's not what I was asking. Yes, I read them already.
 

Tigersharke

BOfH in User's clothing
Administrator
Moderator
Joined
May 18, 2016
Messages
893
I think those links are often overlooked, but for others to give better advice, it would help to know what your use case is for your new TrueNAS server. That would allow prioritizing any issues noted in the hardware, or other suggestions to improve overall performance for the tasks you intend for the server.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Disks (in round numbers), 10 * 10TB:
100TB raw - No
90TB in RAIDZ1 - Don't. Just don't.
80TB in RAIDZ2 - Not convinced
70TB in RAIDZ3 - Safer, but not for VMs, as you are essentially limited to the speed of one disk
50TB in mirrored vdevs, which is about as fast as you are going to get on HDDs
You could do 3 mirrored RAIDZ1 vdevs plus a spare for 30TB, or 2 mirrored RAIDZ2 vdevs plus 2 spares for 20TB - but if these were my disks and I had paid for them, I would be feeling a bit ill.
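The usable-capacity figures above can be sanity-checked with a quick back-of-envelope script (a rough sketch only: real ZFS usable space comes in lower once padding, metadata, and the usual advice to stay under ~80% full are taken into account):

```python
# Round-number usable capacity for 10 x 10TB drives in various layouts.
# This is a sketch: actual ZFS usable space is lower due to padding,
# metadata overhead, and the recommendation to keep pools under ~80% full.
DISKS, SIZE_TB = 10, 10

def raidz_usable(disks, size_tb, parity):
    # A single RAIDZ vdev gives up `parity` disks' worth of space.
    return (disks - parity) * size_tb

def mirror_usable(disks, size_tb):
    # Two-way mirrors: half the raw space.
    return disks // 2 * size_tb

print("Raw:    ", DISKS * SIZE_TB, "TB")                  # 100 TB
print("RAIDZ1: ", raidz_usable(DISKS, SIZE_TB, 1), "TB")  # 90 TB
print("RAIDZ2: ", raidz_usable(DISKS, SIZE_TB, 2), "TB")  # 80 TB
print("RAIDZ3: ", raidz_usable(DISKS, SIZE_TB, 3), "TB")  # 70 TB
print("Mirrors:", mirror_usable(DISKS, SIZE_TB), "TB")    # 50 TB
```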

Get an Optane and use it as a SLOG (do not use one of the really cheap ones - they are much slower). It does not need to be mirrored unless you are ultra-paranoid: if it fails, it just stops working and your pool slows down. Set Sync=Always on the pool. Note that there are corner cases where the Optane fails just as power goes out - but it's not terribly likely.

RAM - Yes, RAM is good. More, please. As much as you can afford. ECC, of course. The 128GB in your post is a good number.

10Gb NIC (SolarFlare SFN5122F 10G PCIe adapter). Not one I know - however, Chelsio or Intel are extremely good shouts. I would swap this out for an Intel or Chelsio, which TrueNAS is known to work with. It's also worth paying a bit more to get a real one rather than a cheaper knockoff from a Chinese back street.
Please note: https://www.truenas.com/community/threads/solarflare-10gb-lacp-and-dumping-core.20142/

CPU - I am fine with your choice. You will have 16 cores (32 with HT). Obviously it's a 2013 CPU with DDR3 memory. I have seen something that says that when you have that many real cores, it may be worth turning HT off. YMMV.

The motherboard has 2 SATA 3 ports and 8 SATA 2 ports with no NVMe, but you have lots of PCIe 3 slots (4*16, 1*8, 1*4, though some of these are already used for the NIC and SAS card) with both CPUs installed. However, plan carefully what cards go where, what disks (and disk types) attach to what, and where you put the SLOG (PCIe or SATA). I assume the LSI card is used to run the 36 bays, leaving the onboard ports free for boot disks (small SSDs - do not use USB).
 

jefftse

Cadet
Joined
Aug 10, 2021
Messages
7
Disks (in round numbers), 10 * 10TB:
100TB raw - No
90TB in RAIDZ1 - Don't. Just don't.
80TB in RAIDZ2 - Not convinced
70TB in RAIDZ3 - Safer, but not for VMs, as you are essentially limited to the speed of one disk
50TB in mirrored vdevs, which is about as fast as you are going to get on HDDs
You could do 3 mirrored RAIDZ1 vdevs plus a spare for 30TB, or 2 mirrored RAIDZ2 vdevs plus 2 spares for 20TB - but if these were my disks and I had paid for them, I would be feeling a bit ill.

Get an Optane and use it as a SLOG (do not use one of the really cheap ones - they are much slower). It does not need to be mirrored unless you are ultra-paranoid: if it fails, it just stops working and your pool slows down. Set Sync=Always on the pool. Note that there are corner cases where the Optane fails just as power goes out - but it's not terribly likely.

RAM - Yes, RAM is good. More, please. As much as you can afford. ECC, of course. The 128GB in your post is a good number.

10Gb NIC (SolarFlare SFN5122F 10G PCIe adapter). Not one I know - however, Chelsio or Intel are extremely good shouts. I would swap this out for an Intel or Chelsio, which TrueNAS is known to work with. It's also worth paying a bit more to get a real one rather than a cheaper knockoff from a Chinese back street.
Please note: https://www.truenas.com/community/threads/solarflare-10gb-lacp-and-dumping-core.20142/

CPU - I am fine with your choice. You will have 16 cores (32 with HT). Obviously it's a 2013 CPU with DDR3 memory. I have seen something that says that when you have that many real cores, it may be worth turning HT off. YMMV.

The motherboard has 2 SATA 3 ports and 8 SATA 2 ports with no NVMe, but you have lots of PCIe 3 slots (4*16, 1*8, 1*4, though some of these are already used for the NIC and SAS card) with both CPUs installed. However, plan carefully what cards go where, what disks (and disk types) attach to what, and where you put the SLOG (PCIe or SATA). I assume the LSI card is used to run the 36 bays, leaving the onboard ports free for boot disks (small SSDs - do not use USB).


Thank you! I will be doing RAIDZ3; I cannot afford to lose the data. I'm going to get a PCIe NVMe adapter card for the SLOG, since the board has Gen 3 PCIe. I just didn't know whether DDR3 would be enough. It is older hardware, but I think it should be fine for a NAS.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I wouldn't do RAIDZ3. It's fine for SMB-type use, but you mentioned vSAN = VMware = virtualisation. RAIDZn is very limited in speed. Mirrored vdevs are your sensible choice, unless you want to use a bunch of SSDs for the guests.

I don't know how many guests you have, or what they do - but seriously consider an SSD pool for them (you have slots aplenty) and then store the bulk data on the RAIDZ3 (which is fine in those circumstances).
 

jefftse

Cadet
Joined
Aug 10, 2021
Messages
7
I wouldn't do RAIDZ3. It's fine for SMB-type use, but you mentioned vSAN = VMware = virtualisation. RAIDZn is very limited in speed. Mirrored vdevs are your sensible choice, unless you want to use a bunch of SSDs for the guests.

I don't know how many guests you have, or what they do - but seriously consider an SSD pool for them (you have slots aplenty) and then store the bulk data on the RAIDZ3 (which is fine in those circumstances).

About 100 guests, nothing heavy-duty - mostly office work. I just want the best way to secure the data. Thinking a 2TB NVMe cache should be enough?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
That's not going to work. If you do this you will fail, and you will be on another forum somewhere saying that TrueNAS is crap, too slow, or doesn't work.

First: ROFL.
100 guests on a single RAIDZ3 is not going to work. VMs need IOPS, and the IOPS from that pool will be the equivalent of a single disk.
Imagine running 100 guests from a single hard disk - that's the equivalent of what you are trying to do here.
"Running in treacle" comes to mind.

Second: cache (2TB NVMe). There are two types of "cache":
1. Read cache - this is first and foremost ARC, otherwise known as memory. You can add to this for a specific pool by adding an L2ARC - fast storage (SSD) acting as a slower extension to memory.
2. Write cache - simplistically, there isn't one, and you can't add one. What you can do is "fool" ZFS into thinking it has committed writes to disk by adding a SLOG. This needs to be (IMHO) a decent Optane; the size depends on your network speed, but 20GB would be more than enough (for 10Gb), as it only needs to hold about 5 seconds of writes at any one time. Some people have worked out the actual number of GB required - I just can't remember it - but 20GB will easily cover it.
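The "about 5 seconds of writes" rule of thumb can be computed directly (a sketch; it assumes a fully saturated 10Gb link, which is the worst case):

```python
# SLOG sizing rule of thumb: the SLOG only needs to hold a few
# transaction groups' worth of sync writes - roughly 5 seconds at
# full line rate. Assumed worst case: the 10Gb link is saturated.
link_gbits = 10
seconds = 5                            # per the rule of thumb above
needed_gb = link_gbits / 8 * seconds   # Gb/s -> GB/s, times seconds
print(f"~{needed_gb:.2f} GB of SLOG actually used")  # ~6.25 GB
# So 20GB leaves comfortable headroom, and a 2TB device is wildly
# oversized for SLOG duty.
```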

You are going to be seriously write-constrained on this design. If you insist on HDDs, then fill the case with 36 * 4TB HDDs and run them as mirrored pairs, which will give you the IOPS of 18 HDDs - that might work - and 72TB (in round numbers). Still use a SLOG.
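A sketch of the arithmetic behind that layout (the per-disk IOPS figure is again an assumed ballpark, and real usable space will be somewhat lower):

```python
# Mirrored-pair layout for a fully populated 36-bay chassis with
# hypothetical 4TB drives, as suggested above.
BAYS, SIZE_TB = 36, 4
HDD_IOPS = 100                      # assumed random IOPS per 7200 rpm disk

vdevs = BAYS // 2                   # 18 mirrored pairs
usable_tb = vdevs * SIZE_TB         # each pair contributes one disk's space
pool_write_iops = vdevs * HDD_IOPS  # random write IOPS scale with vdev count

print(vdevs, "vdevs,", usable_tb, "TB usable, ~" + str(pool_write_iops), "write IOPS")
# 18 vdevs, 72 TB usable, ~1800 write IOPS
```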

You need to be using SSDs for your VMs and the HDDs for your bulk data. If you have no bulk data and just have VMs, then ditch the HDDs and use just SSDs.

As a reminder, you need (ideally):
2 * small SSDs to boot from (boot pool). These are only used for boot; 32GB is more than ample.
An Optane as a SLOG (not a cheap one - they aren't fast enough).
A main pool for your VMs that has a significant number of IOPS (and space for the VMs). IOPS, IOPS, IOPS, IOPS, IOPS.
Another pool for bulk data (SMB shares etc.). My view is that SMB shares do not belong on VMs for performance reasons and should be put directly on the NAS. Leave the VMs for services.
 

jefftse

Cadet
Joined
Aug 10, 2021
Messages
7
That's not going to work. If you do this you will fail, and you will be on another forum somewhere saying that TrueNAS is crap, too slow, or doesn't work.

First: ROFL.
100 guests on a single RAIDZ3 is not going to work. VMs need IOPS, and the IOPS from that pool will be the equivalent of a single disk.
Imagine running 100 guests from a single hard disk - that's the equivalent of what you are trying to do here.
"Running in treacle" comes to mind.

Second: cache (2TB NVMe). There are two types of "cache":
1. Read cache - this is first and foremost ARC, otherwise known as memory. You can add to this for a specific pool by adding an L2ARC - fast storage (SSD) acting as a slower extension to memory.
2. Write cache - simplistically, there isn't one, and you can't add one. What you can do is "fool" ZFS into thinking it has committed writes to disk by adding a SLOG. This needs to be (IMHO) a decent Optane; the size depends on your network speed, but 20GB would be more than enough (for 10Gb), as it only needs to hold about 5 seconds of writes at any one time. Some people have worked out the actual number of GB required - I just can't remember it - but 20GB will easily cover it.

You are going to be seriously write-constrained on this design. If you insist on HDDs, then fill the case with 36 * 4TB HDDs and run them as mirrored pairs, which will give you the IOPS of 18 HDDs - that might work - and 72TB (in round numbers). Still use a SLOG.

You need to be using SSDs for your VMs and HDDs for your bulk data. If you have no bulk data and just have VMs, then ditch the HDDs and use just SSDs.

As a reminder, you need (ideally):
2 * small SSDs to boot from (boot pool). These are only used for boot; 32GB is more than ample.
An Optane as a SLOG (not a cheap one - they aren't fast enough).
A main pool for your VMs that has a significant number of IOPS (and space for the VMs). IOPS, IOPS, IOPS, IOPS, IOPS.
Another pool for bulk data (SMB shares etc.). My view is that SMB shares do not belong on VMs for performance reasons and should be put directly on the NAS. Leave the VMs for services.

I agree with what you are saying - VMs do need IOPS. I was thinking of creating multiple pools, so it will look like multiple hard drives, if that makes sense. 4TB is fine. I can add more NAS capacity later if need be.

Maybe I need to rethink doing this, or go back to hardware RAID with individual servers - but that won't give me the flexibility when I want to take any server offline for maintenance.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
It had failed me twice with a four-server configuration.

Unless you were using vSAN 5.5, I wouldn't consider that normal behavior.

Do you have a full list of hardware that was used/failed in the vSAN config? Do you plan to re-use that same hardware here?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I agree with what you are saying - VMs do need IOPS. I was thinking of creating multiple pools, so it will look like multiple hard drives, if that makes sense. 4TB is fine. I can add more NAS capacity later if need be.

Maybe I need to rethink doing this, or go back to hardware RAID with individual servers - but that won't give me the flexibility when I want to take any server offline for maintenance.
Assuming your 10 * 10TB disks: 5 separate pools, each with a mirrored pair (as efficient as you can get), give in total the same IOPS as a single pool of 5 mirrored vdevs - just spread across the pools rather than combined in one pool. Then you have to balance the pools' usage yourself; that's a lot of potential hassle for actually no gain.
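A toy illustration of the point (the per-vdev IOPS number is an arbitrary placeholder, not a measurement):

```python
# Five single-vdev pools vs one five-vdev pool: same aggregate IOPS,
# but a single pool lets any busy dataset draw on all vdevs at once.
VDEV_IOPS = 100                # arbitrary placeholder per mirrored pair

five_pools = [VDEV_IOPS] * 5   # each pool capped at one vdev's IOPS
one_pool = 5 * VDEV_IOPS       # all five vdevs striped in one pool

print(sum(five_pools), "vs", one_pool)  # aggregate is identical
print(max(five_pools), "vs", one_pool)  # but a single busy workload sees 100 vs 500
```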

I honestly think you are going to have to use SSDs. I am unsure how NAS/SAN providers managed large virtual estates before SSDs became available, other than with vast numbers of little disks - which obviously just leads to a higher incidence of failure.

Just what are these 100 guests, and how come they need so much disk space? Most VMs are small (or at least should be, IMHO); even projecting 50% occupancy, that's 25-35TB of actual guest space in use. I am guessing (and it is a guess) that there are Exchange/mail servers in the mix, which can get large (as well as generating vast amounts of random I/O). Can you break down, or have another look at, your actual requirement and the servers, and say why there is so much of a requirement? If you are storing SMB files in guests, then maybe you could rethink that and store them directly on the NAS. For SMB files you can probably use large disks, as you don't need so many IOPS.

If hardware RAID worked for you (in IOPS terms) in individual servers, then that might work for you again - but add a NAS with some swing storage so you can move VMs to the swing storage (SSDs FTW), then migrate those guests to different storage and then different hosts for host-maintenance purposes. But that's an awful lot of migration potential just as a host craps itself and you need to urgently reboot - not a scenario I like.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I am unsure how NAS/SAN providers managed large virtual estates before SSDs became available, other than with vast numbers of little disks - which obviously just leads to a higher incidence of failure.
15k 2.5" SAS drives
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
God, I had forgotten about them. You ended up with racks of the things, despite each taking up so little room, with two-dimensional RAID just to keep up with and allow for the failures. Horrible. SSDs are a massive improvement.
 