Need advice on an all-flash TrueNAS/FreeNAS setup: Dell PowerEdge R740XD or Supermicro SuperServer A+ 2113S-WN24RT

pratik.x13

Cadet
Joined
Dec 24, 2014
Messages
3
Hi, I'm setting up a new NAS for production and I require an all-flash pool with 12 to 14 TB of usable space. This is for a visualization studio with 50+ artists working with 3D data in applications like 3ds Max, Blender, and Maya.

Currently, we have this server running FreeNAS:

Server configuration:
Intel Xeon E5-2620 v3 @ 2.40 GHz
32 GB RAM
2x Intel I350-T4 quad-port 1G Ethernet server adapters in LACP
OS: FreeNAS 9.3 on 2x 120 GB SSD (RAID 1)

Volume configuration:
RAID-Z1 (single parity), 4x 2 TB SATA NAS hard drives
2x 250 GB SSD read cache, 1x 250 GB SSD write cache

Right now I'm considering these three options and planning to install TrueNAS CORE 12. I need advice on any unforeseen issues with the configurations and on finalizing between these options.

Option 1: All-NVMe pool
Supermicro SuperServer A+ 2113S-WN24RT, 24 NVMe bays
1x AMD EPYC 7402P (Rome, UP, 24C/48T), 32GB DDR4-3200 2Rx4 ECC RDIMMs for 256GB RAM total
2x Micron 5300 PRO 240GB SATA 2.5" hot-swap for the OS
5x 3.84TB Micron 9300 PRO U.2 for the zpool
Intel X710-T4 10-Gigabit Ethernet network adapter (4x RJ45)

Option 2: All-NVMe pool
Dell PowerEdge R740XD server
Chassis with up to 24x 2.5" drives, including 24 NVMe drives; 2-CPU configuration
2x Intel Xeon Silver 4210R 2.4GHz, 10C/20T
32GB RDIMM, 3200MT/s, dual rank, 128GB RAM total
Dell 3.84TB NVMe Data Center Read Intensive Express Flash, 2.5" with carrier, SFF U.2 AG
Intel X550 quad-port 10GbE BASE-T, rNDC

Option 3: SAS SSD pool + NVMe read/write cache
Dell PowerEdge R740XD server
Chassis with up to 24x 2.5" drives, including 12 NVMe drives
2x Intel Xeon Silver 4210R 2.4GHz, 10C/20T
32GB RDIMM, 3200MT/s, dual rank, 128GB RAM total
8x Dell 1.92TB SAS SSD, Read Intensive, 12Gbps, 512, 2.5" hot-plug AG drive, 1 DWPD, for the zpool
3x Dell 960GB NVMe Data Center Read Intensive Express Flash: 2 for read cache, 1 for write cache
HBA330 controller adapter
Intel X550 quad-port 10GbE BASE-T, rNDC
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
I can't speak to the SuperMicro build but I do have an R740xd with 16 x 15.36TB Micron 9300 NVME drives. Based on my experience with this model as well as flash purchasing in general, I do have some suggestions.

  1. Skip the SAS and SATA SSDs entirely and go with all NVME. NVME drives, especially enterprise level, are either similarly priced or cheaper than comparable SAS and SATA based flash, so for this build at least, there's little to no reason to go with anything other than NVME.
  2. If your plan is to eventually use all 24 bays, make sure you get two of the 16X PCI Express expander cards from Dell or two of the 9400 "tri-mode" internal Broadcom HBAs. The maximum number of PCI Express lanes that you can get in this chassis from Dell is 96. From the factory, Dell configures this backplane with four cables connected to a pair of dual port 16X PCI Express expander cards, which gives you a total of 32 lanes max for the NVME drives.
  3. Skip the read cache. The NVME drives are so fast, you won't see any appreciable benefit from almost anything else you might conceive of. Your mileage may vary depending on your workloads, but unless you are performing a ridiculous amount of synchronous writes, you can also skip the SLOG device and just write directly to the NVME drives. My implementation is being used for extremely high IO VM storage and so we do have a set of Intel Optane drives for our SLOG.
  4. TrueNAS core 12.0 is a must. The nvd driver in FreeNAS 11.3U5 gave us troubles when we tried to use more than 12 NVME drives.
  5. Another consideration is that the nvd driver, even in TrueNAS core 12.0, doesn't have full NVME hot swap support yet. It's not a show stopper, but until full hot swap support is added, I would suggest adding a hot spare so you can tolerate the failure of an NVME drive without rebooting the server.
  6. This one is more of a personal issue with Dell, more specifically, their storage prices. We currently pay just over $2400 each for the 15.36TB Micron 9300 NVME drives from CDW. Dell's price for that same drive, just with a Dell sticker/firmware, is well over $10K. We can certainly get them down, but we're still looking at $4500 to $5K. If you don't mind dealing with drive warranties yourself, skip the Dell drives and save some money.
  7. Add the internal BOSS boot card. They are relatively cheap and you won't eat two front bay slots for your OS drives.
***EDIT***

One final note: You might be tempted to place all of your NVME drives in a single vdev. That is fine up to a certain number of drives. However, the same rules apply to NVME vdevs/drive pools/RAID arrays that apply to everything else. The bigger the vdev, the greater the chance that a series of failures could cause some serious data loss before you can resilver. In my case, our 16 NVME drives are running in a 2 x 8 RAIDZ2 pool with plans to add a 3rd group of 8 once additional capacity is required.
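To make that layout concrete, here is a rough sketch of the same structure as pool commands. The pool name and nvd device numbers are placeholders I've made up, and on TrueNAS you would normally build this through the web UI rather than the shell:

# Two 8-wide RAIDZ2 vdevs in a single pool (hypothetical device names)
zpool create tank \
    raidz2 nvd0 nvd1 nvd2 nvd3 nvd4 nvd5 nvd6 nvd7 \
    raidz2 nvd8 nvd9 nvd10 nvd11 nvd12 nvd13 nvd14 nvd15

# A third 8-wide RAIDZ2 vdev can be appended later for more capacity
zpool add tank raidz2 nvd16 nvd17 nvd18 nvd19 nvd20 nvd21 nvd22 nvd23

# And the hot spare suggested in point 5 would be attached with
zpool add tank spare nvd24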
 
Last edited:

pratik.x13

Cadet
Joined
Dec 24, 2014
Messages
3
Hi, thanks for the info on your build.
That gives me confidence that this config will work with TrueNAS.

I'll add one more drive as a hot spare (thanks for the heads-up on the hot-swap issue).

The internal BOSS boot card has been added; Dell also recommended it, in RAID 1.

The PCI Express expander cards are in the build, I think; I'll cross-check that.

And yes, the Dell drives cost more than the Micron ones, but I have to go with them because of the payment options.

I've added the full details of the build below.
It would be a great help if you could review it and call out any problematic components.

Components
1 3200MT/s RDIMMs
1 BOSS controller card with 2 M.2 Sticks 480GB (RAID 1), FH
1 PowerEdge R740/R740XD Motherboard
2 Intel Xeon Silver 4210R 2.4G, 10C/20T, 9.6GT/s, 13.75M Cache, Turbo, HT (100W) DDR4-2400
1 iDRAC Group Manager, Enabled
1 Chassis up to 24 x 2.5 Hard Drives including 24 NVME Drives, Max of 8 SAS/SATA, GPU Capable Configuration
1 PowerEdge 2U Standard Bezel
1 Riser Config 9, 3x8, 4 x16 slots
1 PowerEdge R740 Shipping Material
1 Quick Sync 2 (At-the-box mgmt)
1 Performance Optimized
8 32GB RDIMM, 3200MT/s, Dual Rank
1 iDRAC9,Enterprise
5 Dell 3.84TB NVMe, Data Center Read Intensive Express Flash, 2.5in with Carrier SFF U.2 AG Drive
3 2.4TB 10K RPM SAS 12Gbps 512e 2.5in Hot-plug Hard Drive
1 HBA330 Controller Adapter, Low Profile
1 6 Performance Fans for R740/740XD
1 Dual, Hot-plug, Redundant Power Supply (1+1), 750W
2 Jumper Cord - C13/C14, 4M, 250V, 10A (India BIS)
2 Power Cord - C13, 1.8M, 250V, 10A (India)
1 Trusted Platform Module 1.2
1 GPU Ready Configuration Cable Install Kit
1 PE R740XD Luggage Tag
1 Intel X550 Quad Port 10GbE BASE-T, rNDC
1 No Systems Documentation, No OpenManage DVD Kit
1 Power Saving Dell Active Power Controller
1 HS Install Kit,GPU Config,No cable
1 ReadyRails Sliding Rails With Cable Management Arm
1 No RAID
Software
1 iDRAC,Factory Generated Password
1 No Operating System
1 UEFI BIOS Boot Mode with GPT Partition
1 OpenManage Enterprise Advanced
Service
1 ProDeploy Dell Server R Series 1U/2U - Deployment
1 ProDeploy Dell Server R Series 1U/2U - Deployment Verification
1 Basic Next Business Day 36 Months
1 ProSupport and Next Business Day Onsite Service Initial, 36 Month(s)
 

pratik.x13

Cadet
Joined
Dec 24, 2014
Messages
3
In your build, what CPU and how much RAM are you using?
And in active use, how much of the CPU is utilized?
I'm targeting 12-14 TB of usable space now, with roughly another 14 TB of additional capacity in the future. What pool setup would you recommend?

Thanks
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
If you feel you need the 10K RPM drives, I would add one more so that you can do a RAIDZ2, assuming you want more than one drive's worth of capacity. The reason is that when a drive fails, the most likely time for an additional failure is during the resilver. By adding a fourth drive, you add another measure of protection.
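As a minimal sketch, assuming hypothetical da0-da3 device names for the four SAS drives (the TrueNAS UI builds the same thing), that would look roughly like:

# Hypothetical 4-wide RAIDZ2 for the 2.4TB 10K RPM SAS drives:
# about 2 x 2.4TB of usable space, and it survives any two drive failures
zpool create saspool raidz2 da0 da1 da2 da3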
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
We are using 2 x Intel Xeon Silver 4216 CPUs. CPU utilization is currently 10-15% max. If I'm honest, the system likely has way more CPU than it needs. When I designed this server, I couldn't find much information about FreeNAS/TrueNAS and large NVME, high-IO systems, so I had to guess. I went with these CPUs based on my experience with our previous 60- and 102-bay JBOD FreeNAS designs. Since there is no RAID controller and the CPU does almost all of the work, it's important to make sure the system has adequate CPU. In our case, when a disk in one of our previous systems would fail under high load, the increased CPU load from parity calculations was usually enough to max out the CPUs. Because of that, losing a disk effectively meant losing access to the entire system. As a result, I went over the top on CPU power for the R740xd, as we expected it to handle considerably higher IO than our other systems.

In terms of the disk pool design, capacity is important but you also need to take into account what the purpose and lifetime of this pool will be. Depending on your use case, the pool design can change drastically, even for an NVME pool.
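Just on the capacity side, some rough back-of-the-envelope numbers for the 3.84TB drives in your quote (raw figures, before ZFS metadata, slop space, and the usual advice to leave free space in the pool):

5-wide RAIDZ1 (as quoted): (5 - 1) x 3.84TB ≈ 15.4TB usable, tolerates one drive failure
6-wide RAIDZ2: (6 - 2) x 3.84TB ≈ 15.4TB usable, tolerates two drive failures
Adding a second identical RAIDZ2 vdev later roughly doubles that for the extra ~14TB you mentioned.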
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
Tomorrow I will pick up a 1114S-WN10RT server with an AMD 7402P CPU, 512GB RAM, and some NVMe disks.
I also have an RMS-300 (for SLOG purposes) to use.
I can share some data when I have done some tests. Let me know if you have any test requests.
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
At the moment I only have 4 disks (PM983) in mirrored stripes, and only a 10Gbit NIC (my dual 40G NIC has not yet arrived).
Using iSCSI to a VMware 7 host, I just ran CrystalDiskMark inside the Windows guest.

Sync Standard on Pool, Sync Always on zvol, no SLOG.
[Attached CrystalDiskMark result: iscsi-4xnvme-no-slog.jpg]


Sync Standard on Pool, Sync Always on zvol, with SLOG (RMS-200)
[Attached CrystalDiskMark result: iscsi-4xnvme-with-slog.jpg]



Anything you'd like me to test, let me know.
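In case it helps, the sync settings above correspond roughly to these commands (pool and zvol names are made up; the TrueNAS UI exposes the same options per dataset/zvol):

# Leave the pool-level dataset at the default
zfs set sync=standard tank

# Force all writes to the iSCSI zvol to be synchronous,
# so the SLOG actually gets exercised
zfs set sync=always tank/vmware-zvol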
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
So jealous. I can't get more than 150 to 190 MB/s writes out of my SSDs (NVMe and SATA, that is).
 