24 Bay Build - SAS3 or NVMe/SSD Drives?

GhengisT

Cadet
Joined
Feb 20, 2019
Messages
5
Hello,

First time posting (but a long-time lurker). I've been tasked with building a new FreeNAS box for my company. I'm hoping to fill each of these 24 bays with about 980GB/1TB drives, but cannot decide if SAS3 or SSDs would be a better choice for our business needs. This will primarily serve as VMware ESXi storage (5.5) and Windows Server file shares.

My backplane specs, listed below, allow 20 drives to be used as SAS3/SATA3 while the remaining four bays support NVMe/SAS3/SATA3. I believe I read something about creating an NVMe cache pool (which may be overkill). Without intricate knowledge of ZFS, I'm looking for some informed advice on an ideal drive solution that will net around 20TB of storage and offer great performance for production virtual machines.

We've purchased the following hardware:

2U X10DRU-i+ 24-bay 2.5" SAS3 chassis (4x NVMe ports)
* BPN-SAS3-216EL1-N4 24-port 2U SAS3 12Gbps single-expander backplane, supporting up to 20x 2.5" SAS3/SATA3 HDD/SSD and 4x NVMe/SAS3/SATA3 storage devices
* Integrated dual 10GBase-T ports
* Integrated IPMI 2.0 management
2x Intel Xeon E5-2680 v3, 12 cores @ 2.5GHz
256GB DDR4 (16 x 16GB DDR4 REG 2133)
1x LSI 9300-8i 12Gbps HBA (with the expander backplane it will drive all 24 bays)
2x AOC-STGN-i2S dual-port 10GbE NIC
24x 2.5" Supermicro caddies
2x 1000W power supplies (PWS-1K02A-1R Titanium)
 
Joined
Jul 3, 2015
Messages
926
solution that will net around 20TB of storage and offer great performance
Has to be mirrors then: 11 vdevs of two-disk mirrors, plus a couple of hot spares.

You may benefit from using one of those NVMe bays for a SLOG (separate ZFS intent log device), since this is a VMware datastore. In that case you could still have 11 vdevs of two-disk mirrors, with just one hot spare and one NVMe SLOG device.
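As a rough sketch of what that layout could look like (the pool name "tank" and the FreeBSD-style device names below are placeholders, not a prescription for your actual hardware):

```python
# Sketch only: generates the zpool layout described above.
# "tank", da0..da21 (data bays), da22 (spare) and nvd0 (NVMe bay) are placeholders.

data_disks = [f"da{i}" for i in range(22)]   # 22 disks -> 11 two-disk mirrors
spare_disk = "da22"                          # one hot spare
slog_disk = "nvd0"                           # NVMe SLOG in one of the N4 bays

vdevs = []
for a, b in zip(data_disks[0::2], data_disks[1::2]):
    vdevs += ["mirror", a, b]

cmd = ["zpool", "create", "tank", *vdevs, "spare", spare_disk, "log", slog_disk]
print(" ".join(cmd))
```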

Make sure you back it up still.
 
Joined
Jul 3, 2015
Messages
926
but cannot decide if SAS3 or SSDs would be a better choice for our business needs
Not sure what you mean by this, but I'm guessing you mean spinning SAS drives versus SSDs? In that case it would depend on the drives and the budget.
 

GhengisT

Cadet
Joined
Feb 20, 2019
Messages
5
Not sure what you mean by this, but I'm guessing you mean spinning SAS drives versus SSDs? In that case it would depend on the drives and the budget.

I was thinking in terms of the cost/storage ratio and theoretical throughput: SAS3 being 12Gb/s and SATA3 maxing out at 6Gb/s.

Budget-wise, we're expecting to spend another $2000 on drives (less is always better).
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Welcome.

First major questions I want to see answered:
  • What's your budget?
  • What's the volume of data you plan to load on here (I know you said 20TB desired size, but how much needs to be migrated on initial load, and what is the projected growth rate?)
  • Describe the workload you plan to put on this in terms of applications, IOPS, bandwidth, desired latency, etc. Detail is good.
  • How are you going to connect to the hosts (NFS/iSCSI)?
Off the top of my head, I'm going to say that a VMware datastore is a great candidate for an all-flash setup with a pair of hot-swappable NVMe U.2 SLOG devices - but your budget might not allow this.

Some quick notes:

This will primarily serve as VMware ESXi storage (5.5)

You should see if you can upgrade that VMware environment to something current. 5.5 is quite aged now, and VMFS6 brings a decent number of improvements, including automatic free-space reclamation (not to mention bug fixes, security updates, and other feature improvements).

I was thinking in terms of the cost/storage ratio and theoretical throughput: SAS3 being 12Gb/s and SATA3 maxing out at 6Gb/s.

You should be more concerned with the choice of "spinning disk" vs. "SSD" - although, as I stated above, I'm heavily leaning towards SSDs. 22x 1.92TB would give you about 21TB once mirrored, and an all-flash setup will handle being loaded more fully than spinning disks, as you'll be more concerned with "free NAND pages" vs. "free space fragmentation".
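For reference, the arithmetic behind that figure (decimal TB as drives are marketed, ignoring ZFS metadata overhead and the free-space headroom you'd want for block storage):

```python
# Rough usable-capacity estimate for 22x 1.92TB SSDs in two-way mirrors.
drives = 22
drive_tb = 1.92                     # decimal TB, as marketed

mirror_vdevs = drives // 2
usable_tb = mirror_vdevs * drive_tb
print(f"{mirror_vdevs} mirror vdevs, ~{usable_tb:.1f} TB usable before ZFS overhead")
# -> 11 mirror vdevs, ~21.1 TB usable before ZFS overhead
```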
 

GhengisT

Cadet
Joined
Feb 20, 2019
Messages
5
Thanks for the response HoneyBadger!

  • What's your budget?
    • Our budget is about $7000 and we've spent half of that on the hardware listed above.
  • What's the volume of data you plan to load on here (I know you said 20TB desired size, but how much needs to be migrated on initial load, and what is the projected growth rate?)
    • Initially we're probably going to transfer about 8TB. I don't expect it to grow beyond 12TB over the next 12-18 months.
  • Describe the workload you plan to put on this in terms of applications, IOPS, bandwidth, desired latency, etc. Detail is good.
    • We have a mix of virtual machines. MS-Server, MS-SQL, Linux, MySQL, Exchange, and a few Graylog databases. None of these are very IOPS intensive and latency up to 10ms-20ms would be acceptable.
    • In the future, we will be deploying several of these FreeNAS boxes to our remote datacenters that service our customers, and low latency is more important there. Those environments are MS-Server, MS-SQL, IIS, .NET applications, MySQL, and MariaDB. We have load balancers that spread the load between web servers, and IOPS on the SQL servers are < 20% at any given time.
  • How are you going to connect to the hosts (NFS/iSCSI)?
    • iSCSI over four 10GBE links, uplinked to a pair of fiber switches. ESXi hosts would have dual uplinks to the fiber switches.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Our budget is about $7000 and we've spent half of that on the hardware listed above.
The good news is that all you need is drives; the bad news is that the remaining $3500 (I'm assuming USD) isn't nearly enough to go all-flash here.

Initially we're probably going to transfer about 8TB. I don't expect it to grow beyond 12TB over the next 12-18 months.
My rule of thumb when projecting growth is to figure out what growth rate I expect, and then double it. That would put you at 16TB total, assuming you didn't already do that. If you size for 20TB usable, this puts you at 80% full, which is quite likely in the "too full" category for running from spindles.
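Spelling out that rule of thumb with the numbers from this thread:

```python
# Rule of thumb: take the growth you expect, then double it.
initial_tb = 8                           # initial migration
expected_tb = 12                         # expected size in 12-18 months
projected_tb = initial_tb + 2 * (expected_tb - initial_tb)   # doubled growth -> 16 TB

usable_tb = 20
print(f"{projected_tb} TB on {usable_tb} TB usable = {projected_tb / usable_tb:.0%} full")
# -> 80% full, likely too full for spinning disks serving block storage
```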

We have a mix of virtual machines. MS-Server, MS-SQL, Linux, MySQL, Exchange, and a few Graylog databases. None of these are very IOPS intensive and latency up to 10ms-20ms would be acceptable. In the future, we will be deploying several of these FreeNAS boxes to our remote datacenters that service our customers, and low latency is more important there. Those environments are MS-Server, MS-SQL, IIS, .NET applications, MySQL, and MariaDB. We have load balancers that spread the load between web servers, and IOPS on the SQL servers are < 20% at any given time.
10ms-20ms is an easy goal here for the majority of your IOPS, but under heavy load a cache miss can briefly spike beyond that. For your lower-latency builds, I would suggest you look at costing out an all-flash solution.

iSCSI over four 10GBE links, uplinked to a pair of fiber switches. ESXi hosts would have dual uplinks to the fiber switches.
Block storage (iSCSI) typically requires more resources than file storage (SMB/NFS) to obtain the same results, and you will also want to lean towards having more free space available in order to ensure consistent performance. You will also need to do some experimentation to get the network configuration you want as far as sending iSCSI traffic over all four ports on the FreeNAS system - you may find that certain configurations result in traffic only returning along two of the four ports. Again, upgrading vSphere/ESXi is strongly recommended here; as of vSphere 6.5 you have the ability to route iSCSI traffic in combination with port binding, should that become necessary in your network environment.

One potential issue here is that drives beyond 2TB in the 2.5" form factor are either very expensive or use shingled magnetic recording (SMR) - Seagate is notorious for having "lied by omission" on datasheets and whitepapers, and people end up buying drives that they believe use conventional recording but in reality use a combination of RAM, NAND, and PMR sections of the platters to mask the flaws of SMR. I'm still not sure if the Seagate/Samsung ST4000LM016 is PMR - I didn't think it was, but there are reports that it uses hybrid platters and "Multi-Tier Caching" (1), and I don't even know what to think. There's also the problem of finding inexpensive 2.5" 7200rpm drives - most are designed for laptops and spin at 5400rpm.

Second post coming up with a proposed drive setup and some caveats.

1: https://www.anandtech.com/show/9489/seagate-backup-plus-portable-4tb-usb-30-drive-review/3
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Well, I'm trying to build this up. The log and cache tiers are easy enough to figure out:

Log: 2x Optane 900p U.2 280GB
USD$270ea - https://www.amazon.com/dp/B0773SRDVP/

Cache: 2x Intel DC S4500 480GB
USD$180ea - https://www.amazon.com/Intel-SSDSC2KB480G701-S4500-480gb-2-5/dp/B074QRXWB7/

That leaves you with only $2600 for drives - lose the cache tier and you've got close to $3000 again, but even with that extra money, there are no brand-new 2TB 7200rpm 2.5" drives available in the $150-per-unit price range - they're all 5400rpm. The cheapest 7200rpm drive I found was a Seagate "Enterprise Capacity" 2TB in the $260 ballpark, which puts you over budget (20 x $260 = $5200) and, more importantly, starts to get you close to SSD pricing, which would give you a massive performance increase.
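The arithmetic, for anyone following along (using the approximate prices quoted above):

```python
# What's left after the proposed SLOG and L2ARC devices (approximate prices).
budget = 3500                # USD remaining after chassis/CPU/RAM
slog = 2 * 270               # 2x Optane 900p U.2 280GB
l2arc = 2 * 180              # 2x Intel DC S4500 480GB

print(f"Left for data drives: ${budget - slog - l2arc}")    # -> $2600

# 20x 7200rpm 2TB "Enterprise Capacity" drives at ~$260 each:
print(f"20 x $260 = ${20 * 260} (over budget)")             # -> $5200
```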

If you go with 5400rpm, it's easily doable with something like the WD Blue Mobile at $80/drive, $1600 total. That might give you the 10ms-20ms performance, but it would probably be at the higher end of that range, and I wouldn't recommend it for the high-performance/low-latency "remote datacenters" you mentioned previously.

If I keep digging I may find another option, but generally speaking, high-capacity high-performance 2.5" drives are a very niche market segment.
 

GhengisT

Cadet
Joined
Feb 20, 2019
Messages
5
Thanks for your help! We decided to go with 24 x 1TB Samsung 860 EVO drives. We'll be purchasing a pair of 16GB SATA DOMs for a mirrored boot drive. This project is primarily a proof of concept at our HQ to determine the viability of a datacenter solution. While we would like to have a larger storage pool to work with, all of our ESXi hosts have local storage that we can continue to utilize.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
We decided to go with 24 x 1TB Samsung 860 EVO drives

I'd still strongly recommend going with 22 of those drives in the capacity tier, keeping the other two as spares, and using a (mirrored) Optane 900p SLOG setup. Since you're running ESXi you'll want to be using sync=always, and a SLOG will avoid the "double write" you'd otherwise make against your SSDs by using the in-pool ZIL. Modern TLC SSDs do have much better endurance than the first-generation ones, but if you can effectively double the write endurance and help protect your sustained write performance, I'd still advise a SLOG.
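To illustrate the "double write" point with a toy number (the daily sync-write volume below is purely an assumption for the sketch, not a measurement of your workload):

```python
# Illustration of the "double write": with sync=always and no SLOG, every
# synchronous write hits the data SSDs twice (once for the in-pool ZIL, once
# when the transaction group commits). A dedicated SLOG absorbs the ZIL copy.
daily_sync_writes_tb = 0.5                       # assumed workload, illustrative only

nand_writes_no_slog = 2 * daily_sync_writes_tb   # TB/day written to the pool SSDs
nand_writes_with_slog = 1 * daily_sync_writes_tb

print(f"No SLOG:   {nand_writes_no_slog} TB/day of flash wear from sync traffic")
print(f"With SLOG: {nand_writes_with_slog} TB/day (endurance effectively doubled)")
```

Setting sync=always is done per dataset or zvol (zfs set sync=always on whatever backs the datastore).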
 

GhengisT

Cadet
Joined
Feb 20, 2019
Messages
5
Thanks for your help HoneyBadger. We were able to allocate a little more $$ towards this project and will be purchasing the Optane drives in March (and possibly the Intel drives you recommended).

The Supermicro chassis arrived the other day and is currently up and running in our lab. I installed FreeNAS on two 16GB SATA DOMs and am now just waiting on our storage drives and the second dual-port 10GbE card to arrive. This is getting exciting!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'm excited to see the results.

Since you've gone all-flash for the vdevs, there's not really any point in having the 2x cache drives (Intel S4500) as your "misses" from RAM will be serviced from flash in the capacity tier. The Optane drives though I would still consider valuable.

Bear in mind that you currently cannot remove vdevs from a pool (under certain circumstances you can, but I would recommend designing so that you don't need to), so only install 22 of those 24 SSDs, and leave the last two NVMe-capable bays open for the Optane drives.

There's likely a fair bit of tuning needed to extract maximum performance from this setup as well - all-flash is very fast to begin with, but you don't want to leave performance on the table. Once you've got the drives in play, let's see about getting a benchmarking/tuning thread up and running.
 

paulg

Cadet
Joined
Apr 11, 2020
Messages
5
How is it going, GhengisT? I am building an NVMe NAS as well, so I am waiting to hear the results.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Run the SAS drives, and get a SLOG and L2ARC - I use the P3700 NVMe. Use mirror vdevs.

I'm running basically the same setup as you, but with 7200rpm SATA drives.

I'm currently running 65 terminal servers hosting around 500 users. It works awesome.

We can do Storage vMotion and saturate a 10Gb card, no problem.

There is some extra tweaking to do, though, to get it to work well.
 

paulg

Cadet
Joined
Apr 11, 2020
Messages
5

I am a newbie at FreeNAS. Do you know if the current version supports NVMe? Have you done any work with video servers?
 