Finding an M.2 NVME SSD that fits the home/SMB NAS budget, performance and QA requirements

nickwalt

Cadet
Joined
Oct 2, 2023
Messages
6
I've been reading across the TrueNAS, ServeTheHome and Level1Techs forums, as well as Reddit and YouTube, looking for conversations that move the cheap NVMe discussion forward for those who want either a pure NVMe solution (for reasons that may cover a mix of power use, server design, environmental constraints and performance targets) or a mix that includes large, slow SATA HDDs.

Unfortunately the picture remains murky. For the most part, people tend to stay in their chosen lanes, whether because of limited experience with a particular brand, model or technology, or a little resistance towards other options that might expand their understanding and bring benefits.

I find that the underlying factors in all of these discussions are threefold:
- defining requirements and framing the use case
- budget
- available technology and product

In my case I have decided to build a virtualisation platform that incorporates a virtualised NAS with full access to, and control of, dedicated storage via PCIe and SATA passthrough. It is clear to me that my requirements will benefit from using three different kinds of storage:
- very fast, high-quality consumer NVMe M.2 SSDs for the virtualisation host's VM store
- cheap and slow but reliable, good-quality NVMe M.2 SSDs for the NAS
- cheap and even slower but reliable, good-quality SATA HDDs for NAS backup

The virtualisation platform will likely be based on VMware ESXi because I'll have access to vSphere and vCenter through a VMUG Advantage subscription. On this platform TrueNAS operates purely as a general-purpose NAS unrelated to the host hypervisor (no handing an ARC-accelerated ZFS volume back to the host's VMs). The direct-attached drives need to work with ESXi, which limits my selection to Samsung, Intel, Western Digital, Kingston and a few others that are detected fine (even if not validated by VMware).

The server is an AMD Epyc 7452 with 32 cores and 128GB of DDR4-3200 on a motherboard that has 5 x16 PCIe 4.0 slots and 2 x8 PCIe 4.0 slots. That means I can use cheap carrier cards in the x16 slots, bifurcated to x4x4x4x4, to feed 4 M.2 SSDs directly attached to each card, and install 5 cards to pass 20 separate NVMe drives to TrueNAS. However, in this initial setup I will be using only two 2TB SSDs mounted this way and a single 4TB HDD for backup. From this HDD, critical data will be backed up to Backblaze and, if possible, some data will also go to Proton Drive.
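
To sanity-check the lane budget before buying the carrier cards, a rough tally (a sketch of my planned configuration; the card and drive counts are assumptions, not a parts list) shows a fully populated build still fits within the CPU's 128 lanes:

```python
# Rough PCIe lane budget for the planned Epyc 7452 build.
# Card count, drives per card and lanes per drive are my planned figures, not vendor data.

lanes_available = 128        # single-socket Epyc exposes 128 PCIe 4.0 lanes

x16_slots = 5                # each bifurcated x4x4x4x4 for a quad-M.2 carrier card
drives_per_card = 4
lanes_per_drive = 4

nvme_lanes = x16_slots * drives_per_card * lanes_per_drive   # 80 lanes
other_slot_lanes = 2 * 8                                     # the two x8 slots, if fully used

total = nvme_lanes + other_slot_lanes
print(f"NVMe: {nvme_lanes} lanes, other slots: {other_slot_lanes}, total: {total}/{lanes_available}")
```

Onboard devices (NICs, SATA, M.2) take additional lanes, so the real headroom is a bit smaller, but the point is that 20 directly attached drives fit without a PCIe switch.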

The selection of the fast SSDs is relatively easy. Given that this is not an enterprise use case, where corporations typically demand every possible ROI and smash their hardware, high-quality consumer-grade SSDs will be an excellent choice, as long as the quality is there to hold the products to their advertised specifications. This last point is where the problems arise with consumer-grade hardware.

I have been looking at the cheaper PCIe 3.0 drives and some of the cheaper 4.0 drives, like Team Group, Silicon Power, and the budget models from Samsung, Intel, Western Digital, Kingston and others. Many of the latest products have adequate performance for a NAS serving clients over 1Gbit Ethernet or WiFi, or even 10Gbit Ethernet when backed by ARC and maybe a used enterprise drive providing L2ARC.

However, after reading about the failure rates of Team Group and Silicon Power SSDs on some forums, it is clear that while their specified performance and durability would be fine for the NAS drives, the QA has failed. There is a lot of discussion on this forum about how these cheap NVMe drives don't have the durability for NAS, but it looks more like a QA issue to me. Which raises the question: which brands, series and models do have the QA to assure that their products are up to spec?

In all of the articles and discussions about SSDs across various forums and review sites, not much is said about this. Most discussions about the cheaper drives talk about low endurance or terrible performance once the cache is exhausted under sustained throughput. However, for my use case, certainly, and probably for most homelabs that use a NAS for file serving and streaming, the endurance levels (if up to spec) and the performance are fine: more than adequate for 1GbE or WiFi.
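
As a rough sanity check (a back-of-envelope sketch with assumed figures, not benchmark results), even a pessimistic post-cache write floor for a budget QLC drive comfortably exceeds what a 1GbE link can deliver:

```python
# Back-of-envelope check: can a cheap NVMe drive keep up with the network?
# The post-cache write floor is an assumed figure for a budget QLC drive, not a measurement.

gbe_1 = 1_000_000_000 / 8 / 1e6        # 1GbE line rate in MB/s (~125 MB/s, before protocol overhead)
gbe_10 = 10 * gbe_1                    # 10GbE (~1250 MB/s)
qlc_post_cache_write = 200             # assumed sustained-write floor in MB/s

print(f"1GbE  ~{gbe_1:.0f} MB/s  -> drive headroom {qlc_post_cache_write / gbe_1:.1f}x")
print(f"10GbE ~{gbe_10:.0f} MB/s -> sustained writes would lean on the SLC cache and ZFS caching")
```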

After looking for a good price-to-performance and quality/endurance ratio in a cheap drive, I found the Intel 670p 2TB NVMe M.2 SSD. This review paints an interesting picture of the drive:

It is only QLC and has a rated endurance of 740 TBW, but that doesn't concern me because the specification puts it far in excess of the demand that will be placed on it in this NAS use case. What I am more interested in is the likelihood that Intel puts more into the QA for this range of products, which should translate into a true-to-specification drive. If that is the reality, then this drive might be an excellent candidate for a modest home NAS based on NVMe that serves network clients (the Network part of NAS).
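
To put that 740 TBW rating in context (a rough estimate with an assumed daily write volume, not a measured workload):

```python
# Endurance sanity check for the 2TB Intel 670p (rated 740 TBW).
# The daily write volume is an assumption about a home NAS workload, not a measurement.

tbw_rating = 740            # terabytes written, per the spec sheet for the 2TB model
daily_writes_gb = 50        # assumed average host writes per day

years = (tbw_rating * 1000) / daily_writes_gb / 365
print(f"At {daily_writes_gb} GB/day, {tbw_rating} TBW lasts roughly {years:.0f} years")
# Roughly four decades; even a 5-10x write-amplification penalty leaves plenty of margin here.
```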

What I find interesting in the StorageReview benchmarking and analysis is that the 670p has well-defined behaviour, especially when compared with the Corsair and Sabrent drives:

[StorageReview charts: Intel 670p 2TB random 4K read, VDI boot, VDI Monday login, and VDI initial login results]


These are just synthetic tests but they highlight qualities in the Intel which are desirable for a plodding-along NAS workhorse.

The final essential factor in the selection criteria is the price of the 670p. In Australia right now it is $150 AUD, which makes it less than twice the cost of a Seagate IronWolf 4TB 3.5" heavyweight HDD. It runs cool and is fast enough, especially sitting behind ARC and L2ARC and in front of a single or mirrored IronWolf 4TB.

It would be great to hear about other cheap PCIe 3.0 or 4.0 plodders that have the QA to back up their specifications and form the backbone of a solid home or SMB NAS. Cheers.
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
Amazon is having their Prime sale on now; not sure if anything applies in your area on Amazon AU? You can get decent Samsung and Crucial NVMe drives for cheap. The "cheap cheap" NVMe drives have no DRAM cache and tend to tank really fast under any workload.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I can't comment about most of your questions.

But, you mention passing through SATA drives to your NAS VM. This is a no-no. You need to pass the entire SATA PCIe controller through, for reliable ZFS operation. That also means passing through your PCIe lanes / ports with the NVMe drives too. See this;
 

nickwalt

Cadet
Joined
Oct 2, 2023
Messages
6
I can't comment about most of your questions.

But, you mention passing through SATA drives to your NAS VM. This is a no-no. You need to pass the entire SATA PCIe controller through, for reliable ZFS operation. That also means passing through your PCIe lanes / ports with the NVMe drives too. See this;
That is a great resource, thanks. I'm not sure what the SATA controller is on the Epyc motherboard (Supermicro H12SSL-i) - it may be the CPU itself - but will be sure to pass through the entire controller for TrueNAS to manage/load drivers.

One of the great things about the Epyc platform is its simple PCIe design and BIOS stability. All lanes connect to the CPU and there is no motherboard chipset to complicate things. Before I decided to commit to enterprise hardware I read about BIOS updates on consumer boards changing the underlying IOMMU assignments and groupings.
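
Before installing ESXi I plan to boot a Linux live USB on the box and dump the IOMMU groups, just to confirm the SATA controller and each NVMe device really do sit in their own groups. A minimal sketch (assuming a Linux environment with the standard sysfs layout):

```python
#!/usr/bin/env python3
# Dump IOMMU groups from sysfs (run from a Linux live environment on the target host).
# Every device intended for passthrough should sit in its own group, or share a group
# only with devices that are passed through together.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    raise SystemExit("No IOMMU groups found; check that AMD-Vi / IOMMU is enabled in the BIOS")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = [d.name for d in (group / "devices").iterdir()]
    print(f"Group {group.name}: {', '.join(devices)}")
```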

It would be great to hear people's thoughts about cheap NVME drives for NAS duty.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
That is a great resource, thanks. I'm not sure what the SATA controller is on the Epyc motherboard (Supermicro H12SSL-i) - it may be the CPU itself - but will be sure to pass through the entire controller for TrueNAS to manage/load drivers.

One of the great things about the Epyc platform is its simple PCIe design and BIOS stability. All lanes connect to the CPU and there is no motherboard chipset to complicate things. Before I decided to commit to enterprise hardware I read about BIOS updates on consumer boards changing the underlying IOMMU assignments and groupings.

It would be great to hear people's thoughts about cheap NVME drives for NAS duty.
One thing: some 8- or 16-lane PCIe slots may not be able to bifurcate down to 4 PCIe lanes per link. I don't know about the AMD Epyc.

If a slot can't bifurcate, then a cheap PCIe-to-NVMe card will only support 1 NVMe drive, not 2 (for an 8-lane PCIe slot) or 4 (for a 16-lane PCIe slot). In that case, a PCIe card with a PCIe switch will be required to support more than 1 NVMe drive per card.

As I said, I don't know about the AMD Epyc.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
For M.2 NVMe I have had good results with Samsung 970 Evo Plus. I have read very mixed reviews about newer Samsung models. The "Plus" is important.
 

nickwalt

Cadet
Joined
Oct 2, 2023
Messages
6
The Epyc platform is incredible: 128 PCIe 4.0 lanes, connecting all devices directly to the CPU. This block diagram shows a Supermicro ATX motherboard. All devices occupy their own IOMMU groups, with no sharing, and all slots can be bifurcated.

[Attached block diagram: PCIe lane layout of the Supermicro Epyc ATX board]
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
For M.2 NVMe I have had good results with Samsung 970 Evo Plus. I have read very mixed reviews about newer Samsung models. The "Plus" is important.
Yeah, 990 PROs can be slower than 980 PROs, and 980 PROs can have performance issues under *nix; they also had a firmware bug that could kill the NVMe, though it has been patched.

The Kingston KC3000 is my go-to now for high-performance NVMe drives, or the WD 850 series.

On a budget, the 970 is still a solid drive and can be had cheap these days.
 

nickwalt

Cadet
Joined
Oct 2, 2023
Messages
6
Samsung drives, like Intel and at least the Kingston KC3000, are said to work well with ESXi. Homelabbers have shared their experiences:

I bought a Kingston KC3000 to host the ESXi boot and osdata/vmstore (all on the same drive for now).

The 670ps are going to TrueNAS. Two 670p 2TB drives are cheaper than a single IronWolf Pro 4TB HDD, but carry the same warranty and offer better performance. Possibly comparable longevity.

Interestingly, the Samsung 970 EVO Plus is the only 970 available here, and the 2TB goes for $220 AUD. The KC3000 2TB is $200 and the Intel 670p 2TB is $150.

It is a very different, limited market here in Australia.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It would be great to hear people's thoughts about cheap NVME drives for NAS duty.

970 Evo Plus. We've had field faults with the 980 Pro's (2000 miles to "fix" an SSD isn't worth it).

One thing to remember is that the whole NVMe "thing" is a bit ridiculous to begin with. A mirror pair of 970 Evo Plus read at up to 6000 MBytes/sec and write at up to 3000 MBytes/sec, or roughly 24,000 Mbit/sec, which will handle a 25Gbps SFP28 link. Well, the SSDs are obviously not as fast as their rated speeds, which were calculated based on their observed speed once loaded on a hypersonic rocket and shot downwards towards the earth from the ISS. But the point here is, if you are looking into NVMe, who are you kidding? You will never see the full potential of something like 8x 970 Evo Plus SSDs in a mirror config. The only reasons to use NVMe are lower latency, which may help on certain workloads, and the fact that they may be cheaper than SATA. NVMe sucks because it isn't particularly expandable; you're not allowed to add half a dozen disk shelves at a reasonable cost. But it's very easy to make large SATA arrays. They won't quite have the lowest latency, but ZFS covers for a lot of that anyway.

With any consumer SSD, be aware that it is unlikely to deliver the rated speeds lied about in the specs. My rule of thumb is to plan for maybe a tenth of that under heavy real-world workloads. Please note that I run many hundreds of consumer SATA SSDs as hypervisor storage with great success. It's just a matter of knowing what you're buying. Even the NVMe isn't really that impressive. Also be aware of your endurance. There are stories here on the forums about how we bought a bunch of Intel 535's with the deliberate intent of burning through their endurance (and burning the SSDs out) back when they were about $200 for a 480GB. You can win the game if you know what the deal is.
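
Putting rough numbers on that rule of thumb (a sketch; the one-tenth derating is the planning figure above, not a benchmark):

```python
# Derate the headline SSD throughput and compare against network link speeds.
# The 1/10 factor is the planning rule of thumb above, not a measured result.

rated_write_mb_s = 3000      # headline write speed for a mirror pair of 970 Evo Plus
derate = 0.1                 # plan for roughly a tenth under heavy real-world load

effective_mb_s = rated_write_mb_s * derate
effective_gbit_s = effective_mb_s * 8 / 1000

print(f"Planning figure: ~{effective_mb_s:.0f} MB/s (~{effective_gbit_s:.1f} Gbit/s)")
print("Still saturates 1GbE and roughly fills 2.5GbE; the 25GbE case needs the headline numbers.")
```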
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
980 PRO's Linux performance

Good points @jgreco, and dead on! Everyone in other forums wants to jump on PCIe 5.0 NVMe drives and upgrade their platforms for them, all for the e-peen bragging rights of something they cannot really sustain or even use day to day.

I got the KC3000 because the price was right and reviews had them solid. I already own 2 x 980 PROs, and since I run Linux as my main OS I saw some of the random performance issues, so into my TrueNAS they went as a mirror for my VMs, and for that job they work fine! I tend to focus more on the speeds once the DRAM is empty and the SLC cache is drained, and on how much cache each drive has, to try and keep reasonably good performance for those large files.
 

nickwalt

Cadet
Joined
Oct 2, 2023
Messages
6
Great points. I guess we could say that at a certain point both homelabbers and business/enterprise users will do some degree of cost/risk/benefit analysis.

In my case the availability of NGFF SATA (M.2) and regular 2.5" SATA drives is virtually non-existent in consumer stores, and pricing wasn't competitive for what was available. This is why the 670p is interesting at that price point, especially for a mirror setup: not paying too much for unnecessary speed.

I considered two IronWolf HDDs, but their physical size and shorter warranty weren't competitive for my modest requirements.

Great discussion here.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
One thing that would be nice to see is an NVMe expander, potentially with a U.2 backplane. You could take an x16 PCIe 4.0 slot and wire up 8 or more NVMe drives to it. With a U.2 backplane and NVMe M.2 adapters, you get both adequate cooling around the NVMe M.2 cards and the ability to have many more NVMe slots than a single PCIe card provides.

There are some single-slot PCIe cards with more than 4 NVMe M.2 drive bays on them. I've seen ones with 8 NVMe M.2 drive bays, and more. But cooling becomes an issue and, of course, sometimes a neighbouring slot is taken up by the extra NVMe M.2 drives.

You would not get full bandwidth from all drives at once. But, as has been clearly shown, most of the time that is not needed. The exceptions, of course, are ZFS's write transaction groups and scrubs / resilvers.


In my opinion, we need to get out of the mindset of 4 dedicated PCIe lanes per NVMe M.2 drive when we are using them for simple storage. Yes, on a gaming rig, having 2 x 4-lane PCIe 4.0 NVMe M.2 drives in a hardware RAID-0 stripe may make sense. But not for general-purpose NAS storage. (Again, exceptions exist, like VM iSCSI storage...) I mean, in the bad old days we had 2 or 4 IDE channels (each pair of devices shared bandwidth). Using a SATA or SAS controller provides an abstraction from the host side to the storage side, at the potential cost of oversubscribing bandwidth when too many storage device ports are in use.

Heck, even a single PCIe 3.x lane (8 Gbit/s per lane) is faster than SATA III. So putting 16 NVMe drives, at one PCIe 4.0 lane each (16 Gbit/s per lane), on a single x16 PCIe 4.0 slot makes some sense from a general-purpose storage perspective. And PCIe 5.0 has already been implemented by some manufacturers, with PCIe 6.0 released just last year (2022, for future readers).
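
To put rough numbers on that (a sketch using headline line rates and encoding overhead only; real drives, controllers and switches add their own overhead):

```python
# Approximate usable bandwidth per link after line encoding (rounded figures).

def pcie_lane_gb_s(gt_per_s: float) -> float:
    """PCIe 3.0 and later use 128b/130b encoding; returns usable GB/s per lane."""
    return gt_per_s * (128 / 130) / 8

sata3 = 6 * (8 / 10) / 8        # SATA III: 6 Gbit/s with 8b/10b encoding -> ~0.6 GB/s
gen3_lane = pcie_lane_gb_s(8)   # ~0.99 GB/s
gen4_lane = pcie_lane_gb_s(16)  # ~1.97 GB/s

print(f"SATA III    ~{sata3:.2f} GB/s")
print(f"PCIe 3.0 x1 ~{gen3_lane:.2f} GB/s")
print(f"PCIe 4.0 x1 ~{gen4_lane:.2f} GB/s")
print(f"16 drives on one PCIe 4.0 x16 slot: ~{gen4_lane:.1f} GB/s each at one lane apiece,"
      f" or ~{16 * gen4_lane:.0f} GB/s aggregate upstream through a switch")
```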
 

nickwalt

Cadet
Joined
Oct 2, 2023
Messages
6
100% on the need for more NVMe options that cater to home and small-business NAS builders.

Here in Australia it is getting to the point where only HDDs are sold with SATA interfaces.

M.2 SSDs are mostly all NVMe. The selection of 2.5" SATA SSDs is almost zero and prices are not competitive with M.2 NVMe.

Before I understood that bifurcation can only divide a PCIe 4.0 x16 slot down to x4x4x4x4, it seemed logical to want to divide the bandwidth into eight x2 or sixteen x1 links for slower NVMe-based NAS SSDs (especially PCIe 3.0 SSDs). Bifurcation makes even more sense with PCIe 5.0 bandwidth. It is great for bare-metal installations, but even more compelling for virtualised installations, because PCIe NVMe devices present their own individual controllers and can therefore each be passed through to a VM independently. It is a cost-effective and simpler way to present a JBOD to ZFS without involving controllers that require drivers and validation on a host such as ESXi. ESXi 8 understands and talks native NVMe:
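
As a quick way to see this from the hardware side (a sketch assuming a Linux live environment on the box, rather than ESXi's own tooling), each NVMe drive shows up as its own PCIe function and therefore its own passthrough candidate:

```python
# List NVMe controllers and their PCI addresses from sysfs (Linux live environment).
# Each controller is an independent PCIe function, so it can be passed through on its own,
# unlike SATA drives that hang off a shared HBA.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    pci_addr = (ctrl / "device").resolve().name        # e.g. 0000:41:00.0
    model = (ctrl / "model").read_text().strip()
    print(f"{ctrl.name}: {model} at PCI {pci_addr}")
```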

I can imagine a day when ZFS is completely NVMe-native and fully optimised for all the goodness that NVMe offers.

After not finding a clearly trustworthy, cheap and reliable workhorse amongst the less well-known brands - and even some of the better-known ones - the Intel 670p was a standout. For some reason I believe that Intel has developed a product that is as close to specification as you can get in a consumer drive.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
970 Evo Plus. We've had field faults with the 980 Pro's (2000 miles to "fix" an SSD isn't worth it).
It appears that Samsung silently changed the controller for the 970 Evo Plus in such a way that the sustained write speed falls below 1 GB/s once the SLC cache is saturated.

I am currently looking to add a pair of 2 TB SSDs to my TrueNAS box. And since the 980 Pro is only 10 euros more expensive than the 970 Evo Plus, I was wondering if the firmware issues are a thing of the past.

Any thoughts? Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It appears that Samsung silently changed the controller for the 970 Evo Plus in such a way that the sustained write speed falls below 1 GB/s once the SLC cache is saturated.

That happened a long time ago, around the time the 980 came out. I believe it was due to silicon shortages at the time.

And since the 980 Pro is only 10 euros more expensive than the 970 Evo Plus, I was wondering if the firmware issues are a thing of the past.

Unknown to me. I stopped buying the 980 Pro's since all I really need is competent flash storage. I would expect that any firmware issues would have been beaten out of the controllers in the more-than-two-years since I last bought one, but who knows.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
For storage, I'd rather look at U.2/U.3 drives. M.2 has an obvious issue with size. But it will take years before we have statistically significant reports about the reliability of "cheap capacity" NVMe drives such as the Micron 6000 or Solidigm QLC drives.

One thing that would be nice to see, is a NVMe expander, potentially with U.2 backplane. You could take a 16x lane PCIe 4.0 slot and wire up 8 or more NVMe drives to it.
That's called a PCIe switch. Here is the PCIe 3.0 version of your U.2 order:
I haven't found the PCIe 4.0 version yet, though that would only be relevant in practice if one were to go down to one lane worth of bandwidth per drive in the array.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
For storage, I'd rather look at U.2/U.3 drives. M.2 has an obvious issue with size. But it will take years before we have statistically significant reports about the reliability of "cheap capacity" NVMe drives such as the Micron 6000 or Solidigm QLC drives.


That's called a PCIe switch. Here is the PCIe 3.0 version of your U.2 order:
I haven't found the PCIe 4.0 version yet, though that would only be relevant in practice if one were to go down to one lane worth of bandwidth per drive in the array.
Yes, I knew it was a PCIe switch.

What I meant was a PCIe switch WITH the ability to have NVMe drives in M.2 form factor, mounted somewhere.

Part of the intent is to allow each NVMe drive to have 4 lanes, even if there are 8 or more NVMe drives on an x16 PCIe slot. Yes, it is oversubscribed, but for straightforward solid-state storage that should be acceptable. If only a few NVMe drives are active, they get full bandwidth. If attempting a scrub, or a massive read or write, then, well, they don't get full bandwidth...
 