8TB SSDs - Are the Samsung QVOs viable for TrueNAS use?

HenchRat

Dabbler
Joined
Nov 27, 2020
Messages
38
I am in the market for six to eight 8TB SAS or SATA SSDs for my TrueNAS, which is currently populated with six 4TB Samsung 860 EVO SSDs. I don't NEED the performance this offers day to day, but I surely do LIKE it, and the lower power consumption and noise is nice. Once or twice a month, I'm processing (reading, copying, writing metadata) a couple terabytes of data over a 10 gig link to a server.

I know that NVMe drives would be cheaper per unit, but SAS/SATA is what I've got. Further, the drives need to be no thicker than 7mm in the Z axis, due to the size of the bays I have, which seems to eliminate many of the less costly HGST units on eBay from consideration.

It seems like the options available are the Samsung PM883s at about $1200 each, the Samsung QVOs at about $750 each, and the Micron 5100 ECO, which seems to have abysmal write performance across the board.

Are the Samsung QVOs a decent bet for NAS use? Reviews strongly suggest that write performance tanks after about 15GB of continuous writes, and I'm wary of both that and the endurance specifications, but that price is very attractive.

It seems like my only sensible option is the PM883s, but I wonder if I'm being too conservative.
 

Belphegor

Dabbler
Joined
Mar 21, 2020
Messages
11
The Samsung QVO drives (once the pseudo SLC cache has been exhausted) and the Micron 5100 ECO (AFAIK no pseudo SLC write cache) are ideal for read access of bulk data but will be slower than other SSDs for write access. If this bothers you, it would be best to skip QLC SSDs altogether and stick to TLC drives. However, those are likely to be much more expensive than the QLC drives on the market.

Only you can decide if the added cost is worth it, given your usual workload. The Samsung enterprise drive you mention has one more feature not present on the consumer drives: power loss data protection.
 
Last edited:

rvassar

Guru
Joined
May 2, 2018
Messages
972
Keep in mind, the major storage players are shifting technology at this point. The 4th gen enterprise NVMe drives (2.5" / 70mm x 12.5mm units @ 15+ TB... Not kidding...) are so fast that traditional hardware RAID controllers are a bottleneck. They are all moving to RAID on silicon to accommodate them. What happens in ZFS software space, I don't know. Just understand that you're going to be hitting the limits of your attachment technology, and possibly hitting some software performance limitations. Lay out your pool with those limits in mind.
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155

We use a considerable number of the 7.68TB Micron 5200 ECO drives for approximately 2PB of storage in a VMware vSAN implementation. We have not seen any issues with them, but the comments about write speed are correct, and their write endurance is significantly lower than that of other drive types.

Having said that, Micron did not build an ECO version of their newer 5300 series, which is only available in Pro and Max versions.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Are the Samsung QVOs a decent bet for NAS use?
Think your use-case through.

If you're going to have a pool of RAIDZ2 vdevs and your data remains static to a large extent once written, they would be a reasonable option for mostly read operations.

If you're running a bunch of VMs with constant writes and re-writes, maybe not so great.
 

HenchRat

Dabbler
Joined
Nov 27, 2020
Messages
38
I thought that perhaps the write speed issues on the Micron ECO or Samsung QVO would be mitigated somewhat by parallel writes across multiple drives. That is, if I had 40 gig of data to write to a 6 disk RAIDz2, I'd be looking at about 10 gig of writes per disk, and I might be able to create a new vdev of, say, 8 disks to further spread that write load.
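A quick back-of-the-envelope sketch in Python of that spreading effect, using the figures above (RAIDZ2 stores the user data plus roughly two disks' worth of parity per stripe, spread over the whole vdev):

```python
def per_disk_write_gb(data_gb: float, disks: int, parity: int = 2) -> float:
    """Approximate GB written per disk when data_gb of user data lands on a
    single RAIDZ vdev: user data plus parity overhead, spread over all disks."""
    total_written = data_gb * disks / (disks - parity)  # data + parity
    return total_written / disks

# 40GB to a 6-disk RAIDZ2: each disk absorbs ~10GB (data share + parity share)
print(per_disk_write_gb(40, 6))   # 10.0
# Widening to an 8-disk RAIDZ2 vdev drops that to ~6.7GB per disk
print(round(per_disk_write_gb(40, 8), 2))
```

This is only the steady-state approximation; ZFS's actual allocation varies with recordsize and padding, but it shows why a wider vdev lightens the per-disk write load.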

It sounds like I would see a marked decrease in write performance over the EVOs I have now, and I'd prefer to avoid that even while I really don't have an empirical sense of my actual write demand. While my workload isn't, day to day, write heavy, it is somewhat write heavy month to month as some of that data is reprocessed (read, modified, and re-written).

This pool is also a target for hourly incremental backups from several client devices, but typical hourly writes are on the order of 30GB total, which doesn't seem to be an issue.

I have VMs on a separate mirrored vdev, so that's not an issue, but thank you for bringing it up.

In the end, the QVOs are only compelling for their economy over the PM883s. I'm not sure enough that the write performance will be acceptable, as I've gotten used to the performance of the EVOs, and don't have a clear sense of whether my current performance is bottlenecked at the 10 gig link or at the vdev level, if at all.

firesyde424, what kind of life are you getting from them in your environment?
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
We currently operate approximately 315 of the 7.68TB Micron 5200 ECO and 5300 Pro drives, plus another 50 or so 1.92TB 5200 and 5300 Max drives. They have been added in groups over the last 2 years. To date, we have had 3 full failures of the ECO drives and 3 failures of the Pro drives with no failures of the Max drives.

While QLC-based drives have worse write performance, especially sequential write performance, by the time you reach 960GB+ capacity most, if not all, enterprise drives will be capable of saturating a SATA bus, even with writes. For our application, that's why we went with the ECO drives initially. Our testing showed very little performance difference at 7.68TB, and the ECO drives were nearly 30% cheaper. The write endurance is rated at 8.4PBW for those drives, and we did not see ourselves exceeding that within a 3-5 year period. For reference, if you convert, it works out to about 0.6 DWPD when spread over the 5-year warranty.
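For anyone who wants to check that conversion, here's the arithmetic, using the figures straight from the post (8.4PBW rating, 7.68TB capacity, 5-year warranty):

```python
def pbw_to_dwpd(pbw: float, capacity_tb: float, years: float) -> float:
    """Convert a petabytes-written endurance rating to drive-writes-per-day."""
    tb_written = pbw * 1000  # PB -> TB
    return tb_written / (capacity_tb * 365 * years)

print(round(pbw_to_dwpd(8.4, 7.68, 5), 2))  # ~0.6 DWPD
```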

As with all things, your use case matters. If you expect to write a considerable portion of the drive's capacity every day, or your endurance requirements extend beyond 5 years, I would suggest not using any QLC-based drive, Micron or Samsung.

We ended up switching away from ECO drives because Micron discontinued the 5200 series and the 5300 series did not have an ECO model. Additionally, pricing had dropped to the point that the 5300 Pro drives were similar in price to the 5200 ECO drives. Going forward, we have stopped purchasing new SATA flash altogether, as the Micron 9300 NVMe drives are now CHEAPER per TB than Micron's SATA drives, removing the major reason for buying SATA to begin with.
 

HenchRat

Dabbler
Joined
Nov 27, 2020
Messages
38
Thanks. That is very helpful, and sounds like endurance isn't an issue for the Micron ECOs. Can you expand upon your comment that "by the time you reach 960GB+ capacity, most, if not all enterprise drives will be capable of saturating a SATA bus, even with writes."?

I don't quite understand how disk capacity relates to write speed, except that larger disks that are less full have more wide open space for sequential writes, but that doesn't seem to be what you mean.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972

Consider that the disks are made up of some number of component flash memory chips tied to a controller that presents their storage cells to your computer as LBAs via SATA... The controllers are engineered to be reusable bits of technology too. To that end, they have multiple channels that allow them to multiplex the storage together and multitask. While ch1 is busy with a task, ch2 is free to accept another, etc... A smaller drive may only use one or two components, where a larger drive may use many more, opening up much more capability for parallelism. The suggestion then is that by the time you get to 960GB with the current generation of flash, you will have enough of those channels that the disk controller can saturate the SATA bus.

This was more obvious on previous generations of SSDs. From memory, the Intel S35x0 exhibited a ~50+% improvement in performance between the 120GB and 480GB models.
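A toy model of that scaling, with made-up die size, channel count, and per-die speed purely for illustration (real numbers vary by controller generation):

```python
def est_write_mb_s(capacity_gb: int, die_gb: int = 120, channels: int = 8,
                   mb_s_per_die: int = 70, link_mb_s: int = 550) -> int:
    """Illustrative only: write speed grows with the number of flash dies the
    controller can drive in parallel, until the channel count or SATA link caps it."""
    dies = max(1, capacity_gb // die_gb)
    parallel = min(dies, channels)       # can't exceed the controller's channels
    return min(parallel * mb_s_per_die, link_mb_s)  # ...or the SATA link

for cap in (120, 480, 960):
    print(cap, est_write_mb_s(cap))
# small drive: few dies, slow writes; by 960GB the SATA link is the limit
```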
 

HenchRat

Dabbler
Joined
Nov 27, 2020
Messages
38

That makes sense and your explanation was very clear. What I still don't quite track is how that relates to the specification for write speed for a given drive. Wouldn't the parallelism you refer to already be accounted for in coming up with the write performance numbers for a given drive?

For example, if a Brand X SSD has a write speed of 400MB/s, wouldn't that already account for any performance benefit found through multiplexing flash chips to the on-disk controller? And wouldn't it require 2 drives to saturate that 6Gb/sec SATA bus at full tilt boogie?

I guess this sort of suggests an answer to my question about parallel writes across multiple drives. If I have 6 drives with a max cache-exhausted write speed of 170MB/sec (870 QVO), I'd need 4.4 drives to saturate the 6Gb/sec SATA bus, assuming that the write workload is more or less distributed across all drives in a vdev. Assuming 6 of those drives in a 6 disk RAIDz2, I should be tickling the upper limits of the SATA bus even in a worst case write scenario.
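That arithmetic can be sanity-checked quickly. One caveat: 6Gb/s is SATA's raw line rate; 8b/10b encoding leaves about 600MB/s usable, which pulls the drive count down a bit from 4.4 (this still assumes, as above, some shared ~6Gb/s limit downstream of the drives):

```python
raw_gbps = 6.0
usable_mb_s = raw_gbps * 1000 / 8 * 0.8   # 8b/10b encoding: ~600 MB/s usable
qvo_steady_write = 170                     # 870 QVO cache-exhausted write, MB/s

drives_to_saturate = usable_mb_s / qvo_steady_write
print(usable_mb_s, round(drives_to_saturate, 1))  # 600.0, ~3.5 drives
```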
 

rvassar

Guru
Joined
May 2, 2018
Messages
972

I'm not the original author, but the "saturation even for write" comment is likely just a nod to the fact that writing to flash memory is slower than reading, because of the erase/remap game the controller performs behind the scenes. I generally avoid taking anything the marketing guys say as gospel. Many of them will publish a peak write speed for the controller, failing to note the variation between individual models. If you use the actual technical data specific to the model, yes.

Keep in mind you can have bottlenecks at other points that can wreck your assumptions. That's why building performant systems is so much detail work.
 