WD Red (CMR, non-pro) vs WD Ultrastar DC (SATA)

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Hi!

I’m planning to set up a new pool and started researching what disks to get. I was somewhat surprised to find that the WD Red and WD Ultrastar DC are offered at almost the same price where I live (8 and 12 TB versions). Actually, the Ultrastars are slightly cheaper.

This got me wondering which ones to get. All are, to my best understanding, CMR drives. The Reds are 5400 rpm, which I generally prefer, but the Ultrastars are marketed as higher-reliability drives and have very good noise, heat, and power consumption figures for 7200 rpm units. Also, the 12 TB Ultrastars are helium drives.

I plan to set up either 5x8 TB or 4x12 TB in RAID-Z2, for a total of 24 TB of usable storage.

Based on your experience ...

1) would you go for Reds or Ultrastars and why?
2) if going for the Ultrastars, is it worth going for the 4x12 TB helium setup if it costs 20% more than the 5x8 TB non-helium setup? (i.e. I could get a 6x8 TB setup with 32 TB of usable storage for the same money)
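
For reference, the usable-capacity arithmetic behind those numbers is just (disks - 2) x size for RAID-Z2. A minimal sketch in plain Python, ignoring ZFS overhead and TB/TiB rounding:

```python
# Rough usable capacity for the RAID-Z2 layouts being compared.
# RAID-Z2 spends two disks' worth of space on parity per vdev; this
# ignores ZFS metadata/slop overhead and TB-vs-TiB differences.
def raidz2_usable_tb(disks: int, size_tb: float) -> float:
    return (disks - 2) * size_tb

for disks, size in [(5, 8), (4, 12), (6, 8)]:
    print(f"{disks} x {size} TB RAID-Z2 -> ~{raidz2_usable_tb(disks, size)} TB usable")
# 5 x 8 TB  -> ~24 TB usable
# 4 x 12 TB -> ~24 TB usable
# 6 x 8 TB  -> ~32 TB usable
```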

Thank you in advance!
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
This got me wondering which ones to get. All are, to my best understanding, CMR drives.
Careful making sweeping statements like that... WD RED (non-Pro) is most certainly SMR in the 2-6 TB sizes, but in the case you're talking about (8-12 TB) you're right: all WD REDs in that size range are currently CMR.

Personally I prefer the WD RED option due to the slower spin speed and lower heat load/power consumption.
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Thank you for your input sretalla.

Personally I prefer the WD RED option due to the slower spin speed and lower heat load/power consumption.

I would normally be leaning the same way for the same reasons. I just got curious when I saw the Ultrastar disks for less money (they are usually more expensive). I also understand that the Ultrastar power consumption, heat and noise are significantly better than for other 7200 rpm disks; actually to the extent that they are comparable to the WD Reds. That is, however, based on the specifications. I'd love to hear if somebody has real-world experience to share.

Careful making sweeping statements like that... WD RED (non-Pro) is most certainly SMR in the 2-6 TB sizes, but in the case you're talking about (8-12 TB) you're right: all WD REDs in that size range are currently CMR.

Not really a sweeping statement, was it? The post is about four specific drives and all four are CMR based on the specifications from WD. I'm aware that the 2-6 TB Red (non-Pro) drives with model numbers ending in EFAX are SMR disks, but those were not among the disks discussed.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Not really a sweeping statement, was it? The post is about four specific drives and all four are CMR based on the specifications from WD. I'm aware that the 2-6 TB Red (non-Pro) drives with model numbers ending in EFAX are SMR disks, but those were not among the disks discussed.
It wasn't a personal shot, just wanted to make sure nobody took it in isolation as a hint to go buy a RED of any size with confidence. It's clear if your whole post is taken into account, I agree, and you clearly understand it.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
When I was in your shoes, I went with drives that offered the lowest cost per TB and which also consumed little power. A few years ago, that was the 8-10 TB sweet spot. @farmerpling2 put together a great resource guide with lots of drives to compare; I'd analyze your choices there in the context of how much each setup will cost you to own and operate.

I'm a big fan of helium drives, and the power consumption difference between 7200 and 5900 rpm drives is not that great. The lower power consumption also makes these sorts of drives more viable in tight environments.

It looks like you have 8 slots available in the external SAS enclosure and another 6 inside the 304, for a total of 14 slots. Your motherboard can host 12 SATA drives, so if you can consolidate the drives into the 304, you can drop the HBA and the external SAS enclosure.

I'd use fewer, larger drives, if possible. That way, your present 304 will last for years to come, even if it means rebuilding the pool from time to time to add additional drives (ending up with a more parity-efficient 6-disk Z2 array in the future). As the 304 fills up, you can always consider going to a SATADOM to keep the big slots open for HDDs.

Anyhow, dropping the HBA and the external enclosure should reduce your power needs and clutter appreciably. It's entirely possible that the new setup will consume 50W less than the present one. In the future, that empty PCIe slot might make a good recipient for a 10GbE Chelsio network card.
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
It wasn't a personal shot, just wanted to make sure nobody took it in isolation as a hint to go buy a RED of any size with confidence. It's clear if your whole post is taken into account, I agree, and you clearly understand it.

Thanks for clarifying and sorry for taking it the wrong way.
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Thanks for a really thorough post @Constantin and for linking to @farmerpling2 's Excel sheet. Really appreciate it.

You are of course right about dropping the enclosure. It would be a Good Thing (TM) from nearly every perspective. It's just that I love the convenience of being able to replace disks without having to open up the 304-box. Gotta think about it for a moment.

You also make a good case for spending the 20% extra on the 12 TB helium drives. Expanding with 2 more disks later will move me from 24 TB (4x12 TB RAID-Z2) to 48 TB (6x12 TB), which should last me for a few years. My only worry would be around the practicalities and risks of rebuilding the array. I'd really have to trust my backups. But this would of course be the same problem with an 8 TB drive setup, so not really an argument in either direction.

On the topic of the SATADOM, I decided against it due to the ridiculous cost of these units where I live. My plan is to use velcro to mount the SSD boot drive in the case when I need the full-size spots for data drives. Using velcro was actually a (really great) recommendation from another member of this forum (I just can't remember who at this point).

Wonder if I'm ever going to need 10 Gbit in the foreseeable future*. I'm just starting to see the limitations of my current 100 Mbit core switch (a really nice fanless ProCurve 2610-24). This may be the topic of another thread, but I really don't understand the use case for more than 1 Gbit in a pure home environment at this point. Would be really interesting to hear why others have gone this route and whether it has had any actual impact (I've seen quite a few cases on this forum where it's been more of a cool-to-have feature than an actual performance requirement based on the use cases presented).

* Is this a nobody's-going-to-need-more-than-640K-of-memory type of quote?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
My only worry would be around the practicalities and risks of rebuilding the array.

Yeah. I just did that; it took a good two weeks between badblocks, replicating one way, and replicating back the other way.
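
For anyone curious, the general shape of that workflow is something like this (illustrative only; the pool and device names are placeholders, and badblocks in write mode wipes the target disk, so only point it at empty drives):

```python
import subprocess

NEW_DISKS = ["/dev/da4", "/dev/da5"]     # placeholder device nodes
OLD_POOL, TEMP_POOL = "tank", "scratch"  # placeholder pool names

# 1) Burn-in: destructive write-mode test on each *empty* new disk.
for dev in NEW_DISKS:
    subprocess.run(["badblocks", "-wsv", dev], check=True)

# 2) Replicate the old pool onto temporary storage via a recursive snapshot.
subprocess.run(["zfs", "snapshot", "-r", f"{OLD_POOL}@migrate"], check=True)
subprocess.run(
    f"zfs send -R {OLD_POOL}@migrate | zfs recv -F {TEMP_POOL}/{OLD_POOL}",
    shell=True, check=True,
)

# 3) Destroy and rebuild the pool on the new disks (not shown), then
#    replicate back the same way with send and recv pointing the other way.
```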

Personally I’d build 6-wide with shucked 8 TB disks, and then you can replace all the 8 TB drives with 16 TB ones in a few years' time, if you like and you need the space. No need to rely on backups that way. You can always replace all disks in a raidz, one by one, and get the greater capacity when all disks have been replaced.
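
If you go that route, the in-place upgrade is basically a loop of zpool replace with autoexpand turned on. A rough sketch (pool and disk names are made up; replace one disk at a time and let each resilver finish):

```python
import subprocess

POOL = "tank"                    # placeholder pool name
REPLACEMENTS = {                 # placeholder old -> new device pairs
    "da0": "da6",
    "da1": "da7",
    # ...one entry per member disk
}

# Let the pool grow automatically once every member has been replaced.
subprocess.run(["zpool", "set", "autoexpand=on", POOL], check=True)

for old, new in REPLACEMENTS.items():
    # Start the replacement; the raidz2 vdev resilvers onto the new disk.
    subprocess.run(["zpool", "replace", POOL, old, new], check=True)
    # Check `zpool status` and wait for the resilver to finish before
    # moving on to the next disk.
    input(f"Resilver onto {new} finished? Press Enter to continue...")

# The extra capacity appears after the last disk has been swapped.
```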

raidz expansion is still too much of a “who knows when” feature.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I really don't understand the use case for more than 1 Gbit in a pure home environment

There are a couple of use cases, but they're a bit niche and the cost to achieve them is arguably too high.

PCs that have only a boot drive, with games and apps loaded from the NAS. Advantage: snapshot protection against ransomware, accidental deletion, and such. Disadvantage: dog slow over 1 Gbit.

PCs that do video and, to a lesser extent, photo editing work directly on files on the NAS, instead of copying them to a local SSD first. No need to copy back and forth to scratch space, but also dog slow over 1 Gbit.

Side note: Nonsensical, all my dogs run way faster than me. Who knows why the phrase “dog slow” exists.

In those use cases, just putting a 2 TB or so SSD into the PC for app/scratch storage is fast and way cheaper than building a 10 Gbit network and storage that can service it. It also continues working when the NAS is under maintenance.

If I were to play around with it, I would likely:
- Wait until the MikroTik mGig/SFP+ switch has become more affordable
- Add a 4 TB SSD mirror to my NAS (affordable by that point as well; we are talking 2-3 years in the future). This would be used for app storage. I don’t have the need to edit video files.
- Add 2.5/5 mGig cards to the PCs
- Run over my existing Cat5e to the basement, with TrueNAS connecting via SFP+. No desire to recable the house with Cat6a; that’ll cost a fortune in labor.

I very likely won’t do anything of the sort, though. I might eventually run 2.5 mGig in the PCs just for the convenience of faster backups, once a switch for that is in the 150 range. But local storage is now already at 5 GB/s sequential and expected to reach 8 GB/s sequential, and it’ll be nice to have that speed for apps: nicer than having them centralized, in my opinion.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I've found the lower latency of 10GbE to have a major impact when running rsync backups. The actual data transfer rate is still VDEV/HDD limited, but all the back-and-forth checking of files to see whether one has been updated is noticeably quicker with 10GbE. Hence also the positive impact of a hot L2ARC on rsync backup performance.
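
To illustrate which part of the job that latency affects (the paths below are made up), a dry run makes rsync do the full per-file comparison without transferring any data; that comparison phase is the latency- and metadata-bound part:

```python
import subprocess

# Hypothetical source/destination; adjust to your own datasets.
SRC = "/mnt/tank/data/"
DST = "backupbox:/mnt/backup/data/"

# --dry-run does the full per-file comparison (the latency-bound part)
# without transferring file contents; --itemize-changes shows what the
# comparison decided for each file.
subprocess.run(
    ["rsync", "-a", "--delete", "--dry-run", "--itemize-changes", SRC, DST],
    check=True,
)
```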

Cost does not have to be great either. Thunderbolt 3 to 10GbE adapters are now below $200 (I'd go with the SonnetTech SFP+ unit for the sake of flexibility), and pre-terminated fiber, used transceivers, and Chelsio network cards are dirt cheap on eBay. Or go DAC if the server is nearby. I'd avoid Cat6 and copper 10GbE until the price point AND the heat issues are resolved. (I have an Aquantia-based copper NBase-T transceiver here.)
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
The cost of Cat6 or fiber is the cost of running it through my home. Because of US construction, there are no "cable channels" (cables are just tacked to wood beams), which means using a set of tools and skills I don't have. The labor is in the thousands.

If I were to ever build a home from scratch, I'd insist on conduits so I can pull my own fiber.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
... that or structured cabling. Same general idea. The benefit of conduit is that broken fiber can be pulled out and replaced.
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Thanks @Yorick and @Constantin for sharing some of the use cases for a 10GbE network in a home setting.
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Personally I’d build 6-wide with shucked 8 TB disks, and then you can replace all the 8 TB drives with 16 TB ones in a few years' time, if you like and you need the space. No need to rely on backups that way. You can always replace all disks in a raidz, one by one, and get the greater capacity when all disks have been replaced.

I'm afraid that the prices for the WD My Book Duo are so high where I live (northern Europe) that it would be cheaper to buy the disks separately :-(

raidz expansion is still too much of a “who knows when” feature.

I've been following this feature closely since it was officially announced, but at the current pace I do not hold much hope. With a little luck, the pace will pick up when the ZFS 2.0 effort has landed. Crossing my fingers ...
 