Opinion/Anyone Using WD Ultrastar HC530 14TB

NASbox

Guru
Joined
May 8, 2012
Messages
650
I'm thinking of buying a Western Digital Ultrastar DC HC530 14 TB (0F31284) since it seems to be available locally for a decent price.
(I'm wondering if there's any technical reason I should avoid it?)



Use case: I backup my main pool with a single removable VDEV (Hot Swap SATA) by doing a ZFS send on selected datasets.
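For context, a minimal sketch of the snapshot-plus-send cycle this describes. The pool and dataset names (`tank`, `backup`, `data`) and the `PREV_SNAP` variable are placeholders, and `DRY_RUN=1` just prints the zfs commands so nothing runs by accident:

```shell
#!/bin/sh
# Sketch of the "removable data cartridge" flow: snapshot, then zfs send
# to a single-disk backup pool. tank/data and backup are placeholder names.
DRY_RUN=${DRY_RUN:-1}       # default: only print the zfs commands
run() { [ "$DRY_RUN" -eq 1 ] && echo "$@" || "$@"; }

today=$(date +%Y%m%d)
run zfs snapshot "tank/data@backup_${today}"

if [ -n "$PREV_SNAP" ]; then
    # Incremental: send only the delta since the previous backup snapshot
    run sh -c "zfs send -i tank/data@${PREV_SNAP} tank/data@backup_${today} | zfs receive -F backup/data"
else
    # First run: full send
    run sh -c "zfs send tank/data@backup_${today} | zfs receive -F backup/data"
fi
```

Set `PREV_SNAP` to the snapshot name used on the previous run, and `DRY_RUN=0` once the printed commands look right.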

Description/OpEd Article:
https://www.anandtech.com/show/1266...es-ultrastar-dc-hc530-14-tb-pmr-with-tdmr-hdd

Datasheet:
https://documents.westerndigital.co...c500-series/data-sheet-ultrastar-dc-hc530.pdf

IIUC this drive would suck for random access, but should be OK (in fact fairly fast) for large-block sequential workloads.
I'm thinking that it would likely be OK for my backup use case (a glorified homemade data cartridge that is way cheaper than commercial data cartridges), but it may cause trouble if it was put into a multi-drive vdev (such as a home NAS server).

I'd appreciate it if anyone with some experience in the finer details of hard drive specs could confirm whether my understanding is correct and/or add any personal experience/insights. Thanks.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
There's no technological reason (it's CMR, not SMR) and it's not suffering from any overt firmware issues like the early runs of the large (10TB+) IronWolf drives, which needed a firmware update to avoid SATA command timeouts.

HGST has also historically been a good buy both in terms of stability and performance - although as you correctly identify, all HDDs will suck at doing random access, just some will suck more than others.

There are a few users with HC530 drives in various use cases, and they seem happy.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
HGST has also historically been a good buy both in terms of stability and performance - although as you correctly identify, all HDDs will suck at doing random access, just some will suck more than others.

Thanks for the intel... Given the 14TB HC530 or one of the WD Reds
https://documents.westerndigital.co...uct-brief-western-digital-wd-red-plus-hdd.pdf
any idea as to performance in typical home NAS use?
(Light NFS share, CIFS share, a bit of media streaming, system backups; nothing too intense.)
Is a user likely to notice a difference (the HC530 is 7200 RPM vs 5400 RPM for the WD Red), or do seek/access times, caching, etc. even things out?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Others use these drives without issue. Well, the only issue (and I think you can say this about many drives) is that they may not power up if your SATA power connector is not to the current standard (one of the pins is 3.3 VDC).

any idea as to performance in typical home NAS use?
(Light NFS share, CIFS share, a bit of media streaming, system backups; nothing too intense.)
Is a user likely to notice a difference (the HC530 is 7200 RPM vs 5400 RPM for the WD Red), or do seek/access times, caching, etc. even things out?
They are pretty darn fast drives. Would you notice a real-world performance increase? Probably not, but I would expect them to run hotter. I haven't looked up the specs, though; if they pull more watts/amps, then they will run hotter.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Others use these drives without issue. Well, the only issue (and I think you can say this about many drives) is that they may not power up if your SATA power connector is not to the current standard (one of the pins is 3.3 VDC).
Can you please say more about this? Is this a control signal or a power rail? I looked at the power specs for both drives and they only reference +5/+12 VDC. I remember something about a power-up issue with shucked USB drives, but I didn't go into the details since I didn't need to.

They are pretty darn fast drives. Would you notice a real-world performance increase? Probably not, but I would expect them to run hotter. I haven't looked up the specs, though; if they pull more watts/amps, then they will run hotter.
I was actually wondering if there would be a performance bottleneck with random I/O... The op-ed piece indicated something about poor random performance...
From the data sheet:
HC530 SATA models: 8KB transfers, Queue Depth = 1 @ 40 IOPS; max IOPS at Queue Depth = 4
The WD Red data sheet is silent as to IOPS.
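Working those numbers out shows just how lopsided random vs. sequential is on a spinning disk. The ~250 MB/s sequential figure below is an assumption for illustration (roughly what datasheets claim for large sustained transfers on drives of this class), not a quoted spec:

```shell
# 8 KiB random reads at ~40 IOPS vs an assumed ~250 MB/s sequential rate
random_kib_s=$((40 * 8))           # 40 IOPS x 8 KiB = 320 KiB/s random
seq_kib_s=$((250 * 1024))          # ~250 MB/s sequential, approximated in KiB/s
echo "random:     ${random_kib_s} KiB/s"
echo "sequential: ${seq_kib_s} KiB/s"
echo "ratio:      ~$((seq_kib_s / random_kib_s))x"
```

So the same drive is on the order of hundreds of times faster streaming large blocks than servicing small random reads, which is why it suits the backup-cartridge use case.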
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I'm thinking of buying a Western Digital Ultrastar DC HC530 14 TB (0F31284) since it seems to be available locally for a decent price.
(I'm wondering if there's any technical reason I should avoid it?)



Use case: I backup my main pool with a single removable VDEV (Hot Swap SATA) by doing a ZFS send on selected datasets.

Description/OpEd Article:
https://www.anandtech.com/show/1266...es-ultrastar-dc-hc530-14-tb-pmr-with-tdmr-hdd

Datasheet:
https://documents.westerndigital.co...c500-series/data-sheet-ultrastar-dc-hc530.pdf

IIUC this drive would suck for random access, but should be OK (in fact fairly fast) for large-block sequential workloads.
I'm thinking that it would likely be OK for my backup use case (a glorified homemade data cartridge that is way cheaper than commercial data cartridges), but it may cause trouble if it was put into a multi-drive vdev (such as a home NAS server).

I'd appreciate it if anyone with some experience in the finer details of hard drive specs could confirm whether my understanding is correct and/or add any personal experience/insights. Thanks.
I use a pair of 12TB HC-520 drives -- very similar to the model you've selected -- to do exactly what you're contemplating: backup my pool to a single disk. I rotate the pair weekly, with one disk online and the other stored away in my fire-proof safe. I've set up scripts and replication to backup data in the early morning hours, with a scrub running Friday night so that I can swap the drives on Saturdays.

I have no complaints about the performance of these drives.

To make life simpler, I used this datasheet to find the part number for a model without the 'Power Disable' feature. In my case, this was part number 0F30141. For 14TB, you'll want either 0F31278 (4Kn) or 0F31284 (512e).

 

NASbox

Guru
Joined
May 8, 2012
Messages
650
I use a pair of 12TB HC-520 drives -- very similar to the model you've selected -- to do exactly what you're contemplating: backup my pool to a single disk. I rotate the pair weekly, with one disk online and the other stored away in my fire-proof safe. I've set up scripts and replication to backup data in the early morning hours, with a scrub running Friday night so that I can swap the drives on Saturdays.
How are you doing your backups? Are you making a snapshot and then doing a zfs send of the delta since the last backup? That's what I am doing. If you're doing it some other way, I'd love to know what you're doing, if you'd be OK sharing.

I have no complaints about the performance of these drives.

Is there any easy way to benchmark the drive on FreeNAS? I just picked up a drive today and am going to mount it in my removable carrier and burn it in -- which I expect will take about a week. When I'm done, the drive won't have anything on it, so I have a rare opportunity to run any kind of test on it without worrying about data. Anything I can do from the command line?
Are these drives doing mainly sequential I/O (i.e. a straight copy of a delta snapshot), or is there a lot of random I/O (i.e. mirroring dataset(s) in real time)? Based on the specs, they kick ass at sequential I/O for a reasonably priced hard drive.
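On the command-line benchmarking question, here is a read-only sketch that can be run from the FreeNAS shell. `ada1` is a placeholder device name (check `camcontrol devlist` or the GUI for yours); `diskinfo -ct` is FreeBSD's built-in command-overhead and transfer-rate test. Since both commands only read from the disk, they are safe even on a drive holding data:

```shell
#!/bin/sh
# Read-only sequential benchmark sketch for a FreeNAS/FreeBSD shell.
# DEV is a placeholder - substitute your actual disk device.
dev=${DEV:-/dev/ada1}

if [ -r "$dev" ]; then
    # FreeBSD's built-in command-overhead + transfer-rate test:
    diskinfo -ct "$dev"
    # Plain sequential read of a ~1 GiB sample from the start of the disk
    # (on FreeBSD dd, block sizes are lowercase, e.g. bs=1m):
    dd if="$dev" of=/dev/null bs=1m count=1024
else
    echo "device $dev not found; set DEV to your disk"
fi
```

For random I/O you would need something like `fio`, which isn't in the base system; the dd read at least confirms the sequential numbers from the datasheet.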

To make life simpler, I used this datasheet to find the part number for a model without the 'Power Disable' feature. In my case, this was part number 0F30141. For 14TB, you'll want either 0F31278 (4Kn) or 0F31284 (512e).

@Spearfoot, thanks for including this link--I had heard about problems with drives not spinning up, but didn't know exactly why.

This explains everything very clearly, and it's good to know that the simple fix is just to make sure that pin 3 is disconnected/floating.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
How are you doing your backups? Are you making a snapshot and then doing a zfs send of the delta since the last backup? That's what I am doing. If you're doing it some other way, I'd love to know what you're doing, if you'd be OK sharing.
Hmmm... it's complicated.

I have 4 FreeNAS systems (see 'my systems' below); all are FreeNAS-on-ESXi 'All-in-One' systems. I use replication tasks to back up datasets between these servers, with everything landing on the main pool in BANDIT (the 'primary' server). I run nightly rsync jobs (based on scripts) to back up all of the BANDIT datasets to a single pool on the 12TB HC-520 backup drive. The backup pool also has datasets matching the NFS datastores on the other AIO systems: boomer_nfs, brutus_nfs, and bacon_nfs.

On Friday nights I run a scheduled script to shut down the NFS-based virtual machines, copy these NFS datastores to the backup disk using rsync, restart the virtual machines, and then kick off a scrub of the backup disk. The end result of all this madness is that, on Saturday morning, the backup drive contains up-to-date datasets matching those on all of the servers (including datasets unique to each server), is freshly scrubbed, and is ready for me to pull and swap with last Saturday's backup disk from my safe. Then the whole cycle starts over.

It took some work to set up the pool correctly on both backup disks. I wrote some one-off scripts to copy the data over using snapshots and replication. I have about 4TB of data, which is small potatoes compared to real data hoarders, but it took a while to copy. The replication and nightly rsync tasks don't involve much data, seldom more than 100GB or so.

Here's one of the one-off scripts I used when setting up the backup disks (pool name is 'dozer'):
Code:
#!/bin/sh

. /mnt/tank/systems/scripts/host.config

# Replicate selected datasets to dozer

logfile="${logdir}/replicate-to-dozer.log"

tank_datasets="archives backups bandit devtools domains hardware media music opsys photo systools web"

# -f so the first run doesn't complain about a missing log file
rm -f "${logfile}"

for dataset in $tank_datasets; do
  echo "+---------------------------------------------------------------------------------" | tee -a "${logfile}"
  echo "+ $(date): Replicate dataset $dataset from tank to dozer" | tee -a "${logfile}"
  echo "+---------------------------------------------------------------------------------" | tee -a "${logfile}"
  # Snapshot, replicate (pv shows transfer progress), then drop the
  # temporary snapshots on both sides
  zfs snapshot -r "tank/${dataset}@backsnap_${dataset}" 2>&1 | tee -a "${logfile}"
  zfs send -v -p "tank/${dataset}@backsnap_${dataset}" | pv | zfs receive -v -F "dozer/${dataset}" 2>&1 | tee -a "${logfile}"
  zfs destroy "tank/${dataset}@backsnap_${dataset}" 2>&1 | tee -a "${logfile}"
  zfs destroy "dozer/${dataset}@backsnap_${dataset}" 2>&1 | tee -a "${logfile}"
done

 