Is an SSHD worth it? Size of HD, about ZFS, and ...

Status: Not open for further replies.

ihwBunny

Dabbler
Joined
Nov 3, 2015
Messages
13
Whatever data you have (in total) should be, in my opinion, at most 25-30% of the usable storage you are about to deploy.

For example, with six 4TB drives you get only around 12 TiB usable... Do you have more than 3 TiB in total? I think yes. I would then consider either eight 4TB drives or six 6TB drives.

The above assumes RAID-Z2. The difference between TB and TiB takes some getting used to, but one has to get used to it.
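A quick back-of-the-envelope sketch of that math (Python, illustrative only; it ignores ZFS metadata, padding and slop-space overhead, which is why the real usable figure lands closer to 12 TiB):

```python
# Rough RAID-Z2 capacity estimate, illustrative only. It ignores ZFS
# metadata, padding and slop-space overhead, so real numbers come out lower.
def usable_tib(drive_count, drive_tb, parity=2):
    data_drives = drive_count - parity            # RAID-Z2 uses two parity drives
    raw_bytes = data_drives * drive_tb * 1000**4  # drives are sold in TB (10^12 bytes)
    return raw_bytes / 1024**4                    # report in TiB (2^40 bytes)

print(f"6 x 4TB RAID-Z2: ~{usable_tib(6, 4):.1f} TiB data space")  # ~14.6 TiB
print(f"8 x 4TB RAID-Z2: ~{usable_tib(8, 4):.1f} TiB data space")  # ~21.8 TiB
print(f"6 x 6TB RAID-Z2: ~{usable_tib(6, 6):.1f} TiB data space")  # ~21.8 TiB
```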

I just posted today elsewhere in the forum that there are downsides to mixing drive types...

I have never worked in a data center, but I have heard that you shouldn't put drives from the same batch in the same box: if it's a bad batch, they may all fail at about the same time, or something like that, so mixing different brands might be a good idea.

You mentioned HD troubleshooting. Is that about the same as what joeschmuck said in the same thread, fixing logic-board or motor failures? Other than that, I can't think of any other HD problems that can be fixed. But it's a factor that should be considered, especially for small quantities, like 4 or 5 HDs, and it's not hard for handy people. I am handy, or at least tend to be, and I'm not afraid to open things up and replace parts, as in general computer hardware troubleshooting.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
The thread I linked to has some discussion about trying to micromanage some risks when making purchase decisions. Various opinions are presented; it is worth reading.

When a large storage unit is deployed in a data center, it is often with a single disk type. Risk is managed by monitoring, rapid replacement of faulty devices, and analysis of failure trends. It is true that sometimes different brands are used, but usually these are disks made specially for, say, IBM, and despite coming from different manufacturers they have very similar characteristics (trivializing a bit, they are custom-made for IBM). Buying retail disks does not offer that luxury. Disks from Western Digital and Seagate might have different performance characteristics. When such a mixture is deployed in RAID-Z2 (or Z1 or Z3), performance can suffer, as it depends on the worst-performing disk for each activity (read, write, seek).
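To put rough numbers on the "worst performing disk" point, here is a tiny illustrative sketch; the per-disk figures are invented, not benchmarks of any particular model:

```python
# Hypothetical sequential-write figures (MB/s) for a mixed six-disk RAID-Z2 vdev.
# Each block is striped across all data disks, so the vdev roughly tracks the
# slowest member rather than the average one.
disk_write_mb_s = {"WD_1": 180, "WD_2": 175, "WD_3": 185,
                   "Seagate_1": 150, "Seagate_2": 150, "Seagate_3": 145}

parity = 2                                  # RAID-Z2
data_disks = len(disk_write_mb_s) - parity
slowest = min(disk_write_mb_s.values())

# Very rough upper bound for streaming writes to this vdev:
print(f"~{slowest * data_disks} MB/s, gated by the {slowest} MB/s disk")
```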

If one values their data, they keep a spare disk, either spinning in the system or already tested on a shelf. And then there are tough questions: who would replace a failed disk when you are gone for a week or longer, who would monitor the system health when you are spending a weekend without Internet access, etc.
 

ihwBunny

Dabbler
Joined
Nov 3, 2015
Messages
13
The thread I linked to has some discussion about trying to micromanage some risks when making purchase decisions. Various opinions are presented; it is worth reading.

When a large storage unit is deployed in a data center, it is often with a single disk type. Risk is managed by monitoring, rapid replacement of faulty devices, and analysis of failure trends. It is true that sometimes different brands are used, but usually these are disks made specially for, say, IBM, and despite coming from different manufacturers they have very similar characteristics (trivializing a bit, they are custom-made for IBM). Buying retail disks does not offer that luxury. Disks from Western Digital and Seagate might have different performance characteristics. When such a mixture is deployed in RAID-Z2 (or Z1 or Z3), performance can suffer, as it depends on the worst-performing disk for each activity (read, write, seek).

If one values their data, they keep a spare disk, either spinning in the system or already tested on a shelf. And then there are tough questions: who would replace a failed disk when you are gone for a week or longer, who would monitor the system health when you are spending a weekend without Internet access, etc.

Thanks so much for the clear explanations.
So sticking with one model is best for my home NAS, then. Someone said to keep a cold spare for a quick fix after a failure. Is that a smart choice? What do you guys do in enterprise environments?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Or a cold spare?
Unless cold spares gain some measure of intelligence and locomotion, they're not going to replace the failing drives by themselves. ;)
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Or a cold spare?
If you had enough resources and plenty of time, you would have built a system that would only succumb to a nuclear attack or a direct hit by an asteroid :D . There is always a trade-off between data loss and required investment.

About having a hot-spare:
* Do you have a SATA port available?
* Does the case have a place for one more disk?
* Would the power supply handle additional load (+30W at each cold start)?
* Can you still pay your electric bill when running one more disk?
* Would adding one more disk (running idle) push the noise level beyond the acceptable limit?
* Would adding one more disk (running idle, but also changing the air flow pattern) be beyond system cooling capacity?
* Can you afford to buy one more disk?
* Would you remember to monitor its health via S.M.A.R.T.? (A minimal check is sketched at the end of this post.)

Some of the above questions sound strange to a home user ;)
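On the S.M.A.R.T. item, a minimal health-check sketch (Python wrapping smartctl from smartmontools; /dev/ada6 is only a placeholder device name, and FreeNAS can of course schedule S.M.A.R.T. tests and e-mail alerts for you instead):

```python
# Minimal S.M.A.R.T. health check for an idle (hot) spare.
# Assumes smartmontools is installed; /dev/ada6 is a placeholder device name.
import subprocess

def smart_healthy(device: str) -> bool:
    # 'smartctl -H' prints the drive's overall health self-assessment.
    result = subprocess.run(["smartctl", "-H", device],
                            capture_output=True, text=True)
    return "PASSED" in result.stdout

if __name__ == "__main__":
    device = "/dev/ada6"  # hypothetical spare; adjust to your system
    print(f"{device}: {'OK' if smart_healthy(device) else 'CHECK THE DRIVE'}")
```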
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Unless cold spares gain some measure of intelligence and locomotion, they're not going to replace the failing drives by themselves. ;)
  • No spares. Depending on local store availability and holidays observed by mail order, having no spares might mean that a replacement disk is acquired only after a couple of days. The possibility of a dead-on-arrival (DOA) unit needs to be taken into account.

  • Cold spare. Requires some measure of intelligence and smart hands (at a level comparable to following a recipe for roasting a turkey in an oven with a probe...). It should be tested after purchase (to rule out DOA). If one can afford it, it reduces the chances of data loss compared to having no spares at all.

  • Hot spare. Allows for holidays and weekend trips. See my previous post for some points one might want to consider. If one can afford it, it reduces the chances of data loss compared to having no spares at all or only a cold spare. Obviously no DOA, by definition. However, the need to replace it once it has been pulled into the storage pool has to be taken into account (consider having a cold spare for it :D).
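For completeness, a sketch of how a tested disk gets attached as a hot spare ("tank" and "/dev/ada6" are placeholder names; the FreeNAS GUI can normally do the same thing for you, so take this as an illustration rather than the recommended route):

```python
# Illustrative only: attach an already-tested disk as a hot spare.
# 'tank' and '/dev/ada6' are placeholders; adjust for your own pool and device.
import subprocess

POOL = "tank"         # hypothetical pool name
SPARE = "/dev/ada6"   # hypothetical spare device

# Equivalent to running: zpool add tank spare /dev/ada6
subprocess.run(["zpool", "add", POOL, "spare", SPARE], check=True)

# The spare should now show up as AVAIL in 'zpool status'.
subprocess.run(["zpool", "status", POOL], check=True)
```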
 