Low power MB & CPU

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
LOL, modern hard drives no longer land the head on the platter when spinning down. Instead they rest the heads off the edge of the platter, avoiding damage to them.

Hard drives haven't landed heads on platters for many years. Red herring.

LOL, modern hard drives no longer land the head on the platter when spinning down. Instead they rest the heads off the edge of the platter, avoiding damage to them. They are designed for over 300,000 spin cycles.

No, they are /rated/ for over 300,000 spin cycles, but that does not mean they will survive 300,000 spin cycles or even half of that. The problem you are going to run into is that statistically, increased load cycle count correlates with a higher rate of failure in drives. This is bad for array operations. Additionally, you do not get staggered spinup when spindown is configured, so the PSU has to be able to endure the simultaneous spinup current for however many drives you have. This is generally not good. Finally, protocols such as iSCSI have short timeout values that can wreak havoc when an array appears to be nonresponsive.

Storage array vendors generally advise that spinning down an array shouldn't be done frequently, if at all. Perhaps it would be okay if you were only going to bring it online once a day for backups or something like that, but even there it is not advisable.
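
If you want to see how much of this wear your own drives have already accumulated, the SMART counters are easy to pull. A minimal sketch, assuming a FreeBSD-style device name like /dev/ada0 (adjust for your system):

```sh
# Print the SMART attribute table and pull out the spin/park counters.
# Attribute 4 (Start_Stop_Count) tracks spinups; 193 (Load_Cycle_Count)
# tracks head load/unload events. Compare RAW_VALUE against the rated
# figures from the drive's datasheet.
smartctl -A /dev/ada0 | egrep 'Start_Stop_Count|Load_Cycle_Count'
```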

There are other risks that have been mitigated.

Oh yeah? What risks are those and how have they been mitigated? Curious professionals want to know.
 
Joined
Jun 15, 2022
Messages
674
I haven't had to issue a Park command in... yeah, it's been decades.

Spin 'em down; for the home user, the drives will last far longer. For the small office, do it on a schedule. For a 24/7 datacenter, keep 'em humming.
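
For the "on a schedule" case, a minimal sketch of what that could look like, assuming a FreeBSD-based system (FreeNAS/TrueNAS CORE); the device names, times, and disk count are made up for illustration:

```sh
# Hypothetical /etc/crontab entries -- device names and times are assumptions.
# Spin the data disks down after close of business on weekdays...
0 20 * * 1-5  root  for d in ada1 ada2 ada3; do camcontrol standby $d; done
# ...and spin them back up before the morning backup window, staggered
# with a short sleep so the PSU never sees all the spinup current at once.
0 6  * * 1-5  root  for d in ada1 ada2 ada3; do camcontrol start $d; sleep 10; done
```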

To @jgreco's points, I've been reading that the spinup issues you mention have been addressed, though I haven't had experience with that yet and will be looking into it in the near future.
---
Enterprise drives can spin for about 5 years/50,000 hours before the bearings start to wear out. The ramps are also good for at least 10,000 load/unload cycles, which over 5 years is about 5 per day, so don't go nuts on spinning them down. Use smartctl to check the number of load/unload cycles the drives are rated for (or look up the specs, that works too).
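
As a rough sanity check on your own spindown rate, you can divide the drive's load cycle count by its power-on time. A sketch, again assuming /dev/ada0 and standard attribute names (raw-value formats vary by vendor):

```sh
# Average load/unload cycles per day of power-on time.
# Column 10 of smartctl -A output is RAW_VALUE; /dev/ada0 is an assumption.
smartctl -A /dev/ada0 | awk '
  /Power_On_Hours/   { hours  = $10 }
  /Load_Cycle_Count/ { cycles = $10 }
  END { if (hours > 0) printf "%.1f load/unload cycles per day\n", cycles / (hours / 24) }'
```

If that number is well above the roughly 5 per day budgeted above, the spindown settings are probably too aggressive.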
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The ramps are also good for at least 10,000 load/unload cycles, which over 5 years is about 5 per day, so don't go nuts on spinning them down.

That sort of duty cycle is much more sustainable.

Use smartctl to check the number of load/unload cycles the drives are rated for (or look up the specs, that works too).

I wouldn't trust that. Remember that the whole aggressive head parking and spindown stuff was precipitated by ENERGY STAR, which pressured the industry to reduce energy consumption, often thoughtlessly. The industry, in reaction, was happy to implement stupid stuff that would wear out equipment sooner and cause them to sell more units. All a drive really needed to do was survive the warranty period without failing, and even then, mostly just in a Windows desktop environment.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Drives being able to withstand spindown doesn't mean they are designed to do so.
 

AlainD

Contributor
Joined
Apr 7, 2013
Messages
145
I haven't had to issue a Park command in... yeah, it's been decades.

Spin 'em down; for the home user, the drives will last far longer. For the small office, do it on a schedule. For a 24/7 datacenter, keep 'em humming.

To @jgreco's points, I've been reading that the spinup issues you mention have been addressed, though I haven't had experience with that yet and will be looking into it in the near future.
---
Enterprise drives can spin for about 5 years/50,000 hours before the bearings start to wear out. The ramps are also good for at least 10,000 load/unload cycles, which over 5 years is about 5 per day, so don't go nuts on spinning them down. Use smartctl to check the number of load/unload cycles the drives are rated for (or look up the specs, that works too).
Even for the home user it can be on a schedule.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm now starting to look for a low-power system for storing 4K video from 10+ security cameras in a remote location, so 100% reliability is required and mirrored OS drives make sense, though a failure isn't life-ending.
Nothing really wrong with using SATA SSDs as boot drives, thus leaving the M.2 slot free for a SLOG drive if that's what you decide on.
 

AlainD

Contributor
Joined
Apr 7, 2013
Messages
145
Just a follow-up question:

Would an A2SDi-2C-HLN4F (a C3338 instead of a C3558) with only 2 cores be sufficient as a backup station (8x 4TB in a RAIDZ2 pool with LZ4 compression + a USB SSD as the TrueNAS boot drive) for:

a) saturating 1 GbE while running one backup process using SMB?
b) running a scrub at close to HDD speed?

In the past I read that SMB uses about one CPU core max.

Any suggestion for compatible memory?
 
Last edited:

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Just a follow-up question:

Would an A2SDi-4C-HLN4F (a C3358 instead of a C3558) with only 2 cores be sufficient as a backup station (8x 4TB in a RAIDZ2 pool with LZ4 compression + a USB SSD as the TrueNAS boot drive) for:

a) saturating 1 GbE while running one backup process using SMB?
b) running a scrub at close to HDD speed?

In the past I read that SMB uses about one CPU core max.

Any suggestion for compatible memory?
Do you mean C3338?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Would an A2SDi-2C-HLN4F (a C3338 instead of a C3558) with only 2 cores be sufficient as a backup station (8x 4TB in a RAIDZ2 pool with LZ4 compression + a USB SSD as the TrueNAS boot drive) for:
Yes, but consider a cheap M.2 NVMe drive as boot drive instead of USB.
a) saturating 1 GbE while running one backup process using SMB?
Likely.
b) running a scrub at close to HDD speed?
Not sure about this one, since a scrub uses all the CPU power it can get, but on a backup server you need not worry about how long a scrub takes to complete or about its performance impact on other activities… since there are basically none on a backup server.
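If you do want to keep an eye on it, starting a scrub and checking its scan rate is a one-liner each; a sketch, with the pool name "backup" as an assumption:

```sh
# Start a scrub on the pool, then check the reported progress.
# "backup" is an assumed pool name -- substitute your own.
zpool scrub backup
zpool status backup
```

The "scan:" line of zpool status shows how fast the scrub is running and an estimated time to completion.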
Any suggestion for compatible memory?
Any second-hand/refurbished DDR4 RDIMM should do, 2133 MHz or higher.
 

AlainD

Contributor
Joined
Apr 7, 2013
Messages
145
A quick follow-up.

I did some tests with an old FreeNAS backup station (6 disks in RAIDZ2, FreeNAS 9.x). After a year of gathering dust it came back quite nicely, and a scrub also ran OK.

The scrub seems to use only one core, and not even to the max. The disks ran at "only" 60 MB/s, but that could be a SATA controller limitation. (Edit: read that as each disk running at 60 MB/s, so 360 MB/s combined.)

As I understand it, there's no use for more than 2 cores in my usage scenario: 8 disks in RAIDZ2 for backup over 1 GbE, plus scrubs.
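
A quick way to double-check those per-disk numbers while a scrub runs is ZFS's own per-device view; a sketch, with "tank" as an assumed pool name:

```sh
# Per-vdev/per-disk read bandwidth, refreshed every 5 seconds.
# "tank" is an assumed pool name -- substitute your own.
zpool iostat -v tank 5
```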
 
Last edited:

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
The scrub seems to use only one core, and not even to the max. The disks ran at "only" 60 MB/s, but that could be a SATA controller limitation. (Edit: read that as each disk running at 60 MB/s, so 360 MB/s combined.)
I'd imagine the controller should be able to do at least 600 MiB/s total throughput, if not more, unless it's SATA 2. That number seems really low.
 

AlainD

Contributor
Joined
Apr 7, 2013
Messages
145
Not necessarily. Seek activity or CPU limits can cause this sort of thing; check with "gstat" and "top" to see how busy the drives and/or CPU are.
gstat shows high busy % for 2 of the 6 drives (80-90%)
top shows CPU busy under 50% (about 40-45%)

--> seems to be disk-limited
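
For reference, those two checks look something like this on a FreeBSD-based system (the flags are common choices, not the only ones):

```sh
# Per-disk busy%, physical devices only, refreshed every second.
gstat -p -I 1s
# CPU usage with system processes and individual threads shown, so a
# single saturated core stands out; idle processes are hidden.
top -SHz
```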
 