scrubbertux
Cadet
- Joined: Feb 1, 2021
- Messages: 3
I'm currently trying to build a new storage server based on TrueNAS and am having some trouble, because the load_cycle_count of the drives increases every 3 minutes, which would lead to a load_cycle_count of 480 per day, or 175,200 per year. I don't think this is intended behaviour.
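Those numbers can be sanity-checked in a few lines of shell. The 600,000 rated load/unload cycles is my assumption from the datasheet linked below, not something I measured:

```shell
# One head load/unload cycle every 3 minutes, around the clock.
per_day=$(( (60 / 3) * 24 ))    # 20 cycles per hour * 24 h = 480
per_year=$(( per_day * 365 ))   # 175200
# Assumption: the Exos X16 is rated for 600,000 load/unload cycles
# over its lifetime (per the datasheet).
echo "per day:  $per_day"
echo "per year: $per_year"
echo "years until 600k: $(( 600000 / per_year ))"
```

At this rate the rated load/unload cycle count would be used up in roughly 3 years.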
But first things first,
I'm using the following hardware & setup:
- CPU: i3 9100
- MB: Supermicro X11SCH-LN4
- Drives: 4x Seagate Exos X16 16TB (ST16000NM001G), 1x Transcend 128GB M.2 SSD
The 4 16TB hdds are directly connected to the motherboard.
What I did so far:
- Changed the HDDs' format to 4K native via SeaChest Utilities
- Executed Spearfoot's burn-in script. It took 8 days and completed without any complications and 0 errors on all 4 drives.
- Note: The load_cycle_count of the drives directly after the burn-in, which completed this morning, was around 60; now it is already around 300 :(
- Note 2: I haven't created any pool or changed any settings so far, just a plain install and the burn-in test.
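As a side note, the counter I'm watching comes straight from `smartctl -A`, and a small awk filter pulls out just the raw value. The attribute line below is a stand-in sample (the values and the device name in the comment are placeholders, not my real output):

```shell
# Pull the raw Load_Cycle_Count value out of `smartctl -A` output.
# On the live system you would run something like:
#   smartctl -A /dev/ada0 | awk '/Load_Cycle_Count/ { print $NF }'
# Here a sample attribute line stands in for the real output.
sample='193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 300'
echo "$sample" | awk '/Load_Cycle_Count/ { print $NF }'   # prints 300
```

Logging this once a minute with a timestamp makes it easy to see the counter climbing in step with the 3-minute wakeups.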
What seems to happen:
The drives unload their heads when there is no access within 2 minutes. This is the default setting, also explained in the very detailed Exos datasheet, which can be found here: https://www.seagate.com/www-content/product-content/enterprise-hdd-fam/exos-x-16/en-us/docs/100845789j.pdf
There seem to be 4 states which have the following results and timings:
| Power Condition Name | Description | Manufacturer Default Timer Value |
| --- | --- | --- |
| Idle_a | Reduced electronics | 100 ms |
| Idle_b | Heads unloaded; disks spinning at full RPM | 2 min |
| Idle_c | Heads unloaded; disks spinning at reduced RPM | 4 min |
| Standby_z | Heads unloaded; motor stopped (disks not spinning) | 15 min |
From the SeaChest Utilities output I know that only the idle_a and idle_b states are enabled with the factory settings; idle_c and standby_z are disabled.
The problem now is that the drives unload their heads 2 minutes after the last operation and go to idle_b (you can clearly hear that), and then something "wakes up" the drive every 3 minutes so it goes back to idle_a, which gives a loud *click* and increases the load_cycle_count counter...
Only disabling S.M.A.R.T. completely for every drive allows the drives to stay in idle_b. So my guess is that the S.M.A.R.T. readings are the cause of the described problem.
What I tried already without any change:
- I tried all the different APM settings for the drives, but that doesn't help, because the drives don't support APM, only EPC
- I also tried different HDD standby settings, including the "Force HDD Standby" checkbox, but with no effect
- I'm aware of this spindown script, but I don't want to do spindowns, so that doesn't help me
- I'm aware of this solved bug, which addresses a closely related issue. But "Force HDD Standby" isn't usable without spindown, and I don't want to use spindown...
I absolutely do not want to spin down the drives, because that would decrease their lifetime, but I think this current behavior will also decrease their lifetime very fast...
What I don't understand is that I'm doing nothing special, nor did I change any default settings, neither in TrueNAS nor on the drives. I'm just using enterprise disks with enterprise hardware and a setup without any changes or modifications.
At the moment I only see two options here:
- Disable EPC/idle_b completely for the drives (but is it usual to need to do this?)
- Disable S.M.A.R.T., but then I can't do regular disk health checks (this shouldn't be the standard either, right?)
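For what it's worth, option 1 could probably be done with the openSeaChest tools (the open-source variant of the SeaChest Utilities I used above). This is only a sketch, not a tested recipe: the device name is a placeholder and the exact flags should be checked against `--help` first:

```shell
# Show the current EPC settings (enabled conditions and timers).
openSeaChest_PowerControl -d /dev/da0 --showEPCSettings

# Disable only the idle_b power condition, so the heads stay loaded
# while idle_a (reduced electronics) keeps working as before.
openSeaChest_PowerControl -d /dev/da0 --idle_b disable

# Or, more drastically, disable the whole EPC feature.
openSeaChest_PowerControl -d /dev/da0 --EPCfeature disable
```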
I would prefer to have the drives go to idle_b when they are not in use, but I would also be OK with having them stay in idle_a.
However, I don't know whether staying in idle_a all the time is good for the drives, as the manufacturer's default has idle_b enabled...
If you need any further info, just let me know! I'm happy about any suggestions or explanations.