Some questions about WD Green drives and the capacity percentage of volumes

Status
Not open for further replies.

tmhw2024

Dabbler
Joined
May 16, 2014
Messages
28
Hello guys, I'm a happy FreeNAS user and I just upgraded from 9.0 to 9.2.1.5, and I saw that there is now some kind of check on the percentage of occupied space on disks.
One of my drives is 95% full; where is the problem? Should I not fill the disk that much?

My other question is about my WD Green disks. I have FreeNAS installed on a USB stick, but I read something about the load cycle in this post: http://forums.freenas.org/index.php?threads/wd-green-load-cycle-question.16912/

My question is: is there something I have to set?
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
I can't find a place in the GUI that displays a "kind of check on the percentage of occupied space on disks." Can you tell me where in the WebGUI you found this and/or attach a screenshot?

I don't know about your WD Green load cycle question.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Allow me to answer your questions with a question:

Did you do any research? All the basic documentation, and plenty of threads here, mention that you must never fill a ZFS pool, with the common recommendation being no more than 80% used space for typical home/office use.
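
If you want to check this from the shell rather than the GUI, something along these lines shows per-pool usage (the CAP column is the percentage in question):

# List every pool with its size, allocated space, free space and capacity;
# keep the CAP column under ~80%.
zpool list -o name,size,alloc,free,cap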

As for the Greens, what is your doubt? Did you read the thread? From what I recall, everything is explained there.
WD Greens park their heads very frequently by default. This can be changed so that they park less often, reducing wear and tear; the change is recommended for server use. Your USB flash drive has absolutely nothing to do with the WD Greens.
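
If you want to see how much parking has already happened, here's a quick check from the shell (assuming the drive shows up as /dev/ada0; substitute your own device):

# SMART attribute 193 (Load_Cycle_Count) is the counter that climbs
# rapidly on WD Greens that park aggressively.
smartctl -A /dev/ada0 | grep -i load_cycle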
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
Ericloewe - Everything going ok today? Your reply seemed a little... heated.

I think that tmhw2024 may have been bitten by the change FreeNAS made in v9.2.1.2 to how it calculates the SIZE column in the "WebGUI -> Storage -> Active Volumes" section (see https://bugs.freenas.org/issues/4419 for more discussion of the change).
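
For reference, the difference comes down to which command's notion of "size" the GUI reports; from the shell the two views look roughly like this ("tank" is just an example pool name):

# zpool list reports the raw pool size, including parity/redundancy overhead:
zpool list tank
# zfs list reports usable space after redundancy, which is smaller on RAID-Z:
zfs list tank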

I just wanted to double-check that was the actual problem he saw before pointing him down that trail though.
 

tmhw2024

Dabbler
Joined
May 16, 2014
Messages
28
@eraser In fact there is no place in the GUI that displays a "kind of check on the percentage of occupied space on disks," and no one said it was written in the GUI. I meant exactly what Ericloewe said, and now I have some alerts about the occupied space.

@Ericloewe I did read the post, but it was hard to understand (I'm not a native English speaker and I don't understand certain tech terms). So do you advise me to change this setting, considering that my NAS is always on?
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
Sounds good. The alerts/displays I remember seeing all reference ZFS "Volumes", not individual "disks". But I might be remembering wrong.
 

tmhw2024

Dabbler
Joined
May 16, 2014
Messages
28
[Screenshot attached: Immagine.png]
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
@eraser In fact there is no place in the GUI that displays a "kind of check on the percentage of occupied space on disks," and no one said it was written in the GUI. I meant exactly what Ericloewe said, and now I have some alerts about the occupied space.

@Ericloewe I did read the post, but it was hard to understand (I'm not a native English speaker and I don't understand certain tech terms). So do you advise me to change this setting, considering that my NAS is always on?

You have to download the utility that changes the drives' settings. You have a couple of options; I suggest you look around for a simple package (there used to be one) that includes it and a few similar tools.
Setting the delay to 300s typically solves the problem.
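
If memory serves, the underlying WD DOS tool is wdidle3.exe, run from a DOS boot disk; the usual invocations look roughly like this:

wdidle3 /R      (report the current idle timer)
wdidle3 /S300   (set the timer to 300 seconds)
wdidle3 /D      (disable head parking entirely)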

I mean no offense, but you absolutely must not fill those pools; otherwise you will be in a lot of trouble. Move data around and keep them below 80%, ideally. Once they're full, it's rarely easy to even delete a single file.
 

tmhw2024

Dabbler
Joined
May 16, 2014
Messages
28
I'm a little busy, so the only day I could do research was yesterday, and now I understand much more!
For the HDD question, I saw that "IntelliPark" is responsible for the fact that WD Green disks park their heads a lot. So I will set the delay to 300s like you said. But can you recommend a specific utility?

For the other question, I read this post on the official Oracle community: https://community.oracle.com/thread/2550071 and they say that the acceptable used percentage of a disk is related to its workload. So if I don't work with my files much, I could keep it at 90-95% with no problem, and in fact, at 95% right now, I have absolutely no problems.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm a little busy, so the only day I could do research was yesterday, and now I understand much more!
For the HDD question, I saw that "IntelliPark" is responsible for the fact that WD Green disks park their heads a lot. So I will set the delay to 300s like you said. But can you recommend a specific utility?

For the other question, I read this post on the official Oracle community: https://community.oracle.com/thread/2550071 and they say that the acceptable used percentage of a disk is related to its workload. So if I don't work with my files much, I could keep it at 90-95% with no problem, and in fact, at 95% right now, I have absolutely no problems.

95% is playing with fire. 90% is debatable, but the recommendation is 80%.

Again, if you reach 100%, you will probably have a hard time even deleting a file.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Also, the Oracle documentation sometimes applies and sometimes doesn't: Oracle's ZFS implementation is different from the open-source ZFS. For the open-source version, 80% is the recommended long-term maximum, with 95% being the point where ZFS's write behavior changes, resulting in a major performance penalty.
 

tmhw2024

Dabbler
Joined
May 16, 2014
Messages
28
I need to know what "playing with fire" means! When Ericloewe says this, it sounds like my HDDs will explode within three days, while in the meantime cyberjock talks only of a performance penalty. Anyway, I decided to bring those two disks down to 90% each, and I find this really astonishing: ext4 and even lowly NTFS don't have this problem.

PS: anyone for the utility?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
95% is when ZFS changes its writing algorithm to make the best use of the remaining free space. Fragmentation skyrockets, and whatever fragmentation you end up with you basically have forever, since you can't defragment ZFS. For most situations you shouldn't fill your pool beyond 80%: your zpool will fragment more and more as the disks have less free space available to lay down new data, and you will also see increased head movement, which will hurt your IOPS.
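
If you want the system to nag you before you get there, here's a minimal sh sketch (the 80 threshold is just the recommendation from this thread):

#!/bin/sh
# Warn about any pool that has passed the 80% mark.
zpool list -H -o name,capacity | while read name cap; do
  pct="${cap%\%}"   # strip the trailing % sign
  if [ "$pct" -ge 80 ]; then
    echo "WARNING: pool ${name} is at ${cap}; expect fragmentation and slower writes"
  fi
done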
 

bestboy

Contributor
Joined
Jun 8, 2014
Messages
198
Sorry, I'm confused by all the "disks" mentioned here. I have to ask: is the 80% limit for pools or for disks/vdevs?
I ask because I have a pool that is 73% full, and its two vdevs are filled to different degrees: the first vdev is 88% full and the second one is 50%. But that is how ZFS's striping distributed the data between the vdevs...

So I was always under the impression that the 80% limit is for pools, but if I'm mistaken, then I need to fix this ASAP.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It's per-pool.
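
If you're curious how full each individual vdev is anyway, zpool iostat can break it down ("tank" is just an example pool name):

# The -v flag adds per-vdev alloc/free lines under the pool totals:
zpool iostat -v tank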
 