Just a dumb storage calc question

Status
Not open for further replies.

bajaking

Dabbler
Joined
Oct 28, 2014
Messages
16
Being the de facto sysadmin for my lab, I should know these things, but I don't. And I should probably spend a couple of hours searching the forums, but I'm lazy. And I figure someone here likes to show off their mastery of the basics at any opportunity, so here you go:

Brand new to FreeNAS. Still new to storage and servers in general.
Bought a new Supermicro box from ServersDirect with FreeNAS 9.2.1 installed.
It came with 16 x 6TB disks. I assigned 2 of them as hot spares. So 14 disks.
I created two volumes because this is how some random internet person said to do it:
9 disks in RAIDZ1, lz compression
5 disks in RAIDZ1, lz compression

In my ignorance, combined with the collective ignorance of other internet people who tell me that Z1 is pretty close to RAID5 in terms of capacity, I would assume I'd get about
~48TB + ~24TB = ~72TB of usable space.
Give or take some space for the OS, swap, etc.
But I only actually get
~40.4TB + ~23.5TB = 63.9TB

Where is my other 10TB?
And please hurl any insults or usable, constructive criticisms this way (still time to reconfig the box).
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
You got lost in the TiB versus TB confusion.

If there were no overhead (and stuff), you would get at most
Code:
((9-1) * 6 + (5-1) * 6) * 1000^4 / 1024^4 = 65.48
72 * 1000^4 / 1024^4 = 65.48
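
The same check in Python, for anyone who wants to play with the numbers (just a sketch of the ideal, zero-overhead case; disk counts and sizes are taken from the post above):
Code:
# Ideal usable space for a 9-disk and a 5-disk RAIDZ1 vdev of 6 TB
# (decimal) drives, expressed in TiB (binary), before any ZFS overhead.
TB = 1000**4   # bytes in a marketing terabyte
TiB = 1024**4  # bytes in a tebibyte

data_disks = (9 - 1) + (5 - 1)    # RAIDZ1 costs one disk of parity per vdev
raw_bytes = data_disks * 6 * TB
print(round(raw_bytes / TiB, 2))  # 65.48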


Unfortunately, the above is not any mastery of the basics, just reading the writings of some other Internet people... :)

P.S. I have bad news for you. Not only do you have to read Wikipedia, but also, depending on the workload, you might not want more than 80% of your filesystem(s) taken up by files... All said and done, you have bought 16 disks, and your storage capacity is what you would have dreamt of getting from 8-9 disks...

P.P.S. Of course it would be the same percentage-wise whether you had bought 1TB or 8TB hard drives...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Did you say insults!?

You worthless piece of slime! How could you NOT know the answers? :D

Ok.. I'll be more gentle.

So 6TB drives only have about 5.25TB of usable disk space each. So 8 x 5.25TB (remember, 1 disk of parity) = 42TB or so. That's less than you are expecting, but slightly more than you are getting. So let's get some info.

Can you post the zpool status, zpool list, and zfs list outputs? Please put them in CODE tags or pastebin. The formatting of the text is important.

Edit: I will say that using disks of that size in a RAIDZ1 is... dangerous. I wouldn't consider it for a second.
 

bajaking

Dabbler
Joined
Oct 28, 2014
Messages
16
Excuse the creative volume names. And thank you for the quick and helpful responses!

zpool status:

Code:
  pool: Large
 state: ONLINE
  scan: none requested
config:

    NAME                                                STATE   READ WRITE CKSUM
    Large                                               ONLINE     0     0     0
      raidz1-0                                          ONLINE     0     0     0
        gptid/82d992e2-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/83502bf4-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/83c19e32-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/84381649-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/84aeb5f8-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/852627cb-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/859c45f5-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/86125726-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/86886c77-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
    spares
      gptid/87e3e4f6-55ca-11e4-b3cc-a0369f45872c.eli    AVAIL

errors: No known data errors

  pool: Small
 state: ONLINE
  scan: none requested
config:

    NAME                                                STATE   READ WRITE CKSUM
    Small                                               ONLINE     0     0     0
      raidz1-0                                          ONLINE     0     0     0
        gptid/caa625c6-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/cb1b8410-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/cb92c11d-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/cc10606c-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
        gptid/cc8597eb-55ca-11e4-b3cc-a0369f45872c.eli  ONLINE     0     0     0
    spares
      gptid/cd7caad2-55ca-11e4-b3cc-a0369f45872c.eli    AVAIL

errors: No known data errors


zpool list:
Code:
NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
Large  49T    6.97T  42.0T  14%  1.00x  ONLINE  /mnt
Small  27.2T  1.20M  27.2T   0%  1.00x  ONLINE  /mnt


zfs list:
Code:
NAME  USED  AVAIL  REFER  MOUNTPOINT
Large  6.19T  36.7T  299K  /mnt/Large
Large/.system  9.26M  36.7T  313K  /mnt/Large/.system
Large/.system/cores  256K  36.7T  256K  /mnt/Large/.system/cores
Large/.system/rrd  256K  36.7T  256K  /mnt/Large/.system/rrd
Large/.system/samba4  7.14M  36.7T  7.14M  /mnt/Large/.system/samba4
Large/.system/syslog  1.31M  36.7T  1.31M  /mnt/Large/.system/syslog
Large/Temp  213M  4.00T  213M  /mnt/Large/Temp
Large/WGS  6.19T  23.8T  6.19T  /mnt/Large/WGS
Small  901K  21.4T  230K  /mnt/Small


I'm just going to throw an RTFM (more carefully) at myself, because I'm sure it's overdue.

zpool list brings false happiness: it reports raw pool size, parity disks included.
zfs list matches what I see in the GUI: 36.7T + 21.4T. Assuming 1TiB ≈ 1.1TB, that also matches what users see when mounting the shares: ~40.4TB + ~23.5TB.
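
The conversion, spelled out (a sketch using the AVAIL figures from zfs list above, so future me doesn't have to re-derive it):
Code:
# zfs list reports binary units (TiB); clients usually display decimal TB.
TiB_to_TB = 1024**4 / 1000**4  # ~1.0995

for name, avail_tib in [("Large", 36.7), ("Small", 21.4)]:
    print(name, round(avail_tib * TiB_to_TB, 1), "TB")  # 40.4 and 23.5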

P.S. Yeah, Z1, I know. But my users all screamed for max space over robustness. I had to fight just for the spares. They swear the NAS is just for sharing files and they'll maintain offline backups. That always works out, right?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, the spares were a waste. Going with RAIDZ2 with no spares is better than RAIDZ1 with spares any day. A spare is nothing but a "ready to go" drive: it provides zero redundancy until it has resilvered completely without error. The problem is that with RAIDZ1 there's a damn good chance you'll never get that spare resilvered completely without error.

You are literally looking at a pool that is probably going to go sideways the first time you actually have to resilver in production. So keep those backups ready; you'll very likely be using them the first time a disk fails. Even 3-disk RAIDZ1s can be dangerous, and you've got a 9-disk RAIDZ1. All I can say is "good luck to you". Any professional file server admin knows that RAIDZ1/RAID5 died five years ago. Yes, 5 years ago. The time to abandon RAIDZ1 was years and years ago.
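
To put rough numbers on why a big RAIDZ1 resilver is so scary, here is the usual back-of-the-envelope URE calculation (a sketch; the 1e-14 bit error rate is a typical consumer-drive spec-sheet figure, my assumption, not something measured on this box, and it treats the resilver as a full-disk read):
Code:
# Chance of hitting at least one unrecoverable read error (URE) while
# resilvering a 9-disk RAIDZ1: all 8 surviving disks must be read, and
# with no parity left, a single URE means losing whatever it hits.
URE_RATE = 1e-14          # errors per bit read (typical consumer spec)
disk_bytes = 6 * 1000**4  # 6 TB drive
surviving_disks = 9 - 1

bits_read = surviving_disks * disk_bytes * 8
p_clean = (1 - URE_RATE) ** bits_read
print(f"P(at least one URE) = {1 - p_clean:.0%}")  # roughly 98%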
 

Dennis.kulmosen

Explorer
Joined
Aug 13, 2013
Messages
96
You said that you could still reconfig the box before going into production, right?
If that's the case, go reconfigure it as 2 vdevs of 8 drives in RAIDZ2 before it's too late.
That would still give you 12 x 6TB of data disks, and your healthy sleep at night back. ;-)
Plus your vdevs will be more even, both in number of drives and in vdev capacity.
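
In fact, the ideal capacity comes out identical once the two spares are folded in, so the RAIDZ2 layout buys the extra safety for free. A quick sketch (same zero-overhead math as earlier in the thread):
Code:
# Ideal usable space (no ZFS overhead) for the two layouts, in TiB.
TB, TiB = 1000**4, 1024**4
layouts = {
    "9 + 5 disk RAIDZ1, 2 hot spares": (8 + 4) * 6 * TB,
    "2 x 8-disk RAIDZ2, no spares":    (6 + 6) * 6 * TB,
}
for name, raw_bytes in layouts.items():
    print(f"{name}: {raw_bytes / TiB:.2f} TiB")  # 65.48 TiB for both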


Sent from my iPhone using Tapatalk
 

bajaking

Dabbler
Joined
Oct 28, 2014
Messages
16
Any professional file server admin knows ...

Ah, and therein lies the rub. Professional I am not -- just a hack dev/dba stuck with every other electron-related responsibility at my academically funded (=poor) workplace. Scary.

I should be able to reconfig as Z2, and I'll go with the aforementioned 2x8 vdev layout. Any advice to the contrary, or any other advice in general, is welcome. Thanks again for the input, folks!
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Just to make sure you understand: you cannot reconfigure on the fly. You have to destroy both pools and create them anew.

It will give you very good practice in recovery...

P.S. I can see that the utilization of your pools allows for keeping a copy of all the data inside the FreeNAS box while practicing restoring from backups ;)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Ah, and therein lies the rub. Professional I am not -- just a hack dev/dba stuck with every other electron-related responsibility at my academically funded (=poor) workplace. Scary.

Right, that's totally fine. I wasn't implying you were a fool or anything. I was trying to politely say that the source of your advice on going with RAIDZ1 shouldn't be trusted, and that anything else he gave you as advice should probably be verified before you use it.
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
I'm guessing the reason some guy said to use RAIDZ1 with 9 and 5 disks is that, with ashift=12, the disk space lost to overhead is minimized at 5, 9, and 17 disks with RAIDZ1.

See here for ZFS overhead stuff:
https://web.archive.org/web/2014040...s.org/ritk/zfs-4k-aligned-space-overhead.html

There are 3 sources of overhead in ZFS: metadata (1/64th of the pool) and the 2 forms of allocation overhead detailed in my link.
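
Those "magic" widths fall straight out of a divisibility check. A sketch (assuming ashift=12, i.e. 4 KiB sectors, and the default 128 KiB recordsize):
Code:
# A RAIDZ1 width wastes the least space to padding when the 32 sectors
# of a 128 KiB record (at ashift=12) divide evenly across the data disks.
RECORD_SECTORS = 128 * 1024 // 4096  # 32

for width in range(3, 20):
    data_disks = width - 1           # RAIDZ1: one parity disk
    if RECORD_SECTORS % data_disks == 0:
        print(width, "disks")        # prints 3, 5, 9, 17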

But otherwise that config is a pretty terrible decision for the multitude of other reasons people have stated.
 

bajaking

Dabbler
Joined
Oct 28, 2014
Messages
16
Okay, so I'm back in (wannabe) sysadmin mode this week, still working on finalizing this configuration. I copied the few TBs of data that a user had put on the 9-drive volume over to the 5-drive volume, then deleted the 9-drive volume, planning to use 8 of those drives for a RAIDZ2 as suggested in Dennis.kulmosen's post. After that, I figured I'd copy the data back over and convert the remaining 6 drives + 2 hot spares into a 2nd 8-drive RAIDZ2.
However, when I try to configure 8 drives for the 1st RAIDZ2 (8x1x6.0TB), I get the "non-optimal" warning.
How non-optimal are we talking? Is this just a polite warning, or is this really a terrible choice?
I capitulate: how would you configure this box to balance these criteria: max capacity, reasonable fault tolerance, performance (r/w speed) not being a concern, and good but not necessarily "5 9s" uptime?

p.s. I'll butter everyone up for sage advice by noting how impressed I am so far with both this forum and the FreeNAS admin web GUI. Looking forward to getting competent with this platform and buying some more FN boxes.
 

Dennis.kulmosen

Explorer
Joined
Aug 13, 2013
Messages
96
The non-optimal warning is only about a small performance penalty, so in your case it should not be a problem. I have a server at a client running a pool with 3 vdevs of 8-drive RAIDZ2 with no problems, and they are editing video and doing compositing on that thing. ;-)
The rule of thumb is to use an even number of data drives in a vdev, plus the parity, and to provide 1GB of RAM per TB of raw storage (for this box: 16 x 6TB = 96TB raw, so roughly 96GB of RAM). And don't go with RAIDZ1 unless you are aware of the risk and can accept the worst outcome of that decision. :smile:
Staying within those initial boundaries should keep your data safe.

Please read the outstanding explanation made by Cyberjock here; in it you will find all the best advice for keeping your data safe, plus an explanation of the risks of not following the rules.
ZFS is so flexible that almost anything can be done, but that is not to say that everything should be done. ;-)
 

bajaking

Dabbler
Joined
Oct 28, 2014
Messages
16
Wow, Cyberjock's PPT is amazing. Wish I'd seen it earlier. My only request (and I suspect it's already been made in the pinned forum thread) is to extend the dos-and-don'ts insight to dataset, share, and user admin. It should be posted as required reading at the top of the manuals download page on the FN site.

Ok, almost done. Another minor question:
When I configure a volume using the ZFS Volume Manager GUI, I see a capacity of 32.74TiB.
After creating the volume, the active volumes list shows it as 30.4TiB available.
I would expect the capacity calculator in the Volume Manager to account for any overhead, so that it matches the actual resulting volume, but I guess not? What am I missing?
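
Here's my attempt to reconcile the two numbers using the overhead math from SirMaster's link (a sketch; I'm assuming ashift=12, 128 KiB records, and the 1/64 metadata reservation, and I may well be missing the GUI's exact formula):
Code:
# 8-disk RAIDZ2 of 6 TB drives: ideal capacity vs overhead-adjusted.
import math

TB, TiB = 1000**4, 1024**4
SECTORS = 128 * 1024 // 4096                   # 32 data sectors per record

ideal = 6 * 6 * TB / TiB                       # 6 data disks -> ~32.74 TiB
parity = 2 * math.ceil(SECTORS / 6)            # 2 parity sectors per row
alloc = 3 * math.ceil((SECTORS + parity) / 3)  # pad to multiple of p+1 = 3
deflate = alloc / (SECTORS * 8 / 6)            # actual vs ideal allocation
usable = ideal / deflate * 63 / 64             # minus 1/64 for metadata
print(round(ideal, 2), round(usable, 1))       # 32.74 vs ~30.6 TiB

That gets within shouting distance of the 30.4TiB shown in the GUI; I assume swap partitions and rounding account for the rest.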

Screen_Shot_2014_11_05_at_3_19_07_PM.png


Screen_Shot_2014_11_05_at_3_22_44_PM.png
 

Dennis.kulmosen

Explorer
Joined
Aug 13, 2013
Messages
96
I am not aware of how the calculation is done, but I only consider it a rough guideline. :smile:


Sent from my iPhone using Tapatalk
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Eh, I'd say put a ticket in for it at bugs.freenas.org if it bothers you. :P
 