Only half my storage is available on ZFS

Status
Not open for further replies.

crwoo

Dabbler
Joined
Aug 26, 2016
Messages
14
Hi, I am new on the forums, but I have been using FreeNAS for a few years. Today my problem is: I have attached a screenshot showing I have 13.7 TiB of storage, and it is all used up, with 6.6 TiB used. I have 4x 4TB drives installed, so I should have about 13.7 TiB of usable space. Can someone tell me what my problem is by looking at the picture? Am I missing something, or do you need more info to help me?
Thanks for any response
 

Attachments

  • Freenas.jpg (187.2 KB)

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Hi, I am new on the forums, but I have been using FreeNAS for a few years. Today my problem is: I have attached a screenshot showing I have 13.7 TiB of storage, and it is all used up, with 6.6 TiB used. I have 4x 4TB drives installed, so I should have about 13.7 TiB of usable space. Can someone tell me what my problem is by looking at the picture? Am I missing something, or do you need more info to help me?
Thanks for any response
Do you use snapshots?
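You can check how much space snapshots are holding from the shell; something like this (pool name taken from your screenshot, adjust if it differs):

```shell
# List every snapshot and the space each one pins down.
# "USED" here is space that would be freed by destroying that snapshot.
zfs list -t snapshot -o name,used,referenced -r StorageArray
```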
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
It's likely your snapshots.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
The snapshots tab shows it is using under a MB combined.
Hmmm... post the results of this command (in CODE tags, please):
Code:
zfs list -o name,used,refer,usedsnap,avail,mountpoint -d 1
 

crwoo

Dabbler
Joined
Aug 26, 2016
Messages
14
Hi, here are the results:

Code:
[root@CurtisNas ~]# zpool status                                                                                                   
  pool: StorageArray                                                                                                               
state: ONLINE                                                                                                                     
  scan: scrub repaired 0 in 19h43m with 0 errors on Sun Aug 21 19:43:23 2016                                                       
config:                                                                                                                             
                                                                                                                                   
        NAME                                            STATE     READ WRITE CKSUM                                                 
        StorageArray                                    ONLINE       0     0     0                                                 
          raidz2-0                                      ONLINE       0     0     0                                                 
            gptid/5d0ab2f8-2771-11e6-8205-78e7d17c180a  ONLINE       0     0     0                                                 
            gptid/db7e9fa8-1206-11e6-beb1-78e7d17c180a  ONLINE       0     0     0                                                 
            gptid/dc56b891-1206-11e6-beb1-78e7d17c180a  ONLINE       0     0     0                                                 
            gptid/16e6ddcb-1616-11e6-8a51-78e7d17c180a  ONLINE       0     0     0                                                 
                                                                                                                                   
errors: No known data errors                                                                                                       
                                                                                                                                   
  pool: freenas-boot                                                                                                               
state: ONLINE                                                                                                                     
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat Aug 20 03:45:40 2016                                                         
config:                                                                                                                             
                                                                                                                                   
        NAME        STATE     READ WRITE CKSUM                                                                                     
        freenas-boot  ONLINE       0     0     0                                                                                   
          da4p2     ONLINE       0     0     0                                                                                     
                                                                                                                                   
errors: No known data errors                                                                                                       
[root@CurtisNas ~]# zpool list                                                                                                     
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT                                                   
StorageArray  14.5T  13.7T   856G         -    38%    94%  1.00x  ONLINE  /mnt                                                     
freenas-boot  29.8G   517M  29.2G         -      -     1%  1.00x  ONLINE  -                                                         
[root@CurtisNas ~]# zfs list -o name,used,refer,usedsnap,avail,mountpoint -d 1                                                     
NAME                        USED  REFER  USEDSNAP  AVAIL  MOUNTPOINT                                                               
StorageArray               6.62T   163K      250K   190G  /mnt/StorageArray                                                         
StorageArray/.system       76.4M   151K      198K   190G  legacy                                                                   
StorageArray/Axis           157G   157G         0   190G  /mnt/StorageArray/Axis                                                   
StorageArray/Curtis        6.08T  6.08T         0   190G  /mnt/StorageArray/Curtis                                                 
StorageArray/None           140K   140K         0   190G  /mnt/StorageArray/None                                                   
StorageArray/ServerBackup   390G   390G         0   190G  /mnt/StorageArray/ServerBackup                                           
StorageArray/Shared        5.70G  5.70G      209K   190G  /mnt/StorageArray/Shared                                                 
StorageArray/jails          232K   140K       93K   190G  /mnt/StorageArray/jails                                                   
StorageArray/jails_2        671M   174K         0   190G  /mnt/StorageArray/jails_2                                                 
freenas-boot                517M    31K         0  28.3G  none                                                                     
freenas-boot/ROOT           510M    25K         0  28.3G  none                                                                     
freenas-boot/grub          6.33M  6.33M        1K  28.3G  legacy   
 

crwoo

Dabbler
Joined
Aug 26, 2016
Messages
14
I deleted all my snapshots and it didn't make any difference. They showed usage under a MB, but I don't use them, so I deleted them.
 

crwoo

Dabbler
Joined
Aug 26, 2016
Messages
14
I had two disks fail on me in the past, but I used replace and they resilvered fine. I do not think that is the problem, but I thought I'd throw that out to you. The reason I do not think that is the problem is that it shows 13 TiB as the array size and only ~800 GB remaining.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Strange, and puzzling... have you recently created large filesets or in some other way come close to filling up your pool, then deleted the files?

Also, it may help us if you'll post your system information, per the forum rules.

Have you tried rebooting the server? (Hey! It works for Windoze!) ;)
 

crwoo

Dabbler
Joined
Aug 26, 2016
Messages
14
Recently I made an iSCSI block, but deleted it. Could the files still be somewhere? That could be huge.
I have an HP 380 G6 server with 32 GB RAM and a JBOD RAID controller. Do you want more specific info?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Recently I made an iSCSI block, but deleted it. Could the files still be somewhere? That could be huge.
I have an HP 380 G6 server with 32 GB RAM and a JBOD RAID controller. Do you want more specific info?
It's possible the space for that iSCSI block is still tied up... for some reason. I've never seen this myself, but I've read about others experiencing that kind of problem.

Seriously, try rebooting the server.
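If the iSCSI extent was backed by a zvol, it may still exist as a volume dataset; zvols don't show up in the default dataset listing. You could check with something like this (pool name taken from your output above):

```shell
# List zvols explicitly; a leftover iSCSI-backing volume would appear here.
zfs list -t volume -o name,used,volsize -r StorageArray

# A file-backed extent would instead show up as a large file on the pool;
# a quick per-dataset space summary can help spot it:
du -d 1 -h /mnt/StorageArray
```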
 

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527
The top line under Storage is the total raw capacity of your disks. The second line is the usable capacity of your pool after taking RAID and some other things into consideration. Because you have a RAIDZ2, you only get about 2 drives' worth of space, so 6.6 TB sounds correct.
This post by Cyberjock explains how many parts of ZFS work: https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/
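As a rough back-of-the-envelope check (a sketch that ignores ZFS metadata, reservations, and padding overhead):

```python
# Rough usable-capacity estimate for a 4-disk RAIDZ2 vdev of 4TB drives.
TB = 1000**4    # drives are sold in decimal terabytes
TIB = 1024**4   # ZFS reports binary tebibytes

drives = 4
drive_size_bytes = 4 * TB
parity_drives = 2   # RAIDZ2 spends two disks' worth of space on parity

raw_bytes = drives * drive_size_bytes
data_bytes = (drives - parity_drives) * drive_size_bytes

print(f"raw:    {raw_bytes / TIB:.1f} TiB")   # ~14.6 TiB, close to the 14.5T SIZE in 'zpool list'
print(f"usable: {data_bytes / TIB:.1f} TiB")  # ~7.3 TiB before filesystem overhead
```

The `zpool list` SIZE column counts parity space, which is why it shows ~14.5T while the actual usable space is roughly half that.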
 

Omega

Dabbler
Joined
Dec 12, 2015
Messages
15
4x 4TB in RAIDZ2: ~7 TiB usable sounds sensible; the rest is parity.
 

crwoo

Dabbler
Joined
Aug 26, 2016
Messages
14
I didn't realize that I don't even get half. I thought it was like RAID5. Is this the best way to set up the NAS? If it is, then I guess I will have to add some disks.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
The top line under Storage is the total raw capacity of your disks. The second line is the usable capacity of your pool after taking RAID and some other things into consideration. Because you have a RAIDZ2, you only get about 2 drives worth of space, so 6.6 TB sounds correct.
This post by Cyberjock explains how many parts of ZFS work: https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/
Doooh! That'll teach me not to ask for system info! Thanks for setting us straight! :confused:

Sorry, @crwoo, but your pool is just full...
 

crwoo

Dabbler
Joined
Aug 26, 2016
Messages
14
@Spearfoot, what do you mean by the first part? Was it sarcastic? I can post system info if that is important, but I didn't think it was relevant here. I think I'm probably using the wrong RAID mode.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I didn't realize that I don't even get half. I thought it was like RAID5. Is this the best way to set up the NAS? If it is, then I guess I will have to add some disks.
RAIDZ2 (and RAID6) uses two disks' worth of parity, whereas RAIDZ1 (and RAID5) uses only one. Yours is a safe configuration: you could have used RAIDZ1, but RAIDZ1/RAID5 is not recommended for 'large' drives such as yours; the chance of a second drive failing during a long rebuild, and losing your entire pool, is too great.

You can expand your pool by adding another vdev: ideally an additional 4 drives matching the ones you have. Alternatively, but less optimally, you could add a pair of drives in a mirrored vdev.

EDIT: Here's another alternative -- but you must have a good backup of your system to do it!

  • Buy two more 4TB disks.
  • Destroy your existing pool and create a new RAIDZ2 pool with 6 x 4TB disks.
  • Restore your data from backup.
This would give you twice the storage space you have now, at the cost of only two more drives.

Good luck!
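The vdev-expansion route sketched above would look something like this from the shell (the device names are placeholders for whatever your new disks enumerate as; on FreeNAS, the Volume Manager in the GUI is the safer way to extend a volume, since it partitions and labels the disks for you):

```shell
# Add a second 4-disk RAIDZ2 vdev to the existing pool.
# da5-da8 are hypothetical device names -- verify yours first with: camcontrol devlist
# NOTE: adding a vdev is permanent; a vdev cannot be removed from the pool later.
zpool add StorageArray raidz2 da5 da6 da7 da8
```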
 

Omega

Dabbler
Joined
Dec 12, 2015
Messages
15
+1 to what Spearfoot said. Beat me to it, not very quick typing on the phone :)
 