I've got a little mystery with my storage space

Status
Not open for further replies.

Joe Goldthwaite

Dabbler
Joined
Jan 12, 2016
Messages
38
Ok, I've been gone for a while. I kind of got discouraged. I've been trying to upgrade my media server from 20 TB to 28 TB, and it hasn't gone well. The first issue I ran into was that the usable disk space is more or less efficient depending on how many disks are in the array. I had built a RAIDZ2 array with 12 disks, which turned out to be the least efficient RAIDZ2 configuration available.

I fixed that by adding another disk. Now I've got a RAIDZ2 array with 13 disks, which is much more efficient, showing 28.6 TB of space available. I was happy until...

I started copying all my data over from my old 9-disk RAIDZ2 media server. Now both the old media server and the new one have about 17 TB of files. When I look at the available space, the old 20 TB server has 2.3 TB free, while the new 28.6 TB server has only 1.8 TB free.

In summary, 17 TB of files on the old server takes up about 17 TB of space, while 17 TB of files on the new server is taking up 26.1 TB. I have no idea where the extra space is going.

I know I need to reconfigure the array. I'm thinking of adding yet another drive and breaking it down into two 7-disk RAIDZ2 vdevs. I'd really like to understand what's going on here before I do it.
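The part I think I do understand is why the 12-disk layout was so bad. From what I've read (and this is a rough sketch I haven't verified against the pool), RAIDZ2 writes each record in stripe rows with 2 parity sectors per row and then pads the allocation up to a multiple of 3 sectors, so the per-record overhead depends on how many disks are in the vdev. A quick Python estimate for the default 128K record on 4K-sector drives:

Code:
import math

def raidz2_sectors_per_record(ndisks, recordsize=128 * 1024, sector=4096):
    """Rough estimate of sectors allocated per record on a RAIDZ2 vdev.

    Assumes 2 parity sectors per stripe row and padding of the total
    allocation to a multiple of 3 sectors; not verified against the pool.
    """
    data_sectors = recordsize // sector          # 128K / 4K = 32 data sectors
    data_cols = ndisks - 2                       # data columns per stripe row
    rows = math.ceil(data_sectors / data_cols)   # stripe rows needed
    total = data_sectors + rows * 2              # add 2 parity sectors per row
    return math.ceil(total / 3) * 3              # pad to a multiple of 3

for n in (12, 13):
    used = raidz2_sectors_per_record(n)
    print(f"{n} disks: 32 data sectors allocate {used} sectors "
          f"({32 / used:.1%} efficient vs. an ideal {(n - 2) / n:.1%})")


For 13 disks that works out to only a few percent worse than the ideal ratio, so it can't account for anything like 9 TB. Anyway, here's the full output from the new box: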

Code:
JEGNAS% zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
freenas-boot  643M  13.2G  31K  none
freenas-boot/ROOT  629M  13.2G  25K  none
freenas-boot/ROOT/9.10-STABLE-201605240427  621M  13.2G  492M  /
freenas-boot/ROOT/Initial-Install  1K  13.2G  482M  legacy
freenas-boot/ROOT/Pre-9.10-STABLE-201605021851-195856  1K  13.2G  489M  legacy
freenas-boot/ROOT/default  8.43M  13.2G  490M  legacy
freenas-boot/grub  12.7M  13.2G  6.33M  legacy
mediapool  26.1T  1.93T  26.1T  /mnt/mediapool
mediapool/.system  250M  1.93T  244M  legacy
mediapool/.system/configs-f6da24756e2f4dee86c3a9c9fb75829f  4.42M  1.93T  4.42M  legacy
mediapool/.system/cores  236K  1.93T  236K  legacy
mediapool/.system/rrd-f6da24756e2f4dee86c3a9c9fb75829f  236K  1.93T  236K  legacy
mediapool/.system/samba4  827K  1.93T  827K  legacy
mediapool/.system/syslog-f6da24756e2f4dee86c3a9c9fb75829f  1024K  1.93T  1024K  legacy
mediapool/jails  236K  1.93T  236K  /mnt/mediapool/jails

JEGNAS% zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Mon Jun 27 03:45:40 2016
config:

   NAME  STATE  READ WRITE CKSUM
   freenas-boot  ONLINE  0  0  0
    da0p2  ONLINE  0  0  0

errors: No known data errors

  pool: mediapool
 state: ONLINE
  scan: none requested
config:

   NAME  STATE  READ WRITE CKSUM
   mediapool  ONLINE  0  0  0
    raidz2-0  ONLINE  0  0  0
    gptid/baa5b1e5-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/bb933277-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/bcfa581f-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/be5b0e4b-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/bfbbfd81-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/c1157de1-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/c279d194-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/c3dc64d3-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/c544a7d8-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/c6acad6c-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/c80e2f9c-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/c973f3e4-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0
    gptid/cadcd300-2477-11e6-bcd4-d05099c0e0e3  ONLINE  0  0  0

errors: No known data errors

<HGST HDN724030ALE640 MJ8OA5E0>  at scbus1 target 0 lun 0 (ada0,pass0)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus4 target 0 lun 0 (ada1,pass1)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus5 target 0 lun 0 (ada2,pass2)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus6 target 0 lun 0 (ada3,pass3)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus7 target 0 lun 0 (ada4,pass4)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus8 target 0 lun 0 (ada5,pass5)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus9 target 0 lun 0 (ada6,pass6)
<Marvell Console 1.01>  at scbus13 target 0 lun 0 (pass7)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus14 target 0 lun 0 (ada7,pass8)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus15 target 0 lun 0 (ada8,pass9)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus16 target 0 lun 0 (ada9,pass10)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus17 target 0 lun 0 (ada10,pass11)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus18 target 0 lun 0 (ada11,pass12)
<HGST HDN724030ALE640 MJ8OA5E0>  at scbus19 target 0 lun 0 (ada12,pass13)
<SanDisk Ultra Fit 1.00>  at scbus21 target 0 lun 0 (pass14,da0)

NAME  SIZE  ALLOC  FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  14.2G  643M  13.6G  -  -  4%  1.00x  ONLINE  -
mediapool  35.2T  31.8T  3.46T  -  54%  90%  1.00x  ONLINE  /mnt

NAME  PROPERTY  VALUE  SOURCE
mediapool  type  filesystem  -
mediapool  creation  Fri May 27 18:59 2016  -
mediapool  used  26.1T  -
mediapool  available  1.93T  -
mediapool  referenced  26.1T  -
mediapool  compressratio  1.01x  -
mediapool  mounted  yes  -
mediapool  quota  none  default
mediapool  reservation  none  default
mediapool  recordsize  128K  default
mediapool  mountpoint  /mnt/mediapool  default
mediapool  sharenfs  off  default
mediapool  checksum  on  default
mediapool  compression  lz4  local
mediapool  atime  on  default
mediapool  devices  on  default
mediapool  exec  on  default
mediapool  setuid  on  default
mediapool  readonly  off  default
mediapool  jailed  off  default
mediapool  snapdir  hidden  default
mediapool  aclmode  passthrough  local
mediapool  aclinherit  passthrough  local
mediapool  canmount  on  default
mediapool  xattr  off  temporary
mediapool  copies  1  default
mediapool  version  5  -
mediapool  utf8only  off  -
mediapool  normalization  none  -
mediapool  casesensitivity  sensitive  -
mediapool  vscan  off  default
mediapool  nbmand  off  default
mediapool  sharesmb  off  default
mediapool  refquota  none  default
mediapool  refreservation  none  default
mediapool  primarycache  all  default
mediapool  secondarycache  all  default
mediapool  usedbysnapshots  0  -
mediapool  usedbydataset  26.1T  -
mediapool  usedbychildren  365M  -
mediapool  usedbyrefreservation  0  -
mediapool  logbias  latency  default
mediapool  dedup  off  default
mediapool  mlslabel  -
mediapool  sync  standard  default
mediapool  refcompressratio  1.01x  -
mediapool  written  26.1T  -
mediapool  logicalused  26.3T  -
mediapool  logicalreferenced  26.3T  -
mediapool  volmode  default  default
mediapool  filesystem_limit  none  default
mediapool  snapshot_limit  none  default
mediapool  filesystem_count  none  default
mediapool  snapshot_count  none  default
mediapool  redundant_metadata  all  default

 

Mr_N

Patron
Joined
Aug 31, 2013
Messages
289
12 drives in a single vdev is too many; split it into 2 vdevs of 6 drives each if you're going to use RAIDZ2...
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
Are you absolutely certain you didn't copy some things twice into the new pool? I'd look at that first. Also, I didn't see where you said what size your drives are in the new machine. You could use Biduleohm's calculator to see what you should get. https://jsfiddle.net/Biduleohm/paq5u7z5/1/embedded/result/
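If you just want the back-of-the-envelope figure the calculator refines, it's roughly (disks - 2) x disk size, converted from the vendor's TB to the TiB that zfs reports. A rough sketch (it ignores swap partitions, metadata, padding, and reservations, so the real number comes out lower):

Code:
def raidz2_rough_capacity_tib(ndisks, disk_tb):
    """Very rough usable capacity of a RAIDZ2 vdev, in TiB.

    Ignores swap partitions, metadata, padding, and ZFS reservations,
    so the real figure will be somewhat lower.
    """
    usable_tb = (ndisks - 2) * disk_tb      # two disks' worth goes to parity
    return usable_tb * 1e12 / 2**40         # vendor TB -> TiB as zfs reports it

# Example only -- plug in your real drive count and size:
print(f"{raidz2_rough_capacity_tib(13, 3):.1f} TiB")   # 13 x 3 TB drives -> ~30 TiB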
 

Joe Goldthwaite

Dabbler
Joined
Jan 12, 2016
Messages
38
Yeah, I know I need to reconfigure the array. I was just trying to make sense of why it's taking up so much space.

I thought I must have somehow copied some things twice, but I couldn't find any duplicates. I wrote a little Python program to walk all the files on the disk and put their sizes in an SQLite database. There are about 0.617 TiB of extra files on the new server, which is not nearly enough to explain the missing 9 TB.
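It was nothing fancy; roughly this (reconstructed from memory, so the paths and table name here are just placeholders):

Code:
import os
import sqlite3

ROOT = "/mnt/mediapool"        # placeholder; point it at the pool to scan
DB = "/tmp/filesizes.db"       # placeholder output database

con = sqlite3.connect(DB)
con.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, size INTEGER)")

for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        full = os.path.join(dirpath, name)
        try:
            size = os.lstat(full).st_size    # logical size, not space on disk
        except OSError:
            continue                         # skip anything that vanishes mid-walk
        con.execute("INSERT OR REPLACE INTO files VALUES (?, ?)", (full, size))

con.commit()
con.close()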

Both servers are made up of 3 TB HGST NAS disks.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
A possible culprit: snapshots?
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Joe Goldthwaite said:
I don't think so. I haven't created any. I've never created one. I was going to check last night but I forgot. I'll check it tonight.

Show us
Code:
zpool list
and
Code:
zfs list
 

Joe Goldthwaite

Dabbler
Joined
Jan 12, 2016
Messages
38
Here's the output from zpool list
Code:
NAME  SIZE  ALLOC  FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  14.2G  643M  13.6G  -  -  4%  1.00x  ONLINE  -
mediapool  35.2T  31.8T  3.46T  -  53%  90%  1.00x  DEGRADED  /mnt


And here's what zfs list has to say
Code:
NAME  USED  AVAIL  REFER  MOUNTPOINT
freenas-boot  643M  13.2G  31K  none
freenas-boot/ROOT  629M  13.2G  25K  none
freenas-boot/ROOT/9.10-STABLE-201605240427  621M  13.2G  492M  /
freenas-boot/ROOT/Initial-Install  1K  13.2G  482M  legacy
freenas-boot/ROOT/Pre-9.10-STABLE-201605021851-195856  1K  13.2G  489M  legacy
freenas-boot/ROOT/default  8.43M  13.2G  490M  legacy
freenas-boot/grub  12.7M  13.2G  6.33M  legacy
mediapool  26.1T  1.93T  26.1T  /mnt/mediapool
mediapool/.system  251M  1.93T  244M  legacy
mediapool/.system/configs-f6da24756e2f4dee86c3a9c9fb75829f  5.00M  1.93T  5.00M  legacy
mediapool/.system/cores  236K  1.93T  236K  legacy
mediapool/.system/rrd-f6da24756e2f4dee86c3a9c9fb75829f  236K  1.93T  236K  legacy
mediapool/.system/samba4  807K  1.93T  807K  legacy
mediapool/.system/syslog-f6da24756e2f4dee86c3a9c9fb75829f  1014K  1.93T  1014K  legacy
mediapool/jails  236K  1.93T  236K  /mnt/mediapool/jails


And for good measure, here's the output of zfs list -t snapshot
Code:
NAME  USED  AVAIL  REFER  MOUNTPOINT
freenas-boot/ROOT/9.10-STABLE-201605240427@2016-05-22-12:44:33  3.93M  -  482M  -
freenas-boot/ROOT/9.10-STABLE-201605240427@2016-05-23-23:46:27  7.77M  -  489M  -
freenas-boot/ROOT/9.10-STABLE-201605240427@2016-05-24-20:22:54  3.43M  -  490M  -
freenas-boot/grub@Pre-Upgrade-9.10-STABLE-201605021851  26.5K  -  6.33M  -
freenas-boot/grub@Pre-Upgrade-9.10-STABLE-201605240427  27K  -  6.33M  -
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Whoa whoa whoa, you've got other problems bro. mediapool is degraded. Let's see "zpool status -v"
 

Joe Goldthwaite

Dabbler
Joined
Jan 12, 2016
Messages
38
I know, DrKK. You've been helping on my other thread. ;) No worries. I'm going to remove that drive this weekend and reconfigure the media pool into two 7-drive RAIDZ2 vdevs. The missing-space problem will hopefully go away at that point. I'd really like to understand what's happening, though. It doesn't make any sense.

When I originally created this new system I set it up with a 12-drive RAIDZ2 array. What I didn't know was that a 12-drive RAIDZ2 array is horribly inefficient. I thought I had fixed that by adding another drive; one more disk gave me the 27 TB I thought I was going to get. As I copied the data from the old server to the new one (a slow process because I was using rsync), my available space kept disappearing, until ultimately I had 2 TB less free space on my 27 TB machine than I did on my old 20 TB machine.

It doesn't make any sense at all. You suggested that maybe it was an issue with snapshots, but I haven't created any. I know that a large record (cluster) size can make each file take up more actual space on disk, but I don't think it would take up nearly enough to explain this. Besides, I didn't set a large record size; I created the pool through the web interface and didn't do anything funky.
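Putting numbers on the record-size idea: even if every single file wasted a full 128K record at its tail, the total would be tiny compared to 9 TB. A quick back-of-the-envelope check (the file count is a guess, not pulled from my database):

Code:
# Rough upper bound on record-size (cluster) rounding waste.
# Assumes ~10,000 media files; the count is a guess, not measured.
files = 10_000
max_waste_per_file = 128 * 1024                         # at most one 128K record each
print(f"{files * max_waste_per_file / 2**30:.2f} GiB")  # ~1.22 GiB total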

Anyway, I'm going to wipe this setup out and re-create it with the new layout. Maybe that will fix the problem. The mystery will have to go into the book with all the other things I don't understand. It's a thick book.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Perhaps you copied a sparse file? Or a zvol?

I'd expect that to compress well, though.
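If you want to check for that, comparing a file's apparent size with what's actually allocated will usually show it; something like this rough sketch (the path is just an example):

Code:
import os

def apparent_vs_allocated(path):
    """Return (apparent size, allocated bytes) for a file.

    st_blocks is counted in 512-byte units; a sparse file allocates far
    less than its apparent size, an expanded copy allocates about as much.
    """
    st = os.lstat(path)
    return st.st_size, st.st_blocks * 512

# Example (made-up path):
# print(apparent_vs_allocated("/mnt/mediapool/movies/example.mkv"))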
 