Total storage size changed overnight

Status
Not open for further replies.

vind0

Cadet
Joined
Jul 19, 2012
Messages
5
I just built a raidz1 using FreeNAS and three 3TB drives. When I first created my zfs volume it left me with about 5.3TB of total storage space, which was about what I expected. After that I copied about 850G worth of files over to an NFS share (from a Linux CentOS 5.5 box) I created on FreeNAS. As the files were copying, I noticed the TOTAL volume size (not just the available space) going down. I eventually ended up with 4.5TB of TOTAL storage. Surely this is not correct or expected.

Here is my setup

FreeNAS-8.0.4-RELEASE-p3-x64 (11703)
Platform AMD A4-3400 APU with Radeon(tm) HD Graphics
Memory 7663MB
OS Version FreeBSD 8.2-RELEASE-p9

I am using 3 WD WD30EFRX drives using the onboard SATA controller. My OS drive is a 2.5" SATA laptop drive. Here is the info from my storage (from the web gui)

Name       Mountpoint      Used             Available  Size     Status
zfs1       /mnt/zfs1       160.0 KiB (0%)   4.5 TiB    4.5 TiB  HEALTHY
zfs1/myth  /mnt/zfs1/myth  856.0 GiB (34%)  1.6 TiB    2.4 TiB  HEALTHY


Here is the same info from the command line using zfs list

NAME       USED  AVAIL  REFER  MOUNTPOINT
zfs1       856G  4.49T   160K  /mnt/zfs1
zfs1/myth  856G  1.61T   856G  /mnt/zfs1/myth

Here is a df -h
Filesystem           Size  Used  Avail  Capacity  Mounted on
/dev/ufs/FreeNASs1a  927M  379M   474M       44%  /
devfs                1.0K  1.0K     0B      100%  /dev
/dev/md0             4.6M  1.9M   2.3M       44%  /etc
/dev/md1             824K  2.0K   756K        0%  /mnt
/dev/md2             149M  8.6M   129M        6%  /var
/dev/ufs/FreeNASs4    20M  806K    17M        4%  /data
zfs1                 4.5T  160K   4.5T        0%  /mnt/zfs1
zfs1/myth            2.4T  856G   1.6T       34%  /mnt/zfs1/myth

And finally here is a zpool list

NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zfs1  8.12T  1.26T  6.87T  15%  ONLINE  /mnt
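[Editor's note: the gap between zpool list (8.12T) and the initial 5.3TB is expected on raidz1. zpool list reports the raw capacity of all member disks including parity, while zfs list reports usable space after parity is deducted. A rough sketch of the arithmetic, assuming 3TB (decimal) drives and ignoring metadata/label overhead, which is why the numbers land slightly above what the tools report:]

```python
# Sketch: why `zpool list` shows ~8.1T while usable space starts ~5.3T
# on a 3-disk raidz1. Assumptions: 3 TB (vendor-decimal) drives; zpool
# list counts raw space including parity; small overheads ignored.

TB = 1000**4           # drive vendors use decimal terabytes
TiB = 1024**4          # zfs tools report binary tebibytes

drives = 3
raw_bytes = drives * 3 * TB        # total raw capacity of all disks
raw_tib = raw_bytes / TiB          # roughly what zpool list shows

# raidz1 dedicates one drive's worth of space to parity,
# so usable space is (n-1)/n of raw.
usable_tib = raw_tib * (drives - 1) / drives

print(f"raw:    {raw_tib:.2f} TiB")    # ~8.19 TiB (zpool list showed 8.12T)
print(f"usable: {usable_tib:.2f} TiB") # ~5.46 TiB (close to the 5.3T seen)
```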

I can start over, but I would have to copy my data back off the FreeNAS box first, and until I upgrade my network switch, that's a slow process. I have Googled and searched the forums and found people with similar problems, but did not find a solution. Thanks in advance.
 

vind0

Cadet
Joined
Jul 19, 2012
Messages
5
So nobody has seen this where the total volume size changes after you copy files to the volume?
 

praecorloth

Contributor
Joined
Jun 2, 2011
Messages
159
I'm going to guess that's a no. But this does intrigue me. It looks like you've got your zpool, and then a dataset where your data lives: zfs1 being the pool and myth being the dataset (if I understand correctly, anyway; I'm not the best with ZFS, I'm just throwing out ideas since we've got nothing yet).

Now, after volume creation you had 5.3TB of space on your zpool. You then created a dataset and copied ~850GB to it. 5.3 - 0.8 = 4.5.

I don't understand why it would subtract from total space, but this may be what's going on here. I will fire up a virtual machine when I get home and test it out.
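[Editor's note: one consistent explanation, from standard ZFS space accounting rather than anything confirmed in the thread: a ZFS dataset has no fixed "total". Tools like df derive Size as Used + Avail per dataset, and Avail is the shared pool free space, so writing data anywhere in the pool shrinks every dataset's reported "total". A sketch using the figures from the zfs list output above:]

```python
# Sketch: df derives Size = Used + Avail per dataset; Avail is the
# shared pool free space. Figures taken from the thread's zfs list
# output; the interpretation is standard ZFS accounting, not something
# stated in the thread.

T = 1024**4
G = 1024**3

pool_used  = 160 * 1024    # zfs1 itself holds only ~160K
pool_avail = 4.49 * T      # shared free space left in the pool
myth_used  = 856 * G       # data copied into zfs1/myth
myth_avail = 1.61 * T      # free space within myth's limit

# df "Size" for each mount is just used + avail:
pool_size = (pool_used + pool_avail) / T   # ~4.49 TiB, shown as "4.5T"
myth_size = (myth_used + myth_avail) / T   # ~2.45 TiB, shown as "2.4T"

# The 856G written to myth came out of the shared free space, so the
# pool dataset's derived "total" fell from ~5.3T to ~4.5T. Nothing was
# lost; the space is simply charged to zfs1/myth now.
print(f"zfs1: {pool_size:.2f} TiB, zfs1/myth: {myth_size:.2f} TiB")
```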
 

vind0

Cadet
Joined
Jul 19, 2012
Messages
5
praecorloth said:
"I don't understand why it would subtract from total space, but this may be what's going on here. I will fire up a virtual machine when I get home and test it out."


Thanks for the reply. I created the myth dataset so I could put a quota on how much space my MythTV DVR could use up. Not surprisingly, as I fill up the myth dataset, the total size goes down. I am at 4.4TB now.
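[Editor's note: the quota described can be set and inspected with the standard zfs(8) commands below. This is an administrative sketch, not runnable without the pool; the 2.4T figure matches the Size reported for zfs1/myth above, though the exact quota value originally set isn't stated in the thread.]

```shell
# Cap the dataset so MythTV recordings can't fill the pool
# (2.4T is an assumption based on the reported Size of zfs1/myth):
zfs set quota=2.4T zfs1/myth

# Inspect how the space is being accounted:
zfs get quota,used,available zfs1/myth
zfs list -o name,used,avail,refer,mountpoint -r zfs1
```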
 

praecorloth

Contributor
Joined
Jun 2, 2011
Messages
159
Huh. Yup, it's doing this on my 8.2 VM as well. I can't imagine what the reasoning behind it is. I guess if you have multiple datasets on a single zpool, you can watch as the total space decreases. But that can be shown just as easily, and probably with less confusion, by showing the total amount of space used. Unless it wasn't a conscious decision, and just has to do with how the guts of ZFS work. I guess the best I have to offer is that it seems to be working as designed.
 

vind0

Cadet
Joined
Jul 19, 2012
Messages
5
praecorloth said:
"I guess the best I have to offer is that it seems to be working as designed."

I guess I am good to go then, even though it still bugs me a little, as I feel like I am not getting all the space I originally had :). I was originally concerned that my raidz1 might have write performance issues, since my MythTV box records to it continuously, sometimes with multiple streams at once, but I can tell you it doesn't miss a beat, and when I delete large files (4G+) they are removed instantly. My new gigabit switch should be here tomorrow, and I will set up LACP on FreeNAS and my Linux server. I am curious to see what speeds I get with two links aggregated.

Thanks for taking the time to duplicate my setup on your end and replying. I appreciate it.


Eric
 