Snapshot out of space


Dudde

Explorer
Joined
Oct 5, 2015
Messages
77
I'm getting the following error message:
Code:
Apr  1 13:16:05 san-tot autosnap.py: [tools.autosnap:337] Failed to create snapshot 'HVG-SYS/HVG-SYS@auto-20160401.1316-1w': cannot create snapshot 'HVG-SYS/HVG-SYS@auto-20160401.1316-1w': out of space


I understand I should have thought a bit harder before I set up this system and reserved extra space for snapshots, since zvol space is "locked" by its reservation and can't be used for snapshots.
But as I understand it, a snapshot doesn't take up a lot of space when created; instead it grows as changes are made in the filesystem.
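As far as I can tell, the per-dataset space breakdown (including what the zvol's reservation holds back) can be inspected like this; just a sketch using the pool names from this thread:
Code:
zfs list -o space -r HVG-SYS HVG-DATA

The USEDREFRESERV column shows how much space the zvol's refreservation is holding back from everything else, including snapshots.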

Shouldn't there be enough space for me to create a snapshot of my zpools, according to the pictures below?
[screenshots of pool and dataset usage were attached here]

I deleted every snapshot I had from the GUI earlier; initially I could create snapshots just fine.
But it seems deleting those snapshots didn't free up any space?
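As far as I understand, this is how to check that they're really gone and whether anything is still being freed in the background:
Code:
zfs list -t snapshot -r HVG-SYS HVG-DATA
zpool get freeing HVG-SYS HVG-DATA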
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Your pool utilization for HVG-DATA is over 80%. It's recommended not to go over this, because you will run into issues such as this one. You may need to off-load some of the data from that pool to somewhere else to free up space. You could also consider adding more drives or swapping the existing drives for higher-capacity ones.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's not recommended to fill a pool hosting an iSCSI volume or other block storage device beyond 50%. That is hopefully a mirror pool of some sort.

I'm trying to figure out what you've done. What's the size of that pool, and is there a reservation or anything else on it?
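Something like this would show it (just a sketch):
Code:
zfs get -r reservation,refreservation,volsize HVG-SYS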
 

Dudde

Explorer
Joined
Oct 5, 2015
Messages
77
Yes, this is a zpool consisting of four 1 TB drives in a RAID 10 setup.
What I did was create the ZVOL too big: 1.7 TB out of the 1.8 TB of available space.
The used space in my zvol, on the other hand, is next to nothing, and it has a limit of 70%.

I've changed this setup on my other systems to never go beyond 80% of the space in my zpools.
Unfortunately this system is already up and running live, so I can't take it down and redo everything to look like my other systems.

Maybe I should add this, in case it's of any importance:
HVG-SYS is a mirrored zpool of two 1 TB disks used for storing virtual machines in an ESXi environment.
HVG-DATA is, like I said, a RAID 10 setup of four 1 TB drives with an Intel S3500 80 GB drive as SLOG, used for storing data and databases for the virtual machines in ESXi.
HVG-DATA has sync=always turned on.
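For reference, that's checked and set like so:
Code:
zfs get sync HVG-DATA
zfs set sync=always HVG-DATA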
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I hope you don't mean RAID10, that'd be really bad. Can you please produce the output of zpool list in code tags?

You should not be limiting the used space on a zvol.

Using 80% of the space on a pool is far too much for block storage usage. I used to say 60% was the cap and I've become convinced that even 50% is pretty bad over time. What happens over time is that fragmentation increases and the availability of contiguous free space drops, killing write performance and creating a lot of suboptimal read performance. I write about this nearly daily so please feel free to search a little for a more detailed discussion.
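Both numbers are easy to keep an eye on, since zpool list exposes them directly:
Code:
zpool list -o name,size,allocated,free,fragmentation,capacity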
 

Dudde

Explorer
Joined
Oct 5, 2015
Messages
77
zpool status
Code:
  pool: HVG-DATA                                                                                                                   
state: ONLINE                                                                                                                     
  scan: scrub repaired 0 in 0h20m with 0 errors on Sun Apr  3 00:20:59 2016                                                        
config:                                                                                                                            
                                                                                                                                   
        NAME                                            STATE     READ WRITE CKSUM                                                 
        HVG-DATA                                        ONLINE       0     0     0                                                 
          mirror-0                                      ONLINE       0     0     0                                                 
            gptid/075f20d9-dd59-11e5-9e89-0cc47a7c57f0  ONLINE       0     0     0                                                 
            gptid/083bed1f-dd59-11e5-9e89-0cc47a7c57f0  ONLINE       0     0     0                                                 
          mirror-1                                      ONLINE       0     0     0                                                 
            gptid/392c7003-dd59-11e5-9e89-0cc47a7c57f0  ONLINE       0     0     0                                                 
            gptid/3a0feb3d-dd59-11e5-9e89-0cc47a7c57f0  ONLINE       0     0     0                                                 
        logs                                                                                                                       
          gptid/95045ff4-dd59-11e5-9e89-0cc47a7c57f0    ONLINE       0     0     0                                                 
        spares                                                                                                                     
          gptid/14e862e0-e158-11e5-8224-0cc47a7c57f0    AVAIL                                                                      
                                                                                                                                   
errors: No known data errors                                     


Code:
  pool: HVG-SYS                                                                                                                    
state: ONLINE                                                                                                                     
  scan: scrub repaired 0 in 0h13m with 0 errors on Sat Mar  5 13:06:02 2016                                                        
config:                                                                                                                            
                                                                                                                                   
        NAME                                            STATE     READ WRITE CKSUM                                                 
        HVG-SYS                                         ONLINE       0     0     0                                                 
          mirror-0                                      ONLINE       0     0     0                                                 
            gptid/8b67f9c8-e157-11e5-8224-0cc47a7c57f0  ONLINE       0     0     0                                                 
            gptid/8c48cdf4-e157-11e5-8224-0cc47a7c57f0  ONLINE       0     0     0                                                 
        spares                                                                                                                     
          gptid/c3d9daf6-e157-11e5-8224-0cc47a7c57f0    AVAIL                                                                      
                                                                                                                                   
errors: No known data errors                                                       


zpool list
Code:
[root@san-tot ~]# zpool list                                                                                                       
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT                                                  
Backup1       1.81T  72.3G  1.74T         -     0%     3%  1.00x  ONLINE  /mnt                                                     
Backup2       1.81T   123G  1.69T         -     0%     6%  1.00x  ONLINE  /mnt                                                     
HVG-DATA      1.81T   103G  1.71T         -     2%     5%  1.00x  ONLINE  /mnt                                                     
HVG-SYS        928G  46.9G   881G         -     4%     5%  1.00x  ONLINE  /mnt                                                     
freenas-boot  7.19G  1.06G  6.13G         -      -    14%  1.00x  ONLINE  -     


What I will do is remove my two spares and add them to HVG-SYS, to make it identical to HVG-DATA except for the log device.
I just need to find a time when the virtual machines can be taken down and exported, so I can destroy the current zpools, rebuild them, and import the virtual machines again.

This setup runs 3 virtual machines from HVG-SYS, each using a 150 GB thick-provisioned .vmdk in ESXi.
What's the best solution: creating a new ZVOL for each machine, with a size of 150 GB?
Or should I create one ZVOL with a size of 50% of the total storage?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, good, you don't actually have RAID10, that's a good starting point.

So what you were doing was running about 450GB of zvol data on a 900GB pool before? With mirrors, that should work fine, but might start to feel a little full, especially if you're trying to take snaps. Adding another mirror vdev would be a good idea, and should leave you with a good amount of free space on the pool, which is very desirable when running iSCSI.
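At the shell, the rough shape of that would be something like this (a sketch using the spare gptids from your zpool status output; on FreeNAS the GUI volume manager is the safer route so the middleware database stays consistent):
Code:
# release the hot spares from both pools
zpool remove HVG-SYS gptid/c3d9daf6-e157-11e5-8224-0cc47a7c57f0
zpool remove HVG-DATA gptid/14e862e0-e158-11e5-8224-0cc47a7c57f0
# stripe a second mirror vdev onto HVG-SYS with the freed disks
zpool add HVG-SYS mirror gptid/c3d9daf6-e157-11e5-8224-0cc47a7c57f0 gptid/14e862e0-e158-11e5-8224-0cc47a7c57f0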

I don't really see a lot of value to creating separate zvols, except perhaps that you'd get more detailed usage from going that route.
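If you do go with a single zvol, sizing it sparse at roughly half the pool would look something like this (the name is just a placeholder; adjust the size to whatever the pool ends up at after adding the vdev):
Code:
zfs create -s -V 450G HVG-SYS/esxi-datastore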

Either way, I suggest making sure you mark the datastore(s) as "SSD" in ESXi, to ensure that UNMAP is being used, to maximize free space in the pool.
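As far as I know, on VMFS5 the reclaim doesn't run on its own; it's kicked off per datastore, e.g.:
Code:
esxcli storage vmfs unmap -l <datastore-label>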
 

Dudde

Explorer
Joined
Oct 5, 2015
Messages
77
Thanks for your reply. ESXi detects my storage units as SSD, and I have tried using UNMAP with success.
Does UNMAP run automatically like TRIM does, or do I have to run the command myself all the time?
I know this is an ESXi question, but I'm trying my luck here.
 