80% recommended capacity relevant for specific pool

Ianm_ozzy

Dabbler
Joined
Mar 2, 2020
Messages
43
Hi all
The specs are in my signature.
So I have a pool for storage mainly.
Also one (single disk) with a VM of ubuntu server with this installed: https://lancache.net/
My games are 'backed up' on the main storage as an iSCSI drive.
A Windows 10 VM updates my games (mainly steam) overnight.
The appropriate DNS changes have been made so all game updates are through the lancache VM.
The 240GB SSD is 98% full with about 5GB free space.
It will have a maximum of about 4MB/s writes to it overnight.
I am mainly concerned about read speeds being optimum.
I want to use as much of the drive as practical.
Redundancy is not an issue. It is a cache drive only. Sync is turned off and no compression.

Is there some sort of 'defrag' ZFS equivalent that can be used on a regular basis?
The lancache software typically reads out in 1MB chunks to whoever (me) is updating their games, ideally as quickly as practical.

Will having 5GB free space be an issue in this situation?

Thanks
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
a VM of ubuntu server

an iSCSI drive
You're doing block storage... but,

Sync is turned off
So you're probably getting reasonable write speeds/IOPS with no pool optimization.

Is there some sort of 'defrag' ZFS equivalent that can be used on a regular basis?
Copy the dataset to another pool, delete it on the origin, then copy it back. Rewriting the data lays the blocks out contiguously again.
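A minimal sketch of that round trip using `zfs send`/`zfs receive`. The pool names `tank` and `backup` and the dataset name `games` are placeholders for your own layout:

```shell
# Snapshot the fragmented dataset and copy it to a second pool
zfs snapshot tank/games@move
zfs send tank/games@move | zfs receive backup/games

# Destroy the fragmented original
zfs destroy -r tank/games

# A full send stream carries the snapshot with it, so the same
# @move snapshot can be used for the trip back; the rewritten
# blocks land contiguously on the origin pool
zfs send backup/games@move | zfs receive tank/games
zfs destroy -r backup/games
```

Verify the destination actually received everything before destroying the origin; with no redundancy on that pool there is nothing to fall back on.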

Will having 5GB free space be an issue in this situation?
You're only going to be slowing down writes with a fragmented pool (not so much the reads as you have ARC to smooth that out a bit) and you have sync=disabled to prevent that being a major issue for performance (at the cost of potential data loss).

If you're OK with the speed you're getting as it is and you like to live dangerously, it's all fine.
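If you'd rather watch a number than a feeling, `zpool list` exposes both capacity and free-space fragmentation (a sketch; `tank` is a placeholder pool name):

```shell
# CAP  = allocated space as a percentage of the pool
# FRAG = fragmentation of the remaining free space, i.e. how
#        chopped-up the space ZFS has left to write into is
zpool list -o name,size,alloc,free,cap,frag tank
```

A rising FRAG at high CAP is the point where writes start paying for small, scattered free segments.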
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I tried to find some more concrete recommendations and found this. It's for Oracle ZFS, but it probably applies:


Storage Pool Practices for Performance
  • In general, keep pool capacity below 90% for best performance. The percentage where performance might be impacted depends greatly on workload:
    • If data is mostly added (write once, remove never), then it's very easy for ZFS to find new blocks. In this case, the percentage can be higher than normal; maybe up to 95%.
    • If data is made of large files or large blocks (such as 128K files or 1MB blocks) and the data is removed in bulk operations, the percentage can be higher than normal; maybe up to 95%.
    • If a large percentage (more than 50%) of the pool is made up of 8K chunks (DB files, iSCSI LUNs, or many small files) that have constant rewrites, then the 90% rule should be followed strictly.
    • If all of the data is small blocks that have constant rewrites, then you should monitor your pool closely once the capacity gets over 80%. The sign to watch for is increased disk IOPS to achieve the same level of client IOPS.
  • Mirrored pools are recommended over RAID-Z pools for random read/write workloads
  • Separate log devices
    • Recommended to improve synchronous write performance
    • With a high synchronous write load, prevents fragmentation of writing many log blocks in the main pool
  • Separate cache devices are recommended to improve read performance
  • Scrub/resilver - A very large RAID-Z pool with lots of devices will have longer scrub and resilver times
  • Pool performance is slow – Use the zpool status command to rule out any hardware problems that are causing pool performance problems. If no problems show up in the zpool status command, use the fmdump command to display hardware faults or use the fmdump -eV command to review any hardware errors that have not yet resulted in a reported fault.
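That last bullet boils down to a couple of commands. Note that `fmdump` belongs to the Solaris/illumos fault manager, so on FreeNAS you'd rely on `zpool status` plus SMART monitoring instead (a sketch; `tank` is a placeholder pool name):

```shell
# Rule out hardware problems first; -v lists per-device errors
zpool status -v tank

# On Oracle Solaris / illumos only:
# list hardware faults the fault manager has already reported
fmdump
# dump raw error events that have not yet been diagnosed into a fault
fmdump -eV
```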
 