Almost 1TB loss???

MrE10

Cadet
Joined
Feb 18, 2021
Messages
8
So, I have four 2TB drives in one pool, which I want to provide completely as an iSCSI target. The pool size is 5.11TiB, and I inched the volume forward to use it all up. But after connecting to it and initializing the drive, I'm left with 4.61TB of available space. That's a bit disappointing; I have an old Synology, also with four 2TB drives in a RAID5, that gives me a drive with 5.44TB available. Did I do something wrong?
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
How did you configure the disks in your pool?
 

MrE10

Cadet
Joined
Feb 18, 2021
Messages
8
It's a RAIDZ1 with all 4 drives; besides that, I have used the defaults...
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,970
Looks like he made a RAIDZ pool.

Please note that 2GB per drive is reserved for a swap partition, and then you have formatting overhead on top of all that parity.

But after connecting to it and initializing the drive, I'm left with 4.61TB of available space. That's a bit disappointing; I have an old Synology, also with four 2TB drives in a RAID5, that gives me a drive with 5.44TB available. Did I do something wrong?
ZFS is a significantly more robust file system, so the difference is due to overhead; you did not do anything wrong if you created a RAIDZ vdev. Also, the stated capacity is a generalization, not absolute.

I think you will find that unless you purchase a very expensive Synology, a properly built TrueNAS/FreeNAS home system will be significantly faster for all data throughput operations, for less money. You should be able to saturate a 10GbE connection, as others have done here.
 

kiriak

Contributor
Joined
Mar 2, 2020
Messages
122
And don't forget the joker!


I have a Synology NAS and a test TrueNAS machine;
the same data on the TrueNAS is compressed at a ratio of 1.1 to 1.65 with the free LZ4
(the lower values are for already-compressed data like JPEGs)
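For anyone curious how such a ratio is measured: it's simply uncompressed size divided by compressed size. LZ4 isn't in the Python standard library, so this sketch uses zlib purely to illustrate the idea; the sample data is made up, and the exact numbers will differ from what ZFS reports with LZ4.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Uncompressed size divided by compressed size (the ZFS-style ratio)."""
    return len(data) / len(zlib.compress(data))

# Repetitive data (logs, VM images with lots of identical blocks) compresses well...
repetitive = b"backup of the same VM block " * 10_000
# ...while high-entropy data (JPEGs, already-compressed archives) barely shrinks.
incompressible = os.urandom(256 * 1024)

print(f"repetitive:     {compression_ratio(repetitive):.2f}")
print(f"incompressible: {compression_ratio(incompressible):.2f}")
```

The same principle explains kiriak's range: mostly-compressible data lands near the top of it, JPEGs near 1.0.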
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
Take a look at the ZFS Capacity Calculator. One thing to consider is TiB vs. TB storage, reserves for parity, plus the 20% you should keep free to avoid ZFS performance cratering. That leaves you with a usable storage capacity of 5.1TiB (which is 5.6TB) and a practical storage capacity of just over 4TiB (which is what you should consider the limit of your present pool).

It's one reason I went with an 8-drive pool: losses to parity, especially if you're a nut like me with a Z3 pool, are substantial.
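If you want to do the back-of-the-envelope version yourself, here's a sketch in Python. The 2GB-per-drive swap figure comes from earlier in the thread; these are ideal numbers only, since real pools reserve further space for ZFS metadata and internal slop, which is why the actual pool reports 5.11TiB rather than the ideal figure.

```python
TB = 10**12   # drive vendors sell decimal terabytes
TiB = 2**40   # ZFS and most tools report binary tebibytes
GiB = 2**30

drives, drive_size = 4, 2 * TB
swap_per_drive = 2 * GiB   # default swap partition, per the post above
parity_drives = 1          # RAIDZ1

raw = drives * (drive_size - swap_per_drive)
ideal = raw * (drives - parity_drives) / drives  # parity comes off the top

print(f"ideal usable: {ideal / TiB:.2f} TiB ({ideal / TB:.2f} TB)")
print(f"80% rule:     {0.8 * ideal / TiB:.2f} TiB")
```

Note how the same bytes read as "6TB" in vendor units but only about 5.45TiB in binary units, before ZFS takes its cut.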
 

MrE10

Cadet
Joined
Feb 18, 2021
Messages
8
Yea well... that's quite a downer then. So my backups from my old NAS won't fit on here....

You should be able to saturate a 10GB connection as other have done here.
Hard to believe... with 4 SATA3 7200RPM drives?
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
Very doubtful. To saturate a 10GbE connection with HDDs, I'd expect at least a 3-vdev pool, more likely 4. Once you graduate to that many disks, the CPU had better be capable.

The computer on the receiving end would also need some pretty quick storage.
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,924

MrE10

Cadet
Joined
Feb 18, 2021
Messages
8
The problem here is that you've filled your disk to 98% capacity. The recommended value is not only "below 80%", but for iSCSI it's probably "below 60%", because block storage is one of the hardest things for ZFS to handle, and you need a significant amount of free space in order for ZFS to not suffer horrible fragmentation.

In all seriousness... you gotta be kidding me?! You want to tell me I need to buy double the amount of disk space I actually need?! Are you sure this is still the case? The post is from 2015.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,458
You want to tell me I need to buy double the amount of disk space I actually need?
If you want decent performance for block storage, yes. More yet would be better.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
you did not do anything wrong if you created a RAIDZ vdev.

I would disagree; RAIDZ is not particularly compatible with the poster's intent to use iSCSI.


In all seriousness... you gotta be kidding me?! You want to tell me I need to buy double the amount of disk space I actually need?! Are you sure this is still the case? The post is from 2015.

Computer science didn't change to computer magic when the year turned 2021. I have lots of posts from years ago that are still quite relevant.

From 2016, and not yet mentioned: the way ZFS allocates blocks is not amenable to RAIDZ1 without extremely careful planning:

https://www.truenas.com/community/t...d-why-we-use-mirrors-for-block-storage.44068/

From 2013, a link to the classic article that discusses fragmentation vs pool occupancy:

https://www.truenas.com/community/threads/zfs-fragmentation-issues.11818/

But if you don't want to be gathering smaller clues from here-and-there, there is an awesome summary from 2019:

https://www.truenas.com/community/threads/the-path-to-success-for-block-storage.81165/

which covers virtually everything you need to know if you intend to move forward with iSCSI. It might be better to avoid using a block storage protocol, which incurs many penalties, and use a filesharing protocol like NFS or SMB, which is much less taxing on ZFS and lets a NAS really shine. Remember, this is FreeNAS, not FreeSAN. iSCSI is a SAN protocol, and while FreeNAS can do SAN, it is really best at NAS.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681

That was pretty funny. I was curious who posted that, so I clicked the link... Sorry that I've been bursting bubbles for so many years.

Anyway, just as an additional data point: how much overhead you ACTUALLY need to keep free depends on the use case, so perhaps the OP could mention what it is they are doing. If you are doing very busy iSCSI things like datastores, databases, backups, etc., you're really going to suffer if you don't use mirrors, and you will probably find keeping occupancy rates below 50% much more pleasant.

If you do your homework to understand how iSCSI, ZFS block sizes, and RAIDZ all interact, there are paths forward where you can probably make a workable iSCSI device that gets past 80% of the pool capacity, but it will be slowish as fragmentation increases over time. That's just the nature of a copy-on-write filesystem.
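To make that "homework" concrete, here is a rough sketch of the RAIDZ allocation rule discussed in the linked 2016 thread, under the usual assumptions (ashift=12, i.e. 4KiB sectors; RAIDZ pads every allocation up to a multiple of nparity+1 sectors so it never strands an unusably small gap). The function name is mine for illustration, not a ZFS API:

```python
import math

def raidz_alloc_sectors(volblocksize: int, ndisks: int,
                        nparity: int = 1, ashift: int = 12) -> int:
    """Rough estimate of sectors a RAIDZ vdev allocates for one block:
    data sectors, plus parity sectors for each stripe of up to
    (ndisks - nparity) data sectors, rounded up to a multiple of
    (nparity + 1)."""
    sector = 1 << ashift
    data = math.ceil(volblocksize / sector)
    parity = nparity * math.ceil(data / (ndisks - nparity))
    total = data + parity
    multiple = nparity + 1
    return math.ceil(total / multiple) * multiple

# 4-disk RAIDZ1: an 8K zvol block needs 2 data + 1 parity sectors,
# padded to 4 -- only 50% space efficiency instead of the "expected" 75%.
for vbs in (8192, 16384, 131072):
    alloc = raidz_alloc_sectors(vbs, ndisks=4)
    data = vbs // 4096
    print(f"volblocksize={vbs:>6}: {alloc} sectors, {data / alloc:.0%} efficient")
```

This is why small zvol block sizes on RAIDZ1 can quietly eat far more space than the nominal one-disk parity cost suggests, and why mirrors are usually recommended for block storage.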
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,970
I would disagree; RAIDZ is not particularly compatible with the poster's intent to use iSCSI.
The reference was to capacity, not how the storage was being used.
 

MrE10

Cadet
Joined
Feb 18, 2021
Messages
8
Anyway, just as an additional data point: how much overhead you ACTUALLY need to keep free depends on the use case, so perhaps the OP could mention what it is they are doing.

The intention is to use this drive for backups. I'm backing up whole VMs from an ESXi host with Veeam.
 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
For backup of VMs I prefer NFS, old-fashioned as that may be. Way less storage overhead; performance is not critical, and the data ends up on ZFS anyway and can be snapshotted, replicated, ... whatever. GhettoVCB to NFS.
 

MrE10

Cadet
Joined
Feb 18, 2021
Messages
8
So with NFS I would only need to maintain that 80% limit and would eliminate that iSCSI "allocation" issue?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
So with NFS I would only need to maintain that 80% limit and would eliminate that iSCSI "allocation" issue?
Yes. Performance is a different issue, on which I don't feel qualified to make a definite statement, but for backups: yes.
 