Out of curiosity, what is the model number of the hard drives?
[root@freenas /]# zpool list mypool_8x6TB
NAME           SIZE  ALLOC   FREE  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
mypool_8x6TB  43.5T  20.2T  23.3T         -     -   46%  1.00x  ONLINE  /mnt
[root@freenas /]#
[root@freenas /]# zfs list -t snapshot | fgrep -v freenas-boot/
NAME  USED  AVAIL  REFER  MOUNTPOINT
[root@freenas /]#
[root@freenas /]#
[root@freenas /]# zfs list mypool_8x6TB
NAME           USED  AVAIL  REFER  MOUNTPOINT
mypool_8x6TB  14.4T  12.6T  14.4T  /mnt/mypool_8x6TB
[root@freenas /]#
[root@freenas /]# zfs get all mypool_8x6TB | egrep 'T |M '
mypool_8x6TB  used               14.4T  -
mypool_8x6TB  available          12.6T  -
mypool_8x6TB  referenced         14.4T  -
mypool_8x6TB  quota              30T    local
mypool_8x6TB  refquota           27T    local
mypool_8x6TB  usedbydataset      14.4T  -
mypool_8x6TB  usedbychildren     42.1M  -
mypool_8x6TB  written            14.4T  -
mypool_8x6TB  logicalused        14.3T  -
mypool_8x6TB  logicalreferenced  14.3T  -
[root@freenas /]#
[root@freenas /]# zfs set refquota=none mypool_8x6TB
[root@freenas /]# zfs set quota=none mypool_8x6TB
[root@freenas /]# zfs get all mypool_8x6TB | egrep 'T |M '
mypool_8x6TB  used               14.4T  -
mypool_8x6TB  available          15.6T  -
mypool_8x6TB  referenced         14.4T  -
mypool_8x6TB  usedbydataset      14.4T  -
mypool_8x6TB  usedbychildren     42.1M  -
mypool_8x6TB  written            14.4T  -
mypool_8x6TB  logicalused        14.3T  -
mypool_8x6TB  logicalreferenced  14.3T  -
[root@freenas /]# zfs list mypool_8x6TB
NAME           USED  AVAIL  REFER  MOUNTPOINT
mypool_8x6TB  14.4T  15.6T  14.4T  /mnt/mypool_8x6TB
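For reference, the AVAIL before the change was simply capped by the refquota: 27T refquota minus 14.4T referenced leaves exactly the 12.6T shown above. A quick way to check whether any quota or reservation is limiting a pool's reported space (a sketch; substitute your own pool name):

zfs get -r quota,refquota,reservation,refreservation mypool_8x6TB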
Do you have a logical explanation for that? I don't see any calculation errors in the online calculator, and even when I work out the size by hand I get a similar figure. Even without the quotas I am missing over 2 TB.
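Rough arithmetic for the headline numbers, in case units are part of the confusion (my own back-of-envelope, assuming 8 x 6 TB drives and that ZFS reports TiB):

echo '8 * 6 * 10^12 / 1024^4' | bc -l    # raw: ~43.66 TiB, close to the 43.5T zpool SIZE
echo '6 * 6 * 10^12 / 1024^4' | bc -l    # 6 data drives: ~32.74 TiB theoretical maximum

USED + AVAIL after dropping the quotas is 14.4T + 15.6T = 30T, so roughly 2.7T of the theoretical 32.7T is unaccounted for, which is what the padding discussion below is about (plus the slop space ZFS reserves for itself).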
For vdevs (as this one) that aren't aligned there's also added overhead, and from what I've seen in other threads it seems to be very high in some cases.

What do you mean by "that aren't aligned"? Do you mean the alignment that matters for HDs with 4K sectors (which I do have)?
zdb -C mypoolname | grep ashift
ashift=12
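ashift=12 means 2^12 = 4096-byte allocation units, so the pool is already properly aligned for 4K-sector drives. If you want to double-check what the drives themselves report (a sketch, assuming FreeBSD device names like ada0):

diskinfo -v /dev/ada0 | grep -iE 'sectorsize|stripesize'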
A RAID-Z2 of 8 drives uses 6 drives for data (and 2 for parity), and 6 isn't a power of 2.

I thought that this is not important anymore, but I probably misunderstood it, sorry.
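For what it's worth, the back-of-envelope for that effect (my own numbers, assuming the defaults of ashift=12 and 128K recordsize):

awk 'BEGIN {
  data  = 128*1024/4096            # sectors per 128K record at 4K sectors -> 32
  rows  = int((data + 5)/6)        # ceil(32/6) stripe rows across 6 data disks -> 6
  total = data + rows*2            # plus 2 parity sectors per row -> 44
  pad   = int((total + 2)/3)*3     # RAIDZ2 rounds allocations up to a multiple of 3 -> 45
  ideal = data*8/6                 # cost of pure 6+2 parity with no padding -> 42.67
  printf "allocated=%d ideal=%.1f extra=%.1f%%\n", pad, ideal, (pad/ideal - 1)*100
}'

That prints about 5.5% extra, which lines up with the ~5% overhead figure quoted below; running the same arithmetic with 1M records comes out near 0.2%.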
If you use the large_blocks feature and use 1MB records, you don't need to adhere to the rule of always putting a certain number of drives in a VDEV to prevent significant loss of storage capacity.
This enables you to create an 8-drive RAIDZ2 where normally you would have to create either a RAIDZ2 VDEV that consists of 6 drives or 10 drives.
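If anyone wants to try that, the knobs would be something like this (a sketch; the dataset name is made up, and large_blocks is a pool feature that cannot be disabled once it becomes active):

zpool set feature@large_blocks=enabled mypool_8x6TB
zfs set recordsize=1M mypool_8x6TB/mydataset

Note that recordsize only affects blocks written after the change; existing files keep their old record size until they are rewritten.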
But in the end there's about 5% more overhead, and unless you change the vdev(s) config (which means destroying the pool) you'll have to live with it.

I have no problem with destroying the pool if that helps increase the capacity. Do you think a different configuration/vdev layout would be better?
According to this page, a "bad" number of drives should not lead to capacity loss anymore: