With 8 disks in RAIDZ2:
"Capacity: 21.82 TiB" (before volume creation)
"Available: 20.0 TiB" (after volume creation)
That leaves 1.82 TiB of "lost" space. Better, but still too much.
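Much of a gap like this can be estimated from two known ZFS effects: RAIDZ parity-plus-padding overhead (each block is padded so its allocation is a multiple of parity+1 sectors) and the slop space the pool reserves for its own use. A back-of-the-envelope Python sketch, assuming ashift=12 (4 KiB sectors), the default 128 KiB recordsize, and a 1/32 slop reservation (these are assumptions about this particular pool, not confirmed values):

```python
def raidz_sectors(disks, parity, record_bytes, sector=4096):
    """Sectors ZFS allocates for one record on a RAIDZ vdev (ashift=12 assumed)."""
    data = -(-record_bytes // sector)           # data sectors per record (ceil div)
    rows = -(-data // (disks - parity))         # stripe rows needed
    total = data + rows * parity                # add parity sectors
    return total + (-total) % (parity + 1)     # pad to a multiple of parity+1

disks, parity, record = 8, 2, 128 * 1024
alloc = raidz_sectors(disks, parity, record)   # 45 sectors per 32-sector record
nominal = (disks - parity) / disks             # 6/8 of raw space, the "ideal" ratio
actual = (record // 4096) / alloc              # 32/45 of raw space in practice
padding_loss = 1 - actual / nominal            # ~5.2% of nominal capacity

capacity_tib = 21.82                           # the GUI "Capacity" figure above
padding_tib = capacity_tib * padding_loss      # ~1.13 TiB to parity/padding skew
slop_tib = capacity_tib / 32                   # ~0.68 TiB if slop is 1/32
print(round(padding_tib + slop_tib, 2))        # ~1.81 TiB, close to the observed gap
```

This is only an estimate of where the space goes, not the exact accounting ZFS performs, but it lands very near the 1.82 TiB observed.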
> Since there are many cases where you will not actually be able to store 20 TiB of data on the filer, is this just nerd rage at the fickleness of a nondeterministic storage system? Or do you actually have a point?

I don't understand what you mean. Of course I want to be able to store as much as possible if I need to. It should not differ by 1.6 TiB from the theoretical number. These are not small numbers; that's like half a drive. It would also be useful to post the differences between the various RAIDZ configurations for anyone who actually wants to help me figure out where my 2.18 TiB went. The posts are made to solve the problem, of course; it's certainly not just nerd rage. I have better things to do than that. What is the point of your posts if you don't try to answer my questions in a precise and helpful way? A waste of space and time?
Well if "everyone is obsessing over this" is everyone wrong then or is the system/documentation wrong? Maybe everyone needs a precise and good explanation. @Bidule0hm is really trying. Thank you for that. That's what we all should do in a sane forum instead of stating that everyone has bad luck while thinking.I'm kinda wondering why everyone is obsessing over this, since RAIDZ is not likely to allocate space so neatly as to be totally predictable. It is at best an intelligent guess and at worst so far off as to be a ridiculous number.
You get the same result using "zfs list" from the CLI.
> That's because the GUI is very very probably parsing the output of zfs list (that's also how they get the compression ratio value for example).

Yes, my conclusion too. What's your output from zfs list -p for your pool?
[root@freenas] ~# zfs list -p
NAME                                                            USED           AVAIL           REFER  MOUNTPOINT
freenas-boot                                               546845184    114913955328           31744  none
freenas-boot/ROOT                                          539249664    114913955328           25600  none
freenas-boot/ROOT/Initial-Install                               1024    114913955328       532214272  legacy
freenas-boot/ROOT/default                                  539223040    114913955328       532976128  legacy
freenas-boot/grub                                            7113728    114913955328         7113728  legacy
tank                                                   2918554604672  10293905648512          317696  /mnt/tank
tank/.system                                              2522580992  10293905648512       179227264  legacy
tank/.system/configs-6c245a2d045b4b77b1c9f77d48e8c7fb         336384  10293905648512          336384  legacy
tank/.system/cores                                          40637056  10293905648512        22182656  legacy
tank/.system/rrd-6c245a2d045b4b77b1c9f77d48e8c7fb          585177344  10293905648512        34339200  legacy
tank/.system/samba4                                         47691776  10293905648512         4316928  legacy
tank/.system/syslog-6c245a2d045b4b77b1c9f77d48e8c7fb        20538112  10293905648512         1532416  legacy
tank/ch***                                              192125936512  10293905648512    192048839168  /mnt/tank/ch***
tank/fi***                                              689418120320  10293905648512    689413569792  /mnt/tank/fi***
tank/fr***                                              225405069056  10293905648512    221892678144  /mnt/tank/fr***
tank/jails                                                7935569536  10293905648512          551296  /mnt/tank/jails
tank/jails/.warden-template-standard--x64                 3693169280  10293905648512      3579116416  /mnt/tank/jails/.warden-template-standard--x64
tank/jails/minidlna                                       4238391680  10293905648512      3762669952  /mnt/tank/jails/minidlna
tank/scripts                                                 1298816  10293905648512          364416  /mnt/tank/scripts
tank/se***                                             1800893788160  10293905648512   1800597919744  /mnt/tank/se***
[root@freenas] ~#
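Parsing this is straightforward because `zfs list -p` prints exact byte counts in whitespace-separated columns, which is very probably what the GUI does. A minimal sketch of such parsing (the function name is mine, and the sample reuses two rows from the output above):

```python
def parse_zfs_list(text):
    """Parse `zfs list -p` output into one dict per dataset, keyed by header."""
    lines = [l for l in text.strip().splitlines() if not l.startswith('[root@')]
    header = lines[0].split()                    # NAME USED AVAIL REFER MOUNTPOINT
    return [dict(zip(header, line.split())) for line in lines[1:]]

sample = """NAME USED AVAIL REFER MOUNTPOINT
tank 2918554604672 10293905648512 317696 /mnt/tank
tank/jails 7935569536 10293905648512 551296 /mnt/tank/jails"""

rows = parse_zfs_list(sample)
avail_tib = int(rows[0]['AVAIL']) / 2**40        # exact bytes -> TiB
print(round(avail_tib, 2))                       # 9.36, matching the GUI figure
```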
[root@freenas] ~# zfs list -o name,used,avail,compressratio,lrefer,refer,refcompressratio,usedbysnapshots,recsize
NAME                                                     USED  AVAIL  RATIO  LREFER  REFER  REFRATIO  USEDSNAP  RECSIZE
freenas-boot                                             522M   107G  2.02x   15.5K    31K     1.00x         0     128K
freenas-boot/ROOT                                        514M   107G  2.02x   12.5K    25K     1.00x         0     128K
freenas-boot/ROOT/Initial-Install                          1K   107G  1.00x   1008M   508M     2.01x         0     128K
freenas-boot/ROOT/default                                514M   107G  2.02x   1010M   508M     2.01x     5.96M     128K
freenas-boot/grub                                       6.78M   107G  1.64x   11.1M  6.78M     1.64x         0     128K
tank                                                    2.65T  9.36T  1.02x     18K   310K     1.00x     2.09M     128K
tank/.system                                            2.35G  9.36T  2.24x    179M   171M     1.05x     1.54G     128K
tank/.system/configs-6c245a2d045b4b77b1c9f77d48e8c7fb    328K  9.36T  1.00x   12.5K   328K     1.00x         0     128K
tank/.system/cores                                      38.8M  9.36T  7.86x   78.2M  21.2M     4.49x     17.6M     128K
tank/.system/rrd-6c245a2d045b4b77b1c9f77d48e8c7fb        558M  9.36T  9.14x    140M  32.8M     8.80x      525M     128K
tank/.system/samba4                                     45.5M  9.36T  4.84x   15.4M  4.12M     5.23x     41.4M     128K
tank/.system/syslog-6c245a2d045b4b77b1c9f77d48e8c7fb    19.6M  9.36T  4.97x   3.17M  1.46M     6.38x     18.1M     128K
tank/ch***                                               179G  9.36T  1.11x    217G   179G     1.11x     74.3M       1M
tank/fi***                                               642G  9.36T  1.00x    702G   642G     1.00x     4.34M       1M
tank/fr***                                               210G  9.36T  1.19x    269G   207G     1.19x     3.27G       1M
tank/jails                                              7.39G  9.36T  2.18x     23K   538K     1.02x     3.30M     128K
tank/jails/.warden-template-standard--x64               3.44G  9.36T  2.09x   1.78G  3.33G     2.11x      109M     128K
tank/jails/minidlna                                     3.95G  9.36T  2.24x   2.12G  3.50G     2.23x     3.65G     128K
tank/scripts                                            1.24M  9.36T  1.11x   27.5K   356K     1.30x      912K     128K
tank/se***                                              1.64T  9.36T  1.00x   1.79T  1.64T     1.00x      282M       1M
[root@freenas] ~#
> Because I think I already pointed out the issues that make up what I'm talking about, and I have a limit as to the amount of verbosity I am willing to engage in when typing on a cellphone.

Well, you have posted your opinion, and I have pointed out that I disagree with it. So what is the point of continuing to post things that I, the one asking for help, and others trying to find an answer to the question at hand, find pointless? It isn't helpful and just clutters the thread. BTW, this forum is full of "I'm not going to help, but I just want to make some noise", RTFM and, not to forget, "use the search functionality!" from a few people who think they are more special than others. The "use the search functionality" tip is the most ironic one, since when you try it (googling site:freenas.org <your question>) you often find countless other threads where the same people have again posted RTFM, "use the search functionality!", or opinions that don't help the OP. Some kind of recursive joke. A tip to those people: answer the question (again and again and again!). It takes much less time than the usual ranting, which has been tried here for some five years and has failed, and it actually helps the OP. And if you cannot answer the question, don't post. It's only natural in a forum that the same questions reappear. Deal with it. Otherwise I suggest we delete this forum entirely, because it becomes pointless. A forum exists for the sole purpose of helping people (and yes, helping them take shortcuts); otherwise it is useless.
> The very nature of the beast makes this an inexact figure for the typical ZFS system that is storing typical data. While it may be possible to devise a configuration and contrived data that actually allows storage of the maximum, it seems to me like worrying about the theoretical number is kinda pointless for any real application.

As I said, I and others don't find it pointless; hence the question and discussion. Inexact is one thing, but 2.18 TiB (!) is not just inexact; it's off the map. And if someone else with the same number of drives and the same RAIDZ3 configuration gets totally different numbers, I, and many others, want to find the root cause. If you don't, that's fine, you don't have to, but please stop cluttering the thread if you're not going to help.
Should the space availability number change in GUI after changing dataset's recordsize?
In this case, if I understood correctly, 'JDPool' is the main dataset of the pool
'JDPool' is the main dataset of the pool, for which the recordsize was set to 1M.
So I should save around 5.27% of the total lost space, right?
What happens if the recordsize of the 'jails' or 'dataset_1' dataset is left at the default 128k?
Apparently, they're all on the same vdev.
How did you change the recordsize on the main dataset? In general, it's best to leave the main dataset alone and do whatever you want only on the sub-datasets.
Well, the dataset tree in the GUI misled me a little, so I used the same GUI options that are available for sub-datasets. I thought all datasets were equal slices of a pool, so why not start with the one at the top?
The thing is, when I try to place the main dataset on your diagram, I see it somewhere between the datasets and the pool, or as an integrated part of the pool, or as a means to display the compression ratio and/or available space after formatting, or maybe as a container for system files and folders. Correct me if I'm missing something, please.
Does it make any sense to increase recordsize of 6-disk raidz2 configuration?
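For space efficiency, probably not much: a 6-disk RAIDZ2 has 4 data disks per stripe, a power of two, so 128 KiB records already divide evenly and there is essentially no padding loss to recover, unlike the 8-disk case. A quick sketch under the same assumed ashift=12 (4 KiB sector) allocation rules:

```python
def raidz_sectors(disks, parity, record_bytes, sector=4096):
    # sectors allocated per record: data + parity rows, padded to parity+1
    data = -(-record_bytes // sector)
    rows = -(-data // (disks - parity))
    total = data + rows * parity
    return total + (-total) % (parity + 1)

for disks in (6, 8):
    for rec in (128 * 1024, 1024 * 1024):
        alloc = raidz_sectors(disks, 2, rec)
        eff = (rec // 4096) / alloc          # fraction of allocation holding data
        print(disks, rec // 1024, round(eff, 4))
# 6-disk RAIDZ2: ~0.6667 at both 128K and 1M (already the nominal 4/6)
# 8-disk RAIDZ2: ~0.7111 at 128K vs ~0.7485 at 1M (the ~5% recovered)
```

So the big recordsize mainly pays off on RAIDZ widths whose data-disk count is not a power of two; on a 6-disk RAIDZ2 the motivation would be performance for large sequential files, not reclaimed space.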
> Yeah it's a dataset but the thing is it's created by FreeNAS

I don't think it's FreeNAS as such that creates the root dataset, but ZFS. I still wouldn't be inclined to do pool-wide stuff, but I don't think this is an example of "don't mess with GUI stuff at the CLI."