Hello all,
I've read a lot about this topic, but I've reached a point where I'm a bit stuck. Here are the details: I'm running FreeNAS on a typical PC, it works fine so far, and I'm happy with it. I use six 12 TB HDDs in a RAIDZ2, and the pool uses the default record size of 128 KB. My main computer is a Windows machine, so I set up a pool, created a dataset, and created an SMB share on it (nearly all guides say to do it this way).
I can access the SMB share from my Windows PC without errors, but I'm curious about the space used on the disk/dataset. When I look at a file on a local NTFS HDD on my Windows machine, the properties dialog says the file is (for example) 1 KB in size and uses 4 KB of space on disk. When I compare the same file on my SMB share, Windows says the file is 1 KB in size but uses 8 KB of space. A file of 58 KB uses 64.5 KB on the SMB share, and a file of 35,270,149,749 bytes uses 35,270,306,816 bytes.
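The rounding pattern above can be sketched roughly like this. This is a simplified model, assuming a 128 KB recordsize and 4 KB sectors (ashift=12); it ignores RAIDZ parity, compression, and metadata, so the real numbers reported over SMB will differ somewhat:

```python
# Rough sketch of ZFS logical allocation rounding (assumptions: recordsize=128K,
# 4K sectors / ashift=12; parity, compression and metadata are ignored).
RECORDSIZE = 128 * 1024
SECTOR = 4 * 1024

def approx_alloc(size_bytes: int) -> int:
    """Approximate space a file occupies after ZFS block rounding."""
    if size_bytes <= RECORDSIZE:
        # Small files get a single variable-sized block,
        # rounded up to a whole number of sectors.
        return -(-size_bytes // SECTOR) * SECTOR
    # Larger files are split into full recordsize blocks;
    # the tail still occupies a full record.
    records = -(-size_bytes // RECORDSIZE)
    return records * RECORDSIZE

print(approx_alloc(1 * 1024))    # 1 KB file -> 4096 bytes (one 4K sector)
print(approx_alloc(58 * 1024))   # 58 KB file -> rounded up to 4K multiples
print(approx_alloc(35_270_149_749))
```

Under this model the overhead per file is at most one record (128 KB) for large files, which is in the same ballpark as the numbers above, though the exact figures Windows shows also depend on how Samba reports allocation size.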
So my question is: am I wasting HDD space simply by using a dataset with a 128 KB record size as an SMB share? In the last example (the big file), the extra space used is even more than 128 KB. Would it make more sense to create a zvol and format it with NTFS? Should I set up a dataset with a record size smaller than 128 KB? Or is this just the price of ZFS having far better error correction than NTFS?
At the moment the SMB share uses 70% of the 42 TB dataset in total, and it reports about 800 GB more used space than the same data occupies on my NTFS disks.
Thanks very much for your help!
Kovu