Number of disks in vdev

Status
Not open for further replies.

jagter_freenas

Dabbler
Joined
Dec 13, 2013
Messages
13
Hi,

I am putting together a FreeNAS box with the following:
- ASRock C2750D4I motherboard
- 32GB ECC RAM
- 8 x 4TB Seagate Constellation ES.3 disks connected via SATA 6 Gb/s
- USB 2.0 thumb drive as boot disk
- 2 x 1Gb/s LAN

The box will be used primarily as storage for documents, media files, music and image/Photoshop files in an SMB environment. Most of the time there will be no more than 1-2 users connected, and the box will be idle more than 90% of the time.

Media files will usually be streamed from the NAS, while image and Photoshop files will be read and written as people edit them.

I plan on using RAID-Z2.

From the documentation, I have learned that the recommended number of disks for a RAID-Z2 vdev is 4, 6 or 10.

As I already have the disks, the chassis is limited to 8 disks, and the motherboard has 8 SATA 6 Gb/s ports, I need some advice.

What will the consequences be if I break this rule and simply create a single RAID-Z2 vdev with 8 disks?

Should I buy two more disks, mount them in the chassis somehow and add an extra SATA controller card to reach the magic number of 10 disks, or is it not worth the cost and effort?

Is the FreeNAS 9.1 USB thumb drive image already configured as a read-only image, or should I try to do this myself?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
For your use case you won't care about it. The performance impact mostly shows up when doing large numbers of small writes non-stop. In your case there might be a few milliseconds of performance lost on writes, but obviously nobody is going to complain about that.

Do a RAIDZ2 and be happy. Your choices for parts are very good and you should be fine with it.

The FreeNAS USB mounts as read-only on bootup. You don't have to do anything special. Just install it according to the manual and it'll do the rest.
 

Starpulkka

Contributor
Joined
Apr 9, 2013
Messages
179
There's a third consequence: with your 8-disk RAID-Z2 setup you'll have roughly 5% padding overhead, about 1.1TB of space (see http://www.opendevs.org/ritk/zfs-4k-aligned-space-overhead.html), but you still get about 21.8TB of usable space (and even more if you use compression), so you can ignore the overhead. (Of course you can't fill your space 100%, as ZFS performance degrades when the pool gets nearly full.)
Just good to know, in case someone comes to your home and tells you that you lose about 1TB with your current setup. So what? =)
Just remember to test your memory and HDDs before creating the RAID-Z2.
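For anyone curious where that ~5% comes from, the padding can be reproduced with a small model of how RAIDZ allocates a block (my own sketch based on the logic behind the spreadsheet linked above, assuming ashift=12, i.e. 4KiB sectors, and the default 128KiB blocks; `raidz_alloc_sectors` is a name I made up, not a ZFS function):

```python
import math

def raidz_alloc_sectors(data_sectors, ndisks, nparity):
    """Sectors allocated for one block on a RAIDZ vdev (model only).

    Each stripe holds up to (ndisks - nparity) data sectors plus
    nparity parity sectors; the total is then padded up to a
    multiple of (nparity + 1) so leftover gaps stay allocatable.
    """
    stripes = math.ceil(data_sectors / (ndisks - nparity))
    total = data_sectors + stripes * nparity
    total += -total % (nparity + 1)   # padding
    return total

# A 128 KiB block is 32 sectors of 4 KiB with ashift=12.
for ndisks in (6, 8, 10):
    alloc = raidz_alloc_sectors(32, ndisks, 2)
    ideal = 32 * ndisks / (ndisks - 2)   # parity-only cost, no padding
    print(f"{ndisks} disks: {alloc} sectors allocated, "
          f"{alloc / ideal - 1:.1%} padding overhead")
```

With these assumptions, 6 disks allocate exactly the parity-only ideal (0% padding), while 8 and 10 disks lose roughly 5% to padding, matching the figure above.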
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, but there's a catch...

Free space calculation is done with the assumption of a 128k block size.

That's not a static value in ZFS. The block size in FreeNAS defaults to 128k (the largest for our ZFS version). But that 128k setting means "up to" 128k, in powers of 2 starting at 4k (I think 4k...). So the waters get muddy immediately, because not all writes will be 128k.

Those calculated numbers assume every single write is 128k and you fill the drives. That's not realistic, and writing in smaller block sizes can make the padding bigger or smaller. So unless you can predict the write sizes for the pool, your guess is as good as anyone's as to how it will come out. You can reasonably assume that if you are copying very large files from another source, where the writes land together, the majority of writes will be 128k. But that is by no means certain.
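To put rough numbers on that, here is a toy model of per-block RAIDZ allocation (my own sketch, assuming an 8-disk RAIDZ2 with ashift=12, i.e. 4KiB sectors; `allocated_ratio` is a hypothetical name, not actual ZFS code):

```python
import math

def allocated_ratio(block_kib, ndisks=8, nparity=2, sector_kib=4):
    """On-disk / logical size ratio for one block on RAIDZ (model only)."""
    data = math.ceil(block_kib / sector_kib)          # data sectors
    stripes = math.ceil(data / (ndisks - nparity))    # stripes needed
    total = data + stripes * nparity                  # add parity
    total += -total % (nparity + 1)                   # add padding
    return total / data

for kib in (4, 8, 16, 32, 64, 128):
    print(f"{kib:>3} KiB block -> {allocated_ratio(kib):.2f}x on disk")
```

Under these assumptions a 4KiB write costs 3x its size on disk (two parity sectors for one data sector), while a full 128KiB block costs about 1.41x, which is why the mix of write sizes matters so much.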

Not to discredit the argument, but I think it's a moot argument, because you can't really say the overhead numbers are "taken off the top", so to speak. You don't immediately lose that much space just because you created the pool with the "wrong" number of disks. There may be some zdb command somewhere that estimates it based on the free space allocation of your pool and an assumed block size, but I'm not aware of it.
 

Starpulkka

Contributor
Joined
Apr 9, 2013
Messages
179
Good point, cyberjock, that's news to me. I only heard about the overhead stuff this month and assumed it was as simple as that. And I'm never going to be offended when someone has the patience to keep the information accurate so everyone else can learn from it too. So, as I said earlier: "you can ignore overhead."
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You want to see some serious pool wastage? Check out this ticket: https://bugs.freenas.org/issues/2383

You can put 100GB of data on a zvol and have it take anywhere from 103GB to 2.31TB of disk space on the same pool!

I think it has to do with how block sizes work, and it's a completely normal and expected result. But wowzers... 2.31TB just to store 100GB of data!
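For a feel of how that can happen, the same kind of back-of-the-envelope padding/parity model applies to zvols (my own sketch, assuming ashift=12 and an 8-disk RAIDZ2; the actual numbers in the ticket depend on the real pool layout):

```python
import math

def zvol_on_disk_gb(logical_gb, volblock_kib, ndisks=8, nparity=2,
                    sector_kib=4):
    """Rough on-disk size of a fully written zvol on RAIDZ (model only).

    Each volblocksize-sized block is rounded up to whole sectors,
    gets parity per stripe, and is padded to a multiple of
    (nparity + 1) sectors.
    """
    data = max(1, math.ceil(volblock_kib / sector_kib))
    stripes = math.ceil(data / (ndisks - nparity))
    total = data + stripes * nparity
    total += -total % (nparity + 1)
    return logical_gb * (total * sector_kib) / volblock_kib

for vb in (0.5, 4, 128):   # volblocksize in KiB (0.5 = 512 bytes)
    print(f"volblocksize {vb:>5} KiB: 100 GB logical -> "
          f"~{zvol_on_disk_gb(100, vb):.0f} GB on disk")
```

In this model a 512-byte volblocksize on 4KiB sectors balloons every block to three full sectors, landing in the same ballpark as the ticket's 2.31TB, while at 128KiB the multiplier drops to about 1.4x. It's only a sketch of the padding and parity math, not a reproduction of the ticket.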

The short answer is: that's why Oracle raised the maximum block size from 128k to 1MB in ZFS v32. Unfortunately, that will never be open source, so the open-source community will have to come up with its own version if we ever care.
 