
Expected performance: RAIDZ, 6 x 3 TB WD drives, E-350

Status
Not open for further replies.

benamira

Explorer
Joined
Oct 12, 2011
Messages
61
Hi,
I have just built my new home NAS. The purpose of this system is to be a big repository for all my media collection: HD movies, ...

Specs:

ASUS E35MI-I mobo
8 GB DDR3 RAM
6 x WD 3 TB Caviar Green
Lian-Li PC-Q25

Network: Gigabit

FreeNAS 8.0.1-RELEASE amd64
I have configured a volume with all the drives as a RAIDZ1 ZFS filesystem.
Exporting through NFS and AFP to Mac clients.

What performance should I expect with this config and large media files (>2 GB)?
Any fine-tuning recommendations?
Any best practices for this purpose?

Thanks in advance,
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
If you put all the spindles in a single RAIDZ vdev, you can expect roughly the performance of a single spindle. That's how ZFS was designed. To explain a little more, there are two components to ZFS storage: the zpool and the vdev. The zpool contains one or more vdevs, and each vdev contains one or more spindles. The (random-I/O) performance of a RAIDZ vdev is limited to roughly that of a single spindle. Thus, if you want performance greater than a single spindle, you will need to put multiple vdevs in your zpool. With 6 spindles, you could create 2x 3-spindle RAIDZ vdevs for your pool, which would give you the performance of 2 spindles. You could also create 3x mirror vdevs for your pool, which would give you the performance of 3 spindles.
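On the command line (the FreeNAS GUI hides this step), the three layouts would look roughly like the sketch below. This is illustrative only; the pool name `tank` and device names `da0`-`da5` are assumptions, not your actual devices, and you would run only one of the three alternatives:

```shell
# Option A: one 6-disk RAIDZ vdev -- most capacity, ~1 spindle of random I/O
zpool create tank raidz da0 da1 da2 da3 da4 da5

# Option B: two 3-disk RAIDZ vdevs in one pool -- ~2 spindles of random I/O
zpool create tank raidz da0 da1 da2 raidz da3 da4 da5

# Option C: three 2-disk mirrors -- ~3 spindles of random I/O, least capacity
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
```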
 

benamira

Explorer
Joined
Oct 12, 2011
Messages
61
louisk said: "If you put all the spindles in a single RAIDZ volume, you can expect the performance of a single spindle. [...] If you have 6 spindles, you could create 2x 3 spindle RAIDZ vdevs for your pool. This would give you the performance of 2 spindles. You could also create 3x mirror vdevs for your pool. This would give you the performance of 3 spindles."

OK, I think I understand what you are saying, and it sounds very interesting. This is my first contact with ZFS, and I'm probably still thinking in terms of my old Iomega NAS architecture, which ZFS changes, I guess.

But I have some questions. My goal is to have a single volume mapped to all my Mac clients (probably over AFP; it looks better than NFS in my scenario):

1) If I do what you are saying, I guess I would end up with several shares? Or am I wrong?

2) If I choose another ZFS config as you suggest, will I lose more storage? Right now, with my config, I just created one big RAIDZ volume with all 6 drives, and I get approx. 14 TB usable (from 18 TB raw). If I used several pools, would I lose more usable storage?

3) With my config I am getting between 40-65 MB/s writing to the NAS (depending on the source). That is well below the limit of a single hard drive, am I correct? So I wouldn't see any advantage from several pools?

4) Last one: I have only built the volume with all the drives, with no ZFS datasets or ZFS volumes. Am I doing this properly? Or, after creating the big volume, do I have to create ZFS volumes and datasets below it?

Thanks for your help
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
Shares (CIFS/NFS/Appletalk) are not necessarily related to ZFS. ZFS will group all your storage into a single logical pool, and you can share it however you like.

You probably don't want multiple pools, but rather multiple vdevs; one pool can hold more than one vdev. For example, you create a pool called tank with a vdev that is 3 spindles in RAIDZ. You then go through the exact same process (using the pool name tank) and create another vdev that is 3 spindles in RAIDZ. You now have one pool (called tank) with two RAIDZ vdevs in it. Your storage would be 4x whatever size the spindles are (each RAIDZ vdev uses one spindle's worth of storage for parity, and you have 2x RAIDZ, so 4 spindles' worth of data).
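That process can be sketched at the shell as follows (again assuming the hypothetical pool name `tank` and devices `da0`-`da5`):

```shell
# Create the pool with the first 3-spindle RAIDZ vdev
zpool create tank raidz da0 da1 da2

# Grow the same pool with a second 3-spindle RAIDZ vdev
zpool add tank raidz da3 da4 da5

# Verify the layout: one pool, two raidz1 vdevs
zpool status tank
```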

Yes, once you build the volume, you then need to create the ZFS datasets. You can also control the dataset options here; for example, set minimum and maximum amounts of storage for a dataset. You can probably think of a dataset as analogous to a (CIFS/NFS/Appletalk) share.
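As a sketch, one dataset per planned share, with optional space limits; the dataset names and sizes here are just examples:

```shell
# One dataset per planned share
zfs create tank/movies
zfs create tank/music

# Maximum space for a dataset (quota) and guaranteed minimum (reservation)
zfs set quota=8T tank/movies
zfs set reservation=200G tank/music

# Review usage and limits
zfs list -o name,used,avail,quota,reservation
```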
 

benamira

Explorer
Joined
Oct 12, 2011
Messages
61
OK, it is almost clear now.
Conclusion: I have to think about it and decide between performance and capacity.

One more question to close my gap: what is the correct sequence to provision storage?

Until now, what I have done is: Create Volume -> Choose drives -> Choose ZFS/UFS -> Force 4096 bytes sector size -> Choose protection (RAIDZ1, RAIDZ2, ...).
And that's all (no ZFS pools, no ZFS datasets). Then I configure permissions and add an AFP share. It works.

So the complete sequence you suggest would be:

1) Create Volume
2) Create ZFS Pool
3) Create ZFS Dataset
??

Thx again!
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
When you did your volume creation, there was an implied zpool creation. The GUI doesn't really explain what is going on, so most people don't realize what is happening underneath.

You did the correct steps, assuming you need to use 4k blocks. After you have created the volume, I would create a dataset for each share. This will allow you to limit space should you wish; by default (no limits), any share can consume all your space. The limits are configurable at any time. The only restriction I can think of is that you can't set the limit to less than what is currently on the dataset (you couldn't, for example, have 40 GB on a dataset and set the limit to 30 GB).
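In ZFS terms, that limit is the `quota` property; a sketch with a hypothetical dataset name, where lowering the quota below current usage is rejected with an error along the lines of the comment below:

```shell
# Raise or lower the limit at any time
zfs set quota=50G tank/movies

# But if tank/movies already holds ~40G, this fails, roughly:
#   cannot set property for 'tank/movies': size is less than current used or reserved space
zfs set quota=30G tank/movies
```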

After you've created your data sets, configure your shares (CIFS/NFS/Appletalk) and things should work as expected.
 