Getting the most out of a ZFS pool takes a little bit of math and some thought about how you want your system to work. There is a central trade-off between redundancy, performance, and capacity: you can choose any two. For ultimate performance and capacity, nothing beats a striped array; for redundancy and performance, mirror; and so on. RAID 5 and 6 strike a balance, and RAID 5+0 or 6+0 (striped arrays of RAID 5s or 6s) even more so. If you have a huge disk array running FreeNAS and ZFS, there are many opportunities to optimize your configuration, but doing it requires a trick.
First, an example of deciding on a configuration:
Let's assume that you have a 96-drive disk array (big, I know, but bear with me).
I can tell you now that you DO NOT want a 96-drive RAID-6, or even a 94-drive RAID-6 with 2 spares. Not only is that an unwieldy Z-pool, ZFS doesn't like it either: the documentation recommends fewer than 40 devices per pool, and fewer than 12 is preferred. As a side benefit, if you want to add devices later to expand capacity, you can, but only in chunks the same size as the pools the stripe already uses. In the same example, if you built your disk array out of 12-device JBODs (Just a Bunch Of Disks) and used 12-device pools, you could add capacity one JBOD at a time (a sketch of what that command might look like follows).
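For example, here is a minimal sketch of adding one more 12-disk JBOD to an existing pool named data as a single RAID-6 (raidz2) group. The da84 through da95 device names are hypothetical placeholders; use whatever names your system actually reports.
Code:
# Hypothetical device names; grow the stripe by one 12-disk raidz2 group
zpool add data \
    raidz2 da84 da85 da86 da87 da88 da89 da90 da91 da92 da93 da94 da95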
Moving on to the tradeoffs... I've made a spreadsheet that helps you calculate these numbers for an arbitrary configuration, but for the above example:
If a pool contains fewer than 5 disks, I'll assume RAID-5, except for 2 disks, which get mirrored. From 5 disks up to 12, I assume RAID-6. My design criterion is to have at least 1 hot spare.
In the table below, '#' is the number of disks per pool, 'Lost' is the capacity lost to parity and spares, and 'Read' and 'Write' are multipliers of a single disk's speed, assuming the disks themselves are the bottleneck.
Code:
#   Spares  Parity  Lost     Read  Write  RAID level
1   0        0.00    0.00%    96    96    Striping
2   0       48.00   50.00%    96     1    Mirroring
3   0       32.00   33.33%    64    64    RAID-5 (Z)
4   0       24.00   25.00%    72    72    RAID-5 (Z)
5   1       38.00   40.63%    57    57    RAID-6 (Z2)
6   0       32.00   33.33%    64    64    RAID-6 (Z2)
7   5       26.00   32.29%    65    65    RAID-6 (Z2)
8   0       24.00   25.00%    72    72    RAID-6 (Z2)
9   6       20.00   27.08%    70    70    RAID-6 (Z2)
10  6       18.00   25.00%    72    72    RAID-6 (Z2)
11  8       16.00   25.00%    72    72    RAID-6 (Z2)
12  0       16.00   16.67%    80    80    RAID-6 (Z2)
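The spreadsheet itself isn't attached, but the arithmetic behind each row is straightforward. Below is a minimal shell sketch of it, assuming one parity disk per pool for RAID-5 and two for RAID-6; the TOTAL, WIDTH, and PARITY_PER variables are my own names, not from the original spreadsheet.
Code:
#!/bin/sh
# Split TOTAL disks into as many WIDTH-disk pools as will fit;
# the remainder becomes hot spares.  PARITY_PER is 1 for RAID-5
# (raidz) and 2 for RAID-6 (raidz2).
TOTAL=96
WIDTH=5
PARITY_PER=2

POOLS=$((TOTAL / WIDTH))
SPARES=$((TOTAL - POOLS * WIDTH))
PARITY=$((POOLS * PARITY_PER))
DATA=$((POOLS * (WIDTH - PARITY_PER)))   # read/write multiplier

echo "pools=$POOLS spares=$SPARES parity=$PARITY read/write=${DATA}x"
awk -v lost=$((PARITY + SPARES)) -v total=$TOTAL \
    'BEGIN { printf "lost capacity = %.2f%%\n", 100 * lost / total }'
With WIDTH=5 and PARITY_PER=2 this reproduces the 5-disk row above: 19 pools, 1 spare, 38 parity disks, a 57x read/write multiplier, and roughly 40.6% lost capacity.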
The same numbers were also charted in graph form (the graph is not reproduced here).
I decided to use 5-disk pools because that was the only option with a single spare disk. I ran the math further, and the next configuration with exactly one spare doesn't show up until 19-disk pools, which is too wide for me. I'll admit that I used RAID-5 rather than RAID-6 because this is a development machine.
Now the question: "How do I make FreeNAS build a configuration like this?" It's not available in the web interface, but you can make it work. The trick is using the zpool command from the command line. First, you need a list of the disk device names. I don't remember exactly how I got that list in FreeNAS, but I'll update this when I remember, or someone can reply with the answer (one possible approach is sketched after the example below). Once you have the list, paste it into a text file and collect the device names for each pool onto a single line, one line per pool. Leave the spare devices on a separate line, like this:
zpool create data \
raidz c1t5000C50025F5D3B7d0 c1t5000C50025F6448Fd0 c1t5000C50025FCD6C7d0 c1t5000C50025FD2D3Fd0 c1t5000C50025FD5A0Fd0 \
raidz c1t5000C50025FD9D9Fd0 c1t5000C50025FD93E3d0 c1t5000C50025FD669Fd0 c1t5000C50025FDACFBd0 c1t5000C50025FDBC27d0 \
raidz c1t5000C50025FDC39Bd0 c1t5000C50025FDC393d0 c1t5000C50025FDDD93d0 c1t5000C50025FDE9FFd0 c1t5000C50025FDF6C7d0 \
raidz c1t5000C50025FDFE4Bd0 c1t5000C50025FE1BB3d0 c1t5000C50025FE2C93d0 c1t5000C50025FE3D6Bd0 c1t5000C50025FE012Fd0 \
raidz c1t5000C50025FE174Fd0 c1t5000C50025FE226Fd0 c1t5000C50025FE446Bd0 c1t5000C50025FE494Bd0 c1t5000C50025FE2867d0 \
raidz c1t5000C50025FE5503d0 c1t5000C50025FED877d0 c1t5000C50025FF90FBd0 c1t5000C50025FFAC3Fd0 c1t5000C50025FFAF83d0 \
raidz c1t5000C50025FFB7C7d0 c1t5000C50025FFB027d0 c1t5000C50025FFB99Fd0 c1t5000C50025FFB917d0 c1t5000C50025FFC9CFd0 \
raidz c1t5000C50025FFD1DBd0 c1t5000C50025FFD2AFd0 c1t5000C50025FFD79Bd0 c1t5000C50025FFD787d0 c1t5000C50025FFE13Bd0 \
raidz c1t5000C50025FFE88Bd0 c1t5000C50025FFF55Fd0 c1t5000C5002600AFE7d0 c1t5000C5002600B7A7d0 c1t5000C5002600B587d0 \
raidz c1t5000C5002600BADBd0 c1t5000C5002600BE13d0 c1t5000C5002600C1E7d0 c1t5000C5002600C8A3d0 c1t5000C5002600CB47d0 \
raidz c1t5000C5002600CF2Bd0 c1t5000C5002600DDF7d0 c1t5000C5002600E62Fd0 c1t5000C5002600E483d0 c1t5000C5002600EDABd0 \
raidz c1t5000C5002600F02Bd0 c1t5000C5002600F027d0 c1t5000C5002600F33Bd0 c1t5000C5002600F96Bd0 c1t5000C5002600FA27d0 \
raidz c1t5000C5002601A47Bd0 c1t5000C5002601A237d0 c1t5000C5002601D96Fd0 c1t5000C5002601D197d0 c1t5000C5002601DF77d0 \
raidz c1t5000C5002602E77Fd0 c1t5000C5002604EF4Bd0 c1t5000C5002604F19Fd0 c1t5000C5002604F313d0 c1t5000C50026014DA7d0 \
raidz c1t5000C50026016A87d0 c1t5000C50026016AF7d0 c1t5000C50026016BEBd0 c1t5000C50026016C03d0 c1t5000C50026016CD3d0 \
raidz c1t5000C50026017A5Bd0 c1t5000C50026017CFBd0 c1t5000C50026018D6Bd0 c1t5000C50026018D97d0 c1t5000C50026019D9Bd0 \
raidz c1t5000C50026033BE7d0 c1t5000C50026050EBFd0 c1t5000C500260103A7d0 c1t5000C500260112FBd0 c1t5000C500260113DFd0 \
raidz c1t5000C500260135CBd0 c1t5000C500260315B3d0 c1t5000C5002601043Bd0 c1t5000C5002603314Fd0 c1t5000C5002603335Fd0 \
raidz c1t5000C50026033293d0 c1t5000C50026013853d0 c1t5000C50026017323d0 c1t5000C50026017623d0 c1t5000C50026052597d0 \
spare c1t5000C50026016737d0
Add "raidz" before each line, '\<CR>', where <CR> is a actual carriage return, at the end of each line, and "zpool create data \<CR>" at the top of the document. Finally, add "spare" to the line with your spare disks.
Copy and paste the whole thing into the command line as root. This instructs ZFS to create your Z-pool as a stripe across many RAID-Z groups, with the leftover disk as a hot spare.
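If you want to double-check the layout at this point (an optional sanity check, not part of the original steps), the standard zpool commands will show the new pool and every raidz group in it:
Code:
zpool status data   # shows each raidz group and the spare
zpool list data     # shows the pool's total size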
Now, the trick: we need to "export" the pool, which essentially takes it out of the OS's view. This may seem counterintuitive, but do it anyway.
"zpool export data"
Now, if you run "zpool status" you shouldn't see any Z-pools.
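For one last check from the shell before switching to the web interface (again optional, and not part of the original recipe), running zpool import with no arguments lists exported pools that are available for import; data should appear there.
Code:
zpool import    # with no arguments, lists pools available for import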
In the web interface, go to Storage->Volumes->View all Volumes
Click on the "Auto import all volumes" button.
In a few minutes your carefully crafted ZFS pool should appear!
Idea credit: http://blogs.oracle.com/roch/entry/when_to_and_not_to