So, the point is not this, but whether it is possible to expand the storage area.
If I install a "real" storage space of about 4.5 TB today, I can't plan on destroying it next year just to add another one or two disks.
I'll only address the part of this that it seems like you don't understand.
There are two components you need to wrap your head around when thinking about RAID:
- vdev: this is a virtual device made up of one or more actual disks. These can be single disks, mirrors (including things like 3-way mirrors), or parity RAID configurations like RAIDZ2. Once a parity RAID vdev has been created, it cannot be expanded.
- Zpool: this is a stripe of vdevs. The Zpool doesn't care about the nature of the underlying vdev, and it will let you stripe across a RAIDZ3 vdev, and a mirrored vdev, and a single drive vdev. However, as this is a stripe, if any vdev fails the pool is lost.
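As a sketch of that (the pool name "tank" and the ada* device names are invented, and these commands are destructive to the disks they're given):

```shell
# One pool striped across two dissimilar vdevs: a RAIDZ2 vdev and a mirror.
# -f is required because ZFS warns about mixing replication levels.
zpool create -f tank raidz2 ada0 ada1 ada2 ada3 mirror ada4 ada5

# Shows both vdevs sitting under the same pool.
zpool status tank
```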
If you create a RAIDZ(whatever) vdev on FreeNAS, you cannot add additional drives to it later to expand it, as you can with higher-end RAID controllers. A 5-drive RAIDZ3 vdev will always be a 5-drive RAIDZ3 vdev. If you want to add more drives, you need to create another vdev and add it to the pool, which then stripes across both vdevs (and picks up some performance gains in the process).
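That workflow looks roughly like this (pool and device names are made up; running this wipes those disks):

```shell
# A 5-drive RAIDZ3 vdev -- this vdev can never be widened.
zpool create tank raidz3 da0 da1 da2 da3 da4

# Later, the only way to grow the pool is to add a second vdev;
# the pool then stripes across both RAIDZ3 vdevs.
zpool add tank raidz3 da5 da6 da7 da8 da9
```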
So, what you can do instead is create 2 mirrored vdevs and add them to a pool so they're striped together; when you need more capacity later, you just add more mirrored vdevs, two drives at a time. Again, though, this is different from what you're used to with RAID cards:
- The Zpool stripes across all the vdevs in the pool, but it does so proportionally based on their available capacity.
- The Zpool's data is not evenly distributed across the entire pool when the pool is expanded. If you've got a Zpool made up of one mirrored vdev at 90% capacity and you add a new vdev to the pool that's otherwise identical, now you have a pool consisting of one vdev @ 90% capacity, and one at 0% capacity. Writes will favor the new vdev at 9:1 or so, and you won't see the doubling of read/write performance that you would otherwise expect in a stripe.
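You can watch this imbalance yourself (pool name assumed; `zpool iostat -v` reports allocation per vdev):

```shell
# Per-vdev allocated/free space -- after expanding a nearly-full pool,
# the old vdev will show ~90% allocated and the new one ~0%. ZFS only
# balances new writes; it does not rebalance data already on the pool.
zpool iostat -v tank
```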
So there's a lot to get your head around -- even if you've been doing this for a while, it isn't what you're used to.
If you want to add drives over time, ignore RAIDZ1 (it's bad form to use it with big drives -- it can cause data loss during a resilver, just as RAID5 does in the 1TB-drive world), and choose one of the following:
- Add mirrors, and grow the thing using mirrors. If initial performance is fine you won't lose anything, but if you need additional IOPS in addition to additional capacity you'll need to move the contents off the Zpool and then back onto it.
- Start with a RAIDZ2 vdev of an appropriate number of drives (<= 8, but search is your friend), and when it's time to bump capacity add another RAIDZ2 vdev. Again, you won't see performance gains unless you migrate the data off and back on again.
- Build what you want now, with the understanding that you'll need to destroy the Zpool and create a larger one when you need to expand it. Plan to do this on a weekend when you won't affect the business.
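The first option is the simplest to grow over time. A sketch, again with invented pool and device names (destructive to those disks):

```shell
# Start with two mirror vdevs, striped together by the pool.
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# A year later, grow the pool two drives at a time.
zpool add tank mirror ada4 ada5
```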
Something else you probably don't understand: RAID6 has read performance equal to (the number of drives in the array minus 2) * the performance of an individual drive. RAIDZ2 (the ZFS alternative) has performance roughly equal to the speed of one drive. So if you build a 100-drive RAIDZ2 vdev (not recommended), it will be about as fast as a single drive. ZFS does this to guarantee data integrity by protecting against certain kinds of faults, but if you're not expecting it, you'll look like an idiot when you don't get the performance you planned for.
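To put toy numbers on that (6 drives at 150 MB/s each -- made-up figures, not benchmarks):

```shell
# Hypothetical drive count and per-drive throughput.
drives=6
per_drive=150

# RAID6: streaming reads scale with the data drives (n - 2).
raid6_read=$(( (drives - 2) * per_drive ))

# RAIDZ2: the whole vdev performs roughly like one drive, however wide it is.
raidz2_read=$per_drive

echo "RAID6 ~${raid6_read} MB/s, RAIDZ2 vdev ~${raidz2_read} MB/s"
```

Same six drives, four times the read throughput on the RAID6 side -- which is why people coming from hardware RAID get surprised.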
As always, I'm a n00b here, but I hope this helped.