So, I'm planning to build a NAS using FreeNAS; an obvious enough choice.
It'll be my vault for all my files, my hypervisor storage via iSCSI, recordings from my home surveillance cameras, you name it.
So far I've come up with a general build.
Chassis: Antec 900. Reason: small, high airflow, nine 5.25" bays.
Three Supermicro CSE-M35T HDD cages; these will let me fit 15 drives with easy access.
Motherboard/CPU: something along the lines of a Xeon/Opteron. ECC memory support is a must, 32-64 GB max memory for future upgrades, and low-power enough to run 24/7.
I'm reading through the guides and just have a few questions:
http://forums.freenas.org/threads/confused-about-that-lsi-card-join-the-crowd.11901/ <~~ "This is the BIOS probe of an M1015 in IT mode with 8 drives attached via a single SFF8087 and an LSI SAS expander" - Does that mean I can use a SAS HBA card connected to a SAS expander and then attach SATA drives to it? Isn't that a bad idea, given that SATA is one device per channel while SAS can address multiple devices per link (32 in the old days?)?
http://doc.freenas.org/index.php/Hardware_Recommendations#RAID_Overview <~~ No more than 12 drives per array is recommended, so I suppose that in my configuration, 2 arrays of 7 drives each would be good? Is that one reasonable way to fill up the bays?
cyberjock's slideshow mentions that a vdev can't be modified and can't be removed from a zpool once added. I can understand that a vdev can't be modified, but the zpool is a logical unit, so removing a vdev from it shouldn't be a problem, no? Perhaps it's just not implemented yet, but planned for the future? I've seen someone mention exporting the zpool, destroying it, and re-importing it to remove a vdev; is that true? Is there any way to control which vdev stores the zpool's data, or is it all automatic? I'm guessing each vdev appears as a single drive to the zpool, which then applies yet another layer of RAID on top of them?
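For anyone wanting to poke at this behavior without risking real drives, file-backed vdevs are a cheap way to experiment. A hypothetical session (the pool name "tank" and the file paths are just my own placeholders; this needs ZFS installed and root):

```shell
# Make eight 1 GB sparse files to act as fake disks (hypothetical paths).
truncate -s 1G /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4 /tmp/d5 /tmp/d6 /tmp/d7 /tmp/d8

# Create a pool with one raidz2 vdev, then grow it by adding a second vdev.
zpool create tank raidz2 /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
zpool add tank raidz2 /tmp/d5 /tmp/d6 /tmp/d7 /tmp/d8

# Adding a vdev works; removing a raidz top-level vdev does not:
zpool remove tank raidz2-1   # fails: raidz vdevs can't be removed from a pool

# Clean up the experiment.
zpool destroy tank
rm /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4 /tmp/d5 /tmp/d6 /tmp/d7 /tmp/d8
```

This matches what the slideshow says: once data can land on a vdev, ZFS has no way to migrate it off, so the vdev is permanent for the life of the pool.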
And my greatest concern: if I don't want to use RAIDZ, despite its popularity and benefits, what other ways can I use FreeNAS with the drives aside from the standard RAID levels? Something more flexible for removing/adding/expanding the array, like what's found in Dell PERC RAID controllers or Drobo's BeyondRAID? The reason I ask is that I have several spare drives of mixed sizes (1.5 TB and 3 TB) that I'd like to put in the NAS. Once I get it up and running, the plan is to transfer all 7.5 TB of data from my desktop to it, then pull the 3 TB drives out of the desktop, shove them into the NAS, and expand the array. Certainly I could borrow a drive or two from someone else, copy the data over, and then put everything in to make it pretty from the start, but (1) that's messy, and (2) what if I need to do something similar in the future? RAIDZ1 is pretty much out of the question, and having 2 or 3 RAIDZ2 vdevs means I'm wasting 2 to 4 more drives on parity when what I really wanted was 2-drive fault tolerance, or perhaps even RAIDZ3 if the whole thing is one single array.
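To put numbers on that parity trade-off, here's my own back-of-the-envelope sketch (assuming fourteen 3 TB drives in the data bays, and ignoring ZFS metadata, slop space, and TB-vs-TiB differences, so real numbers will be lower):

```python
# Rough usable-capacity comparison for a few vdev layouts.
# Assumes fourteen 3 TB drives; ignores ZFS metadata/slop and
# TB-vs-TiB differences, so these are optimistic ceilings.

DRIVE_TB = 3

def usable_tb(vdevs, drive_tb=DRIVE_TB):
    """vdevs: list of (drives_in_vdev, parity_drives) tuples."""
    return sum((n - p) * drive_tb for n, p in vdevs)

layouts = {
    "1 x RAIDZ2, 14 drives": [(14, 2)],
    "2 x RAIDZ2, 7 drives each": [(7, 2), (7, 2)],
    "1 x RAIDZ3, 14 drives": [(14, 3)],
}

for name, vdevs in layouts.items():
    print(f"{name}: {usable_tb(vdevs)} TB usable")
# 1 x RAIDZ2, 14 drives: 36 TB usable
# 2 x RAIDZ2, 7 drives each: 30 TB usable
# 1 x RAIDZ3, 14 drives: 33 TB usable
```

So splitting into two RAIDZ2 vdevs costs 2 drives (6 TB) versus a single RAIDZ2, while a single RAIDZ3 only costs 1 extra drive and tolerates a third failure; that's the trade I'm weighing.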
Sorry if this whole post is a mess. While I was typing it up, some questions I had got answered and some I forgot. Articulating my thoughts isn't my strongest suit.
Thanks.