HeloJunkie
Hello Everyone -
First let me say that this forum is a fantastic wealth of information! I have spent a month or so reading hundreds of posts.
I have been running FreeNAS for a while now, but on pretty small systems, mostly Dell 2950s with four to six drives. Basically I have been tinkering with it, watching and reading the forums, and trying new things (iSCSI). I like it a lot and it has been very solid, so I decided to take the next step and build an actual production system.
That opened a whole can of worms that I am hoping I can get some help with before moving forward. Most of my (perceived) issues revolve around conflicting information I have read about FreeNAS and how to set it up and configure it. Because I eventually want to use FreeNAS in a production environment, I would like a good, clear understanding of a couple of key points.
I have several questions, but let me start by first sharing my current hardware configuration:
Supermicro Superserver 6028R-TRT
1 x X10DRi-T Supermicro Motherboard
1 x Intel Xeon E5-2650 V3 LGA2011-3 Haswell 10 Core 2.3GHz 25MB 5.0GT/s
4 x 16GB PC4-17000 DDR4 2133MHz Registered ECC Dual-Ranked 1.2V Memory
9 x 4TB Western Digital WD40EFRX Red NAS SATA Hard Drives
Dual 740 Watt Platinum Power Supplies
Dual APC 1500 UPSs (one for each power supply)
8GB USB Thumb Drive for booting
The X10DRi-T motherboard has two Intel X540 based 10Gb Ethernet adaptors along with an Intel C612 Express chipset and 10 SATA3 ports.
The system will be used for several things:
1) Time Machine for five laptops and two desktops
2) Several iSCSI connections to Windows Server 2012 machines for a software application that will not permit a share
3) CIFS and AFP shares for ten or so people (NFS is an option instead of CIFS if the performance gain justifies it)
4) Some transcoding via Plex for classroom training videos (four streams max)
5) Backup target for another FreeNAS system which is offsite (2.5TB, preseeded)
Initially, after doing a bunch of reading, I was going to install 8 x 4TB drives as striped mirrors (RAID10), but after reading more and more about RAIDZ3, I decided that would be the better way to go for space and performance. None of my intended uses really seems to need RAID10-level performance, and RAIDZ3 appeared to give me the most space for what I was trying to do. Of course, this was based on the reading that I did: I understood the optimum to be an even number of data drives plus parity; in my case, six data drives and three parity drives.
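For what it is worth, here is the back-of-the-envelope math I did comparing the two layouts (raw space only; ZFS overhead and the TB-vs-TiB difference are ignored):

```python
DISK_TB = 4  # WD40EFRX drives

# Striped mirrors ("RAID10"): half of the 8 drives hold redundant copies
mirror_usable_tb = (8 // 2) * DISK_TB

# 9-drive RAIDZ3: three drives' worth of capacity goes to parity
raidz3_usable_tb = (9 - 3) * DISK_TB

print(mirror_usable_tb, raidz3_usable_tb)  # 16 24
```

So on paper RAIDZ3 gives me 8TB more usable space than the mirror layout, which is what pushed me toward it.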
So with the RAIDZ3 decision firmly behind me, I ordered a ninth 4TB drive, installed it in the server, and was promptly told that a 9-drive RAIDZ3 configuration is sub-optimal. So I went back to my bookmarks and read and re-read the following:
Solaris Internals RAIDz Recommendations
RAIDZ Configuration Requirements and Recommendations
A RAIDZ configuration with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P device(s) failing before data integrity is compromised.
- Start a single-parity RAIDZ (raidz) configuration at 3 disks (2+1)
- Start a double-parity RAIDZ (raidz2) configuration at 6 disks (4+2)
- Start a triple-parity RAIDZ (raidz3) configuration at 9 disks (6+3)
- (N+P) with P = 1 (raidz), 2 (raidz2), or 3 (raidz3) and N equals 2, 4, or 6
- The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups.
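The capacity formula quoted above is easy enough to sanity-check in a few lines of Python (a rough estimate only: it ignores metadata, padding, and the TB-vs-TiB difference, so the usable space ZFS actually reports will be lower):

```python
def raidz_capacity_tb(n_disks, parity, disk_tb):
    """Approximate usable space of a RAIDZ vdev: (N - P) * X."""
    if parity not in (1, 2, 3):
        raise ValueError("RAIDZ parity level must be 1, 2, or 3")
    if n_disks <= parity:
        raise ValueError("need more disks than parity devices")
    return (n_disks - parity) * disk_tb

print(raidz_capacity_tb(9, 3, 4))  # 24 TB for my planned 9-drive RAIDZ3
```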
Then I read somewhere that compression makes this a non-issue, since with compression enabled the block sizes FreeNAS writes vary anyway, so the exact drive count matters less. What I was never able to find was a definitive explanation of what would happen if I chose to move forward with my sub-optimal configuration. At that point I decided I needed to ask questions and get answers before putting something into production that I did not fully understand, which is why I am here now.
So that is really my first question or series of questions:
What happens if I go forward with a 9-drive RAIDZ3 configuration? Do I risk losing data, losing space, losing performance, or all three? Will this even be noticeable to me and my staff with so little traffic going to this NAS, or is it considered a hard line in the sand you do not ever cross?
What is the "actual" optimal number of hard drives, and where is the definitive FreeNAS guide on this topic? Do I follow the Solaris post, or the latest FreeNAS manual, which says:
When determining how many disks to use in a RAIDZ, the following configurations provide optimal
performance. Array sizes beyond 12 disks are not recommended.
• Start a RAIDZ1 at 3, 5, or 9 disks.
• Start a RAIDZ2 at 4, 6, or 10 disks.
• Start a RAIDZ3 at 5, 7, or 11 disks.
The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple
groups.
OK, so here I am with two (at least to me) conflicting drive recommendations. I am still left wondering: will my 9-drive RAIDZ3 configuration work, why is it sub-optimal, and what actual performance hit will I take by using it instead of, say, a RAIDZ2 with my new ninth drive as a spare?
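For what it is worth, the most common explanation I have seen for the "optimal width" advice (not an official FreeNAS statement, just my reading) is that a full 128KiB record should split evenly, in power-of-two chunks, across the data disks; with 9 drives in RAIDZ3 there are 6 data disks and it does not. A rough sketch:

```python
RECORDSIZE = 128 * 1024  # default ZFS recordsize, in bytes

def share_per_data_disk(total_disks, parity):
    """Bytes of one full record that land on each data disk."""
    return RECORDSIZE / (total_disks - parity)

def is_power_of_two(n):
    return n > 0 and n == int(n) and int(n) & (int(n) - 1) == 0

# 11-drive RAIDZ3 -> 8 data disks: each gets exactly 16KiB (power of two)
# 9-drive RAIDZ3  -> 6 data disks: 128KiB does not divide evenly
for disks in (9, 11):
    share = share_per_data_disk(disks, 3)
    print(disks, share, is_power_of_two(share))
```

If that explanation is right, it would also square with the compression argument: once compression is on, block sizes vary anyway, so the even-split rule loses most of its meaning.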
My research into the drive question then led me to the discussion of on-board SATA controllers vs. something like the IBM ServeRAID M1015 and an expander (like the Intel RES2SV240). My onboard controller is the Intel C612, which seems to be a good chipset from what I have read, but I cannot find enough information to make a good judgment call about on-board vs. add-in controllers. I have put enough energy and effort into this project that a few hundred more dollars for a dedicated SATA card is worth it if it will, in fact, perform better than my on-board controller.
To recap: my first questions concern the specific number of drives for RAIDZ3. Why do the Solaris folks and the FreeNAS folks differ on this number, and which one is correct?
What sort of penalties am I facing if I do not use the "optimal" drive configuration for RAIDZ3? Is it a 5% performance hit and a 7.265% hard drive space hit, or...?
My other question concerns the on-board SATA controller (Intel C612 on the Supermicro X10DRi-T) vs. the IBM ServeRAID M1015 with an expander. Which will provide the best overall performance?
Thank You