Globalhawk
Dabbler
Hey guys,
First of all, I want to thank everyone here for the time and work you invest in solving problems and answering questions.
My motivation is to have full control over all my data; additionally, my data has to be consistent and redundant. I want to use FreeNAS to store sensitive and non-sensitive data. The system should be upgradable to a maximum of 11-14 physical hard drives (in this planned server).
So, to achieve the highest possible reliability, I want to put together a to-do list with you to verify the consistency of the whole data system. I'm reading this forum and will add points as I find them. I know that I'm not yet able to build a trustworthy ZFS system on my own, but I'm on a steep learning curve.
What do I expect from the system?
I think I'm fine with at least 12 TB of usable space in the beginning. In my opinion (and to reduce resilvering time), mirrored vdevs are the option I feel most comfortable with. I really want a reliable system; I never, ever want to worry about my data.
If I come to understand and trust the encryption technology FreeNAS offers, I want to use it (which is why I'm considering AES-NI support in my CPU decision). Compression should also be activated. My data will be home-user stuff such as streaming video, work data with design-related material (PSD, InDesign, photos), and possibly experimental things in the future like Plex.
My hardware:
Chassis
This chassis is well known, and I got a good deal on eBay.
SuperMicro CSE-933T-R760B
Mainboard
A good board for my expectations and needs, I think. I can flash the built-in LSI SAS controller to IT mode. I require the ability to administer everything remotely, so IPMI/iDRAC/iLO is a must. It has Intel NICs and plenty of SATA ports.
Supermicro X10SL7-F (MBD-X10SL7-F-O)
Processor
This CPU has no integrated graphics and should offer enough single-core power for SMB (the highest CPU in this line at a price that is reasonable for me):
Intel Xeon E3-1241v3 (8M Cache, 3.50 GHz)
Disk Drives
A price and capacity I'm comfortable with; also well tested and built for 24/7 operation.
6x WD Red 4TB (WD40EFRX)
RAM
Based on the RAM recommendations and modules known to work well with the chosen mainboard:
4x 8GB Samsung M391B1G73QH0-YK0 DDR3L
OS-Drives
I've searched for a while for a reliable boot setup. I don't want to spend a SATA port on a SATA DOM boot device, or should I? Several people here report good results with these USB flash drives. The OS will be mirrored onto both sticks.
2x SanDisk Cruzer Fit USB Flash Drive
Okay, that's my planned hardware. Now I have to think about the configuration, and after that about the testing/experimenting and verification process, to be sure that I really want to entrust all my data to this machine.
The very first step is a favor to myself: BACKUP. The second, even more important step is also for myself: verifying that this backup is actually recoverable and consistent.
My data has been backed up, and I'm personally confident it could survive an apocalypse without losing anything. I think I'm ready to check the hardware.
Logical thinking about my ZFS-Storage:
2-Way Mirror with Striping
As I mentioned, I want fully reliable storage. I'm okay with a storage efficiency of only 50% due to the mirroring. Additionally, I want some performance, so the journey will end in a zpool of striped mirrored vdevs. Please confirm my thinking about this config:
VDEV1 -> A/B = mirror
VDEV2 -> C/D = mirror
VDEV3 -> E/F = mirror
ZPOOL1 -> VDEV1, VDEV2, VDEV3 -> 12TB usable space (TB-to-TiB conversion left out)
With this configuration the data survives up to 3 failed HDDs, as long as the failures are in different vdevs. The whole pool is destroyed if both HDDs of one vdev fail. Right? The stripe goes across the three vdevs, so effectively across all six disks (each mirror writes to both of its disks).
It will not be possible to add another HDD to each vdev, so I cannot create a triple mirror after the initial setup. But it is possible to add another vdev to ZPOOL1 later, right?
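A sketch of how this layout would be built on the CLI (the pool name tank and devices da0-da5 are assumed placeholder names; on FreeNAS you would normally do this through the GUI volume manager), together with the capacity arithmetic:

```shell
# Sketch only -- 'tank' and da0..da5 are assumed names, adjust to your system.
#
#   zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
#   zpool add tank mirror da6 da7     # later: grow the pool by a whole vdev
#   zpool attach tank da0 da8         # later: turn a 2-way mirror into 3-way
#
# Back-of-envelope capacity (vendors sell TB = 10^12 bytes, ZFS reports TiB):
drives=6; mirror_width=2; drive_tb=4
usable_tb=$(( drives / mirror_width * drive_tb ))
usable_tib=$(awk "BEGIN { printf \"%.1f\", $usable_tb * 1e12 / 2^40 }")
echo "${usable_tb} TB raw = ${usable_tib} TiB (before ZFS overhead)"
```

Note that a later triple mirror is not actually ruled out: zpool attach adds a disk to an existing mirror vdev, while zpool add only grows the pool by whole new vdevs.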
RAID-Z2
The more I've read about striped 2-way mirrors, the unhappier I am with the fact that if both HDDs of one vdev fail, all the data is lost.
So I'm also thinking about a RAID-Z2 layout. Tolerating a maximum of 2 failed HDDs, no matter which ones, may be better. What should I consider for this decision? Please confirm whether this logic is sound:
VDEV1 -> A/B/C/D/E/F
ZPOOL1 -> VDEV1 -> 16TB usable space (TB-to-TiB conversion left out)
Going this way, the pool can withstand any 2 failed HDDs. With only one vdev I sacrifice just 2 HDDs, instead of creating 2 vdevs and "losing" 4 HDDs. The disadvantage of this concept is the increased resilvering time. But I need your knowledge at this point.
What are the most important differences between these two configurations? What am I missing here?
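To make the trade-off concrete, here is a rough side-by-side for 6x 4 TB drives; a sketch that ignores ZFS overhead and assumes disk failures are independent:

```shell
# Capacity and second-failure risk for the two candidate layouts:
awk 'BEGIN {
  # 3x 2-way mirrors: capacity = disks/2; a 2nd failure is fatal only if it
  # hits the already-degraded mirror (1 of the 5 surviving disks = 20%).
  printf "mirrors: %d TB usable, 2nd failure fatal in %.0f%% of cases\n", 6/2*4, 100/5
  # RAIDZ2: capacity = disks - 2 parity; any 2 simultaneous failures survive.
  printf "raidz2:  %d TB usable, 2nd failure fatal in 0%% of cases\n", (6-2)*4
}'
```

The other big difference is resilver behavior: a mirror resilver copies data from one surviving disk, while a RAIDZ2 resilver has to read from all remaining disks, which takes longer and stresses the whole pool.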
If I accidentally add a vdev (with only one HDD in it) to my zpool, I lose the redundancy completely. And since I can't remove a vdev from a zpool, I'd have to migrate all the data onto a new zpool, right?
Could you please confirm: in the case that I have to replace a failed HDD, the correct steps (best practice) are:
- Set the HDD offline inside FreeNAS
- Wait for the system to confirm that the disk has been marked offline
- Shut down the machine (just to be safe)
- Remove the failed HDD
- Insert the new HDD (at least the same size or bigger; ideally the SAME model, firmware, etc.)
- Start the machine
- Mark the new HDD as online / start the replacement (?)
- Wait for resilvering to finish
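A sketch of the CLI equivalent of the steps above (the pool name tank, the gptid label, and da3 are placeholders; on FreeNAS the GUI's Replace action issues the zpool replace for you), plus a rough resilver-time estimate under an assumed throughput:

```shell
# CLI sketch -- 'tank', the gptid label, and da3 are placeholder names:
#   zpool offline tank gptid/<failed-disk-label>       # take the failed disk out
#   shutdown -p now                                    # power off, swap the drive
#   zpool replace tank gptid/<failed-disk-label> da3   # after boot: attach new disk
#   zpool status tank                                  # resilver starts automatically
# Note: there is no separate "mark online" step for a replacement disk;
# resilvering begins as soon as 'zpool replace' runs.

# Rough resilver-time estimate for a completely full 4 TB mirror disk,
# assuming ~120 MB/s sustained throughput (real speed varies with load):
hours=$(awk 'BEGIN { printf "%.1f", 4e12 / 120e6 / 3600 }')
echo "~${hours} h to resilver a full 4 TB disk at 120 MB/s"
```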
- [DONE] - Update BIOS / Firmware of Mainboard / IPMI
- [DONE] - Configure IPMI
- [DONE] - Update Firmware of all hard drives
- [DONE] - Flash the LSI controller to IT mode
- [DONE] - Activate AHCI in BIOS for hot plug ability
- [DONE] - Activate power ON after power loss
- [DONE] - Deactivate instant shutdown after pressing the power button
- [DONE] - Start MemTest for at least 48 hours
- Check system temperatures, especially HDD temperatures
- [DONE] - SMART conveyance test (I don't know how long it takes)
- [DONE] - SMART long/extended test
- Burn-in phase (~1 week)
- [DONE] A few reboots of the machine
- [DONE] Individual sequential read and write tests:
- dd if=/dev/da${n} of=/dev/null bs=1048576 for a read test, and dd if=/dev/zero of=/dev/da${n} bs=1048576 for a write test
- Simultaneous sequential read and write tests:
- the same dd read and write commands, run on all disks in parallel
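The same dd pattern, demonstrated on a scratch file so nothing is destroyed (the write test against /dev/daN wipes the disk, so it must only run before the pool exists):

```shell
# Write then re-read 8 MiB through dd, using a temp file instead of a raw disk:
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1048576 count=8 2>/dev/null   # write test
dd if="$tmpfile" of=/dev/null bs=1048576 2>/dev/null           # read test
size=$(wc -c < "$tmpfile")
rm -f "$tmpfile"
echo "wrote and re-read $size bytes"                           # 8 MiB
# Simultaneous variant: one dd per disk in the background, then wait:
#   for n in 0 1 2 3 4 5; do dd if=/dev/da$n of=/dev/null bs=1048576 & done; wait
```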
- [DONE] Running jgreco's script from ftp://ftp.sol.net/incoming/solnet-array-test-v2.sh
- Running IOzone in a seek-heavy manner (with incompressible test data)
- [DONE] Check system temperatures, especially HDD temperatures
- Measure wattage / power consumption
- Schedule periodic long SMART tests (every 14 days)
- Schedule periodic short SMART tests (every 3rd day)
- Schedule scrubs (memo to myself: a scrub and a SMART long test must NEVER run at the same time; this can cause scrubs to never finish)
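Related to the scrub memo: a short test every 3rd day and a long test every 14th day land on the same day every lcm(3, 14) = 42 days, so their start times should be staggered. A quick way to check such schedule collisions:

```shell
# Two periodic schedules coincide every lcm of their periods -- here the
# short-test period (3 days) vs the long-test period (14 days):
lcm=$(awk 'BEGIN {
  a = 3; b = 14
  for (g = a; (a % g) || (b % g); g--) ;   # gcd by counting down from a
  print a * b / g
}')
echo "short and long SMART tests collide every $lcm days"
```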
- Schedule backups
- Schedule config database backups
- Add the machine to the monitoring server via SNMP
- Configure mail status messages
- Implement gnuplot scripts to visualize the collected IOzone data
At least, I won't be a statistic!
Best
Globalhawk
EDITED & ADDED
- Do I have to modify the drives' behavior so they wait longer before moving the heads into their park position and turning off unnecessary electronics?
- It seems that we aren't able to update the firmware of WD Red drives. I've talked to Western Digital's technical support in Germany and worldwide; there is no official option to update the firmware.
- If you use this mainboard with this kind of chassis, you need an 8-pin power extension cable for the mainboard.
Additionally, you need a front-panel motherboard cable due to incompatibilities, but there is an official solution for this case. The header pins are the same (compared to the mainboard of the other user I mentioned before). CBL-0084L is the official model number for the split cable. You can also use CBL-0068L, but it can be difficult to replace the original 16-pin flat cable with the CBL-0068L split cable because the original cable is routed underneath the chassis fans.
If, like me, you need some additional screws and blanking plates for your HDD slots, you can use MCP-410-00005-0N.