This build project is intentionally aiming for a pretty top-of-the-line (over-built?) FreeNAS system. I have a fairly large chunk of budget available before the end of the year, so practicality and cost savings are not the primary objectives. But it needs to perform well and function properly as a mission-critical business system, and be set up to handle growth over the next 2-5 years.
This NAS is intended to serve both a small multi-person home office and home media use.
For the SOHO application, the NAS will store shared business files (currently 500 GB), and be the first-line backup repository (currently about 6 TB in image backups).
For the home media application, the NAS will store pro-quality photos (currently 2 TB), store and stream a moderate number of DVDs and Blu-rays (currently 2.5 TB), and hold a large lossless music library (currently 2 TB). I may end up feeding up to 6 media centers in a new house, probably with Plex.
I'm currently storing 13.3 TB on a Synology DS1513+ with 5 x 6 TB WD Reds in RAID 6, with 3.1 TB free. I anticipate storage needs increasing to about 20-25 TB over the next few years. Given the recommendation to keep ZFS pools at around 50% utilization, I'm targeting around 40 TB of usable space to start (rough math in the sketch below).
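As a quick sanity check on that target (the growth numbers are my estimates, not measurements), here's the sizing math spelled out:

```python
# Back-of-the-envelope sizing for the new pool.
# Growth estimates are my own guesses, not measurements.
current_tb = 13.3          # data on the Synology today
projected_tb_low = 20.0    # low end of my 2-5 year estimate
projected_tb_high = 25.0   # high end of my 2-5 year estimate
target_fill = 0.50         # rule of thumb: keep the pool around 50% full

print(f"Usable needed (low):  {projected_tb_low / target_fill:.0f} TB")   # ~40 TB
print(f"Usable needed (high): {projected_tb_high / target_fill:.0f} TB")  # ~50 TB
```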
Just FYI, I'm a professional software engineer and I've been building systems since the 80s. I learned about NAS by working my way up through Synology models, and now I want to get into an open, more stable ZFS system.
I've done lots of reading and research on the net and forums (I can't thank everyone enough for the postings), so hopefully this build list makes sense and is pretty close. But I really value everyone's experience and advice here, so please let me know where I'm off base. Specific questions are in bold italics.
Here we go:
Chassis/Enclosure: Caselabs Magnum TX10 ($1,354)
- I know this seems like a lot of enclosure, but it's to accommodate expansion in various ways. I want room for at least 22 drives, preferably up to 33 or 44 (11-disk RAIDZ3 vdevs). With this case I can add disks and/or make it a dual system. I want extreme airflow. I have a particular space this case fits into nicely. The Supermicro rack enclosures are popular, but I'm not convinced about their ventilation, and I don't really have the right space for a rack.
- The calculations from the "Proper Power Supply Sizing Guidance" thread come to ~1000W (see the power-budget sketch after this parts list). Is 1200W enough for everything I've got here (with up to 22 drives)?
- The Caselabs case has lots of mounting locations for fans. All these should create quite a vortex in the server room. Are these the right fans to connect to the mainboard fan headers?
- I wanted a board that supports a Xeon E5 CPU, 128+ GB of RAM, and 5+ PCIe slots.
- I think the extra cores of the E5-1650 (6 vs. 4) should help with simultaneous video transcoding, but I could go with an E5-1620 and save $300. I'm not shooting for subtle with this build.
- This thing just looks too cool, so why not?
- I wanted to use a 4-module set, and folks here always advise investing in a lot of RAM. If I get up over 40 TB of disk, that should call for over 48 GB by the usual rule of thumb (see the RAM sketch after this parts list). I went for overkill on this so ZFS has lots to work with. Is 128 GB too extreme? I could start with 4 x 16 GB.
- These two ports are in addition to the 2 on the mainboard, for 4 total. I will be running link aggregation, since all the laptops back up on Sunday evening and it really helps on the Synology. I'm limited to GbE by the overall network infrastructure, so there's no point going 10GbE. The reading says to use Intel NICs; are these specific cards recommended, or some other part number? Will this NIC work in conjunction with the built-in NICs on the mainboard?
- I'll mirror the boot drive, although based on some postings maybe that's not as valuable as it sounds? I know many folks recommend USB drives or SATA DOMs, but these SSDs are very good and cheap. Yes, I know 120 GB is way too big, but it's the smallest size for this model. Is there a reason not to go with SSDs?
- Question here: do I use multiple HBA cards for 11 or more disks, or one card with a SAS expander? I went with the LSI card because the connectors face out the back of the card, which seems like it will work better for the cable runs.
- What are considered the "premium" cables to use?
- OK, maybe I'm overdoing it here? I went RAIDZ3 because I'm really paranoid: I've experienced a cascading multi-drive failure on a RAID5 array. Not fun, although that was before I had a proper 3-2-1 backup scheme in place. I wanted more drives per vdev to increase the usable percentage, but I read that 10-11 disks is the recommended maximum vdev size. A 10-disk RAIDZ2 vdev saves me $282 and gets me 5.5 TB more usable space (43.7 TB total); see the capacity sketch after this parts list. I went with HGST because of the brand's reported reliability. Advice on the best volume approach is appreciated, given I want about 40 usable TB total. I could split the storage into one vdev/pool for biz/backups and one for media. I'll also pick up a couple of drives as spares.
L2ARC Drive: N/A
- Reading @cyberjock's ZFS guide, I'm pretty convinced I don't need a SLOG or L2ARC. Or I don't know enough to know whether I need either, so I'm starting with neither and will use metrics to inform the decision. I'm probably never going to be hosting VMs or databases on this system. But I've also read to run in sync mode all the time for the highest data safety. Deduplication might save a lot of space for the backup images (rough RAM cost sketched below). My preference is to build this system once and not make fundamental changes once it's working, so if I should be adding a SLOG or L2ARC now, please advise.
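Since a few of the items above lean on back-of-the-envelope math, here are the quick sanity checks I ran, in Python just so the assumptions are explicit. First, the power budget behind the "is 1200W enough?" question; the per-component wattages are my assumptions (typical spin-up draw and TDP figures), not measurements, so it's a rough check rather than a substitute for the sizing guide:

```python
# Rough power-budget check for the "is 1200W enough?" question.
# Per-component wattages are assumptions (typical figures), not measurements.
drives = 22
spinup_w_per_drive = 30      # worst case if all drives spin up at once
idle_w_per_drive = 8         # typical 7200 RPM idle draw
cpu_w = 140                  # Xeon E5-1650 v3 TDP (assuming the v3 part)
everything_else_w = 150      # board, RAM, HBAs, NIC, fans (rough allowance)

worst_case_w = drives * spinup_w_per_drive + cpu_w + everything_else_w
steady_w = drives * idle_w_per_drive + cpu_w + everything_else_w
print(f"Worst case (simultaneous spin-up): ~{worst_case_w} W")  # ~950 W
print(f"Steady state:                      ~{steady_w} W")      # ~466 W
```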
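Next, RAM sizing, using the common "1 GB of RAM per TB of raw storage" rule of thumb plus the FreeNAS baseline (a sanity check, not a hard requirement):

```python
# RAM sanity check: 8 GB FreeNAS baseline + ~1 GB per TB of raw storage.
base_gb = 8
gb_per_raw_tb = 1.0
raw_tb_one_vdev = 11 * 6      # 11 x 6 TB drives
raw_tb_two_vdevs = 22 * 6

print(f"One 11-drive vdev:  ~{base_gb + gb_per_raw_tb * raw_tb_one_vdev:.0f} GB")   # ~74 GB
print(f"Two 11-drive vdevs: ~{base_gb + gb_per_raw_tb * raw_tb_two_vdevs:.0f} GB")  # ~140 GB
```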
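Then the usable-capacity math behind the RAIDZ2 vs. RAIDZ3 question (simple parity arithmetic in TiB; it ignores ZFS allocation overhead, so real numbers will come in a bit lower):

```python
# Usable-capacity comparison for the vdev layouts I'm weighing.
# Ignores ZFS metadata/padding overhead and the 50% fill guidance.
drive_tib = 6 * 1e12 / 2**40   # a "6 TB" drive is ~5.46 TiB

layouts = [
    ("11-disk RAIDZ3", 11, 3),
    ("10-disk RAIDZ3", 10, 3),
    ("10-disk RAIDZ2", 10, 2),
]
for name, disks, parity in layouts:
    usable_tib = (disks - parity) * drive_tib
    print(f"{name}: ~{usable_tib:.1f} TiB usable")   # ~43.7 / ~38.2 / ~43.7 TiB
```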
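And finally the deduplication idea. My understanding is that the dedup table has to stay in RAM (or spill to L2ARC) to perform decently, at roughly a few hundred bytes per unique block, so here's a rough cost estimate under that assumption (the data size and recordsize are also assumptions):

```python
# Rough dedup table (DDT) RAM estimate for the backup dataset.
# ~320 bytes per unique block is the commonly cited in-core figure;
# the data size and recordsize below are assumptions.
backup_tb = 6                     # image backups today
recordsize_bytes = 128 * 1024     # default ZFS recordsize
ddt_bytes_per_block = 320

blocks = backup_tb * 1e12 / recordsize_bytes
ddt_ram_gb = blocks * ddt_bytes_per_block / 2**30
print(f"~{blocks / 1e6:.0f}M blocks -> ~{ddt_ram_gb:.0f} GB RAM for the DDT alone")
```

So dedup isn't free; with 128 GB of RAM it might be viable for just the backup dataset, but I'd rather measure first than turn it on day one.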
The total cost of all that would be right around $7,300 or so. Not scary yet for my budget.
OK, that base system may seem over the top, but it enables me to expand in a couple of directions.
I could add another vdev/pool (assuming 11 x 6 TB RAIDZ3). That would require another HBA (I'd have 5 ports left over on the other two). The X10SRL-F has 7 PCIe slots, so that should be plenty for the NIC and 3 HBAs. And then 11 more HGST 6 TB drives, a few more SAS-to-SATA cables, and a few more fans. Another volume would add about $3,700, making the grand total about $11,000. That gets my attention, but it's still doable if I wanted over 75 TB of storage. I could also go with a different vdev/pool arrangement, which might be best if I split into a backup volume and a media volume.
The Caselabs enclosure can actually hold up to 48 drives easily (with great ventilation), so I could even add two more 11-disk vdevs/pools (though I doubt the 1200W PSU can handle that many disks). I have no idea what I'd do with 150 TB of storage, but it's fun to think about. My bucket list has building a petabyte NAS on it, so a tenth of the way there is at least something.
And then I could even expand to a dual system, especially if it makes more sense to run the office and home use as fully separate NASes, or maybe to create a second NAS as a replication target. I would need to add another power supply, mainboard, CPU, CPU cooler, RAM, NIC, and boot drives in the other half of the enclosure. Then I would have two NASes inside the one giant Caselabs enclosure. And I'd be out an additional $2,600 or so, for a grand total of about $13,700. That's really pushing the budget, but it's possible if it makes more sense to build two NASes instead of one giant one.
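Tallying the scenarios with the rough figures above (everything approximate):

```python
# Rough budget tally for the expansion scenarios (figures from above, all approximate).
base = 7300          # base build
extra_vdev = 3700    # 11 more HGST 6 TB drives, HBA, cables, fans
dual_parts = 2600    # second PSU, board, CPU, cooler, RAM, NIC, boot drives

print(f"Base build:                 ${base:,}")
print(f"Base + second vdev:         ${base + extra_vdev:,}")               # ~$11,000
print(f"Base + second vdev + dual:  ${base + extra_vdev + dual_parts:,}")  # ~the $13,700 figure above
```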
So that's why the crazy enclosure: it gives me this flexibility in the space I have available.
So, what do you think? I plan to pull the trigger on the parts purchase over the next couple of weeks and do the build over the holidays.
Many thanks in advance for your comments, guidance, suggestions, ridicule, etc.!