BUILD SOHO/Media NAS Build - Review/Suggestions Appreciated


AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
This build project is intentionally trying to build (over-build?) a pretty top-of-the-line FreeNAS system. I have a fairly large chunk of budget available before the end of the year, so practicality and cost-savings are not the primary objectives. But it needs to perform great and function properly as a mission-critical business system, and be set up to handle growth over the next 2-5 years.

This NAS is intended to serve both a small multi-person home office and home media use.

For the SOHO application, the NAS will store shared business files (currently 500 GB), and be the first-line backup repository (currently about 6 TB in image backups).

For the home media application, the NAS will store pro-quality photos (currently 2 TB), store and stream a moderate number of DVDs and Blu-Rays (currently 2.5 TB) and a large lossless music library (currently 2 TB). I may end up feeding up to 6 media centers in a new house probably with Plex.

I'm currently storing 13.3 TB on a Synology DS1513+ with 5 X 6 TB WD Reds in RAID6, with 3.1 TB free. I anticipate storage needs increasing to about 20-25 TB over the next few years. Given the recommendation to run at about 50% capacity, I'm targeting a NAS with around 40 TB to start.

Just FYI, I'm a professional software engineer and I've been building systems since the 80s. I learned about NAS by working my way up through Synology models, and now I want to get into an open, more stable ZFS system.

I've done lots of reading and research on the net and forums (I can't thank everyone enough for the postings), so hopefully this build list makes sense and is pretty close. But I really value everyone's experiences and advice here, so please let me know where I'm off base. Specific questions I have are in bold italics.

Here we go:

Chassis/Enclosure: Caselabs Magnum TX10 ($1,354)
  • I know this seems like a lot of enclosure, but it's to accommodate expansion in various ways. I want room for at least 22 drives, preferably up to 33 or 44 (multiples of an 11-disk RAIDZ3 VDev). With this case I can add disks and/or make it a dual system. I want extreme airflow. I have a particular space this case fits in nicely. The Supermicro rack enclosures are popular, but I'm not convinced about the ventilation and I don't really have the right space for a rack.
Power Supply: Seasonic SS-1200XP3 ($207)
  • The calculations from the "Proper Power Supply Sizing Guidance" thread come to ~1000W. Is 1200W enough for everything I've got here (with up to 22 drives)?
Cooling Fans: Noctua NF-S12A 120mm PWM ($21 X 11 = $231)
  • The Caselabs case has lots of mounting locations for fans. All these should create quite a vortex in the server room. Are these the right fans to connect to the mainboard fan headers?
Mainboard: Supermicro X10SRL-F ($290)
  • Wanted Xeon E5 CPU, 128+ GB RAM and 5+ PCIe slots.
CPU: Intel Xeon E5-1650 ($634)
  • I think the extra cores of the 1650 (6 vs 4) should help with simultaneous video transcoding, but I could go with an E5-1620 and save $300. I'm not shooting for subtle with this build.
CPU Cooler: Noctua i4 NH-U12DXi4 ($64)
  • This thing just looks too cool, so why not?
RAM: 128 GB as 4 X Samsung M393A4K40BB0-CPB ECC 32 GB RDIMM ($254 X 4 = $1,016)
  • I wanted to use a 4 module set, and folks here always advise investing in a lot of RAM. If I get up over 40 TB in disk, that should require over 48 GB (see the rule-of-thumb sketch after this list). I went for overkill on this so ZFS has lots to work with. Is 128 GB too extreme? Should I start with 4 X 16 GB?
NIC: Intel Pro/1000 Dual Port Gbe ($46)
  • These two ports are in addition to the 2 on the mainboard for 4 total ports. I will be running link aggregation, since all the laptops back up on Sunday evening and it really helps on the Synology. I'm limited to GbE by the overall network infrastructure, so no point in going 10GbE. Reading says to use Intel NICs; are these specific cards recommended, or some other part number? Will this NIC work in conjunction with the built-in NICs on the mainboard?
Boot Drive: Mirrored Samsung 850 EVO 120GB 2.5-Inch SATA III SSD MZ-75E120B/AM ($67 X 2 = $134)
  • I'll mirror the boot drive, although based on some postings maybe that's not as valuable as it sounds? I know many folks recommend USB drives or SATA DOMs, but these SSDs are very good and cheap. Yes, I know 120 GB is way too big, but it's the smallest size for this model. Is there a reason not to go with SSDs?
HBA: LSI SAS9211-8I 8 port ($199 X 2 = $398)
  • Question here, do I use multiple HBA cards for 11 or more disks, or one card with an expander? I went with the LSI card because the connectors face out the back side of the card and that seems like it will work better for the cable runs.
SAS to SATA Cables: StarTech.com 1m SAS SFF-8087 to SATA SAS8087S4100 ($22 X 3 = $66)
  • What are considered the "premium" cables to use?
Storage Drives: 11 drive RAIDZ3 VDev (38.2 TB usable) with HGST Deskstar NAS 6 TB H3IKNAS600012872SN ($282 X 11 = $3,102)
  • OK, maybe I'm over-doing it here? I went RAIDZ3 because I'm really paranoid. I've experienced a cascading multi drive failure on a RAID5 array. Not fun although this was before I had a proper 3-2-1 backup scheme in place. I wanted more drives per VDev to increase the % usable, but I read 10-11 is the recommended max VDev size. A 10 disk RAIDZ2 VDev saves me $282 and I get 5.5 TB more usable (43.7 TB total). I went with HGST because of the reported reliability of the brand. Advice on the best volume approach appreciated given I want about 40 usable TB total. I could split the storage into one VDev/pool for biz/backups and one for media. I'll also pick up a couple of drives for spares.
SLOG Drive: N/A
L2ARC Drive: N/A
  • Reading @cyberjock's ZFS guide, I'm pretty convinced I don't need SLOG or L2ARC. Or I don't know enough to know if I need either, so I'm starting with neither and will use metrics to inform the decision. I'm probably never going to be hosting VMs or databases on this system. But I've also read to run in sync mode all the time for the highest data safety. Deduplication may save a lot for the backup images. My preference is to build this system once and not make fundamental changes once it's working, so if I should be adding SLOG or L2ARC now, please advise.
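
For reference, here's the rough math (in Python) behind that "over 48 GB" RAM figure. It's just the commonly quoted "8 GB base plus roughly 1 GB of RAM per TB of storage" rule of thumb applied to my ~40 TB target, so treat it as a guideline rather than a hard requirement:

Code:
# RAM rule-of-thumb sketch: assumes the commonly quoted "8 GB base plus
# ~1 GB of RAM per TB of storage" guideline (not a hard requirement).
base_ram_gb = 8          # FreeNAS baseline
target_storage_tb = 40   # my ~40 TB target
print(base_ram_gb + 1 * target_storage_tb)   # 48 -> the "over 48 GB" figure above
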
What else am I missing for a complete build list? Any other parts or cables or anything?

The total cost of all that would be right around $7,300 or so. Not scary yet for my budget.

OK, that base system may seem over the top, but it enables me to expand in a couple of directions.

I could add another VDev/pool (assuming 11 X 6 TB RAIDZ3). That would require another HBA (I'd have 5 ports left over on the other two). The X10SRL-F has 7 PCIe slots, so that should be plenty for the NIC and 3 HBAs. And then 11 more HGST 6 TB drives, a few more SAS to SATA cables and a few more fans. Another volume would add about $3,700, making the grand total about $11,000. That gets my attention, but it's still doable if I wanted over 75 TB of storage. I could also add a different VDev/pool arrangement, which might be best if I split into a backup volume and a media volume.

The Caselabs enclosure can actually easily hold up to 48 drives (with great ventilation), so I could even add two more 11-disk VDevs/pools (though I doubt the 1200W PSU can handle that much disk). I have no idea what I'd do with 150 TB of storage, but it's fun to think about. My bucket list has building a petabyte NAS on it, so a tenth of the way there is at least something.

And then I could even expand to a dual system, especially if it makes more sense to run the office and home use as fully separate NAS, or maybe to create a NAS as a replication target. I would need to add another power supply, mainboard, CPU, CPU cooler, RAM, NIC and boot drives into the other half of the enclosure. Then I would have 2 NAS inside the one giant Caselabs enclosure. And I'd be out an additional $2,600 or so for a grand total of $13,700. That's really pushing the budget, but possible if it makes more sense to build two NAS instead of one giant one.

So that's the reason for the crazy enclosure: it gives me this flexibility in the space I have available.

So, what do you think? I plan to pull the trigger on the parts purchase over the next couple of weeks and do the build over the holidays.

Many thanks in advance for your comments, guidance, suggestions, ridicule, etc.!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Howdy. First let me say "holy crap... that's a serious system chassis".

Personally, I'm not too big on that type of chassis. I own something similar, the Mountain Mods Duality. In fact, I have two of them. They work pretty well, but their ventilation design is very disappointing compared to a Supermicro chassis. Supermicro chassis have fans with high D/P (differential pressure) and high airflow (and they are also loud as a result), and Noctua and other popular 'desktop' fans have no chance of competing, even if you have 20 of them. So dismissing the Supermicro because you don't have faith in its cooling is, I think, a drastic error. Now, having no rack for it is a bigger problem, and you did say that you don't have a rack. Just be sure your priorities are appropriate for the task. You probably don't want to spend this kind of money and then have heat problems with your disks, right? ;)

Now, I don't know if the cooling of that Magnum chassis is going to be adequate or not, but I can promise you with 100% certainty that a chassis like the Magnum has no chance of competing with the cooling provided by a Supermicro.

Having 128GB of RAM isn't bad, but it's probably overkill unless you are going to go with 10Gb ethernet. You *are* going to be bottlenecked by the NIC, so trying to do anything to give you more than about 200MB/sec on the zpool is just throwing money at something you'll never see a benefit from.
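
To put rough numbers on that ceiling (a sketch that ignores protocol overhead and assumes a pair of aggregated GbE links, which mostly helps when several clients hit the box at once rather than for any single stream):

Code:
# Rough GbE ceiling math. Ignores protocol overhead; assumes two aggregated
# GbE links, and that LACP mainly helps aggregate throughput across several
# clients rather than speeding up a single stream.
per_link_MBps = 1_000_000_000 / 8 / 1e6   # ~125 MB/s theoretical per GbE link
links = 2
print(per_link_MBps * links)              # ~250 MB/s best case; ~200 MB/s realistic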

I hate EVO SSDs. I don't buy them, I don't recommend them, and I'd never even use them as boot devices. Samsung has had so many obscure problems with them and TLC that I have zero faith in TLC for data integrity or long-term reliability. You could easily find cheaper stuff on ebay that I'd trust more.

If your intent is to set sync=always for your zpool, having a SLOG is going to be a requirement. I'm pretty sure that 2MB/sec write throughput isn't going to make you happy on a system of this cost.

In your case, you can go with multiple LSI controllers to attach to all of the drives, or go with a SAS expander. I used a SAS expander in the past, but the Supermicro chassis have a SAS expander as part of the backplane, so only 1 connector from 1 LSI controller is needed to accommodate all 24 drives.

There are plenty of places where you could cut costs (go with a Supermicro chassis that has a SAS expander built in for around $400 on ebay, which would probably save you $800 at minimum; go with used SSDs and ditch the EVO; go with used LSI controllers from ebay; etc.). But the build you have isn't bad.
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
I have a CaseLabs M8 currently and it is nice, but it is not an efficient use of space. For only a few inches more in width, I now have a 22U rack that takes up virtually the same amount of space and can hold multiple servers. I have the M8 maxed out with 12 drives. If I remove my WC loop I could add another 12 in the top and some in the bottom, but that is way too much work. Also, it gets dusty as all hell and is HORRIBLE to clean because of the powder coating. The inside looks gray now even after wiping it down, but it does look damn pretty.

I agree with cyberjock and highly recommend you reconsider and get a Supermicro chassis. They are used by many of us on the forum and by companies around the world. Affordable and expandable. Norco chassis are also popular, but the airflow is meh. You need to put in better fans to get good HDD temps unless you keep it in a meat locker.

I used the CL case because it simply 'looked cool' and I got it basically for free. It is fully water cooled too, just for the hell of it. But after 2 years of dealing with cleaning it and trying to add drives, I said enough is enough (and I would have even if it were fully air cooled). So I am now in the process of replacing it with 2x FreeNAS servers, a pfSense box and 2x R710 ESXi boxes, and they take up the same space.

My other thoughts.

The CPU heatsink is overpriced. Get the Supermicro 2U or 4U heatsink and you will be set.
128GB of RAM is a waste for a system like this.
Mirrored SSDs are a waste. Get 2x 16GB SanDisk Cruzer Fit USB 2.0 drives.
Using the 24-bay Supermicro chassis you can have 3 vdevs of 8x 6TB drives in RAIDZ2. Buy 1 or 2 more drives for shelf spares.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
There are also 36-bay 4U chassis available from Supermicro - that's what I run. That would get you 4x 8-drive RAIDZ2 VDEVs if you want (with 6TB drives, that gets you to 192TB raw / 144TB online / 115.2TB usable). You could use the other 4 bays for boot drives, for SLOG or L2ARC devices down the road, or for hot-spare drives for your pool.
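
The arithmetic behind those numbers, as a quick sketch (decimal TB, with "usable" applying the common don't-fill-past-80% guideline):

Code:
# 36-bay layout math: 4 vdevs of 8x 6TB drives in RAIDZ2.
vdevs, drives_per_vdev, parity, size_tb = 4, 8, 2, 6
raw = vdevs * drives_per_vdev * size_tb                # 192 TB raw
online = vdevs * (drives_per_vdev - parity) * size_tb  # 144 TB after parity
print(raw, online, round(online * 0.8, 1))             # 192, 144, 115.2 usable at ~80% full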

I went the eBay route for much of this system and had no issues; however, pay close attention. Things like the type of SAS expander... if it's not SAS2, you'll only see ~2.2TB of each drive. You could definitely save yourself some money going this route.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Given the recommendation to run at about 50% capacity, I'm targeting a NAS with around 40 TB to start.
50% is usually the recommendation if using iSCSI; otherwise you can probably be closer to 80%.

Is 1200W enough for everything I've got here (with up to 22 drives)?

1200W could be OK for 22 drives (reference point: my 12-disk head unit (freenas1) pulls around 250-350W on average).

Is 128 GB too extreme, I can start with 4 X 16 GB?

It's not extreme, but it's probably not needed. You could get by with 64 GB.

Reading says use Intel NICs, are these specific cards recommended or some other part number? Will this NIC work in conjunction with the built-in NIC on the mainboard?
I don't think there is a specific model. They tend to use the same driver.

Is there a reason not to go with SSDs?
Generally, the reasons not to go with SSDs are cost and SATA port usage. If those aren't an issue, then by all means go with SSDs.

Question here, do I use multiple HBA cards for 11 or more disks, or one card with an expander?
SAS to SATA Cables: StarTech.com 1m SAS SFF-8087 to SATA SAS8087S4100 ($22 X 3 = $66)
OK, maybe I'm over-doing it here? I went RAIDZ3 because I'm really paranoid. I've experienced a cascading multi drive failure on a RAID5 array. Not fun although this was before I had a proper 3-2-1 backup scheme in place.
Advice on the best volume approach appreciated given I want about 40 usable TB total. I could split the storage into one VDev/zpool for biz/backups and one for media. I'll also pick up a couple of drives for spares.
If you want to really grow your solution, direct cabling is going to become a nightmare. Look into expanders (especially a hot-swap chassis and backplane). My expansion chassis has 36 drives connected to my main server with a single SFF8087 cable.

SLOG Drive: N/A
L2ARC Drive: N/A
if I should be adding SLOG or L2ARC now please advise.
You should be fine without either of these. If you ever want to add one, it's very simple and it can be added (and removed) at any time. If you go with an L2ARC, you will need to ensure you have enough RAM.

As others have mentioned, you are in the land of a large rack-mount server like the Supermicros.

Also, what are your plans to backup this system?
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Keep in mind that power supply sizing is more about the spin-up currents. My behemoth runs about 550W on average... but spikes to nearly 1kW at boot, since it's spinning up 20 drives.
 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
First off, thanks for the replies -- great stuff!

Just a quick comment on my VDev/pool proposal...

I had totally spaced on something. The VDevs above were sized based on alignment (using the 2^n+p formula), so 10 drives for a RAIDZ2 VDev and 11 for a RAIDZ3 VDev. I didn't go for 6 and 7 drive VDevs respectively so I could get more % usable out of each VDev.

But (and I had even asked this question in another thread, oops), I will likely be using compression. So, I'm thinking I really should be sizing my pools based on my data profile, for example a pool for backups with a VDev that balances size, number of drives and the % usable it produces, and another pool for media, etc. I had spec'ed things out above assuming I was determining VDev size based on alignment and therefore having only 1 or 2 pools. My bad, I need to rethink that.

The prior advice I received was that with a GBe network, I'll saturate it with pretty much any sort of VDev, aligned or not, compressed or not, so I should optimize on data needs not alignment and use compression to maximize the storage space.

Are there any considerations for multiple pools on a single server? For example, can I have multiple different pools, with one having a 10 X 4TB RAIDZ2 VDev, another having a 7 X 6 TB RAIDZ3 VDev, another having two 5 X 4 TB RAIDZ2 VDevs, or whatever?

@depasseg asked: Also, what are your plans to backup this system?

As I'm rethinking the enclosure and pool/VDev strategy, I am considering replication or backup to another NAS. I could perhaps use my old Synology boxes for that, or build something as part of this project (it looks like maybe I can recover some budget from the build that could make that possible). That would be the first line on-site backup.

For deeper off-site backup, I'm not sure yet. Since I have a lot of data, I'd need some sort of big enterprise backup solution, like a tape drive or something. That's what I have now, but it's really old and just backing up 13 TB takes forever swapping out tapes.

Or, I could get a Crashplan account and back up to there. It seems like it might take weeks to upload just my current 13 TB, assuming my ISP doesn't cut me off for using too much upload bandwidth. Or I guess they have a way to seed it directly; I assume I'd send them drives with all the data?

What are folks/companies using these days for backing up 25-50 TB of data?
 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
Keep in mind that power supply sizing is more about the spin-up currents. My behemoth runs about 550W on average... but spikes to nearly 1kW at boot, since it's spinning up 20 drives.

The HGST 4 TB and 6 TB drives are listed as drawing 1.2A on +5V and 2A on +12V on spin up, so spinning up 22 of them should be about 660W for the drives, I think?
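
Showing my work (this assumes every drive pulls its rated spin-up current at the same instant, i.e. no staggered spin-up):

Code:
# Spin-up draw estimate for 22 drives using the spec-sheet currents above
# (1.2 A on +5 V and 2.0 A on +12 V per drive), assuming all drives spin up
# simultaneously with no staggering.
per_drive_w = 1.2 * 5 + 2.0 * 12   # 6 W + 24 W = 30 W per drive
print(per_drive_w * 22)            # 660 W just for the drives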
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
What are folks/companies using these days for backing up 25-50 TB of data?

What type of media are you backing up? Pretty much no one backs up music/movies to a cloud account. Those can be easily re-ripped. Most only back up data like pictures and documents. I have just over 9TB of data on my server and Crashplan only holds about 3.5TB of it.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Just a quick comment on my VDev/zpool proposal...

I had totally spaced on something. The VDevs above were sized based on alignment (using the 2^n+p formula), so 10 drives for a RAIDZ2 VDev and 11 for a RAIDZ3 VDev. I didn't go for 6 and 7 drive VDevs respectively so I could get more % usable out of each VDev.

But (and I had even asked this question in another thread, oops), I will likely be using compression. So, I'm thinking I really should be sizing my zpools based on my data profile, for example a zpool for backups with a VDev that balances size, number of drives and the % usable it produces, and another zpool for media, etc. I had spec'ed things out above assuming I was determining VDev size based on alignment and therefore having only 1 or 2 zpools. My bad, I need to rethink that.

The prior advice I received was that with a GBe network, I'll saturate it with pretty much any sort of VDev, aligned or not, compressed or not, so I should optimize on data needs not alignment and use compression to maximize the storage space.

Are there any considerations for multiple zpools on a single server? For example, can I have multiple different zpools, with one having a 10 X 4TB RAIDZ2 VDev, another having a 7 X 6 TB RAIDZ3 VDev, another having two 5 X 4 TB RAIDZ2 VDevs, or whatever?

The alignment thing doesn't really apply anymore. I forget the exact rationale, but it's not relevant. So go with whatever. And use compression. It doesn't add much, if any, impact to the server. And feel free to have multiple pools (I have an 11-disk main pool (with a spare) and a 22-disk backup pool (with a different config) in the same box). I'm going to be adding a third soon.
 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
What type of media are you backing up? Pretty much no one backs up music/movies to a cloud account. Those can be easily re-ripped. Most only back up data like pictures and documents. I have just over 9TB of data on my server and Crashplan only holds about 3.5TB of it.

I'd still like to know what would be a good professional backup solution for 25-50 TB.

I have close to 2,800 CDs in my music library. Although ripping is "easy," it will take many, many hours to re-rip all of them, and each disc requires my attention to load and unload. For CDs that are mastered so loud that the waveforms clip, I also post-process them with a de-clipper, and for many CDs I also create a FLAC and MP3 version along with the raw WAV. Restoring a backup is a much better use of my time than re-ripping and re-processing. For each CD, I scan the cover and touch up the image to create my own full-size and thumbnail images. I'd also have to re-create all of those. So, re-creating the music library from scratch is definitely not as trivial as it sounds.

The DVD/Blu-Ray library is straight rips, and there aren't too many of them, but even so, ripping requires attention for each disc, while restoring a backup can be largely unattended once kicked off and has a far shorter elapsed time until everything is available again.

Even if I was willing to re-rip the music and video collection, I still have 6 TB of image backups and 2 TB of my photos that need to be backed up, which seems to be a lot for a cloud backup, no?

How long did it take to upload 3.5 TB of data to Crashplan (what is your upload speed)?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
This build project is intentionally trying to build (over-build?) a pretty top-of-the-line FreeNAS system.

This NAS is intended to serve both a small multi-person home office and home media use.

For the SOHO application, the NAS will store shared business files (currently 500 GB), and be the first-line backup repository (currently about 6 TB in image backups).

For the home media application, the NAS will store pro-quality photos (currently 2 TB), store and stream a moderate number of DVDs and Blu-Rays (currently 2.5 TB) and a large lossless music library (currently 2 TB). I may end up feeding up to 6 media centers in a new house probably with Plex.
With two quite different applications and equally diverse sets of requirements, have you at least considered the possibility of two more moderate systems, rather than one behemoth?
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
How long did it take to upload 3.5 TB of data to Crashplan (what is your upload speed)?
Well, first off, I have 2x FreeNAS systems in my rack. I also back up my critical data to 2x external drives: one is kept with me at all times and one is at my parents' house. My internet speed is absolute crap because I live in China right now. It took about 6 months to upload the 3.5TB.

With two quite different applications and equally diverse sets of requirements, have you at least considered the possibility of two more moderate systems, rather than one behemoth?
+10000000
This is what you should do with that HUGE budget of yours.
 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
I haven't used them personally, but I've seen them mentioned before: http://www.tarsnap.com/

Tarsnap charges $0.25 per GB per month. If I wanted to back up my current 13.3 TB, that would be something like $3,400 per month. Plus they also charge for bandwidth -- the initial load would cost $3,400 at $0.25 per GB, and then there's whatever incremental backup traffic gets generated. That seems pretty crazy expensive.
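
For what it's worth, that figure is just straight arithmetic on the list price for my raw data size, before any compression or deduplication savings:

Code:
# Straight list-price math on my current data set, before any compression
# or deduplication savings.
data_gb = 13.3 * 1000                  # ~13,300 GB (decimal)
storage_per_month = data_gb * 0.25     # ~$3,325/month for storage
initial_upload = data_gb * 0.25        # ~$3,325 one-time bandwidth for the first load
print(round(storage_per_month), round(initial_upload))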

It looks like Crashplan ProE is $250 per month minimum ($10 per user with a 25 user minimum), with unlimited cloud storage. That's still pretty pricey but less than 10% of Tarsnap. Maybe the Crashplan Family Plan might work (2-10 computers, still unlimited cloud storage), which is only $12.50 per month? I haven't really compared the feature sets and limitations.

I'm still not sure if cloud backups are practical, but maybe. I currently get ~24 Mbps upload speed from my home office. At that rate, the first backup of 13.3 TB would take ~54 days. I'm not sure if Comcast/xFinity would even let me use that much upload bandwidth per month.
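
The 54-day estimate is just raw line-rate math, so the real elapsed time would only be longer:

Code:
# First-upload time at ~24 Mbps sustained, ignoring protocol overhead and
# any ISP throttling.
data_bits = 13.3e12 * 8          # 13.3 TB (decimal) in bits
seconds = data_bits / 24e6       # at 24 Mbps
print(round(seconds / 86400))    # ~51 days; real-world overhead pushes it toward ~54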

The data center ops guys I chatted with today recommended a two stage disk + tape, or disk + cloud backup strategy. That gives me my 3-2-1 (3 copies of everything, 2 on-site, 1 off-site).

The first level backup is to disk -- it could either be a mirrored/replicated NAS, or it could be a NAS used as a backup target with some accompanying backup software. The mirrored/replicated approach can be nearly real-time (in other words, the two are kept pretty much in sync), or it can sync on an interval. The backup approach is on an interval and usually takes a lot more time (hours). This could be a budget NAS since it's very single-purpose.

The second level backup is to tape. They recommended one of the LTO tape drives; there are different levels, for example an LTO-5 drive uses cartridges holding 1.5 TB uncompressed (maybe 2.0-3.0 TB compressed for my data) and costs $1,700 for the drive and $20 each for cartridges (I'd need ~6 cartridges for my 13.3 TB, and would probably use a 3-set rotation, so 18 tapes at $360). Two sets of tapes would be stored off-site. It might sound pricey, but it would pay for itself vs. a Crashplan ProE subscription in 10 months. But if a Crashplan Family subscription can work, it would take 7 years to pay back the tape system, so that's where a cloud backup could make more sense than tape. The economics change as my storage size grows toward the 40 TB I expect.
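
The cartridge count there is a guess that assumes my mix of data averages somewhere around 2.25 TB per LTO-5 cartridge once compressed; I won't know the real ratio until I test it:

Code:
# Cartridge-count guess for the LTO-5 option. Assumes ~2.25 TB per cartridge
# with compression (a guess within the 2.0-3.0 TB range above).
import math
per_set = math.ceil(13.3 / 2.25)    # ~6 cartridges per full backup set
tapes = per_set * 3                 # 3-set rotation -> 18 tapes
print(per_set, tapes, tapes * 20)   # 6 cartridges, 18 tapes, $360 in media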

Poking around on the net this evening, it looks like the two-stage backup strategy is pretty common practice in professional data centers. How fancy the mirrored/replicated servers are, and whether they use cloud storage or tape, seems to depend on budget and on how legacy their systems are. If I can keep myself under control, I might be able to implement a two-stage strategy on a more reasonable budget.
 

CheckYourSix

Dabbler
Joined
Jun 14, 2015
Messages
19
Tarsnap charges $0.25 per GB per month. If I wanted to back up my current 13.3 TB, that would be something like $3,400 per month. Plus they also charge for bandwidth -- the initial load would cost $3,400 at $0.25 per GB, and then there's whatever incremental backup traffic gets generated. That seems pretty crazy expensive.
"These prices are based on the actual number of bytes stored and the actual number of bytes of bandwidth used — after compression and data deduplication." http://www.tarsnap.com/deduplication.html
 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
With two quite different applications and equally diverse sets of requirements, have you at least considered the possibility of two more moderate systems, rather than one behemoth?

Yes, I'm realizing this might be far more practical. I can still make each system very functional and properly configured, and not overbuild.

Stay tuned, revisions coming this weekend...
 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
"These prices are based on the actual number of bytes stored and the actual number of bytes of bandwidth used — after compression and data deduplication." http://www.tarsnap.com/deduplication.html

Yes, you're right, I didn't consider that. Their example of reducing 96 TB to 15.8 GB of actual storage is very interesting. It might be worth opening an account and uploading a representative subset of my data to see how much it can be reduced and what the actual cost might be.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
$0.25 per GB per month
If you can figure out a way to do it, e.g. from a client computer running Arq Backup, Amazon Glacier and Google Nearline are only $0.01/GB per month. Backblaze B2 is slated to be $0.005/GB per month.
 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
With two quite different applications and equally diverse sets of requirements, have you at least considered the possibility of two more moderate systems, rather than one behemoth?

My usage profiles seem to be 1) the backups of the laptops/workstations; 2) serving up video & audio; and 3) shared file access for the business files.

If I were to build one NAS to be a backup target, and another as a media server for the music and videos, what would be the differences in the configurations?

I would think the backup NAS would need less CPU. I might want to use deduplication on it since the backup sets seem perfect for that, so I'd need lots of RAM and maybe an L2ARC? And size the pool for the anticipated size of the backups, with perhaps pools for different sets of backups.
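
As a sanity check on the dedup RAM question, here's the ballpark using the very rough ~5 GB of RAM per TB of deduplicated data figure that gets quoted around here (the real dedup table size depends on record size and how well the backup images actually dedupe):

Code:
# Dedup RAM ballpark using the oft-quoted (and very rough) ~5 GB of RAM per
# TB of deduplicated pool data.
backup_data_tb = 6           # current image backups
print(backup_data_tb * 5)    # ~30 GB of RAM just for the dedup table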

The media NAS might want more CPU cores for transcoding, and wouldn't need L2ARC or SLOG. The pools would be sized around the media.

I would think the shared business files might be best stored on the media NAS. It might not make sense in terms of who the data belongs to, but the usage profile seems to fit that NAS better.

Or is there a better way to carve this up, some other aspects of NAS designs I'm not seeing?
 