BUILD 160 TB Component Discussion

Status
Not open for further replies.

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
Hi FreeNAS Gurus,

So far the largest FreeNAS I've built has been 10 TB RAID-Z2 on a quad-core AMD system. On all of these systems I've been very impressed with the features and reliability. Now it's time to double down.

Looking to build a 160 TB archiver with RAID-Z3. Here's what I'm thinking ... I'd appreciate any component-level advice, questions, discussion, etc.

  • 2x Micron enterprise-class 100 GB SSD in hardware RAID-1 for FreeNAS install
  • Tyan dual G34 socket server board with 1x Intel 82574L GbE, 2x Intel 82576EB GbE, quad-channel DDR3, onboard LSI RAID for the SSD mirror
  • 2x 3ware 9750-24i4e SATA/SAS 6Gb/s PCIe 2.0 controller cards w/ 512 MB onboard memory (does FreeNAS have built-in monitoring for these, or is it CLI only?)
  • 2x battery backup for above HBA
  • 2x Opteron 6200 8-core (16 cores really necessary? how about 8 cores on a single CPU? or better to go with 2x 4-core? probably only use light ZFS compression)
  • 256 GB ECC DDR3 SDRAM (256 GB really necessary? Maybe 128 GB instead? 192 GB? ... not doing dedupe)
  • 45x Seagate 4 TB 5900-RPM SATA drives (model ST4000DM000), 64 MB cache SATA-6G... (cheap, low-power, low-noise, RAID-Z3 so who cares?)
  • 1x spare drive from the above kept as a 'warm spare', hoping a true hot-spare daemon comes to FreeNAS someday :smile:
  • Chenbro RM91250: 9RU, 50x 3.5" hot-swap SAS bays, 2x 3.5" hot-swap SATA bays, 12x SAS backplanes (8087), 1620W 3+1 redundant PSU
  • Beefy UPS with 1-hour runtime

Would really appreciate you guys' and gals' feedback on this one.

This will be an archival and slow-restore system, seeing about 50 TB of churn for only one quarter of the year (only reads for the remaining three quarters of the year).
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
  • 2x Micron enterprise-class 100 GB SSD in hardware RAID-1 for FreeNAS install

Would really appreciate you guys' and gals' feedback on this one.

This will be an archival and slow-restore system, seeing about 50 TB of churn for only one quarter of the year (only reads for the remaining three quarters of the year).

This would be an exceptionally huge waste of money. As has been discussed here many times, FreeNAS boots into and runs from RAM; other than that, all that really gets written is your settings when you save them, and minor data for the reporting graphs. Maybe get a couple of SATA DOMs, keep one as a spare, and back up your config file from the GUI to some place other than the NAS.
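(A rough, hedged illustration of that last bit of advice: something like the following, run from cron, would keep dated copies of the settings database somewhere off the box. The /data/freenas-v1.db path is where FreeNAS 8/9-era systems keep their config, and the backup directory is just a made-up example; the supported route is still the GUI's config download.)

[code]
# Sketch only: periodic off-box copy of the FreeNAS settings database.
# Paths are assumptions - /data/freenas-v1.db is the config DB on
# FreeNAS 8/9-era installs, and BACKUP_DIR is a hypothetical mount
# that lives somewhere other than the NAS itself.
import os
import shutil
import time

CONFIG_DB = "/data/freenas-v1.db"
BACKUP_DIR = "/mnt/backups/freenas-config"   # hypothetical off-NAS destination

def backup_config():
    if not os.path.isdir(BACKUP_DIR):
        os.makedirs(BACKUP_DIR)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(BACKUP_DIR, "freenas-v1-%s.db" % stamp)
    shutil.copy2(CONFIG_DB, dest)            # copy, preserving timestamps
    return dest

if __name__ == "__main__":
    print("saved config to %s" % backup_config())
[/code]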
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
Considering the price tag for the entire system, those SSDs are negligible. I know they're completely overkill, but I've had a lot of problems with USB flash drives dying within months because they're generally cheap, low-quality, no-name NAND from the bargain bin of the manufacturers. At least with enterprise-class SSDs I can rely on the NAND, the error-correction redundancy, and the fault tolerance of the mirror. The server will not be physically accessible to me after build, test, and setup ... so I prefer to make it as reliable as possible, with as little user intervention as possible (other than saying, 'hey, replace disk #26').

If you really think using a 4 GB SanDisk USB flash drive would be sufficient, by all means let me know. I presently have this SanDisk model ready to put into my 10 TB NAS when I get time, hopefully before the second OCZ Rally2 USB flash drive fails (had one fail within months ... happened to have an identical one, so I used it because I didn't have anything else handy).
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Gee. An issue with flash memory on an OCZ device! Say it ain't so!
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
LOL, I know, I know. I bought their original VERTEX 60 GB Indilinx drive years ago and it's still running strong today. Then I had three failed SandForce drives in less than as many months, and a few more failed drives later based on their rebadged Marvell controller with the craptastic async NAND. They finally replaced those drives with the more expensive sync-NAND version for free, but I will never buy another product from them again.

USB flash drive... What can go wrong? I once again underestimated the incompetence of OCZ...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah. I have one of the first-gen OCZ Core drives based on the crappy JMicron controller, and 2 OCZ Summits based on Samsung. All 3 are still running strong. I hate the OCZ Core, it sucks in write performance. But it's perfect for my pfSense box. :P
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm about >< this far from pulling the trigger on the whole fscking USB flash key thing, because they tend to fail and they're annoyingly slow on top of it. I've consistently had problems with this for the entire FreeNAS 8 lifecycle. Make configuration changes, then sometimes wait for minutes. Take a system that's been quietly running for months, then do something that involves remounting stuff rw, and something - no clue what - floods out the device for many minutes at the peak speed the device can handle. Bits seem to rot if the devices are left long enough, resulting in an unbootable FreeNAS, which isn't horribly bad because you can just swap in a new USB key, but sometimes the process of recovering is still some work, and when the FreeNAS is 800 miles away, it is also money to have someone lay hands on the box. We're mostly moving to virtualization anyway, but that leaves some small issues like the N36L here, which isn't suitable for virtualization.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I thought it was just me who found USB flash media slow at applying changes. I've tried 3 different motherboards and a multitude of different USB flash drives. All of them take an extraordinarily long time to apply changes.

Even something as simple as adding a sysctl through the GUI takes many minutes. Remounting rw is fast, and usually the change to the filesystem itself is fast, but then remounting ro can take ~5 minutes. gstat shows almost no writes going to the flash (5-10 KB/sec), although the usage % is very near 100. Adding an SSH key for replication takes about 5 minutes too, I assume because it has to do the full mount-rw, mount-ro dance as well.

I've tried SDHC cards in a USB card reader with the same result.

The only thing that made things snappy was my tiny 'PCB-only' 8 GB SATA SSD. I had it lying around from another project, so it didn't cost anything. And since I have an offboard controller for most of the drives, I have free SATA ports.

I wish there were different options for fast boot media. USB is nice and modular/swappable, and it works fine, except when you're making changes to the box. Then you make a change and 'go make coffee' while it applies. Kind of annoying.

I'll note that writing to each one of these flash drives is at 'normal' speed if they're used in their 'normal' way, i.e. in Windows with FAT32/NTFS partitions. I assume it's something with UFS and how BSD does its thing that slows them down.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hi FreeNAS Gurus,

So far the largest FreeNAS I've built has been 10 TB RAID-Z2 on a quad-core AMD system. On all of these systems I've been very impressed with the features and reliability. Now it's time to double down.

Looking to build a 160 TB archiver with RAID-Z3. Here's what I'm thinking ... I'd appreciate any component-level advice, questions, discussion, etc.

  • 2x Micron enterprise-class 100 GB SSD in hardware RAID-1 for FreeNAS install
  • Tyan dual G34 socket server board with 1x Intel 82574L GbE, 2x Intel 82576EB GbE, quad-channel DDR3, onboard LSI RAID for the SSD mirror

Something like an S8236-IL? Um, that just seems to me like the start of a bad set of choices. Three gigabit ethernets for such a large system could turn frustrating, especially with few options for expansion when you're limited to two PCIe slots. You probably don't need the two dozen cores that the board was built for. You're short on everything else: poor networking, limited PCIe, limited memory. I don't know that the SP5100 SAS is supported by FreeBSD, which means you might be stuck needing to get the LSI SAS2008 variant. And of course you have to populate both sockets to get access to the full 256GB. All that fun for $600.

For $500, you could get the Supermicro X9SRH-7TF. Requires higher density memory to hit 256GB, but sports 10GbE and three usable PCIe. Onboard LSI SAS2308 for basic RAID functionality, compatible with FreeBSD.

For $800, you can get the super-crazy-fun Supermicro X9DR7-TF+, dual 2011, up to 768GB, six PCIe slots, dual 10GbE, and LSI 2208. There are other variants that substitute quad gigE or other features for a lower price. Beautiful board, we've had one here in shop for several months to play with and it is very nice.

  • 2x 3ware 9750-24i4e SATA/SAS 6Gb/s PCIe 2.0 controller cards w/ 512 MB onboard memory (does FreeNAS have built-in monitoring for these, or is it CLI only?)
  • 2x battery backup for above HBA

You're not planning to use ZFS, then? Because you really shouldn't put a dinky little hardware RAID controller in front of your massive ZFS RAID system. It defeats the purpose. You should be picking out HBA hardware if possible.

Conspicuously missing from this discussion is what you're going to throw everything in. And I'm going to suggest something.

An archival system, by definition, probably does not need instant highest-possible-speed access to every drive at every second. Consider a chassis like the SC846BE16-R920B. This unit will house your mainboard and will connect all 24 front bays to your HBA of choice via a single SFF-8087 cable. And here's the point: those ST4000DM000's are only capable of about 150MB/sec, times 24 is 3600MB/sec, and an SFF-8087 is four 6Gbps lanes for 24Gbps, which works out to around 125MB/sec per drive for 24 drives - so at worst you give up a little peak sequential throughput, which an archival box will never notice. You buy two of these chassis, one for your mainboard and drives, the other as an expansion chassis. You hook them up to something like an IBM ServeRAID M1015, or maybe a pair of M1015's. You end up with a substantially lower expense for unnecessary RAID cards, and you're also ending up on hardware known to work well with FreeNAS.
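(If you want to sanity-check that back-of-the-envelope math, here's a rough sketch; it assumes ~150 MB/s sustained per ST4000DM000 and ignores SAS encoding/protocol overhead.)

[code]
# Back-of-envelope check of the single-SFF-8087 argument above.
# Assumptions: ~150 MB/s sustained per drive, four 6 Gbps lanes per
# SFF-8087, no allowance for encoding/protocol overhead.
drives = 24
per_drive_mb_s = 150                        # rough sustained sequential rate

lanes = 4
lane_gbps = 6
link_mb_s = lanes * lane_gbps * 1000 / 8.0  # ~3000 MB/s of raw link budget

print("aggregate drive throughput: %d MB/s" % (drives * per_drive_mb_s))  # 3600
print("link budget per drive: %.0f MB/s" % (link_mb_s / drives))          # 125
[/code]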

  • 2x Opteron 6200 8-core (16 cores really necessary? how about 8 cores on a single CPU? or better to go with 2x 4-core? probably only use light ZFS compression)
  • 256 GB ECC DDR3 SDRAM (256 GB really necessary? Maybe 128 GB instead? 192 GB? ... not doing dedupe)
  • 45x Seagate 4 TB 5900-RPM SATA drives (model ST4000DM000), 64 MB cache SATA-6G... (cheap, low-power, low-noise, RAID-Z3 so who cares?)
  • 1x spare drive from the above kept as a 'warm spare', hoping a true hot-spare daemon comes to FreeNAS someday :smile:
  • 3+1 Redundant 1800W PSU
  • Beefy UPS with 1-hour runtime

Would really appreciate you guys' and gals' feedback on this one.

This will be an archival and slow-restore system, seeing about 50 TB of churn for only one quarter of the year (only reads for the remaining three quarters of the year).
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
jgreco, first of all, thank you very much for taking the time to study the hardware in such detail and reply. I really appreciate that.

Something like an S8236-IL? Um, that just seems to me like the start of a bad set of choices. Three gigabit ethernets for such a large system could turn frustrating, especially with few options for expansion when you're limited to two PCIe slots. You probably don't need the two dozen cores that the board was built for. You're short on everything else: poor networking, limited PCIe, limited memory. I don't know that the SP5100 SAS is supported by FreeBSD, which means you might be stuck needing to get the LSI SAS2008 variant. And of course you have to populate both sockets to get access to the full 256GB. All that fun for $600.

You're not planning to use ZFS, then? Because you really shouldn't put a dinky little hardware RAID controller in front of your massive ZFS RAID system. It defeats the purpose. You should be picking out HBA hardware if possible.

I haven't extensively checked the HBAs against FreeBSD 8 HCL. I'll be doing that soon.

I only picked that SAS controller because it's 3ware (likely to be supported in FreeBSD 8) and with two cards, on the limited two PCIe slots of the mobo, I would have enough SAS ports to attach all of the drives (12x 8087 connectors). I had intended to use the controller in JBOD/HBA mode, not in RAID mode of course. ZFS is an absolute requirement.

For $500, you could get the Supermicro X9SRH-7TF. Requires higher density memory to hit 256GB, but sports 10GbE and three usable PCIe. Onboard LSI SAS2308 for basic RAID functionality, compatible with FreeBSD.

For $800, you can get the super-crazy-fun Supermicro X9DR7-TF+, dual 2011, up to 768GB, six PCIe slots, dual 10GbE, and LSI 2208. There are other variants that substitute quad gigE or other features for a lower price. Beautiful board, we've had one here in shop for several months to play with and it is very nice.

3x GbE for the environment in which this system will be used should be more than adequate. However, I was a little apprehensive about only 2x PCIe. I haven't had time to look for more motherboards but I will look at your suggestions. All of my previous experience with server-class motherboards has been with ASUS, Tyan, and OEM (Dell, HP, etc.). I know of SuperMicro but I have never used their products. They seem to be held in high regard here on the forums, so I'm good with that.

As for CPU, yes this is a big question. 16 cores seems stupid to me for this system. I'm not using dedupe or high-compression on the ZFS. I question whether the way this server will be used will be able to effectively take advantage of more than 4 or 6 cores.

What is your opinion on the number of cores? Given no dedupe, fast-compression, and the relatively slow drives, how many cores do you feel would be appropriate?

RAM. Yes, the mobo is limited to 256 GB of RAM, but the chassis is also limited to 50 SAS bays. (I'll get to the chassis next.) I was questioning whether I even needed 256 GB ... of course I could easily attach another external JBOD under the same system domain. More RAM could potentially be advantageous in that case, but I think chances are slim that this system will ever be expanded.

Conspicuously missing from this discussion is what you're going to throw everything in. And I'm going to suggest something.

An archival system, by definition, probably does not need instant highest-possible-speed access to every drive at every second. Consider a chassis like the SC846BE16-R920B. This unit will house your mainboard and will connect all 24 front bays to your HBA of choice via a single SFF-8087 cable. And here's the point: those ST4000DM000's are only capable of about 150MB/sec, times 24 is 3600MB/sec, and an SFF-8087 is four 6Gbps lanes for 24Gbps, which works out to around 125MB/sec per drive for 24 drives - so at worst you give up a little peak sequential throughput, which an archival box will never notice. You buy two of these chassis, one for your mainboard and drives, the other as an expansion chassis. You hook them up to something like an IBM ServeRAID M1015, or maybe a pair of M1015's. You end up with a substantially lower expense for unnecessary RAID cards, and you're also ending up on hardware known to work well with FreeNAS.

I forgot to mention the chassis. I'll edit the original post with that info next. I'm planning to use this as the main chassis for this initial set of disks and the mobo. I don't even know if the system will need to be expanded with more disks in the future, but if it does, I can use the external SAS connector to a JBOD with a SAS expander.


  • Chenbro RM91250: 9RU, 50x 3.5" hot-swap SAS bays, 2x 3.5" hot-swap SATA bays, 12x SAS backplanes (8087), 1620W 3+1 redundant PSU, enough fans to give me a splitting headache in less than a second :smile:

Again, thank you for your commentary and suggestions. I'll be checking SuperMicro mobos tonight, and will be scrubbing the HCL. If I can get more PCIe slots, I'd much rather use simpler HBAs with a proven track record.

What about using 2x HBA with only 2x 8087 connections to a couple SAS expanders I install in the chassis? Given these are slow drives, might this be a good solution since I don't need massively parallel I/O?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I only picked that SAS controller because it's 3ware (likely to be supported in FreeBSD 8) and with two cards, on the limited two PCIe slots of the mobo, I would have enough SAS ports to attach all of the drives (12x 8087 connectors). I had intended to use the controller in JBOD/HBA mode, not in RAID mode of course. ZFS is an absolute requirement.

3Ware is an LSI "legacy" brand, which in some cases means LSI controllers with firmware that somewhat resembles 3Ware's.

Using a RAID controller in JBOD mode may still be undesirable. The point is that a lot of these RAID controllers have something like a PowerPC 800 MHz processor in them to handle management, and since what you're doing is just passing raw blocks back and forth through them, all you've managed to do is to introduce an additional point at which things could be slowed down. Worse, many RAID controller "JBOD" modes still insist on having their own disk label on the pack, so that the RAID firmware doesn't smash the contents or something equally bad. This means you're more or less permanently marrying those disks to that family of controllers.

3x GbE for the environment in which this system will be used should be more than adequate. However, I was a little apprehensive about only 2x PCIe. I haven't had time to look for more motherboards but I will look at your suggestions. All of my previous experience with server-class motherboards has been with ASUS, Tyan, and OEM (Dell, HP, etc.). I know of SuperMicro but I have never used their products. They seem to be held in high regard here on the forums, so I'm good with that.

You can find some nice boards like the X9DRi-LN4F+ with quad gigE built in, useful...

Anyways, don't just take local opinion here as a positive sign. Netgear, TrueNAS (commercial FreeNAS), etc. are among the vendors using the Supermicro chassis and boards as a no-brainer NAS platform. Supermicro is usually the one you don't hear about, because they're mostly selling to Internet companies and systems integrators.

As for CPU, yes this is a big question. 16 cores seems stupid to me for this system. I'm not using dedupe or high-compression on the ZFS. I question whether the way this server will be used will be able to effectively take advantage of more than 4 or 6 cores.

What is your opinion on the number of cores? Given no dedupe, fast-compression, and the relatively slow drives, how many cores do you feel would be appropriate?

I think your real question is likely to be: how fast do you need it to be? If you're doing CIFS work, for example, per-core speed matters, since Samba tends to be bound to a single core per client. You are almost certain to wind up with more cores than you need, so my suggestion would be to try to stay small.

RAM. Yes, the mobo is limited to 256 GB of RAM, but the chassis is also limited to 50 SAS bays. (I'll get to the chassis next.) I was questioning whether I even needed 256 GB ... of course I could easily attach another external JBOD under the same system domain. More RAM could potentially be advantageous in that case, but I think chances are slim that this system will ever be expanded.

You're up in a realm where it is hard to know. Best bet is to not buy low-density memory: stick higher-density sticks in, and then if it turns out to be too little, you still have more slots free and can cram more in. The sizing guidelines kind of assume that your server is moderately busy and there's a lot of stuff going on, so I would be tempted to try 64GB first (4 x 16GB) and then, if that seems okay, just be aware that it'll probably degrade somewhat as things get fuller in the future. Maybe check that 64 works for now and then stuff 128 in, just to extend the time before you have to readdress the issue. :smile:

I forgot to mention the chassis. I'll edit the original post with that info next. I'm planning to use this as the main chassis for this initial set of disks and the mobo. I don't even know if the system will need to be expanded with more disks in the future, but if it does, I can use the external SAS connector to a JBOD with a SAS expander.


  • Chenbro RM91250: 9RU, 50x 3.5" hot-swap SAS bays, 2x 3.5" hot-swap SATA bays, 12x SAS backplanes (8087), 1620W 3+1 redundant PSU, enough fans to give me a splitting headache in less than a second :smile:

Yeah, um, just so you're aware, those are like four-person-lift as they'll weigh in just shy of 300 lbs when fully loaded. Really not the recommended way to go... but if you can rack the thing and be assured of never needing to service it, I guess maybe. Heh.

Again, thank you for your commentary and suggestions. I'll be checking SuperMicro mobos tonight, and will be scrubbing the HCL. If I can get more PCIe slots, I'd much rather use simpler HBAs with a proven track record.

What about using 2x HBA with only 2x 8087 connections to a couple SAS expanders I install in the chassis? Given these are slow drives, might this be a good solution since I don't need massively parallel I/O?

Right, well, look at those Supermicro 24 drive chassis and you'll find exactly that. They have an LSI 1x36 expander built right into the backplane. It is very neat and easy to set up.
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi yottabit,

If I were you I would call our good friends over at iXsystems (http://www.ixsystems.com/) and ask them to write you up a quote. As I understand it, they know a thing or 2 about FreeNAS & building big ZFS systems.

Barring that, I would encourage you to at least do an all-Supermicro system. Don't try to mix & match components for a project like this; buy gear that has been designed & tested to work together.

-Will
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
Another wrinkle.

I had intended to create a 45-drive vdev Z3 + 1 spare, because why not? :smile:

But I started doing some more research on large arrays and found that I should really create smaller vdevs and stripe them together.

Unfortunately, if I use the optimized vdev sizes (11-disk Z3, 10-disk Z2), I end up with many more parity drives and not enough bays to hit the 150+ TB requirement in this chassis.

Alternative: 2x vdev consisting of 24-disk Z2 = 176 TB usable, or removing 2 additional disks for warm-spare = 2x vdev of 23-disk Z2 + 2 spares = 168 TB usable.

That's perfect. The OpenSolaris Wiki says not to use more than 40 or so disks in a single vdev. Now I realize that 23 disks is far from optimum because of stripe size, but I really don't have any alternative other than spending a lot more money on many more disks and another JBOD chassis. I also realize that probably the biggest risk in this configuration is that resilvering will take a very long time, with a high chance of another disk failing during the process (I plan to have ~150 TB of live data on the array).
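(For reference, the usable-capacity figures being tossed around come from a simple per-vdev calculation; a quick sketch with 4 TB drives, decimal TB, and ZFS metadata/slop ignored:)

[code]
# Rough usable-capacity math for the layouts under discussion.
# 4 TB drives, decimal TB, ZFS overhead not accounted for.
def usable_tb(vdevs, disks_per_vdev, parity, drive_tb=4):
    return vdevs * (disks_per_vdev - parity) * drive_tb

print(usable_tb(1, 45, 3))   # single 45-disk RAID-Z3          -> 168 TB
print(usable_tb(2, 24, 2))   # 2x 24-disk RAID-Z2              -> 176 TB
print(usable_tb(2, 23, 2))   # 2x 23-disk RAID-Z2 (+2 spares)  -> 168 TB
[/code]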

Opinions? Since this is archival, I don't much care if the system is a little slow. Or is my approach completely bat-#$!^ crazy?

- - - Updated - - -

If I were you I would call our good friends over at iXsystems (http://www.ixsystems.com/) and ask them to write you up a quote. As I understand it, they know a thing or 2 about FreeNAS & building big ZFS systems.

I have actually talked with them today. At this point I'm more interested in buying a support contract from them than actual hardware. Just trying to save money on the hardware. I wish I had the budget to actually buy a full TrueNAS solution from them.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
Any vdev larger than an 11-disk Z3 is generally considered bad.

Do you really need all 150 TB right now? If not, add disks later when higher-capacity disks are available. Then you might be able to get the 150 TB in one chassis.

A 24-disk Z2 vdev is definitely crazy.

Can you do an 11-disk Z3 vdev of 3 TB drives right now, and expand with additional vdevs later, using bigger drives?

Even 11-disk vdevs would probably be frowned upon by most. 'Conventionally', i.e. the safer way to go, would be multiple 6-disk Z2 vdevs. You gain a lot in redundancy (which is good), but it does take more drives. And since the vdevs are smaller, expanding only takes 6 disks at a time, and resilver times are lower. This would give you 8 vdevs of 6 disks in Z2: 48 'active' disks, with the option for 2 warm spares. Capacity would be 32 disks' worth. Depending on what size disks are available when the system needs expanding, will this be enough?
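(To put a rough number on that suggestion, again assuming 4 TB drives and ignoring ZFS overhead:)

[code]
# 8 vdevs x 6-disk RAID-Z2 on 4 TB drives, rough numbers only.
vdevs, width, parity, drive_tb = 8, 6, 2, 4
print("%d TB usable" % (vdevs * (width - parity) * drive_tb))  # 128 TB
# That falls short of the stated 150+ TB target with 4 TB drives,
# hence the question about expanding with larger drives later.
[/code]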

Otherwise, you'll simply need the ability to have more disks.
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
Thanks to everyone replying in this thread. I have earnestly read all of your suggestions and commentary, and taken it to heart.

I've decided to completely change directions here. I studied some SuperMicro barebones solutions and came away quite impressed. I've swapped out the chassis for SuperMicro, gone with a SuperMicro mobo that comes in the chassis and already includes the SAS HBAs and expanders (I still need to check the FreeBSD HCL to double-check ... FreeBSD wasn't mentioned in the compatibility notes on the SuperMicro site, but companies rarely even acknowledge that FreeBSD exists), changed from dual AMD CPUs to a single Intel CPU (with the option to expand to dual), reduced the RAM to 128 GB (with the option to expand to 512 GB), and, probably most important, switched from 1x vdev Z3 (45 disks in one vdev) or 2x vdev Z2 (24 disks per vdev) to 5x vdev Z2 (10 disks per vdev).

The change in configuration surprisingly cost only ~$2k more. That's peanuts in this case. But I had to write justification, so here goes:

  • Increased fault tolerance from 3 disk failures to up to 10 (2 failures tolerated in each of the 5 groups)
  • Increased warm-spare drives from 1 to 2
  • Substantially increased performance, especially on writes and when rebuilding redundancy after replacing a failed disk
  • Increased LAN ports from 3 to 4
  • Increased expansion capability up to 512 GB RAM (other solution was 256 GB maxed out)
  • Increased expansion capability to grow in-chassis storage from 160 TB to 192 TB without any additional hardware (other solution was maxed out; quick math below)
  • Increased expansion capability to grow external storage with practically no limit (at least 1.5 PB = 1500 TB) under the same system domain (other solution could not accept external growth)
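(For what it's worth, here's the quick arithmetic behind the 160 TB and 192 TB figures above; 4 TB drives, decimal TB, ZFS overhead ignored, and the 192 TB number presumably meaning a sixth 10-disk Z2 vdev added into the spare bays.)

[code]
# Final layout: 5x 10-disk RAID-Z2 (+2 warm spares) on 4 TB drives.
# Rough decimal-TB figures; ZFS metadata/slop not accounted for.
drive_tb = 4

def z2_capacity(vdevs, width=10, parity=2):
    return vdevs * (width - parity) * drive_tb

print("initial: %d TB" % z2_capacity(5))   # 5 vdevs -> 160 TB
print("grown:   %d TB" % z2_capacity(6))   # presumably a 6th vdev -> 192 TB
[/code]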

Going with the SuperMicro SuperStorage Server 6047R-E1R72L now. Hey, it even saved me 3RU in the rack. Amazing.

I anticipate this will make a lot of you happy. It sure did make me happy to see the difference in cost between the initial solution and this solution was minimal and provided such substantial benefits.

Comments and opinions welcome as always! Thanks again, everyone!
 

Kosta

Contributor
Joined
May 9, 2013
Messages
106
Daaamn....
Totally new to this, but what are you building the 160 TB storage for? And what does the final price come to, without me having to count up the components myself :)
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
Daaamn....
Totally new to this, but what are you building the 160 TB storage for? And what does the final price come to, without me having to count up the components myself :)

I'm building this for a software distribution business. We need 150+ TB online for the duplicators to pull from when creating CD/DVD/flash media as ordered for fulfillment.

The total hardware comes to $20,500. I think that's around 1/3 to 1/2 of what an OEM (i.e. HP, Dell, EMC, etc.) would charge for a similar setup.

iXsystems was competitive for sure, and TrueNAS provides some great features FreeNAS lacks. But unfortunately in this case I didn't have the budget to use their turnkey system. I will likely be using them for maintenance/service.
 

weecho

Dabbler
Joined
Jan 28, 2015
Messages
11
yottabit,

Did you complete this system? Did you do performance testing on it? Have you expanded it since then? Anything you would change? I'm about to do the same thing.

Thank you for your input.
 