Small Home-NAS build: searching for advice

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
Ok, so here's my requirements list:
  1. Must have
    • Redundant storage with suitable SMB read and write performance for GBit LAN.
    • ECC RAM
    • IPMI (without additional license fees like for example some HP servers)
    • Compact and quiet build with front accessible hot swap bays.
      Something the size of a Norco ITX-S8, Ablecom CS-T80 (both unavailable) or Silverstone DS380 (has some serious HDD cooling issues, and cheap-looking drive trays without individual LEDs) would be ideal.
      Six 3.5" bays would be sufficient for RAIDZ2, but I didn't find any chassis that size.
      So I came up with four bays with RAIDZ1, with chassis like SuperMicro 721TQ, or Inter-Tech IPC SC-4100 / Norco ITX-S4 (those two look exactly the same).
      From all these chassis, the SuperMicro 721TQ and the Ablecom CS-T80 seem best quality-wise.
      Why is eight-bay stuff so darn rare...?
      I'd actually prefer 8 bays to 4 bays, if only a decent chassis were available.
  2. Should have
    • Upgradeable CPU
    • Front door
  3. Nice to have
    • An additional 2.5" hot swap bay, or a 3.5" or slim ODD bay where I could fit a hot swap bay for the boot SSD.
    • Maybe Plex jail with hardware transcoding support (if that feature becomes available)
  4. Don't care to have
    • 10GBit LAN
    • Powerful virtualisation capabilities
    • Database server
    • Domain controller, mail server, OwnCloud etc.
  5. Must not have (new point from me)
    • 19" rack mount chassis or huge full size tower.
      In the worst case, instead of buying one of those, I'd rather populate my old Sharkoon Rebel9 with some 5.25" to 3.5" hot swap backplanes...
And here are my current space requirements (just a rough look):
  1. Files that are easily replaceable, or whose loss would be a minor inconvenience at worst
    ~1.5TB
  2. Files that are not as easy to replace but not critical - losing them would be a large inconvenience but not the end of the world
    ~1TB, plus Windows Backups for 2 PCs
  3. Files that are irreplaceable
    ~0.2TB
 

Ixian

Patron
Joined
May 11, 2015
Messages
218
You'll make your life a lot easier if you ditch the hot-swap requirement.

I think hot swap drives for smaller systems (<10 drives) are a nice-to-have at best in a home server environment. Better to find a case that features easy-access internal bays, as you'll have more/better/cheaper options.

Worst case without hot swap is that you'll have to schedule up to 15 minutes of downtime to swap the drive and power back on. On a schedule you control (i.e. do it late/early/on a weekend). For an event that may only happen once or twice during the years-long lifetime of the server, and may never occur at all. In return you gain a *lot* of flexibility, because as you've already discovered, hot swap, particularly on the smaller end, comes with a lot of compromises and not a lot of choices. BTW I tried and returned the CS380 myself, it is pretty awful.

For larger rack systems with 16, 24, etc. drives, sure, no question you want hot swap HDD, but you've already made that a not-have.

I'd look at a case like the Node 804. I like them so much I bought a second one and swapped my backup system to it. Pretty compact, very flexible in terms of hardware (mATX boards and ATX power supplies fit easily, it's just easy to work with), can fit up to (10) 3.5/2.5 drives and (2) 2.5 drives, very easy to cool and keep quiet with large, flexible fan installations, and while the drives aren't hot-swappable they are very easy to access and remove. And like most Fractal cases they are a rare combo of cheap (cost) and well made. The 804 is a much more solid build than the CS380, for example, and costs half as much. All you get with the 380 is hot swap with cheap plastic drive trays, a sketchy backplane, and poor cooling.

Here's a shot of the drive side of mine - 4 thumb screws (2 for the side panel and 2 for the drive cage) and I have the drive cage out and ready to make the swap. Label your drives on the side like I did and you'll know exactly where the problem child is. I can actually swap and have the server back up in less than 15 minutes.

[Attached image: IMG_2571-2 - Copy.jpg, showing the drive side of the Node 804 with the cage removed]

And if you are thinking you can't tolerate even 15 minutes of (rare) downtime you have bigger system planning concerns because the availability of the array is just one piece of the high-availability puzzle.
 

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
I fully agree with you; in a home environment a disk failure is not very likely to happen anyway, and the downtime is not mission-critical either.
Actually, the only HDD that ever died in one of my PCs was one from the (in)famous IBM "Deathstar" series, and that one died within its very first power-on hour.

But the reason why I prefer hot swap is not related to time.
One reason is the additional info you get: if everything is ok you get a lot of green lights and access blinking, and if a drive fails it is indicated (and of course, blinking lights are just the thing men want their stuff to have ;)).
Plus, you have overtemperature warning and fan monitoring independent of the mainboard.

But my main reason is tidiness.
When using internal drive bays, power is usually connected with daisy-chained cables, and you have angled SATA cables hanging around there, too.
If you want to pull out a drive in the middle or at the bottom, you have to unplug all the cables for all drives in that cage, and re-plug them afterwards.
If you have a backplane instead, you plug in all the cables once, arrange them tidily for optimal airflow and fix them in place, and you never have to touch them again; you only need to pull and push the drives.

Call me quirky, but I like backplanes for that simplicity.
I wish there were some small case with internal hot swap bays, like some old Lian Li chassis.

I actually have a Fractal Design Arc Midi, and I like it very much because of its quality, features and reasonable price.
The Node 804 looks nice, but is a little bit wide. Though it would fit under my desk if I were careful with my office chair.
I'll definitely think about it.

<brainstorming mode>
Maybe an Arc Mini, which has 6 internal 3.5" bays.
It's bigger than I wished; instead I could stack two Supermicro 721TQ chassis and have two NASes, main and backup.
But maybe I could attach one of those 1-port 3.5" HDD SATA hot-plug backplane boards to the cages somehow to simplify HDD mounting.
Moreover, these PCBs also offer access to Pin 5 (presence detection) and Pin 11 (activity LED).
So if I butcher a used Supermicro SAS823TQ 6x SATA/SAS backplane with an MG9072 chip on it, and connect it to the small PCBs, I'd have complete backplane functionality (except front access), including the sideband header connection to a compatible mainboard.
And finally put all the additional LEDs that a server grade mainboard offers (LAN link status, fault and whatnot) and the drive LEDs into one of the 5.25" covers.
Most probably I won't ever do this, but I kinda like thinking of those possibilities...
</brainstorming mode>
 

Ixian

Patron
Joined
May 11, 2015
Messages
218
My pic doesn't really show it but my data and power cables are arranged and zip tied together so that I don't have to remove any of them. I can just pull the entire cage out, swap the drive, double check to make sure I didn't knock another cable loose, and slide the cage back in. Neat and easy peasy. I think you'd be surprised how easy it is to keep the cables, if not perfectly neat, then at least untangled and out of the way of airflow. To be honest I have an easier time working in it than I do my SuperMicro 745 4u case.

I've long stopped relying on blinky lights for status and besides, the regular health checks, SMART reports, etc. that you can (and should) configure in FreeNAS will keep you up to date with far better information about how your drives are doing. For fan monitoring, you can already separate that, although I couldn't say why you wouldn't want to go the PWM route a good Supermicro board offers and use the excellent fan script located here on these forums to keep your case as quiet yet cool as possible, other than maybe you haven't thought that part through yet :)
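
Purely to illustrate the kind of data those reports surface, here's a minimal Python sketch that shells out to smartctl (assuming smartmontools is installed and FreeBSD-style /dev/ada* names; the device list and watched attributes are illustrative, and the built-in FreeNAS tasks are the proper way to do this):
[CODE]
import subprocess

DRIVES = ["/dev/ada0", "/dev/ada1"]   # hypothetical device names; adjust to your system
WATCHED = ("Temperature_Celsius", "Reallocated_Sector_Ct", "Current_Pending_Sector")

for dev in DRIVES:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        parts = line.split()
        # smartctl -A prints an attribute table; column 2 is the attribute name,
        # columns 10+ hold the raw value (attribute names vary by drive vendor).
        if len(parts) > 9 and parts[1] in WATCHED:
            print(f"{dev}: {parts[1]} = {' '.join(parts[9:])}")
[/CODE]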

You don't need hot swap for what you are trying to accomplish, basically. If you want it, sure, your choice after all, but you don't need it and you'll have an easier and cheaper time with your build without it.
 

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
My pic doesn't really show it but my data and power cables are arranged and zip tied together so that I don't have to remove any of them. I can just pull the entire cage out, swap the drive, double check to make sure I didn't knock another cable loose, and slide the cage back in. Neat and easy peasy.
Ah ok, what PSU do you use?
I know from my Seasonic PSUs that they use flat cables with IDC connectors, so the wires pass through the connector at a right angle (from top to bottom), and the cables are quite stiff.
Did you bend both ends away and tie them together? That would work if there is enough room.
Do normal SATA data cables fit, or did you have to use angled ones because normal ones would have collided with the PSU?

Fun fact: As I normally prefer Seasonic PSUs, I used their online wattage calculator.
I selected a server mainboard, Xeon 1225, 16 GB RAM, 8 HDDs, 1 SSD, an additional RAID controller card, 6 120mm fans, 24/7 power on.
The calculator gave me a load wattage of 291W and a recommended PSU size of 341W.
But the proposed model reads "Seasonic PRIME FOCUS Modular 650W" (a model that doesn't even exist), with an Amazon link to the "Focus Plus 650W Gold" model.
So not only is the proposed model almost double the calculated wattage, it also only has 8 SATA power connectors, so I'd have to use a Molex-to-SATA adapter for the SSD. Meh...
I wonder why they didn't propose the 850W model, which comes with 10x SATA power.
Basically, I guess I could use the Focus Plus 550W model, or even the Focus Gold 450W, if I could get 2 peripheral cables with 4 SATA connectors each.
Those cables do exist and are shipped with the bigger models; the only question is how to get them and whether they would cost more than just buying the bigger model.

Did you use fans above the drive cages?

I've long stopped relying on blinky lights for status and besides, the regular health checks, SMART reports, etc. that you can (and should) configure in FreeNAS will keep you up to date with far better information about how your drives are doing.
Of course the regular checks and reports are important. But if something starts beeping and blinking, it gets one's attention quicker, at least that's what I thought.

You don't need hot swap for what you are trying to accomplish, basically.
Yes you're right, and the more I look at the Node 804, the more I like it.

To be honest, I'd love to have a separate room with a big 19" rack containing a patch panel, switch, NAS, UPS and all that stuff.
But I don't have the space for that, and my girlfriend would kill me if I put something like this somewhere in our apartment.
So the only place I have for a NAS is underneath my home office desk.
That's why I thought about stacking a SuperChassis 721TQ and an HP Gen7, because that stack would fit under my desk.
Or if it really were a super quiet NAS, I could maybe put it behind the couch in the living room, and put a (perhaps a little noisier) backup NAS under my desk.
 

Ixian

Patron
Joined
May 11, 2015
Messages
218
I've used regular consumer Corsairs for years; I use the HX650 in my FreeNAS builds currently. I also size based on the low end, the (very general) rule of thumb being that your lowest wattage needs should be at least 20% of the rating and your highest 80%. If you go with a Gold power supply (like that one), then the minimum wattage target you want to hit is 20%; for a 650W Gold PSU that is 130W of DC load, which works out to roughly 130/0.87 ≈ 150W drawn at the wall (Gold being a minimum of 87% efficient at 20% load).

A lot will come down to your drives which can vary widely, even among the same capacities and spindle speeds. For example my WD 10TB Reds idle at 2.8w and under load are a little over 7w, which is quite low. The 8TB version of the same drive is more than double both. And so on. So make sure you calculate the right draw for your drives.
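
As a rough sanity check against that 20%/80% rule, here's a tiny back-of-the-envelope sketch in Python; every wattage in it is an illustrative guess rather than a measurement, so plug in your own component figures:
[CODE]
# Back-of-the-envelope load estimate for the 20%/80% rule of thumb above.
# All component figures are illustrative guesses, not measurements.
drive_count = 8
idle_dc = drive_count * 2.8 + 35 + 5 * 1.5    # drives + board/CPU/RAM + fans
load_dc = drive_count * 7.0 + 90 + 5 * 1.5    # everything busy
spinup  = load_dc + drive_count * 20          # rough 12V surge at power-on

for rating in (450, 550, 650):
    print(f"{rating}W PSU: idle at {idle_dc / rating:.0%} of rating, "
          f"load at {load_dc / rating:.0%}, spin-up peak ~{spinup:.0f}W; "
          f"wall draw at idle ~{idle_dc / 0.87:.0f}W if ~87% efficient")
[/CODE]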

You don't need angled connectors for the drives, there's enough room, though they wouldn't hurt either. For power I use 4-to-1 SATA power extenders (which are angled); you can get them inexpensively on Amazon. There's more than enough room in the 804 to tie the excess off and out of the way. Lots of system builders initially overlook that in many cases it is better to have longer cables and zip-tie the excess, since it's easier to position everything where you need it. Also, this will allow you to leave enough slack to pull the drive cages in and out without having to disconnect anything.

I don't use fans on the top of the 804. Not needed overall and with the drive cages installed there's no room on that side anyway. I use two front/back fans on the drive side in a push-pull configuration. They are PWM Noctua fans controlled by the excellent Supermicro Fan control script located here on the forums. My drive temps rarely exceed 37°C under load, and I have 8 of them on that side.

The 804 isn't one of Fractal's "silent" cases, but with 4 case fans and the CPU cooler, all PWM and controlled by the script, it is basically silent except under load, and even then it isn't obnoxious (i.e. not high-pitched or "vacuum cleaner" in tone or level). You could probably get away with putting it in the living room, just know that it won't always be completely silent; it depends on what amount of background noise you consider bothersome.
 

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
I've used regular consumer Corsairs for years; I use the HX650 in my FreeNAS builds currently. I also size based on the low end, the (very general) rule of thumb being that your lowest wattage needs should be at least 20% of the rating and your highest 80%. If you go with a Gold power supply (like that one), then the minimum wattage target you want to hit is 20%; for a 650W Gold PSU that is 130W of DC load, which works out to roughly 130/0.87 ≈ 150W drawn at the wall (Gold being a minimum of 87% efficient at 20% load).
So is your PSU actually overpowered, or does your system really draw 130/150W at idle, or is it sized for the maximum load power draw?
I'm just curious here; I thought I would be on the safe side with a 450W model, or 550W for extra safety, please correct me if I'm wrong.

You don't need angled connectors for the drives, there's enough room, though they wouldn't hurt either. For power I use 4-to-1 SATA power extenders (which are angled); you can get them inexpensively on Amazon. There's more than enough room in the 804 to tie the excess off and out of the way. Lots of system builders initially overlook that in many cases it is better to have longer cables and zip-tie the excess, since it's easier to position everything where you need it. Also, this will allow you to leave enough slack to pull the drive cages in and out without having to disconnect anything.
Ah, thanks for that info!
I also like longer zip-tied cables more than a-little-too-long ones.
Although for the disks I would prefer to use the original cables instead of additional adapters.

I don't use fans on the top of the 804. Not needed overall and with the drive cages installed there's no room on that side anyway. I use two front/back fans on the drive side in a push-pull configuration. They are PWM Noctua fans controlled by the excellent Supermicro Fan control script located here on the forums. My drive temps rarely exceed 37°C under load, and I have 8 of them on that side.
Ok, I didn't realise that top fans are impossible with the drive cages installed.
But I'm glad to hear that a front+rear fan pair is sufficient to keep even 8 drives below 40°C.
That fan control script looks very interesting indeed. Will it work with 3 fans (front, CPU and back) in the CPU compartment and 2 fans (front and back) in the HDD compartment? Or would I have to use Y-adapters?

The 804 isn't one of Fractal's "silent" cases, but with 4 case fans and the CPU cooler, all PWM and controlled by the script, it is basically silent except under load, and even then it isn't obnoxious (i.e. not high-pitched or "vacuum cleaner" in tone or level). You could probably get away with putting it in the living room, just know that it won't always be completely silent; it depends on what amount of background noise you consider bothersome.
Of course it cannot be called silent with 4 fan openings and a mesh cover at the top. But it's good to read that it can still run quietly.

I'll post a new config suggestion soon when I have picked everything.
 

Ixian

Patron
Joined
May 11, 2015
Messages
218
I'm right at the bottom edge of the recommended idle range with my system, so technically yes, it is overpowered; I could get away with a 550 easily. I already had the 650W lying around. You can of course run below 20% actual, and it will draw less power overall, it's just less efficient down there. Like, if you only drew 90W at idle with a 650W you would probably end up with 105-110W at the wall because the PSU is less efficient. Or something like that, there are others here who know more than I do on this topic.

You are probably more than ok at 550W... just checking - what drives are you using again? I am only near the bottom with an E5-2680v3 system because my drives peak at around 7W, which is low - WD changed their power consumption for the better with the 10TB Red. Yellows will of course draw more, as do many other HDDs over 2TB.

Supermicro motherboards have two fan zones, so yes you can do 3 CPU side (+CPU fan) and 2 drive side, just use decent PWM fans and hook them up right. I use a splitter for the drive side fans because my Supermicro board only has a Zone A, not A+B. Fans 1-4 (or 5, depends on board) are in one zone, usually the one the CPU fan goes in. Zone A, and B if present, are the other. Works great with the 804 since the case is divided like that.
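
To make the zone idea concrete, here is a stripped-down Python sketch (not the forum fan script, which does proper PID control from drive and CPU temps) using the raw IPMI duty-cycle commands commonly reported to work on Supermicro X10/X11 boards. Treat the raw bytes as an assumption and verify them against your exact board before relying on them:
[CODE]
# Illustration only: set each Supermicro fan zone to a fixed PWM duty cycle.
# The raw IPMI commands below are the ones commonly cited for X10/X11 boards;
# check them for your board before use.
import subprocess

ZONE_CPU, ZONE_PERIPHERAL = 0x00, 0x01   # FAN1-4 vs FANA/FANB

def set_duty(zone: int, duty_pct: int) -> None:
    """Set one zone's PWM duty cycle (0-100%). Requires 'Full' fan mode first."""
    subprocess.run(["ipmitool", "raw", "0x30", "0x70", "0x66", "0x01",
                    f"0x{zone:02x}", f"0x{duty_pct:02x}"], check=True)

# Put the BMC in 'Full' mode so it stops overriding manual duty cycles.
subprocess.run(["ipmitool", "raw", "0x30", "0x45", "0x01", "0x01"], check=True)
set_duty(ZONE_CPU, 40)          # CPU-side fans: moderate
set_duty(ZONE_PERIPHERAL, 55)   # HDD-side fans: a bit higher for the drive cages
[/CODE]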

You probably don't need 2 intake fans on the CPU side either, and if you do go with one intake I'd install it in the lower slot so it hits your PCIe slots and bridge heatsinks better. I use two because I have an X540 10GBase-T card in one of my PCIe slots and they run hot - hotter than many CPUs.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
This build in a fractal 804 might help visualise what's been discussed: https://ramsdenj.com/2016/01/01/freenas-server-build.html
He used a SeaSonic G-550 Power Supply and 5 PWM fans. The article has a lot of detail.

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
Ok, so here's my updated configuration plan:
  • Chassis: Fractal Design Node 804
  • Chassis fans: installed Fractal Design or Noctua NF-S12B redux
  • PSU: Seasonic Focus Gold 450W or Seasonic Focus Plus Gold 550W
  • Mainboard: Supermicro X11SSM-F or Supermicro X11SSH-F
  • RAM: Samsung DIMM 16GB, DDR4-2400, CL17-17-17, ECC (M391A2K43BB1-CRC)
  • CPU: Intel Pentium Gold G4560
  • CPU cooler: stock, or if that one is too noisy maybe a Scythe Kotetsu Mark II
  • System drive: Intel Optane Memory 16GB M.2 or Intel SSD 320
  • Data drives: WD Red, not sure about size and configuration yet
Possible HDD configurations:
  • 8TB:
    • 3x4TB RAIDZ1 (~355€)
    • 5x2TB RAIDZ1 (~390€)
    • 4x4TB RAIDZ2 (~475€)
    • 6x2TB RAIDZ2 (~470€)
  • 12TB:
    • 3x6TB RAIDZ1 (~580€)
    • 4x4TB RAIDZ1 (~475€)
    • 5x3TB RAIDZ1 (~485€)
    • 4x6TB RAIDZ2 (~770€)
    • 5x4TB RAIDZ2 (~590€)
    • 6x3TB RAIDZ2 (~585€)
What should I go for, given that I now have about 3TB of data?
Is it correct that it is best to have an even number of data drives, so the total number would be odd for RAIDZ1 and even for RAIDZ2?
From an energy viewpoint, fewer drives are better of course.
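
For a rough comparison of those layouts, here's a quick Python sketch of raw usable capacity (parity only; a real pool loses a few percent more to ZFS metadata and slop space, and the old "power-of-two data drives" advice is generally said to matter much less once compression is enabled):
[CODE]
# Quick raw-capacity comparison of the layouts above (parity overhead only).
layouts = [  # (drives, size_tb, parity)
    (3, 4, 1), (5, 2, 1), (4, 4, 2), (6, 2, 2),   # ~8TB targets
    (3, 6, 1), (4, 4, 1), (5, 3, 1),
    (4, 6, 2), (5, 4, 2), (6, 3, 2),              # ~12TB targets
]
for n, size, parity in layouts:
    usable = (n - parity) * size
    print(f"{n}x{size}TB RAIDZ{parity}: ~{usable}TB usable, "
          f"{parity / n:.0%} of raw space spent on parity")
[/CODE]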

Supermicro motherboards have two fan zones, so yes you can do 3 CPU side (+CPU fan) and 2 drive side, just use decent PWM fans and hook them up right. I use a splitter for the drive side fans because my Supermicro board only has a Zone A, not A+B. Fans 1-4 (or 5, depends on board) are in one zone, usually the one the CPU fan goes in. Zone A, and B if present, are the other. Works great with the 804 since the case is divided like that.

You probably don't need 2 intake fans CPU side either, and if you do go with one intake I'd install it in the lower slot so it hits your PCIe slots and bridge heatsinks better. I use two because I have an x54010G-Base T in one of my PCIe slots and they run hot - hotter than many CPUs.
I would put 2 fans in the mainboard compartment, lower front and back, plus the CPU fan, so a total of 3.
And two fans in the HDD compartment, upper front and back.
The board has Fan1-4 and FanA, so I would use Fan1-3 for the CPU compartment, and FanA with a Y-adapter for the HDD fans.
That would leave one fan connector on the board unused, and I'd lose RPM monitoring for one fan, what a bummer.
I haven't looked at the script source, but couldn't the assignments be changed? That would be great.

This build in a fractal 804 might help visualise what's been discussed: https://ramsdenj.com/2016/01/01/freenas-server-build.html
He used a SeaSonic G-550 Power Supply and 5 PWM fans. The article has a lot of detail.
Thank you, that looks good!
But I wouldn't dare place that expensive hardware on that IKEA-looking shelf...
 

Ixian

Patron
Joined
May 11, 2015
Messages
218
For NAS media/backup storage, in a case that size (which is going to give you a max of (10) 3.5" drives) I'd lean towards a smaller number of higher-density drives in at least a Z2 configuration. If you really need the IOPS it's better to go with several mirrored pools instead, but I seriously doubt you will. For jails I run mirrored SSDs, which is where most of my IO-heavy usage happens anyway. A 10-drive Z2 is also about the widest single vdev you'd want to go.

Or you could go with a 9-drive Z3 if you go with really dense drives (12TB+). The higher the individual drive density, the longer the resilvering/rebuilding takes if a drive fails, and the more redundancy you want; that's the general rule of thumb. In this day and age, where even "small" drives are 2TB or more, I wouldn't consider Z1 for anything.

Or if you want to start out smaller and grow, do a Z2 pool of 4 or 5 drives now, and add another 4-5 drive pool down the line when you need the additional space. Again, since the bulk of your storage is media at rest, that is probably a better option than mirrored pools, etc.
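
To make the start-small-and-grow idea concrete, here's a minimal sketch with hypothetical /dev/daX device names; whether the later drives go into the same pool as a second vdev (as below) or into a separate pool is your call, and in practice you'd do all of this through the FreeNAS GUI, which also handles partitioning and swap:
[CODE]
import subprocess

def run(cmd):
    print("#", " ".join(cmd))        # show the command being issued
    subprocess.run(cmd, check=True)

# Day one: a single 5-wide RAIDZ2 vdev.
run(["zpool", "create", "tank",
     "raidz2", "da0", "da1", "da2", "da3", "da4"])

# Later, when more space is needed: add a second RAIDZ2 vdev to the same pool.
# (RAIDZ vdevs can be added but not removed, so plan the width up front.)
run(["zpool", "add", "tank",
     "raidz2", "da5", "da6", "da7", "da8", "da9"])
[/CODE]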

Fans: I think you are stuck with just the two zones with Supermicro, but I wouldn't worry too much about having a pair on a Y adapter, it'll spin them up and down at the same rate, and from an alert perspective what I usually recommend is, pay attention to what matters. A fan dying isn't what matters, your drive temps going up is what matters. Key notifications and alerts off the latter. Knowing that the fan is the culprit can help with initial troubleshooting, sure, but knowing that isn't the immediate concern you want to make sure is addressed. No setup is 100% perfect anyway :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Or you could go with a 9-drive Z3 if you go with really dense drives (12TB+). The higher the individual drive density, the longer the resilvering/rebuilding takes if a drive fails, and the more redundancy you want; that's the general rule of thumb.
Just to share a data-point. I have a server at work that is running 10TB drives and I can resilver one of those in around 17 hours.
I have another server that is running 6TB drives and that system takes 96 hours to resilver. So, it isn't all about the size of the drive.
 

Ixian

Patron
Joined
May 11, 2015
Messages
218
True enough, hence my "general rule of thumb" at the end :) In any case I think steering the OP away from Z1 if he's looking at 4TB+ drives is good advice, no?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
True enough, hence my "general rule of thumb" at the end :) In any case I think steering the OP away from Z1 if he's looking at 4TB+ drives is good advice, no?
Absolutely. I wasn't complaining about your advice, that is why I said I was just adding a data-point.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Just to share a data-point. I have a server at work that is running 10TB drives and I can resilver one of those in around 17 hours.
I have another server that is running 6TB drives and that system takes 96 hours to resilver. So, it isn't all about the size of the drive.

I wonder what the primary factor is... fragmentation? CPU performance? pool shape? IO bottlenecks?
 

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
For NAS media/backup storage, in a case that size (which is going to give you a max of (10) 3.5" drives) I'd lean towards a smaller number of higher-density drives in at least a Z2 configuration. If you really need the IOPS it's better to go with several mirrored pools instead, but I seriously doubt you will.
Correct, I don't think I need that many IOPS; 10GbE is not necessary (at least for the next few years).
And I don't want to fill all bays at once, to leave some space for the future.

Or you could go with a 9-drive Z3 if you go with really dense drives (12TB+). The higher the individual drive density, the longer the resilvering/rebuilding takes if a drive fails, and the more redundancy you want; that's the general rule of thumb. In this day and age, where even "small" drives are 2TB or more, I wouldn't consider Z1 for anything.
I think 72TB of net storage is a little out of my league. :D
Most of my movies are on DVD or BD, only some of them ripped so that I don't need the disc all the time, and my ripped music collection is not that big.

Or if you want to start out smaller and grow, do a Z2 pool of 4 or 5 drives now, and add another 4-5 drive pool down the line when you need the additional space. Again, since the bulk of your storage is media at rest, that is probably a better option than mirrored pools, etc.
Ok, then I think I'll take the 5x4TB Z2 layout for now.
Then I could add a mirrored pool for jails, if ever needed.
And I'd still have a free SATA port on the mainboard to successively exchange the HDDs for bigger ones when more space is needed.
Or just add another 5-disk pool and a SATA HBA.

Fans: I think you are stuck with just the two zones with Supermicro, but I wouldn't worry too much about having a pair on a Y adapter, it'll spin them up and down at the same rate, and from an alert perspective what I usually recommend is, pay attention to what matters. A fan dying isn't what matters, your drive temps going up is what matters. Key notifications and alerts off the latter. Knowing that the fan is the culprit can help with initial troubleshooting, sure, but knowing that isn't the immediate concern you want to make sure is addressed. No setup is 100% perfect anyway :)
You're right, maybe I'm overthinking this a little.
I could try to connect the RPM wire of the unmonitored fan to the remaining fan header, so its RPM can be read.
But I'll have to check whether that confuses the fan regulation.

Did you get all your questions answered?
For the moment yes, especially the links in your signature were extremely informative and useful, thank you!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I wonder what the primary factor is... fragmentation? CPU performance? pool shape? IO bottlenecks?
I want to say up front, the system was ordered and configured before I started working this job and I had no hand in that process.

I think the poor performance is a combination of factors.
One factor is pool shape because the slow pool of 6TB drives is four RAIDz2 vdevs of 15 drives each.
This server was provisioned before I started with the organization; the decision was made to maximize the capacity of the pool, without any understanding of the potential impact on reliability or performance.
The second factor, I think is an IO bottleneck caused by the fact that the server is using a pair of these for drive controllers:
HighPoint Rocket 750
https://www.newegg.com/Product/Product.aspx?Item=N82E16816115140

I may be totally wrong, but I think they are slower than a SAS controller and expander would be. Interesting that it only has a one star rating on NewEgg.
 

Ixian

Patron
Joined
May 11, 2015
Messages
218
I'd guess IO bottleneck too.

I'm not a fan of Highpoint in general, but in all fairness, of the two one-star reviews one is from someone complaining about the lack of new Linux kernel drivers for a 7-year-old HBA that is EOL, and the other gave it one star because "it's not RAID, just JBOD" :)

There are definitely reasons to avoid that HBA, but the Newegg reviews aren't one of them.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I want to say up front, the system was ordered and configured before I started working this job and I had no hand in that process.

I think the poor performance is a combination of factors.
One factor is pool shape because the slow pool of 6TB drives is four RAIDz2 vdevs of 15 drives each.
This server was provisioned before I started with the organization; the decision was made to maximize the capacity of the pool, without any understanding of the potential impact on reliability or performance.
The second factor, I think is an IO bottleneck caused by the fact that the server is using a pair of these for drive controllers:
HighPoint Rocket 750
https://www.newegg.com/Product/Product.aspx?Item=N82E16816115140

I may be totally wrong, but I think they are slower than a SAS controller and expander would be. Interesting that it only has a one star rating on NewEgg.

Yep, that'd do it. Excessively wide vdevs with IO bottlenecks.
 