Proper Power Supply Sizing Guidance

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I have NEVER lost a drive in 15 years
If you have never lost a drive in 15 years, you have not had enough drives. I replaced 9 failed drives in my home NAS last year alone, to say nothing of the drives I had to replace at work. I had to replace 4 in a single server in the first year it was in service - then again, it did have 60 drives.
 

Mega Man

Explorer
Joined
Jun 29, 2015
Messages
55
Or, I have had reasonably good luck, and very good timely replacement after burn-ins. (I burn in all drives, verify the results, and always trash/shelve suspect drives holding production data - read: data I care about, whether on a NAS or a personal PC.)

To be very clear, while I don't run data centers, I do build PCs, some of which I have over $30k into, and I have 5 CaseLabs cases in production (not all of them NAS, some are PCs, but to give you an idea of what I have): 1 M8, 2 S3s, 1 TH10, 1 TX10. One of the rigs in the TX10 has over 26 drives attached to it. Again: never lost one, because first signs of trouble = shelved.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
first signs of trouble = shelved
That equals losing a drive. You're using the wrong term for what you are doing and calling it not losing a drive. If a drive malfunctions and you have to remove it from service, it is lost.
Just FYI, my main NAS has 48 drive bays and is just now completely full. My backup NAS has 24 drive bays but only has 12 in it at the moment.
I have been working full time in the Information Technology industry, supporting computers in one capacity or another, since 1995, and I was doing it part time before that, starting in 1987 when I began college and worked in the computer lab as a student worker.
I don't like CaseLabs cases; they are insanely expensive for what you get.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
That equals losing a drive. You're using the wrong term for what you are doing and calling it not losing a drive. If a drive malfunctions and you have to remove it from service, it is lost.

I'd have to agree. Chris has given a reasonable (possibly not comprehensive) definition for losing a drive here. Looking at it from a different starting point (non-FreeNAS), I have lots of spinners, but very few of them are in nonredundant configurations. When a RAID controller decides that a drive isn't behaving, you generally need to replace it. You might not lose any data, which is good, but if you have to sideline, RMA, replace, etc., a disk, it is no longer as useful as a fully functional drive.
 

Mega Man

Explorer
Joined
Jun 29, 2015
Messages
55
I took this to a PM, as I was worried it was getting far off topic. Basically I conceded, but I said that what I meant was that I have not lost data since.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Not losing data is the awesome outcome we all desire! It's all good. ;-)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,462
Now I'll play devil's advocate for a moment... "lost a disk" could reasonably mean "had a hard failure of a disk in 'production' use"--and in that case, it's entirely plausible to not have had that happen in many years.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Now I'll play devil's advocate for a moment... "lost a disk" could reasonably mean "had a hard failure of a disk in 'production' use"--and in that case, it's entirely plausible to not have had that happen in many years.

I wish I had that luck. Now, guys, don't make me lock the thread again :smile:
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
Based on the hardware in my signature, is my 450W PSU enough?
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
Based on the hardware in my signature, is my 450W PSU enough?
See post #1 in the thread; looks like it. But you're using a 550W PSU?

First of all, many thanks for the sizing guide; I really appreciate it.

My take on the matter is that this problem is not unlike what I encounter with fridges, BVMs, and like devices. They are held to a spec where they have to bring a load down to cool temperature quickly, yet they idle most of the time. Power supplies in NAS builds are no different - spin-ups are a big blip, followed by days, months, whatever of 4-5x less power draw.

To me, this suggests paying close attention to the part-load efficiency of a PSU, i.e. how well it does at supplying 20% of rated output. That's why Platinum or even Titanium-rated PSUs may be of interest - they have to meet a minimum spec in that range. The key detail is that many OEMs don't publish 20%-load efficiency ratings, because they are usually significantly lower than 50%-load ratings. That issue is what the Titanium and Platinum ratings specifically tried to target, by mandating minimum efficiency ratings at 10% and 20% load, respectively.

Whether or not a high-efficiency PSU pencils out economically (i.e. marginal cost of a higher-quality PSU vs. electricity cost) is likely going to be region- and use-specific. It's hard to economically justify energy efficiency at 4c/kWh, but if you're on an island paying 55c/kWh+ or off-grid altogether, all sorts of efficiency measures start to make sense from a pocketbook perspective.

I chose a 400W Platinum Seasonic for my next case, as the peak power requirements should be able to be met by the PSU, yet it's "small" enough to work semi-efficiently while it's not spinning up drives. A Titanium-rated power supply appears impossible to come by in the 400W range (the smallest I saw at Newegg is 600W), and hence Titanium-rated power supplies are over-sized and considerably more expensive to purchase ($80 vs. $160).

In my use case, the delta is unlikely to be made up via higher efficiency - at 134W idle per jgreco,
  • the 400W Platinum PSU will be operating at 33% load, with an estimated conversion efficiency of ~92%
  • the 600W Titanium PSU will be operating at 22% load, with an estimated conversion efficiency of ~94%
Assuming 24/7 operation and $0.25/kWh, that pencils out to about $6 per year difference. The payback for the Titanium PSU upgrade would exceed a decade or two, depending on your time value of money, expected rate of return, utility price increases, etc. More than anything, the exercise points to the benefit of minimizing the number of HDDs in your system.
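
For anyone who wants to rerun that estimate with their own electric rate, here's a minimal sketch of the arithmetic (Python; the efficiencies are my estimates from above, and the $80 price delta is the Newegg spread I quoted):

# Sketch of the Platinum-vs-Titanium payback math above; swap in your own numbers.
IDLE_DC_W = 134            # DC-side idle draw, per jgreco's measurement
EFF_PLATINUM = 0.92        # estimated efficiency of the 400W Platinum at ~33% load
EFF_TITANIUM = 0.94        # estimated efficiency of the 600W Titanium at ~22% load
PRICE_PER_KWH = 0.25       # $/kWh
PRICE_DELTA = 160 - 80     # purchase price difference, in $

wall_platinum = IDLE_DC_W / EFF_PLATINUM            # ~145.7 W at the wall
wall_titanium = IDLE_DC_W / EFF_TITANIUM            # ~142.6 W at the wall
kwh_per_year = (wall_platinum - wall_titanium) * 24 * 365 / 1000
savings = kwh_per_year * PRICE_PER_KWH              # ~$6.80/year
print(f"~${savings:.2f}/yr saved; simple payback ~{PRICE_DELTA / savings:.0f} years")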

I am currently replacing a 9-drive set of 3TB HGST 7K4000s with an 8-drive set of 10TB HGST Helium drives. Per FarmerPling2's excellent analysis tools, that swap-out results in a $60 annual savings in my use case while also significantly upgrading the storage capacity of the NAS. I encourage everyone looking into upgrading their storage pools to consider the power consumption of their drives as part of the equation. You may be able to get the added capacity you need and also reduce power consumption at the same time; see the sketch below.
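
If you want to sanity-check a swap like that yourself, here's a rough sketch; the per-drive idle wattages below are placeholders I made up, so substitute the datasheet numbers for your actual drives:

# Rough drive swap-out comparison; per-drive wattages are assumed, not measured.
OLD_COUNT, OLD_IDLE_W = 9, 7.0    # e.g. 3TB 7K4000-class drives, assumed ~7W idle
NEW_COUNT, NEW_IDLE_W = 8, 5.0    # e.g. 10TB helium-class drives, assumed ~5W idle
PRICE_PER_KWH = 0.25

delta_w = OLD_COUNT * OLD_IDLE_W - NEW_COUNT * NEW_IDLE_W    # ~23 W less at idle
annual = delta_w * 24 * 365 / 1000 * PRICE_PER_KWH           # ~$50/yr at these guesses
print(f"~{delta_w:.0f} W less at idle, ~${annual:.0f}/yr at ${PRICE_PER_KWH}/kWh")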
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
To me, this suggests paying close attention to the part-load efficiency of a PSU, i.e. how well it does at supplying 20% of rated output. That's why Platinum or even Titanium-rated PSUs may be of interest - they have to meet a minimum spec in that range. The key detail is that many OEMs don't publish 20%-load efficiency ratings, because they are usually significantly lower than 50%-load ratings. That issue is what the Titanium and Platinum ratings specifically tried to target, by mandating minimum efficiency ratings at 10% and 20% load, respectively.
Many commercially produced servers will come with Platinum or Titanium rated PSUs.
Assuming 24/7 operation and $0.25/kWh, that pencils out to about $6 per year difference. The payback for the Titanium PSU upgrade would exceed a decade or two, depending on your time value of money, expected rate of return, utility price increases, etc. More than anything, the exercise points to the benefit of minimizing the number of HDDs in your system.
Greater energy efficiency and storage density are among the prime driving forces behind new hardware purchases in the enterprise storage space. I recently decommissioned three old servers where I work that had 500 GB and 750 GB drives. The single server that replaced them has 4 TB drives, and where the three old servers consumed 12U (Rack Units) of space (they should have been 9U), the new server is just 3U. It saves rack space, reduces power consumption, and provides a much greater amount of storage. I am also working on another replacement (waiting for the data to copy) that will replace 15U of servers (in 5 separate chassis) with a single 4U server. The old servers have 4 TB drives and the new server has 10 TB drives, 60 of them, so we are almost doubling the capacity while saving 11U of space and reducing the number of chassis that are powered. Newer, high-capacity drives are almost always a good idea, but you don't want to reduce the drive count to the point where you no longer have good redundancy.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
My take on the matter is that this problem is not unlike what I encounter with fridges, BVMs, and like devices. They are held to a spec where they have to bring a load down to cool temperature quickly, yet they idle most of the time. Power supplies in NAS builds are no different - spin-ups are a big blip, followed by days, months, whatever of 4-5x less power draw.

Well, it's similar, but I'd quibble with "no different". The whole reason I wrote the PSU sizing guide was that we had a number of people coming in here with blatant misinformation, encouraging people to follow bad advice.

With a fridge, if you don't get the load down to cool temperature quickly, it doesn't potentially do permanent and/or hard-to-detect damage to both the fridge and the contents. You might ruin the contents, yes, but usually you can tell.

People building a budget NAS seem all too focused on their expensive storage chassis, board, CPU, RAM, and of course hard drives, yet all too willing to cheap out and undersize the PSU. This is a threat both to the hardware and to the data stored on it. Obviously, the art of sizing for a larger array of HDDs is an unusual thing, and most of the guides you find on the Internet are aimed at sizing for some gaming PC scenario or other use that does not have the unusual spinup issues of a large NAS. You really do want to err on the side of a little larger and a little more expensive. Your entire system depends on the PSU.
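
To make the spinup point concrete, here's a back-of-the-envelope sketch; the per-drive currents and the 100W base figure are typical-looking placeholders, not numbers from any particular datasheet, and the guide itself goes into more detail:

# Illustrative spinup-vs-idle comparison; check your drives' datasheets for real numbers.
N_DRIVES = 8
SPINUP_12V_A = 2.0       # assumed +12V surge per 7200RPM drive during spinup
IDLE_12V_A = 0.6         # assumed +12V draw per drive once spinning
BASE_SYSTEM_W = 100.0    # placeholder for board + CPU + fans

peak_w = BASE_SYSTEM_W + N_DRIVES * SPINUP_12V_A * 12   # ~292 W at spinup
steady_w = BASE_SYSTEM_W + N_DRIVES * IDLE_12V_A * 12   # ~158 W once running
print(f"peak ~{peak_w:.0f} W vs steady ~{steady_w:.0f} W")
# A calculator that only sizes for steady_w will badly undersize the +12V rail.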
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
Again, thanks for the guide, jgreco. I will look forward to seeing how much my revised Avoton actually draws during spin-up, activity, and idle. I will also have to take a look at the PSU that I started with - the one inside the Mini XL. It should make for an interesting comparison, as the disk count is the same. The main difference between the Mini XL case and the Q26-based one that I'm moving to is far more effective cooling and likely much quieter operation.

As for the fridge analogy, you are correct - bad food can usually be detected - but it also can make people very ill. Similarly, a marginal PSU may lead to system instability. Yes, sometimes a bad PSU might be obvious - no POST, magical smoke, etc. - but at other times, the system may simply crash due to an unusually heavy load "randomly" tripping the safeties inside the PSU.

The big difference between a fridge and a DIY NAS is that few people build fridges on a DIY basis at home, and commercially-sold units have to pass a pull-down test or they can't be sold. Typically, that involves the OEM sending a sample fridge or BVM to a commercial test lab and certifying pull-down + energy efficiency per the DOE/NRCAN test (at least in the USA & Canada).

As you pointed out, some folks spend far more on a beautiful case and other non-essentials than on the very heart of the system, a good PSU that is well-suited for the job. Whether it's a single PSU like the ones typically found in a SOHO system or the multiple, redundant supplies that you and Chris get to work with, a good power supply is the essential foundation for a server to work trouble-free for many years.

So, again thanks for your contribution.

PS: It might be interesting to extend FarmerPling2's disk analysis... to look at whole-system power consumption, not just on a per-disk basis. Include the CPU, etc. and use an interpolation table based on typical PSU efficiencies to see where the whole system will clock in. For example, I had no idea my 20W TDP system could clock in at 134W continuously.
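
As a strawman for what that could look like - the curve points below are made-up placeholders, not any specific PSU's measured data:

# Sketch: estimate wall draw from DC load via a piecewise-linear efficiency curve.
import bisect

# (load fraction of rating, efficiency) - placeholder points, not measured data
EFF_CURVE = [(0.10, 0.88), (0.20, 0.92), (0.50, 0.94), (1.00, 0.91)]

def efficiency(load_frac):
    xs = [x for x, _ in EFF_CURVE]
    ys = [y for _, y in EFF_CURVE]
    if load_frac <= xs[0]:
        return ys[0]
    if load_frac >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_left(xs, load_frac)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (load_frac - x0) / (x1 - x0)

def wall_watts(dc_load_w, psu_rating_w):
    return dc_load_w / efficiency(dc_load_w / psu_rating_w)

print(f"{wall_watts(134, 400):.0f} W at the wall")  # 134W DC load on a 400W unit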
 

Craigk19

Dabbler
Joined
Apr 16, 2019
Messages
19
I'm looking at running eight 7200RPM 2TB drives, and after the math, I would need a 550W PSU by your suggestions. I'm kind of looking at a U-NAS 810A case, and they look to only use a 350W Seasonic Gold U1 Flex PSU, which wouldn't be enough. I see Newegg has a 500W, but I'm not sure if it's a Flex or if it will work. Any suggestions?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm looking at running eight 7200RPM 2TB drives, and after the math, I would need a 550W PSU by your suggestions. I'm kind of looking at a U-NAS 810A case, and they look to only use a 350W Seasonic Gold U1 Flex PSU, which wouldn't be enough. I see Newegg has a 500W, but I'm not sure if it's a Flex or if it will work. Any suggestions?
Hardware details? You must be planning a very powerful CPU. If you want a powerful CPU, you need a larger chassis to house it and the power supply for it.
 

Craigk19

Dabbler
Joined
Apr 16, 2019
Messages
19
Hardware details? You must be planning a very powerful CPU. If you want a powerful CPU, you need a larger chassis to house it and the power supply for it.

CPU: E3-1230v2
MB: SUPERMICRO MBD-X9SCM-F-O
RAM: 16GB ECC
HDDs: eight 7200RPM 2TB drives
SSD: one 250GB
 

tres_kun

Dabbler
Joined
Oct 10, 2015
Messages
40
The power supply should be able to handle the system. My only observation: the power supply has 4 SATA connectors on 2 cables and 2 IDE (Molex) connectors on 1 cable, while the case powers the hard drives through 2 IDE plugs. If you plug both of the case's IDE plugs into that one cable, a single cable will have to power all of the hard drives, and you will have problems with that. You need to get a quality SATA-to-IDE power adapter, and do not buy adapters that are molded in plastic - they are very bad.
 

Craigk19

Dabbler
Joined
Apr 16, 2019
Messages
19
The power supply should be able to handle the system. My only observation: the power supply has 4 SATA connectors on 2 cables and 2 IDE (Molex) connectors on 1 cable, while the case powers the hard drives through 2 IDE plugs. If you plug both of the case's IDE plugs into that one cable, a single cable will have to power all of the hard drives, and you will have problems with that. You need to get a quality SATA-to-IDE power adapter, and do not buy adapters that are molded in plastic - they are very bad.

Not sure I fully understand - the 2 IDE plugs on 1 cable won't be enough? I'll need a SATA-to-IDE adapter then?
 

Evi Vanoost

Explorer
Joined
Aug 4, 2016
Messages
91
IDE and SATA are data buses; they have nothing to do with power. Most likely what he means is that this particular power supply has 2x2 SATA power plugs and 2x 4-pin Molex. If not included with your power supply, you may need to buy a set of these, and perhaps even a splitter from 4-pin Molex to 2x SATA power, to get a sufficient number of power connectors.

https://www.sfcable.com/6-in-4-pin-molex-male-to-15-pin-sata-female-power-cable.html

If you do that, you need to make sure that the line from your power supply can handle the load of the hard drives. E.g., if your line is rated 5A @ 12V, it can handle ~60W (~4-6 spinning hard drives), so if you daisy-chain 8 hard drives on a single 5A @ 12V line, it could cause a problem. That is very particular to the power supply, so read the documentation carefully before purchasing to see what each of the connections can power.
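
A quick sketch of that per-cable check (the per-drive wattages are assumptions for illustration; use the numbers from your drives' datasheets):

# Per-cable load check for the 5A @ 12V example above; drive wattages are assumed.
LINE_RATING_A, V12 = 5.0, 12.0
DRIVE_RUN_W = 12.0       # assumed draw per spinning drive (12V + 5V combined)
DRIVE_SPINUP_W = 30.0    # assumed surge per drive during spinup

line_w = LINE_RATING_A * V12                              # 60 W on this cable
print(int(line_w // DRIVE_RUN_W), "drives running")       # ~5 once spinning
print(int(line_w // DRIVE_SPINUP_W), "drives at spinup")  # only ~2 during spinup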

But 350W overall should be sufficient to handle 8 drives and a low-power motherboard (provided you don't want dual CPUs or a GPU).
 

MindBender

Explorer
Joined
Oct 12, 2015
Messages
67
I'm kind of looking at a u-nas 810A case and ...
I am using a U-NAS 810A for my setup, and I would recommend you use something else.

It is a flimsy case that allows the drives to vibrate. The way the mainboard is mounted on 4 protruding lips, requiring a plastic sheet on the back to prevent shorting, is very dubious. And the PCIe riser card that comes with it should NOT be used, because it doesn't use impedance-controlled transmission lines for its lanes.

I'm still hoping that, one of these days, Ablecom will grant me the privilege of purchasing a CS-T80 8-bay NAS case from them. No luck so far, but I keep on hoping.
 