Dual PSU or bigger PSU?

Status
Not open for further replies.

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
You have all the answers. Go for it.
A compliment! I like to think I do my research before asking for help.

But alas, not on the one point I actually would like input about - namely, running dual PSUs. So far, just non-answers. I am hopeful, though :) See OP.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
By the way, I run a triple redundant supply in two of my FreeNAS servers and a dual redundant supply in the third. If you use an actual server chassis with an actual server power supply, you don't have these questions.

Of course you do. Server power supplies aren't magic. A number of designs actually rely on the capacity of the second PSU.

If you don't believe me, try unplugging two of the PSU's on that three-PSU system of yours, and I'll give you a little script that maxes out your CPU, revs up your fans, and repeatedly spins down and spins up all your drives at once. I think the triple-PSU 3U's only had around a 400-500W PSU module, and depending on the model of drive, that could be pretty bad. Those Seagates in particular soak it up at the start.

Some of us have done syseng with four-supply systems and had to be very careful about overhead and A/B power systems.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So anyways the basic question here is how to "go big." Since there've been a number of unproductive/unhelpful answers, I've closed the thread and am going to provide some comments.

Power supplies need to be sized such that they can handle peak surge. My own recommendations are even more conservative, documented in the Proper Power Supply Sizing guide, and are intended to provide guidance to sizing single PSU systems so that you don't get "into trouble."

Once you get past 12 drives, it becomes increasingly difficult to scale PSU size with a single PSU.

Staggered spin-up is helpful but not a cure-all, especially if there's any chance that an idle timer will spin down your drives, as a RAIDZ write may wake them all up at the same time.

Servers sometimes get around this through redundant PSU's, where that second PSU works in parallel and provides additional grunt to spin up loads, and under normal conditions, provides redundant operational power. A failed PSU in a system at the edge can result in a system that cannot spin its disks, browns out, or burns the second PSU. *THIS HAPPENS*.

It is possible to rely on luck and/or the good engineering in a high quality PSU to get you through those few seconds of spin-up hell. As the number of drives increases, it becomes less and less likely that all of them will peak at exactly the same moment.

In large NAS systems, JBOD chassis are used to power additional disk drives, attached back to the main host with SAS. These are basically a server chassis with a little controller board that tells the PSU to "power on" and runs the fans. This is perfectly safe to the equipment, but can potentially be hazardous to the pool, if the NAS comes up but the JBOD doesn't. This is the primary failure mode of this type of strategy.

There is no good reason that this cannot be duplicated on a smaller scale within other chassis. You can put two PSU's in a single chassis, kept separate except for ground and power-on signals. Some versions of the Backblaze Storage Pod did this. You can use two chassis and two PSU's and SAS to connect them as well. Again there is a failure mode here if drives do not power on.

Overall, it is better to think carefully about your power situation than it is to smoke your drives.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I've re-opened the thread but let's try to keep it on-topic and trying to actually help @Stilez ...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Server power supplies aren't magic. A number of designs actually rely on the capacity of the second PSU.

If you don't believe me, try unplugging two of the PSU's on that three-PSU system of yours
I believe you. I already knew that it needed a minimum of two out of three. That system is a 16 drive model and the total output of the PSU is only rated at about 860 watts if I recall. The 24 bay chassis I have has a pair of supplies and the total rating is 1200 watts.
I was saying that the company that designed the chassis will have paired the chassis with an appropriately sized power solution.

I have a server at work with 60 drives (WD Red Pros) and the average draw per power supply is 293 watts (based on the IPMI log). It balances the load across the two, and if you pull one supply, the whole load goes on the other. They are 1400 watt supplies (Zippy MTW2-5AD0B2V, one each), and even at startup, with all 60 drives, the server never comes close to max draw. I didn't engineer the solution, but someone did, and a properly redundant pair of 1400 watt server power supplies will run a 60 drive server. Interestingly, the UPS reports the draw at around 330 watts per power supply, so there is some loss of efficiency somewhere (293/330 is roughly 89%, which is presumably just conversion loss in the supplies).

What I say is not based on my calculations but on my observations of systems running in the data-center at work and my own systems at home, which happen to be rack-mount servers with redundant power supplies because I don't want a failed power supply to take my server down. Admittedly, I have only been managing servers professionally since 1999, and I am not an engineer, but I feel certain that a 1200 watt supply would be more than ample for your system, because I have multiple Supermicro servers that are either 24 bay or 36 bay systems using 1200 watt power supplies. Those systems, like the 60 drive system I have at work, will boot from a single supply out of the pair that is installed. If you want to divide the load between two supplies, your proposed solution with two 750 watt supplies would likely be more than adequate.

If I were in your place, I would want a properly redundant supply, so that the failure of a single supply would not take a portion of my drives offline. Something like this would (I feel sure) work very well:
https://www.newegg.com/Product/Product.aspx?Item=N82E16817338131

Just keep in mind that these supplies are designed to connect to a drive backplane and not to have enough connectors to power all the drives directly.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Coming back to the basic point, I suppose the thread could be reframed as @jgreco says: What are the power options if one wants to be very sure of robust power delivery, and the NAS will include a modest but sizeable HDD array (say 12 - 28 HDD scale)?

Clearly it depends on the use-case so "no one answer fits all". I'm asking about my specific use-case so perhaps I should restart, and try to refocus it a bit.

This post is a bit long - if it's too detailed then please ignore the excess and skip to the bottom which poses the immediate question.


1) Background:

HDD current draw:

Manufacturer specs are clear that HDDs can at times draw extreme peak loads, on both 12v (motors/spinup) and 5v (logic/DSP circuits), especially during spinup but even more so in actual use (writing). Nobody other than the manufacturer knows the data patterns where these peaks happen most, so we can't be sure how to reproduce them or how common they are in a given scenario, but they happen. 45drives has also published oscilloscope captures for a pool of 45 in-use HDDs during read/write sessions. Their defining characteristics are:
  • The peaks are transient current draws that can be 2-3x more than even the HDD's normal "maximum in-use" operational current draws. An ordinary 6TB enterprise drive can demand 3.08A/37W @ 12v and 1.45A/7.25W @ 5v at peak (3 sigma).
  • They cannot be fully mitigated by staggered spinup. That certainly helps, and boot-time spinup is the best known and most predictable case, but the peaks also occur during operational use (spinup from idle), they affect 5v (not just the 12v motors), and they appear "in use" as transient current peaks during heavy write sessions, not just at spinup.
  • They cannot be detected by IPMI or the hardware itself, nor at the wall, nor really by anything except decent probes/equipment on the HDD side of the PSU, designed for measuring current transients within the HDD power cables.
  • They pose a particular problem for the often-ignored 5v line, because larger ATX PSUs are tuned for high 12v output: a disk array suddenly needing a spec of 40-50A on 5v (1.5A x 24 HDDs x 120% for headroom x 150% for PSU degradation over time, less 25% for any non-sync of the transients? - a rough worked estimate follows this list) may have had much less design consideration.
  • In a large array there may be some averaging effect, and there might also be reserve power to cope briefly in the PSU capacitors, but there's no way to be sure how firmly to count on it.
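To make the arithmetic in the 5v bullet concrete, here is a minimal sketch. The per-drive figures are the 3-sigma peaks quoted above; the headroom, ageing and non-sync factors are the same rough assumptions as in the bullet, not datasheet values.

```python
# Rough 5 V / 12 V rail peak-demand estimate for a 24-drive array, using the
# per-drive 3-sigma peaks and the fudge factors quoted above.  The factors
# (headroom, ageing, non-simultaneity) are this thread's own assumptions,
# not datasheet values.

N_DRIVES      = 24
PEAK_5V_AMPS  = 1.45   # per drive, 5 V logic peak (3 sigma)
PEAK_12V_AMPS = 3.08   # per drive, 12 V motor peak (3 sigma)

HEADROOM = 1.20        # design headroom
AGEING   = 1.50        # allowance for PSU capacity degradation over time
NON_SYNC = 0.75        # assume not every drive peaks at the exact same instant

def rail_demand(per_drive_amps: float) -> float:
    """Rough worst-case rail current for the whole array, in amps."""
    return per_drive_amps * N_DRIVES * HEADROOM * AGEING * NON_SYNC

if __name__ == "__main__":
    a5, a12 = rail_demand(PEAK_5V_AMPS), rail_demand(PEAK_12V_AMPS)
    print(f"5 V rail : {a5:5.1f} A  ({a5 * 5:6.0f} W)")
    print(f"12 V rail: {a12:5.1f} A  ({a12 * 12:6.0f} W)")
    # -> roughly 47 A on 5 V and 100 A on 12 V with these assumptions, which is
    #    why the 5 V rating of a big, 12 V-focused ATX PSU deserves a close look.
```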
Power provision for HDD arrays:

There seem to be roughly 4 ways that power supply can be done for a drive array: single PSU, multiple PSUs paralleled together in some way, specialist single voltage high current PSUs (for example instead of a typical computer PSU, having a specialist 12v-only PSU and a specialist 5v-only PSU that can provide high currents and good regulation), and eventually, some kind of federated back-end where storage is grouped with PSUs provided per group.

Paralleled PSUs can probably be sub-categorised in a few useful ways for this purpose - redundant or merely supplementary to each other; ad-hoc paralleled or designed that way; and power lines commoned or separated (all rails cross-wired so every PSU feeds every device, or each PSU powers distinct devices with only 0v in common so no device is powered by 2 PSUs). I'm ignoring multiple rails here as a complicating factor and assuming single rail for simplicity.

If that's not exactly right, it's probably close enough for this discussion.


2) The NAS:

NAS usage patterns:

The NAS build is in the OP and my signature. I'm the main user; my main requirement is that I can pretty much take for granted that my data is safe, and get to a point where managing its safekeeping and storage is mostly "set and forget". If a drive goes, I don't even have to jump in particularly quickly. Just zpool detach, zpool attach - and carry on with whatever I'm doing, the data was never at risk, and I pretty much won't notice. "It just works". That sort of thing, for the next 30 years.

The wall power here is very good and I'm on 3 way mirrors with a good PSU and APC Smart-UPS behind it, so I'm not thinking much about a freak accident wiping out the whole NAS and every drive at one stroke, right now. I'll set up automated replication in a while.

I'm a data and multitask hound. The NAS contains all my ESXi VMs (which can be 500 - 900GB each), all my datasets, all my backups, everything, back to the 1990s. When I didn't have rsync/zfs snaps, I just made a recopy to the old servers each time, and I'm still in that habit, so I have a lot of repeat data. The VMs often have a lot of repeat data too. It also stores family data, and it's way easier to teach them to frequently back up everything than to use rsync (or set rsync up for them when I'm not in that habit myself!), so I have about 300 copies and part-copies of my mum's 40GB photo archive, for example. With all that, dedup gives a 3.9x saving, so the NAS is specced for dedup from the start (128GB 2400 ECC, NVMe L2ARC, fast Xeon v4, etc).

I also take rollback seriously, so right now it's set to 15 minute snaps for a week, daily snaps for 4 months, and bimonthly snaps forever. The pool size is about 12 TB which includes 2 x 3TB zvols for iSCSI, and about 55% usage of the 22TB capacity (so that ZFS can work at its best speed), and is based on 3 way mirrors (same reason + speed + more flexible than RAIDZ).
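For a sense of scale, here is a rough count of what that retention schedule implies, assuming snapshots land exactly on the stated intervals (a sketch, not output from my actual system):

```python
# Rough snapshot-count arithmetic for the retention schedule described above.
# Assumes snapshots land exactly on the stated intervals; purely illustrative.

SNAPS_15MIN = 7 * 24 * 4           # 15-minute snaps, kept for a week
SNAPS_DAILY = 4 * 30               # daily snaps, kept for ~4 months
SNAPS_BIMONTHLY_PER_YEAR = 6       # bimonthly snaps, kept forever

def snapshots_after_years(years: int) -> int:
    return SNAPS_15MIN + SNAPS_DAILY + SNAPS_BIMONTHLY_PER_YEAR * years

if __name__ == "__main__":
    for y in (1, 10, 30):
        print(f"after {y:2d} years: ~{snapshots_after_years(y)} snapshots per dataset")
    # A few hundred rolling snapshots plus a slow accumulation of keep-forever ones.
```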

My use pattern is moderate much of the time, very heavy at times, but with multihour- or days-long idle/low level use (think days not using them, or nights when sleeping). I mainly access data via Samba 3.x, iSCSI, and CLI. The NAS is on a 10G group with my VM server and workstation, on a 10G switch + Chelsio, for fast data between any of them. I've had it doing Samba at 1GB/sec in the past, lost it somehow, and aim to again. I don't run SQL or anything requiring that kind of access pattern on it. But when busy I might be moving/copying/renaming a few TB of directories (of any size from tiny to huge) around, snapshotting ESXi, and using files via Samba, and resilvering/scrubbing, and I want them all fast :D The relevance is that at times, the disks will be busy and data-intense multitasked, even if there's just one major user.

Disks are set to park after 60 mins idle, although I'm unsure if it's better for their longevity to keep them spinning the whole time. The energy waste from never parking them would be considerable and that's ended up the decider. But it does mean there will be mass spinups regularly, which are not related to system boot. I don't know if those get staggered with FreeNAS + Supermicro BIOS, but suspect not.


NAS enclosure:

I'm at ease with modding/home building, where needed, and I like good quality kit where it might pay off. I looked for ages for a drive store capable of holding 20 HDDs and they're all designed for server room environments, where you cram them tight and use high power fans in a case. The cheaper ones have no real antivibration isolation either and just rely on the HDD build for that. I prefer the other approach - stack them well spaced and vertically (like fins), with good fans below and antivibration isolation on all drives, in a rack that naturally channels smooth upward airflow past them all, and let natural convection take care of dispersal. I get trivial HDD access with no screws or trays, no scope for resonance or vibration, near silence, and even in full use the drives are cool to the touch and rarely outside 27 - 35C. The cost was about £15 ($20) + fans to build, and it just sits next to the NAS. The system enclosure itself can then be an ordinary modest desktop case, as it only has to contain the baseboard, PSU, and PCIe cards, and doesn't need to make real concession to HDD cooling or airflow.


3) Where I'm at:

My criteria / priorities:

I agree even a single 1200W PSU should "generally" cope in practice. I know it, most people know it. 1200W would "probably" be fine all around for this size array, and even 1600W would usually be seen as overkill. But equally, as the specs say and @jgreco points out, there will be situations where it isn't adequate, which we cannot anticipate nor measure crudely with a wall wattmeter or IPMI, but we know they'll happen, and they can cause occasional drive brownouts if demand briefly outstrips what the PSU can supply from the mains or from its reserve capacitors.

A user is free to decide if they want to meet the "real world probably OK" standard or the "really will be OK" standard. For my NAS, I started this thread because I'm firmly in the second camp - I want a NAS that I have no doubts about, and if that means overkill on the PSU to avoid brownout risk, so be it.

While accepting that is probably real-world overkill 99.9% of the time, I ask it be accepted for this thread as a starting point. If it wasn't, I'd have a 1200W PSU already and be done with it, instead of asking about other options.

My position can be summarised as: Temporary loss of pool access (or scrub afterwards) with no lasting damage, or even a minor rollback of a few TXG's, is unimportant. A risk of actual sizeable data loss is critical.

Initial thoughts:
  • Redundancy: I don't mind the PSU eventually failing - nothing is mission critical - so long as the pool itself survives. So redundancy as such is a complete non-concern for me. Bluntly, I wouldn't pay a penny more, to get a redundant PSU (compared to a single PSU) if both were otherwise adequate.
  • Power quality: I am interested in high quality power provision. A good or excellent build, and almost certainly single rail (for ease of current allocation), to handle the demands and peaks properly and ensure good clean supply to all the drives. EVGA/SuperFlower is my brand of choice on this, among consumer PSUs.
  • Efficiency: Because of lengthy idling, I'll probably go for Platinum/Titanium 80+, so that when the HDD array and CPU are idle and power draw drops to 100-200W, it's got a chance of not wasting much power at the wall. Titanium gets me 88-90% at low load on a good PSU; a loss of only 10-20W is pretty amazing, given the large PSU it comes from (rough arithmetic below).
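A quick sanity check of that wall-side loss, as a sketch: the idle draw range is from the bullet above, and the efficiency figures are assumed ballpark values for Titanium-class units at low load.

```python
# Quick check of wall-side loss at idle for a high-efficiency PSU.  The idle
# DC draw range is from the bullet above; the efficiency values are assumed
# ballpark figures for Titanium-class units at low load.

IDLE_DC_WATTS = (100, 200)
EFFICIENCY    = (0.88, 0.90)

for dc_watts in IDLE_DC_WATTS:
    for eff in EFFICIENCY:
        wall = dc_watts / eff
        print(f"{dc_watts} W DC at {eff:.0%} efficiency -> "
              f"{wall:.0f} W at the wall ({wall - dc_watts:.0f} W lost)")
# -> roughly 11-27 W of loss across this range, i.e. the same ballpark as the
#    10-20 W figure above.
```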
Implications:
  • Separate back-ends: I don't have the setup justifying a separate back-end. The HDDs will be directly attached.
  • Specialist bench PSUs: I also can't be certain that a high power specialist 5v/12v bench supply would be tuned for this use, would comply with ATX expectations (such as reserve power to ride through a 16ms loss, or whatever else ATX defines/expects), would be able to manage the typical and peak draws, and would be proven by many years of PC/NAS use, and I'm frankly a lot more comfortable with PSUs actually designed for a computer. While this might be one optimal solution, my apprehensions rule it out.
  • Server enclosure PSUs: I don't have any experience with server enclosures at all, and my impression is that multiple-PSU server enclosures aren't a great match for my needs, which are adequate peak power rather than redundancy. (If peak power weren't the concern, a 750W-1200W PSU would be great.) I also couldn't afford them new, only 2nd hand - I'm happy with some gear 2nd hand, but not a PSU. While someone with professional experience might be very happy with this route, and I accept it would be the "usual" commercial answer, I'm much less comfortable with it for my use-case, which takes account of my own experience. Part of that is admittedly my lack of familiarity with their operating characteristics, which is not a comfortable position given the PSU's crucial role. They will also be much noisier, so I would have to move the NAS away from my work area, which would be inconvenient, and they are much less capable of real energy efficiency at ~10% load, which is where the NAS spends 70-80% of its time (100-200W draw vs. a 1600-2000W peak rating).
    I accept these might seem weak reasons to some, who may see this as the "real" solution and the rest as compromises one has to accept. I'm open to reasoned explanation why I don't need to worry, but the cost and practical impact would still be virtually prohibitive issues in my use-case.
  • That narrows it down to big PSU or parallel (dual) PSU, and this is roughly the point I reached when I decided to ask for help to get to a final choice. I'm comfortable with both of these, barring a couple of points which are well defined and not complex to discuss.

4) Modes of failure:

@jgreco raised a critical point I hadn't considered. I think that is worth considering first, because it isn't just a PSU issue.

Q: What are the implications for the pool if (for any reason) enough HDDs go inaccessible temporarily to kill redundancy? When they come back up, how sure is it that the pool can recognise them and restore itself to working condition, or is there a significant chance the pool will have died?

This isn't a question that just affects PSUs. Half my HDDs are on an HBA. If the entire HBA dies, I can swap in a spare. But it's the exact same scenario: half the HDDs suddenly vanish. The data on them is intact, but they suddenly cannot be accessed. When I replace the HBA (or whatever it was) with an identical spare and the disks become visible again, will the ZFS pool necessarily be recoverable, or rollbackable to a recent valid txg?

Someone else already asked this, and the replies they got were that there's a difference between disk failure and disk unavailability, and not to worry about HBA loss - it wouldn't lead to pool loss. If that's correct then loss of one of 2 parallel PSUs, rendering some of the pool HDDs suddenly unavailable (but undamaged) until the PSU was replaced, would be identical.

Is that correct? I think that has to be the starting point.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Q: What are the implications for the pool if (for any reason) enough HDDs go inaccessible temporarily to kill redundancy. When they come back up, how sure is it that the pool can recognise them, and restore itself to working condition, or is there a significant chance the pool will have died?
To be honest, I didn't read the whole post; as you said, it is quite long, so I skipped to the bottom.
I have some experience with that failure mode. It wasn't a power failure, but my NAS has two SAS controllers and one of them failed, taking half the drives offline. The pool became inaccessible and the OS freaked out and rebooted. After troubleshooting the issue and replacing the SAS controller, I was able to boot back into the system, and I didn't lose any data because all the drives came back online and the pool (all drives now present) was imported without any trouble. No guarantee that would always work, but it can work in some cases.
As I understand it, your pool is made of mirrors, so you might want to put one side of the mirror on one power supply and the other side of the mirror on the other power supply. This way, if either supply fails, you still have the pool up, unless it is the supply that is running the system board.
 
Joined
May 10, 2017
Messages
838
The peaks are transient current draws that can be 2-3x more than even the HDD's normal "maximum in-use" operational current draws. An ordinary 6TB enterprise drive can demand 3.08A/37W @ 12v and 1.45A/7.25W @ 5v at peak (3 sigma).

Where do you get these values from? They are usually much lower; even the pdf you linked has the peak DC starting amps for the 6TB drive at 2.04A@12V + 0.62A@5V.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I went back and read the whole post again. Again, you are over thinking all of this and making some assumptions that I think are not based in reality. You are also totally discounting my input, so I won't provide any additional input.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Guys, I've already asked for this to be restrained to posts that are trying to help @Stilez ... we do not need any additional unhelpful opinions of the form "my Kill-a-watt says my system averages N watts, why do you want a 5*N watt PSU" or "dude that's crazy big." Or:

Where do you get these values from? They are usually much lower, even the pdf you linked has the peak DC starting amps for the 6TB drive at 2.04A@12V + 0.62A@5V

I mean, really, the Seagate technical documentation was LINKED in the post you responded to, and it is CLEARLY indicated on page 23 of the document that the "Peak operating current (sequential read)", "Maximum (peak) DC", for 6Gbps mode, is 41.04 watts. And there's a big fat warning:

General DC power requirement notes.
[...]
3. Where power is provided to multiple drives from a common supply, careful consideration for individual drive power requirements should be noted. Where multiple units are powered on simultaneously, the peak starting current must be available to each device.

Because people can and do smoke drives from undersizing PSU's, but generally this isn't an issue for people who oversize PSU's. "Usually much lower" is not a comfort if you smoke drives.

So. Um. How do I put this politely? Please keep on track. The OP realizes that this is probably excessive, but sometimes we do things for reasons that essentially boil down to "because we can."
 
Joined
May 10, 2017
Messages
838
I mean, really, the Seagate technical documentation was LINKED in the post you responded to, and it is CLEARLY indicated on page 23 of the document that the "Peak operating current (sequential read)", "Maximum (peak) DC", for 6Gbps mode, is 41.04 watts.

Yes, my bad - it's the first time I've seen a drive indicating a higher peak for usage than for startup current.

3. Where power is provided to multiple drives from a common supply, careful consideration for individual drive power requirements should be noted. Where multiple units are powered on simultaneously, the peak starting current must be available to each device.

This refers to the startup current, the values I mentioned, though it might also apply to the peak usage, since I don't know how likely it is for all drives to be at peak at the same time and how long the peaks last.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
This refers to the startup current, the values I mentioned, though it might also apply to the peak usage, since I don't know how likely it is for all drives to be at peak at the same time

State: all drives spun down due to inactivity

Read request on RAIDZ -> cached metadata in ARC provides block numbers to read

ZFS -> issues parallel read requests for data it reasonably expects to need

State: all drives suddenly spinning up simultaneously, with pending reads

So this isn't even an unlikely hypothetical. It's something that ZFS would automatically do if your drives happened to be set for a no activity spindown.

and how long the peaks last.

The length of the peak is not too relevant, unless it is literally so short that there is no significant statistical likelihood that there will be substantial overlap of a bunch of drives. So, like, maybe if it was a twentieth of a second ... maybe. However, many/most drives require several seconds. @Bidule0hm graphed the drive spin current consumption for all to see, even showing my what-I-thought-were-conservative numbers to be sometimes potentially insufficient.
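To put that overlap intuition in rough numbers, here is a minimal Monte Carlo sketch. The peak duration and per-drive peak rate are illustrative assumptions, not measured figures.

```python
import random

# Minimal Monte Carlo sketch: how often are at least K of N drives mid-peak at
# the same instant?  The peak duration and per-drive peak rate below are
# illustrative assumptions, not measured figures.

N_DRIVES      = 24
PEAK_DURATION = 0.05     # seconds a single transient peak lasts (assumed)
PEAKS_PER_SEC = 2.0      # peaks per second on a busy drive (assumed)
TRIALS        = 100_000

def drives_peaking_now(p_peak: float) -> int:
    """Number of drives that happen to be mid-peak at a random instant."""
    return sum(random.random() < p_peak for _ in range(N_DRIVES))

if __name__ == "__main__":
    p_peak = min(1.0, PEAK_DURATION * PEAKS_PER_SEC)   # fraction of time a drive is peaking
    for k in (6, 12, 24):
        hits = sum(drives_peaking_now(p_peak) >= k for _ in range(TRIALS))
        print(f"P(at least {k:2d} of {N_DRIVES} peaking together) ~ {hits / TRIALS:.4f}")
    # Once a peak lasts seconds (e.g. spin-up), p_peak approaches 1 and the
    # overlap probability stops being negligible -- which is the point above.
```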
 
Joined
May 10, 2017
Messages
838
However, many/most drives require several seconds. @Bidule0hm graphed the drive spin current consumption for all to see, even showing my what-I-thought-were-conservative numbers to be sometimes potentially insufficient.

Completely agree with the startup scenario and that the peaks last for a few seconds; Seagate also publishes those graphs. They don't, however, show any graphs for this maximum peak during normal operations - in fact the graphs they have for normal operation don't show any of these peaks - so unlike the startup current, which you do need to consider, I'm still not sure how frequent these peaks are and whether you need to plan for them.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Wow. Keeping up with this thread is teaching me a lot. Not to throw an off-topic post in, but I'm just here to say this is quite interesting and, regardless of some emotions and confusion, good thought is being put in. Thankfully my system only has 10 5400 rpm disks in it, so I don't fall into the camp of "is my PSU sized properly", but this is invaluable knowledge for future builds.


 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
To be honest, I didn't read the whole post; as you said, it is quite long, so I skipped to the bottom.
I have some experience with that failure mode. It wasn't a power failure, but my NAS has two SAS controllers and one of them failed, taking half the drives offline. The pool became inaccessible and the OS freaked out and rebooted. After troubleshooting the issue and replacing the SAS controller, I was able to boot back into the system, and I didn't lose any data because all the drives came back online and the pool (all drives now present) was imported without any trouble. No guarantee that would always work, but it can work in some cases.
As I understand it, your pool is made of mirrors, so you might want to put one side of the mirror on one power supply and the other side of the mirror on the other power supply. This way, if either supply fails, you still have the pool up, unless it is the supply that is running the system board.
Someone else asked that and got much the same response - that temporary loss of availability (as opposed to actual failure) is not a problem. Anecdotally that matches your experience, which is reassuring. I also agree, on gut feeling, that splitting mirrors between PSUs would be a sensible choice. That said, thinking about "how could it fail badly", I came up with a pool-loss mode for that scenario that's worth asking about in its own thread, if you're curious.

Where do you get these values from? They are usually much lower, even the pdf you linked has the peak DC starting amps for the 6TB drive at 2.04A@12V + 0.62A@5V
Example sources:
  1. Seagate ST6000NM0024 datasheet - I'm using 5 or so of these. Page 10, table 2, under Peak operating current (random write) -> "Maximum DC (peak)". Pages 13-15 have a gorgeous example of the actual in-use current profile for an HDD - just look at those repeated transient bursts and spikes!
  2. In the pdf I linked it's on page 23, table 1 and gives 1.53A @ 5v and 3.02A @ 12v as peak.
It's easy to forget that in-use can be more extreme than startup, people overlook the demands on the 5v line, and staggering only helps with 12v and then (unconfirmed/maybe) only at boot. @jgreco 's resource on PSU sizing says much the same - allow 35W or so for each HDD for peak demand, even though normal usage only draws 9 - 12W (a rough sizing sketch along those lines follows below).
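As a back-of-envelope illustration of that per-drive allowance, here is a minimal sketch; the 35W/drive figure is from the sizing resource cited above, while the non-drive load and margin are placeholder assumptions of mine, not measurements of my build.

```python
# Back-of-envelope PSU sizing in the spirit of the guidance quoted above:
# ~35 W of peak allowance per HDD, plus board/CPU/fans.  The non-drive figures
# and margin below are placeholder assumptions for this build, not measurements.

HDD_COUNT        = 24
PEAK_W_PER_HDD   = 35      # per-drive peak allowance (per the sizing resource)
BOARD_CPU_PEAK_W = 250     # assumed: Xeon v4 board, RAM, HBA, NVMe, fans
MARGIN           = 1.2     # assumed extra design margin

def recommended_psu_watts() -> float:
    return (HDD_COUNT * PEAK_W_PER_HDD + BOARD_CPU_PEAK_W) * MARGIN

if __name__ == "__main__":
    print(f"Suggested minimum PSU rating: ~{recommended_psu_watts():.0f} W")
    # ~1300 W with these assumptions -- the same ballpark as the 1200-1600 W
    # figures discussed in this thread.
```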

I went back and read the whole post again. Again, you are over thinking all of this and making some assumptions that I think are not based in reality. You are also totally discounting my input, so I won't provide any additional input.
I've acknowledged them but I'm not happy assuming them for *my* NAS build. They are probably okay on average, and I respect that they may be assumptions that are usually safe to make. But I would like, for my build, to be guided by the manufacturer datasheet on HDD peak demand and size it accordingly, and in a manner appropriate for my use-case, and what they say can be seen in the dramatic current-draw graphs on p.13-15 of this pdf (which I also linked above).

I think you can understand that, even if disagreeing, and I appreciate the helpful points you have made.
 
Last edited:

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
... they don't however show any graphs for this maximum peak during normal operations, in fact the graphs they have in normal operation don't show any of these peaks, so unlike the startup current which you do need to consider I'm still not sure how frequent these peaks are and if you need to plan for them.
Oh yes they do. I linked you to them above (and again for ease, see p.13-15). They are... let's say "visually dramatic", to say the least.

(Also, I admit I had been thinking more in terms of millisecond/centisecond scale transients, not solid bursts. Solid bursts like these look prone to occur on several HDDs at the same time, and to coincide more often rather than averaging out from slight timing differences.)
 
Last edited:

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
@Bidule0hm graphed the drive spin current consumption for all to see, even showing my what-I-thought-were-conservative numbers to be sometimes potentially insufficient.
Do you have a link? All I've found for graphical profile insight is 45drives' graph and the in-use current graphs I linked @Johnnie Black to, in the manufacturer datasheets.
 
Joined
May 10, 2017
Messages
838
Oh yes they do. I linked you to them above (and again for ease, see p.13-15). They are... let's say "visually dramatic", to say the least.

Thanks for that; the other pdf didn't show those. Though, at least on those graphs, and if I'm reading them correctly, they never go above the startup current and also last for much less time.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
A bit late in the discussion so I quote a post from page 1 (and I've not read page 2 right now):

An oscilloscope would be the right tool; you could measure 4 or 5 HDDs at a time by clamping the multi-plug cable at its base, or even clamp all 5 cables together, if it's capable of measuring these kinds of DC transients via clamp without sitting directly in the electrical pathway. But for a once-off it's excessive - it's the same price as a 1200W - 1600W PSU itself.

Actually you can use a cheap current clamp (like the $50 one - link in my thread that @danb35 just linked before this post) and one of the analogue inputs of an Arduino, or even the audio input of a PC. The sampling rate is plenty for what you want to do and it's a very cheap solution ;) (Links on how to get a better sampling rate out of the Arduino, if you're interested in this solution: http://yaab-arduino.blogspot.fr/2015/02/fast-sampling-from-analog-input.html and http://www.instructables.com/id/Girino-Fast-Arduino-Oscilloscope/)
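As a rough illustration of what you might do with the logged samples, here is a minimal post-processing sketch. The file name, ADC range, and the clamp's volts-per-amp factor are all assumptions for illustration, not a specific product's values.

```python
import csv

# Minimal sketch of post-processing current-clamp samples logged by an Arduino
# (or any ADC) to a CSV of raw readings.  The file name, ADC range and the
# clamp's volts-per-amp factor are all assumptions for illustration.

SAMPLES_CSV   = "clamp_log.csv"    # hypothetical log: one raw ADC reading per row
ADC_MAX       = 1023               # 10-bit Arduino ADC
ADC_REF_VOLTS = 5.0
VOLTS_PER_AMP = 0.1                # depends on the specific clamp; check its datasheet

def adc_to_amps(raw: int) -> float:
    """Convert a raw ADC reading to amps via the clamp's scale factor."""
    return (raw / ADC_MAX * ADC_REF_VOLTS) / VOLTS_PER_AMP

if __name__ == "__main__":
    with open(SAMPLES_CSV, newline="") as f:
        amps = [adc_to_amps(int(row[0])) for row in csv.reader(f) if row]
    average, peak = sum(amps) / len(amps), max(amps)
    print(f"samples: {len(amps)}  average: {average:.2f} A  peak: {peak:.2f} A")
    # A peak several times the average on the drive cables is exactly the kind
    # of transient discussed earlier in the thread.
```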
 