So you want some hardware suggestions.

Status: Not open for further replies.

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Exactly what jyavenard said. When I built my pfSense box I had two choices: reuse an old P4 system that was horribly overpowered and drew 67w idle, or buy all the parts for an Intel Atom system. After some quick math I came to the conclusion that if I used the Atom system for something like 2 years, the new hardware would pay for itself in power savings. I'm on month 15 now. :)

Running a 60w box at home is not a smart use of electricity. If you had a 150-machine business, it would be a good deal. But for home use... a rip-off.
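The pay-for-itself math behind this can be sketched quickly. The electricity price and hardware cost below are illustrative assumptions, not figures from this thread:

```python
# Months until electricity savings repay a more efficient replacement box.
# Assumed: $0.15/kWh and $200 of new hardware -- both illustrative.
HOURS_PER_MONTH = 24 * 30

def break_even_months(old_watts, new_watts, hardware_cost, price_per_kwh=0.15):
    """How long an always-on box must run before the wattage delta pays off."""
    saved_kwh = (old_watts - new_watts) * HOURS_PER_MONTH / 1000
    return hardware_cost / (saved_kwh * price_per_kwh)

# A 67w P4 replaced by a 14w Atom build:
months = break_even_months(67, 14, 200)   # roughly 35 months at these prices
```

With cheaper parts or pricier electricity the break-even point shifts quickly, which is why different posters in this thread land anywhere from 2 to 4+ years.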
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, I'd love for that to be true, but 960Mbps implies that this is still a software trick. Or that it is being done in hardware but off a single gigabit port. Ecch.
 

Mr_B

Dabbler
Joined
Dec 3, 2012
Messages
30
(Having said that, who would need to route at gigabit speed at home?)
Me? Currently I've got a 100/100Mbit hookup, and by the middle/end of next summer the ISP is upgrading the service to gigabit speeds. I'm quite sure I'd never use it for extended periods of time, but I'm more than happy to be able to push it when I need to.

Takes a fraction of the space, makes no noise.
Trust me: even if I could hear the PSU fan over the darn fan in the PowerConnect 2724, noise would be the least of my worries.

In regards to requirements: personally, I don't see any reason why two different distributions of essentially the same thing (by that I mean the same OS kernel and drivers) would have much different hardware requirements. The recommendations provided here are just that: recommendations and best practice. Anything less is going to have a negative impact on performance. That doesn't mean you can't use much less.
I actually have no idea. That's why I posted here. By the looks of things, it all boils down to reliability and, more to the point, ECC memory. Something I don't have. It looks more and more like I'll be going with the suggestion here:
Let me reiterate that everything I'm about to say is personal opinion.

Personally, I'd rank things in this order:

1. ZFS with ECC RAM
2. Other file system with hardware RAID with non-ECC RAM
3. ZFS with non-ECC RAM.
While running Windows Home Server on an "enthusiast" motherboard without ECC isn't the perfect solution for data integrity, from my understanding it helps a bit to at least be using a (RAID) controller that does indeed work with ECC memory. The only thing I'm really walking away with is a feeling of amazement: as far as I know, neither the DataVault X310 nor the NV+, my current storage solutions, uses ECC memory.

I will note that "route" used in this context is usually deceptive, what you mean is "NAT". It is disappointing that CPE manufacturers co-opted a term that means something fairly specific and used it to mean something rather different.
Meh, every "router" manufacturer the world over has "built-in firewalling" (NAT), "routing abilities" (NAT) and so on. Bottom line: they took words the techs knew the meaning of and used them to market the product to a market that had no clue. Nobody spoke up, and now people have "learned" what it means. The perception of what it does has simply changed, a lot.

Now, this wasn't really supposed to be about my Smoothwall; it was just backstory, showing where I came from and the expectations I had. Smoothwall hardware requirements start out at around a 233MHz PII and scale upwards depending on what sort of addons you stick into it. For my purposes I would probably be just fine with a low-end PIII, but just as with an ECC system, I simply don't have one; in this case, one that I'd trust to remain in service for the next 5 years. My current system is an old P4, S775, Prescott 3GHz. That motherboard didn't provide me with any ability to tweak CPU speeds, and it has only a single PCI slot. I stuck a 4-port D-Link DFE-570TX in it and used the two onboard gigabit ports for my two primary network segments. That leaves me two unused ports on the 570TX, but I didn't have any 2-port PCI cards... Again, using what I had at the time.

Those of us with multiple network segments may actually be wanting to route. I haven't actually come across a competent low-wattage solution to handle multiple gigE interfaces, small packet traffic, and a dynamic routing protocol without also falling over in some way. I mean, yes you can theoretically set up multiple networks on an OpenWRT system (for example) but the lack of CPU means you aren't moving packets real fast.
Truth be told, though, I could close down the WiFi hotspot, kill the VPN, remove the proxy, and put the media units and PCs on the same LAN as the media servers. It's not like I'd run out of space on the subnet; I've got what, 20 units total? At the very most.
I'd do just fine with a single segment, but I don't want to. This way I can keep the neighbors off my personal files while still being able to access them from anywhere, and provide anyone in range with free WiFi. I don't even limit the bandwidth on the WiFi; I do however use QoS to give certain members of the network priority. Yeah. Me and my toys. He has somewhat of a point, though: an OpenWRT device would be what, 5-15 watts, depending on what base hardware you manage to obtain and what services / loads you put on it. I've set up a few for others.

Bottom line: I run Smoothwall because I want to. For the same reason I'm building a "storage server". I could buy something off the shelf and stick a couple of large drives in it, but I want to design a system that is a tad more aimed at my specific needs. I need something that is a bit faster than the NV+ but primarily allows for a lot more storage. If I start off with the four 2TB drives it has and add another four to the newly built system, I get double the capacity, at least. If I decide to run RAID5 over 8 drives, I get a bonus drive that expands my storage, but I'm feeling more like RAID5 over 4 drives x2, or RAID5 over 7 drives with 1 hot spare. As of yet undecided, pending the decision on exactly what I'm buying. With the Perc 6/i, I think I'd go RAID6 over all 8 drives. And I'd still have space to add 2 more controllers in the future.

The more drives I add to it, the better the power-to-storage ratio gets. And in 5 years, if it lives that long, I'll have some newer hardware to start over with, and can migrate the stuff out of storage from this system to the new one. (Or we'll all have given up on storage servers at home and have a port in the neck.)
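The usable-space trade-offs between those layouts are easy to tally. A quick sketch with the 2TB drives mentioned (real arrays lose a bit more to formatting overhead):

```python
# Usable capacity for the RAID layouts under discussion, with 2 TB drives.
def usable_tb(drives, parity_drives, drive_tb=2):
    """Usable space of one parity group: total minus parity overhead."""
    return (drives - parity_drives) * drive_tb

layouts = {
    "RAID5 over 8 drives":        usable_tb(8, 1),      # 14 TB, survives 1 failure
    "2 x RAID5 over 4 drives":    2 * usable_tb(4, 1),  # 12 TB, 1 failure per group
    "RAID5 over 7 + 1 hot spare": usable_tb(7, 1),      # 12 TB, spare rebuilds fast
    "RAID6 over 8 drives":        usable_tb(8, 2),      # 12 TB, survives any 2
}
```

The three 12 TB options trade the bonus drive for progressively better failure handling, which is why RAID6 over all 8 drives looks attractive once a controller like the Perc 6/i is in the picture.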
B!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
A guy I knew had one of those RT-N56U routers. He was all proud because of the whole "this thing can do full 1Gb throughput" stuff. He's a benchmark fanatic and it's all about benchmarks to him. My pfSense box with an Intel Atom only does about 300Mbps throughput, based on my tests with 2.0.1 when I set up the box. Noteworthy points on the RT-N56U, from what I had read and done myself:

1. The firmware was complete crap for several revisions. It's hit and miss as to how "good" a particular revision is on the RT-N56U.
2. Yes, we got basically the full 1Gbps through the router. But oh boy did latency suffer if you were playing an MMORPG while doing something else that took up bandwidth. Comparatively, my pfSense box consistently gave very low latency for gaming.
3. As a result of the latency difference, actual real-world download speeds of files (not speed tests) were much better on my pfSense box than his, despite my internet having just 50Mbps down.
4. His box used less than half the watts of my box under both idle and full load conditions.
5. Naturally, my pfSense box has WAY more options available than the RT-N56U.
6. If you start doing data transfers across the 4 LAN ports, that appeared to directly affect the router's ability to pass through the full 1Gbps. With enough traffic, you can end up with speeds below 100Mb from LAN to WAN and vice versa.

In short, he has since built an identical box to mine, because he realized the RT-N56U wasn't all it was cracked up to be.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
While running Windows Home Server on an "enthusiast" motherboard without ECC isn't the perfect solution for data integrity, from my understanding it helps a bit to at least be using a (RAID) controller that does indeed work with ECC memory. The only thing I'm really walking away with is a feeling of amazement: as far as I know, neither the DataVault X310 nor the NV+, my current storage solutions, uses ECC memory.

Nope. The only thing that ECC RAM on your RAID controller does is prevent the RAID controller's cache from eating your RAID alive. It doesn't do anything to compensate for using non-ECC in your server... at all. If stuff in your system RAM gets garbled with non-ECC it's going to be GIGO to your RAID.

The DataVault X310 runs Windows Home Server and the NV+ is custom built (some might say garbage). So, given the list of how I would rank things (and assuming I'd put a ReadyNAS last on that list), it would appear that the manufacturers followed my recommendation.
 

Mr_B

Dabbler
Joined
Dec 3, 2012
Messages
30
Reuse an old P4 system that was horribly overpowered and drew 67w idle, or buy all the parts for an Intel Atom system. After some quick math I came to the conclusion that if I used the Atom system for something like 2 years, the new hardware would pay for itself in power savings. I'm on month 15 now.
I went with the assumption that an Atom-based system would pull 7w. It was the first figure I found when I searched for a wattage figure for any Atom system; I have NO idea if that's good or bad for an Atom. I then went looking for the equivalent of the Smoothwall build, which pretty much boils down to 4 Gbit ports. Since that isn't going to happen onboard, it means either 2 Gbit ports onboard plus an x4 PCI-E slot, or 2 x4 PCI-E slots. I ran that through my available shops and came back with a pair of Supermicro boards: the X7SLA-L & X7SLA-H. Both would without a doubt be great for a Smoothwall.

The L costs the equivalent of pretty much spot-on 3 years of power for the "extra" my current build will draw; the H adds another 8 months, and that's without the PSU I would have to spring for to even get to 7w in the first place. With a load that small my current PSU isn't exactly at great efficiency; it's barely tolerable in the just-over-50w range. (Don't get me started on the Antec PSU I'm using for the x3, it's even worse. But then, it's older and hasn't got an 80+ rating.) I'm just guessing, but with the math I'm looking at, it's near a dead race depending on how near, or how far past, the 5-year mark I can get. 4 years seems to be the break-even point, so ask me again in 4 years and see if you can gloat.

Running a 60w box at home is not a smart use of electricity. If you had a 150-machine business, it would be a good deal. But for home use... a rip-off.
If you say so. It's my wallet, and currently the math doesn't scream "buy new stuff"; if it did, I'd be shopping. As a guy, only one thing excites me more than new toys, and it's drawing near X-mas and all...
Yeah. If I have to make a choice between new toys and sex, I'd have sex and go shopping later.

Edit
Nope. The only thing that ECC RAM on your RAID controller does is prevent the RAID controller's cache from eating your RAID alive. It doesn't do anything to compensate for using non-ECC in your server... at all. If stuff in your system RAM gets garbled with non-ECC it's going to be GIGO to your RAID.
I know. But since a RAID controller is one small investment, one I'd have to make either way, and not a complete system replacement, it seems to make more sense to go with the RAID card and WHS2011 than non-ECC & FreeNAS. At least if I look at your list.

The DataVault X310 runs Windows Home Server and the NV+ is custom built (some might say garbage). So, given the list of how I would rank things (and assuming I'd put a ReadyNAS last on that list), it would appear that the manufacturers followed my recommendation.
Yeah, that's just it. The X310 is an Atom, and I don't think it has ECC memory, and I know the NV+ doesn't, since I replaced its stick with a 1GB module. It's a 256MB DDR400 SO-DIMM stock.
That said, both systems might very well use an integrated ECC-equipped controller. No, wait. The X310 seems to use an ICH9R controller? At least that's the loaded driver... While I was over there, I ran CPU-Z: no ECC anywhere near the system... *shrugs* My statement still stands. I'm amazed at how little these things really cross one's mind while you use them... I mean, they push these products as "perfect for the home and workplace."
Also, I have no idea how error-prone or how sensitive the filesystem on the NV+ is, but it's been running for nearly 4 years without pause. Early on, one drive flaked out (within the first couple of months), and since then it's just been rock solid. Slow but steady seems to be its deal.
/Edit

B!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I went with the assumption that an Atom-based system would pull 7w. It was the first figure I found when I searched for a wattage figure for any Atom system; I have NO idea if that's good or bad for an Atom. I then went looking for the equivalent of the Smoothwall build, which pretty much boils down to 4 Gbit ports. Since that isn't going to happen onboard, it means either 2 Gbit ports onboard plus an x4 PCI-E slot, or 2 x4 PCI-E slots. I ran that through my available shops and came back with a pair of Supermicro boards: the X7SLA-L & X7SLA-H. Both would without a doubt be great for a Smoothwall.

The L costs the equivalent of pretty much spot-on 3 years of power for the "extra" my current build will draw; the H adds another 8 months, and that's without the PSU I would have to spring for to even get to 7w in the first place. With a load that small my current PSU isn't exactly at great efficiency; it's barely tolerable in the just-over-50w range. (Don't get me started on the Antec PSU I'm using for the x3, it's even worse. But then, it's older and hasn't got an 80+ rating.) I'm just guessing, but with the math I'm looking at, it's near a dead race depending on how near, or how far past, the 5-year mark I can get. 4 years seems to be the break-even point, so ask me again in 4 years and see if you can gloat.

If you say so. It's my wallet, and currently the math doesn't scream "buy new stuff"; if it did, I'd be shopping. As a guy, only one thing excites me more than new toys, and it's drawing near X-mas and all...
Yeah. If I have to make a choice between new toys and sex, I'd have sex and go shopping later.

Actually, I can already gloat. I've already done the math with the real hardware and at month 24 I will hit the break-even point. I've since built 4 more identical boxes for friends. :)

For a PSU I bought http://www.mini-box.com/picoPSU-80-60W-power-kit for $35. It pulls a whole 14w at the wall under load. I think they claim "over 96% efficiency" or something else that is absurdly high.

Edit
I know. But since a RAID controller is one small investment, one I'd have to make either way, and not a complete system replacement, it seems to make more sense to go with the RAID card and WHS2011 than non-ECC & FreeNAS. At least if I look at your list.

Yeah, that's just it. The X310 is an Atom, and I don't think it has ECC memory, and I know the NV+ doesn't, since I replaced its stick with a 1GB module. It's a 256MB DDR400 SO-DIMM stock.
That said, both systems might very well use an integrated ECC-equipped controller. No, wait. The X310 seems to use an ICH9R controller? At least that's the loaded driver... While I was over there, I ran CPU-Z: no ECC anywhere near the system... *shrugs* My statement still stands. I'm amazed at how little these things really cross one's mind while you use them... I mean, they push these products as "perfect for the home and workplace."
Also, I have no idea how error-prone or how sensitive the filesystem on the NV+ is, but it's been running for nearly 4 years without pause. Early on, one drive flaked out (within the first couple of months), and since then it's just been rock solid. Slow but steady seems to be its deal.
/Edit

B!

I don't know about you, but I don't consider a RAID controller a small investment. My last one cost me over $1000. Total system cost was far more than I wanted at the time.

As for it making more sense to go with a RAID card + WHS2011 versus FreeNAS, I just take it as a "wait a month until I can afford it" item. I don't do half-a** jobs. I do it right or not at all. I've had to wait on buying new toys so I could get the exact one I wanted. It's hard to do, but I do it.
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
I actually have no idea. That's why I posted here. By the looks of things, it all boils down to reliability and, more to the point, ECC memory. Something I don't have. It looks more and more like I'll be going with the suggestion here:


There's one thing to keep in mind to keep things in perspective...

ZFS has been the default file system for Solaris-related systems for years.
ZFS has been available on FreeBSD for over 5 years.

The vast majority of computers do not use ECC memory nor can they.

As such, it is safe to assume that the vast majority of ZFS users do not use ECC memory and you're going to see more and more plain desktops using ZFS thanks to ZFS on Linux being finally available.

Despite what some have said here, I don't buy the "ZFS without ECC memory is less reliable than other file system without ECC memory".
All filesystems and OS make use of cache extensively (look at the http://forums.freenas.org/threads/new-build-benchmarks-and-reviews.16247/ post I made; and see the md+ext4 extensive use of caching apparent in the benchmark results)...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
The vast majority of computers do not use ECC memory nor can they.

As such, it is safe to assume that the vast majority of ZFS users do not use ECC memory and you're going to see more and more plain desktops using ZFS thanks to ZFS on Linux being finally available.

Despite what some have said here, I don't buy the "ZFS without ECC memory is less reliable than other file system without ECC memory".
All filesystems and OS make use of cache extensively (look at the http://forums.freenas.org/threads/new-build-benchmarks-and-reviews.16247/ post I made; and see the md+ext4 extensive use of caching apparent in the benchmark results)...

Yep. And you're missing one very big fact: ZFS isn't "just another file system". ;)

As for the whole ZFS versus ECC thing, not even gonna comment. We have plenty of first hand reports of the consequences. It's your choice and your data. Take it or leave it. You don't have to buy it either. I don't sell anything. I make no money on your purchases.
 

Mr_B

Dabbler
Joined
Dec 3, 2012
Messages
30
Actually, I can already gloat. I've already done the math with the real hardware and at month 24 I will hit the break-even point.
I'm happy for you, but it doesn't apply to my situation. I don't know if the hardware was a lot cheaper where you're at, or if power is a lot cheaper here, but my break-even point isn't anywhere near 24 months. Gloat and tell me "I told you so" in 4 years or thereabouts.

For a PSU I bought http://www.mini-box.com/picoPSU-80-60W-power-kit for $35. It pulls a whole 14w at the wall under load. I think they claim "over 96% efficiency" or something else that is absurdly high.
Not bad. I calculated the break-even point based on 24/7 idle at 7w from the wall.

I don't know about you, but I don't consider a RAID controller a small investment. My last one cost me over $1000. Total system cost was far more than I wanted at the time.
A used Perc 6/i is available, with battery, shipped, from 50 USD, and 175 gets you a new one. For the new one I'd suggest you pick up a battery as well.

As for it making more sense to go with a RAID card + WHS2011 versus FreeNAS, I just take it as a "wait a month until I can afford it" item.
I'm not sure where this is coming from. I assume that when you buy a car you only buy 18 wheelers, or a Bugatti Veyron, for the same reason?

I've had to wait on buying new toys so I could get the exact one I wanted.
I want what will do precisely what I need it to, without throwing excess money at it. Since I KNOW I can do the job with a Perc 5/i or 6/i on the system outlined, running WHS2011, right now that looks like the most cost-effective solution.

It's hard to do, but I do it.
Yeah. And you're special. It's not at all what everyone else does. The difference possibly being that we want different things. Don't get me wrong, I appreciate your input, and the thread I linked to before is nearly on the same level as this one when it comes to being a "must read", but don't be a dick and assume shit about people you don't know. It just makes you look knowledgeable, but stupidly single-minded.

Despite what some have said here, I don't buy the "ZFS without ECC memory is less reliable than other file system without ECC memory".
On jgreco's recommendation I read the thread I linked to. After reading that, it seems like the suggestion from cyberjock is to run anything but ZFS if I can't / won't go ECC, and that's full well knowing that "anything but ZFS" means "not FreeNAS."
I'm grateful for finding out early. Nothing like finding out after you've lost something you wanted to keep. I have NO prior experience with FreeNAS or ZFS, but I do have some with Microsoft OSes, and they seem less sensitive when it comes to stuff already on the drives, at least. Bottom line is I have had stuff get corrupted, but it has always been traceable to transfers, never automatic / basic maintenance.
It might be very unlikely to happen even once during the next 5 years. But that doesn't really matter in 3 years, when it has happened and my favorite porn music is gone.
B!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There's one thing to keep in mind to keep things in perspective...

ZFS has been the default file system for Solaris-related systems for years.
ZFS has been available on FreeBSD for over 5 years.

The vast majority of computers do not use ECC memory nor can they.

As such, it is safe to assume that the vast majority of ZFS users do not use ECC memory and you're going to see more and more plain desktops using ZFS thanks to ZFS on Linux being finally available.

This is a logical fallacy. Solaris systems have typically had ECC, because they're servers or workstations. ZFS has not been the default filesystem of FreeBSD, so the sample you are making out to be really large is in fact pretty small. And there are other mitigating factors; see below.

Despite what some have said here, I don't buy the "ZFS without ECC memory is less reliable than other file system without ECC memory".
All filesystems and OS make use of cache extensively (look at the http://forums.freenas.org/threads/new-build-benchmarks-and-reviews.16247/ post I made; and see the md+ext4 extensive use of caching apparent in the benchmark results)...

That's irrelevant. You're missing the point here. This isn't really about cache, it's about what happens when errors occur.

1) ZFS does not have filesystem structure repair utilities. There is no "fsck" or "chkdsk" or "zfsck". The pool design is very complex and one of the design assumptions is that the data that gets written to the pool is correct. Incorrect data written to the pool is a bad thing.

2) ZFS uses aggressive error detection and correction strategies. If you've ever used a hardware RAID controller, you have probably run across "consistency check" options (LSI, for example, calls theirs "Patrol Read") which helps identify inconsistencies and failures that may indicate a failing disk. One of the problems typically associated with those systems is that they often can only notify that there's an inconsistency; there's no way to identify which copy of a block is actually correct. ZFS solves that, because all blocks are checksummed. So if ZFS reads a block and it fails the checksum, ZFS attempts to rebuild the block using whatever redundancy is available. If it is able to do so, it then updates the errant block.
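The read-verify-repair behavior described above can be modeled in a few lines. This is a toy sketch using CRC32 over mirrored copies, not the real ZFS code paths or checksum algorithms:

```python
import zlib

# Toy model of self-healing reads: each copy of a block carries a checksum;
# the first copy that verifies is returned and used to rewrite any copy
# that fails verification.
def read_block(copies):
    """copies: mutable list of (data, stored_checksum) mirror copies."""
    for data, stored in copies:
        if zlib.crc32(data) == stored:
            for j, (d, s) in enumerate(copies):
                if zlib.crc32(d) != s:
                    copies[j] = (data, stored)   # heal the errant copy
            return data
    raise IOError("all copies failed checksum; nothing to repair from")

ok = zlib.crc32(b"my data")
mirror = [(b"my dat4", ok), (b"my data", ok)]   # copy 0 silently corrupted
assert read_block(mirror) == b"my data"         # bad copy repaired in place
```

This is also where the host-integrity caveat bites: if bad RAM hands this loop corrupted data that nonetheless matches its checksum, the same mechanism would cheerfully "heal" good copies with garbage.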

But both of these points rely on the integrity of the host system. The ZFS designers, being Sun guys, inherently trusted their high-end, well-designed, ECC-protected Solaris platforms, and chose to design a system that takes the host's integrity as a given.

Look, the long and the short of it is that ZFS is not some magical filesystem. It is an awesome storage system, but it has requirements. We're typically pretty conservative around here because we kind of assume that if you've chosen ZFS on a NAS, it is because you love your data and you don't want to lose it.

If you are just building a desktop machine, maybe ECC for ZFS isn't a requirement... especially if you only have one hard drive in the machine, which means there are far greater risks anyway (like drive failure). That probably describes lots of desktop deployments. The typical desktop is being backed up to some other resource.

But for a NAS, the natural assumption is that it is likely that the data is valuable, that you've arranged for redundancy of the disks, and that you'd prefer not to lose the data. Maybe you don't care, and merely need to share nonvaluable data between several PC's, which as far as I'm concerned is the scenario where it is fine to non-ECC it and be happy. But if you care about your data, then you have to seriously inspect the risk factors of any given technology, and try to understand them. For ZFS, one of those risk factors is that it will happily correct errors even when performing what a sysadmin would normally consider to be a "read" operation. That's totally awesome for a NAS except if it happens to be actually corrupting your data instead...

So upon inspection your whole "perspective" falls apart.
 

Mr_B

Dabbler
Joined
Dec 3, 2012
Messages
30
For a PSU I bought http://www.mini-box.com/picoPSU-80-60W-power-kit for $35. Pulls a whole 14w at the wall under load. I think they claim "over96% efficiency" or something else that is absurdly high.
I did some more math, mostly because it amused me. The AC/DC converter claims a linear 87%. That's what you get for your 12v line; the PicoPSU doesn't do much more than watch it and shut down if it goes crazy. For 5v & 3.3v you get 94% & 93% respectively at & below 1A. (For a total of 14w from the wall, I think that's where it's at?) That would put 5v at 81.78% and 3.3v at 80.91%.
Not having any idea how much of the load to power such a platform comes from each line, I simply went with "equal parts" and took the average: 83.23%. It's of course just a ballpark figure, but it would mean the whole system actually uses about 11.5w. That's freaking awesome. It still doesn't really change the numbers for me; it was just something I got curious about. The fact that they can even produce systems that run on so little is amazing.
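That averaging works out as follows, under the same "equal load per rail" blanket assumption; the constants are the datasheet figures quoted in this thread:

```python
# Chain the AC->DC brick with the PicoPSU's per-rail DC->DC efficiency,
# then average the rails assuming equal load on each (a blanket assumption).
AC_DC = 0.87                       # brick: 87% average, per datasheet
rail_eff = {
    "12V":  AC_DC,                 # PicoPSU passes 12v straight through
    "5V":   AC_DC * 0.94,          # = 0.8178
    "3.3V": AC_DC * 0.93,          # = 0.8091
}
overall = sum(rail_eff.values()) / len(rail_eff)   # ~0.8323
delivered = 14 * overall                           # ~11.65w of the 14w at the wall
```

Shifting more of the assumed load onto the pass-through 12v rail pushes the overall figure up toward 87%, which is why the equal-parts split is only a ballpark.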
B!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I did some more math, mostly because it amused me. The AC/DC converter claims a linear 87%. That's what you get for your 12v line; the PicoPSU doesn't do much more than watch it and shut down if it goes crazy. For 5v & 3.3v you get 94% & 93% respectively at & below 1A. (For a total of 14w from the wall, I think that's where it's at?) That would put 5v at 81.78% and 3.3v at 80.91%.
Not having any idea how much of the load to power such a platform comes from each line, I simply went with "equal parts" and took the average: 83.23%. It's of course just a ballpark figure, but it would mean the whole system actually uses about 11.5w. That's freaking awesome. It still doesn't really change the numbers for me; it was just something I got curious about. The fact that they can even produce systems that run on so little is amazing.
B!

I tried to do the same math, and there are several problems that left me with no answers. I'm not sure how you got your percentages, but here's what I know from searching and a little voltmeter action:

1. 12v is the only voltage output from the "brick" PSU.
2. The little device that has the 24-pin ATX connector provides the alternate voltages.
3. I couldn't find a rated % efficiency (not sure where you got your 87%, but I'm curious now!)
4. I could find no rated efficiency for the 5v and 3.3v, but they are likely to be over 90% for anything but the smallest loads.
5. Since you have both AC->DC and DC->DC, it's not a simple conversion of power usage. As loading in the system changes, the efficiency of both the DC->DC and AC->DC stages will change. Depending on various parameters, it is actually possible in some situations to increase loading and have measured power usage at the wall remain constant! At my previous job we had some big diesel generators for emergency power. Believe it or not, they consumed less fuel per unit time at 80% load than at 50%; diesel consumption was actually lower as you put more load on the generator. Very bizarre (and somewhat unbelievable for me at first), but that's what the manufacturer claimed, and it was exactly what we saw when we did actual load testing and ran the calculations.
6. If you really wanted to measure efficiency, you'd have to monitor power at the wall, at the 12v into the PicoPSU, and at the 5v, +5VSB, and 3.3v rails (both ins and outs), and then do a bunch of comparisons between them to arrive at the actual efficiency. Without all of those numbers and a bunch of charts to do the comparisons, it's tough to give an efficiency within about 5%. Additionally, most watt meters people use at home only read to the nearest whole watt, so fractions of a watt won't ever be known. But the difference between 10.5 and 11.5w at the wall can affect your reading by up to 8% (wowzers!) with the standard Kill-A-Watt. To add more fun to this mess, the Kill-A-Watt isn't a well-calibrated device (when was the last time you sent yours off to be calibrated? I never have), but they are usually within 5% at very low wattages, which happens to be exactly where we are. :(
7. The PicoPSU efficiency rating (from what I've read) is somewhat a lie. Allegedly they exclude the AC->DC conversion and only include the DC->DC conversion. This gives them an artificially high % to boast, since the AC->DC is probably the lossiest part of the entire conversion. If you accept that only the DC->DC conversion is included in that 96%, it would tend to argue against your numbers of 94%, 93%, 81.78%, and 80.91%; all of those numbers can't be right if they end up with a 96% efficiency average (and I assume rated based on % of total loading). Unfortunately, because of all that crap in #6, there's no easy way to figure out whether they are lying about their numbers or whether your numbers are correct.

In short, it's a really complex mess. I'd definitely be interested in looking at how you derived your numbers. I'd be willing to bet that while your math is correct, the values you used to derive your numbers are probably very wrong, leading to bad conclusions. It's like a scientist who gets all their data together, but because the data is already flawed, the results are just as flawed.

But let's be serious with ourselves. People like you and me love these kinds of tests. If I had the ability to do further testing, I'd be jumping all over it. But at the end of the day, does the efficiency really matter that much if we are happy with 14w at full load? When I go to sleep I try to think up a really good test that would actually give me efficiencies. It bothers me to no end that I can't give solid numbers. One thing I know for sure: my power brick gets somewhat warm to the touch. I wouldn't be surprised if the brick is only about 80-85% efficient, just judging from its mass as a heatsink and its temperature above ambient.
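The whole-watt rounding issue from point 6 is easy to quantify. A sketch, where 10.5w and 11.5w are just the example readings from the post:

```python
# A meter that reads whole watts can't distinguish nearby true draws, and at
# ~11w a one-watt step is a big relative error all by itself.
def reading_spread_pct(low_w, high_w):
    """Percent difference between two nearby draws, relative to the higher one."""
    return (high_w - low_w) / high_w * 100

spread = reading_spread_pct(10.5, 11.5)   # ~8.7%, before any calibration error
```

Stack that on top of the ~5% calibration tolerance and the measured efficiency of a ~12w system is uncertain by well over 10%, which is the point being made above.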
 

Mr_B

Dabbler
Joined
Dec 3, 2012
Messages
30
3. I couldn't find a rated % efficiency(not sure where you got your 87%, but I'm curious now!)
I cheated. I followed your link, and used the links for the 2 parts the kit was made up of, and then they have a neat PDF with the values hidden in the text.
2-4. Efficiency
87.0% average at normal line input and 25%,50%,75%&100% of max output current.
Link

4. I could find no rated efficiency for the 5v and 3.3v, but they are likely to be over 90% for anything but the smallest loads.

Efficiency Ratings, 3.3V and 5V rails:
Load    5V rail    3.3V rail
1A      94%        93%
2A      96%        96%
5A      94%        92%
7A      86%        86%

Input Requirements:
12V regulated, min=1A, max=10A (load dependent). Over-voltage shutdown will occur at ~13-13.5V.

Bottom, page 3

7. The PicoPSU efficiency rating (from what I've read) is somewhat a lie. Allegedly they are excluding the AC->DC conversion and only including the DC->DC conversion. This gives them an artificially high percentage to boast, since the AC->DC stage is probably the lossiest part of the entire conversion. If you accept that only the DC->DC conversion is included in that 96%, it calls your numbers of 94%, 93%, 81.78%, and 80.91% into question: all of those can't be right if the stage is supposed to average out to 96% efficiency (and I assume the rating is based on percent of total loading). Unfortunately, because of all that crap in #6, there's no easy way to figure out whether they are lying about their numbers or whether your numbers are correct.

In short, it's a really complex mess. I'd definitely be interested in seeing how you derived your numbers. I'd be willing to bet that your math is correct, but that the values you plugged into it are wrong, leading to bad conclusions. It's like a scientist who gets all their data together, but because the data is already flawed, the results are just as flawed.
As I said, I don't really know how much power a system like this actually draws from each rail. In a "regular" PC, most of the power for the CPU and memory comes off the 12v rail, making the 87% very dominant. The 5v rail is hardly used at all on motherboards; if my understanding is right, it's mostly included for legacy reasons (5v PCI). Not having the slightest idea what holds true for a system like this, I made a blanket assumption of equal use across the rails. It also made the math real easy: add them all up, divide by three, get an average, pretend it's good enough, walk on smiling.
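Spelled out, that blanket average looks like this (using the ~1A datasheet figures quoted above; as noted, it's a rough method, not a rigorous model, since the 5v and 3.3v converters actually sit downstream of the brick):

```python
# Equal-share average of the three stage efficiencies, using the
# datasheet figures quoted above (brick at 87%, 5v/3.3v at ~1A loads).
# Rough method only: the 5v and 3.3v converters really sit downstream
# of the brick, so a rigorous model would multiply stages, not average.

rails = {"12v (brick)": 0.87, "5v": 0.94, "3.3v": 0.93}
avg = sum(rails.values()) / len(rails)
print(f"equal-share average: {avg:.1%}")  # ~91.3%
```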

(When was the last time you sent yours off to be calibrated? I never have)
Never. My cheap junk unit doesn't even have an option to be sent anywhere for calibration. The manufacturer claims:
V   ≤ ±5.0%
A   ≤ ±5.0%
Hz  ≤ ±0.2%
W   ≤ ±5.0%
VA  ≤ ±5.0%
kWh ≤ ±1.0%
Any measuring device like this will always be limited to a ballpark figure, so 5% is good enough, I guess.
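For fun, a rough worst-case error budget at a ~14w reading, stacking the rated ±5% accuracy on top of the ±0.5w display rounding (the stacking is just a pessimistic sum, not a formal uncertainty analysis):

```python
# Pessimistic error budget for a plug-in meter at a ~14w reading:
# rated +/-5% accuracy stacked on top of +/-0.5w display rounding.

reading_w = 14.0
accuracy_w = reading_w * 0.05   # +/-5% of the reading
rounding_w = 0.5                # half of the 1w display resolution
worst_case_w = accuracy_w + rounding_w
print(f"+/-{worst_case_w:.1f} w, i.e. +/-{worst_case_w / reading_w:.1%}")
```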

But let's be serious with ourselves. People like you and I love these kinds of tests. If I had the ability to do further testing I'd be jumping all over it. But at the end of the day, does the efficiency really matter that much if we are happy with the 14w at full load?
I was somehow tossed into a circle where I had a hard time figuring out exactly how much power the device would actually be using, seeing as (we all know) PSUs tend to crap their pants at very low loads. I suspect that's why my X3 doesn't draw less at idle; the old Antec PSU was something like 68% efficient when it was brand new, and was never good at low loads. Anyway, I was thinking: "If it's 14w from the wall, what is it actually USING?" As it turns out, the losses were smaller than I expected. It's still crazy that they can run it on such a minuscule amount of power, and I half expected the true draw to be a lot less than 14w, but the losses weren't that bad.
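The back-of-envelope I mean looks like this (the 80% overall efficiency is an assumption for illustration, not a measured value):

```python
# Wall power times an assumed overall (brick + PicoPSU) efficiency gives
# the DC power the board actually uses. The 80% figure is an assumption
# for illustration, not a measurement.

wall_w = 14.0
overall_eff = 0.80
board_w = wall_w * overall_eff
print(f"~{board_w:.1f} w reaching the board, ~{wall_w - board_w:.1f} w lost as heat")
```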
B!
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
7. The PicoPSU efficiency rating (from what I've read) is somewhat a lie. Allegedly they are excluding the AC->DC conversion and only including the DC->DC conversion. This gives them an artificially high percentage to boast, since the AC->DC stage is probably the lossiest part of the entire conversion. If you accept that only the DC->DC conversion is included in that 96%, it calls your numbers of 94%, 93%, 81.78%, and 80.91% into question: all of those can't be right if the stage is supposed to average out to 96% efficiency (and I assume the rating is based on percent of total loading). Unfortunately, because of all that crap in #6, there's no easy way to figure out whether they are lying about their numbers or whether your numbers are correct.


When you look at certification-house results, like the ones on the Supermicro web site, they are certainly testing the AC->DC conversion efficiency.
Like here:
http://www.supermicro.com/products/powersupply/80PLUS/80PLUS_PWS-501P-1R.pdf

You can see that all measurements were taken at 230V/60Hz input (a weird choice, as the only place I know of with 230V/60Hz is Korea).

So 265W in (AC) and 250W out (DC), the output being made up of 12V/20.06A and 5V/1.92A.
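Sanity-checking that test point with nothing beyond the figures above:

```python
# Sanity check of the Supermicro test point quoted above: DC output is
# the sum of volts times amps per rail; efficiency is DC out / AC in.

ac_in_w = 265.0
rails = [(12.0, 20.06), (5.0, 1.92)]     # (volts, amps) per rail
dc_out_w = sum(v * a for v, a in rails)  # ~250.3w
print(f"DC out {dc_out_w:.1f} w, efficiency {dc_out_w / ac_in_w:.1%}")
```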
 

Dmitry

Cadet
Joined
Nov 21, 2013
Messages
7
Hi,
Will the ServeRAID M1015 SATA controller work for a RAID5 system?

I'm planning to have 5-10 HDDs.
Thank you!
 


cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
When you look at certification-house results, like the ones on the Supermicro web site, they are certainly testing the AC->DC conversion efficiency.
Like here:
http://www.supermicro.com/products/powersupply/80PLUS/80PLUS_PWS-501P-1R.pdf

You can see that all measurements were taken at 230V/60Hz input (a weird choice, as the only place I know of with 230V/60Hz is Korea).

So 265W in (AC) and 250W out (DC), the output being made up of 12V/20.06A and 5V/1.92A.

A PicoPSU is nothing even remotely close to what you linked. In fact, you can't even buy one rated higher than 160 watts.

Have you ever even seen or heard of a picoPSU?

If you just look at the picture you'll see that what you linked to isn't much more than a standard PSU on the inside. And of course those are tested from the wall to the output side of the PSU. There's no distinction like there is with the PicoPSU, where there are two very distinct halves to the actual PSU.

Supermicro doesn't make PicoPSUs. PicoPSUs are made by minibox (the same website I linked). Why you'd link a Supermicro PSU, or any other PSU that isn't a PicoPSU, is totally beyond me. The PicoPSU has a very unique design that is nothing like your standard case PSU. I'm not sure I've ever seen another design quite like it in the retail computer channels.

I really do urge you to read up before you talk. This is what, the 3rd time in a week you've started talking about things that have no bearing on anything related to the discussion or been totally out to lunch with your response.

Edit: And if you check out the webpage and manual and are particularly astute, you will notice they do NOT claim 80 PLUS certification; they claim a percent efficiency and nothing else. The manual lists the efficiency for the +5v and +3.3v rails but conveniently leaves off the 12v efficiency (most likely because including it would make the overall figure look lower than they'd like). I will say that if the power brick were 90% efficient it would only be dissipating about 1.5w of heat and should feel cold to the touch. But mine is warm, which leads me to believe its efficiency is much lower (aka more heat losses) than 90%. Unfortunately I don't have access to the equipment needed to do an efficiency analysis myself.
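The warm-brick reasoning is easy to put numbers on (the ~14w wall draw is taken from the full-load figure discussed earlier; the efficiencies are hypothetical):

```python
# At a given wall draw, the brick dissipates wall * (1 - efficiency).
# The ~14w wall figure comes from the full-load number discussed
# earlier in the thread; the efficiencies below are hypothetical.

wall_w = 14.0
for eff in (0.90, 0.85, 0.80):
    loss_w = wall_w * (1.0 - eff)
    print(f"{eff:.0%} efficient -> {loss_w:.1f} w dissipated in the brick")
```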
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
FYI: I think I have proven that the efficiency from wall to motherboard is NOT the 90%+ they claim. In fact, it looks like it is probably more like 77% to 84% overall.

Check out: http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story&reid=207

I'm a little confused, as I ordered the "PicoPSU-80 and 60W adapter KIT". My 12v adapter is an EDAC, not quite the same as the one in the review, but based on the model number in the picture this one is mine. About all the official information I can find is that it is a "high efficiency Level V" device with "87.0% average at normal line input and 25%, 50%, 75% and 100% of max output current". So if you cascade an 87%-ish adapter with a 93+%-ish DC->DC stage, you're still going to land far lower than they claim.

Kind of disappointing. The trick is in the details of how they sell the device. They consider the PicoPSU to be the DC->DC converter (the stupid little jobby with the ATX connector on it) only. Keyword: only. The power brick (AC->DC adapter) simply lets you run your computer from 120VAC and is considered an "accessory". Since they effectively classify it as an accessory, they can dismiss its inefficiencies; the expectation is that you will use the device in a system that already has 12VDC available (such as a vehicle). Big disappointment IMO. But let's be honest: if it really is just 80% overall and we want to go complain to the manufacturer, we're talking about 2w or so of extra heat loss. That's enough to make the power brick a few degrees warmer than you might expect, but not enough that you could ever argue it affects your electric bill. If you are very astute you'll notice they only claim 96% efficiency for the DC->DC converters by themselves. The kits (which contain both the AC->DC and DC->DC adapters) don't state any efficiency at all, or even call themselves "high efficiency". Very, very sneaky if you ask me. ;)
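The cascade math itself is trivial: overall wall-to-motherboard efficiency is the product of the two stages. Assuming the DC->DC stage lands somewhere between 90% and the claimed 96% at typical loads (the 90% lower bound is my assumption):

```python
# Cascaded stages multiply: overall wall-to-motherboard efficiency is
# the brick's AC->DC efficiency times the PicoPSU's DC->DC efficiency.
# The 90% lower bound for the DC->DC stage is an assumption; 96% is the
# figure minibox quotes for the DC->DC converter alone.

brick_eff = 0.87  # EDAC adapter, rated average from its datasheet

for dcdc_eff in (0.90, 0.96):
    overall = brick_eff * dcdc_eff
    print(f"DC->DC at {dcdc_eff:.0%} -> overall {overall:.1%}")

# ~78.3% to ~83.5% overall, right in the 77-84% ballpark estimated above.
```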

So it's not that big a deal that they aren't completely honest with the potential customer, but they definitely don't try to correct the assumption the average customer is likely to make. For those of us cursed with an understanding of efficiency, it's disappointing that they appear to be dishonest. Life goes on. I'm sure it won't be the last time people are duped by semi-false advertising.
 