(having said that, would you need to route at gigabit speed at home?)
Me? Currently I've got a 100/100Mbit hookup; by the middle/end of next summer the ISP is upgrading the service to gigabit speeds. I'm quite sure I'll never use it for extended periods of time, but I'm more than happy to be able to push it when I need it.
Takes a fraction of the space, makes no noise.
Trust me. Even if I could hear the PSU fan over the darn fan in the PowerConnect 2724, noise would be the least of my worries.
In regards to requirements, personally, I don't see any reason why two different distributions of essentially the same thing (by that I mean same OS kernel and drivers) would have much different hardware requirements. The recommendations provided here are just that: recommendations and best practice. Anything less is going to have a negative impact on performance. That doesn't mean you can't use much less.
I actually have no idea. That's why I posted here. By the looks of things, it all boils down to reliability, and, more to the point, ECC memory. Something I don't have. Looks more and more like I'll be going with the suggestion here:
Let me reiterate that everything I'm about to say is personal opinion.
Personally, I'd rank things in this order:
1. ZFS with ECC RAM
2. Other file system with hardware RAID with non-ECC RAM
3. ZFS with non-ECC RAM.
While running Windows Home Server on an "enthusiast" motherboard without ECC isn't the perfect solution for data integrity, from my understanding it helps a bit to at least be using a (RAID) controller that does work with ECC memory. The only thing I'm really walking away with is something of a feeling of amazement: as far as I know, neither the DataVault X310 nor the NV+, which are my current storage solutions, uses ECC memory.
I will note that "route" used in this context is usually deceptive; what you mean is "NAT". It is disappointing that CPE manufacturers co-opted a term that means something fairly specific and used it to mean something rather different.
Meh, every "router" manufacturer the world over has "built in firewalling" (NAT) "routing abilities" (NAT) and so on. Bottom line is they took words the tech's knew what they meant, and used them to market the product to a market that had no clue. Nobody spoke up, and now people have "learned" what it means. The perception of what it does has simply changed, a lot. Now, This wasn't really supposed to be about my Smoothwall, it was just backstory, showing where i came from, and the expectations i had. Smoothwall hardware requirements starts out at around 233MHZ PII, and scale upwards depending on what sort of addons you stick in to it. For my purposes i would probably be just fine with a low end PIII, but just as with a ECC system, i simply don't have one, in this case, one that i'd trust to remain in service for the next 5 years. My current system is an old P4, S775, Prescott 3GHz. That motherboard didn't provide me with any ability to tweak CPU'speeds, and it has only a single PCI slot. I stuck a 4 port D-link DFE-570TX in it, and used the two onboard gigabit ports for my 2 primary network segments. Leaves me 2 unused ports on the 570TX, but i didn't have any 2 port PCI cards... Again, using what i had at the time.
Those of us with multiple network segments may actually want to route. I haven't come across a competent low-wattage solution that handles multiple GigE interfaces, small-packet traffic, and a dynamic routing protocol without falling over in some way. I mean, yes, you can theoretically set up multiple networks on an OpenWRT system (for example), but the lack of CPU means you aren't moving packets very fast.
Truth be told, though, I could shut down the WiFi hotspot, kill the VPN, remove the proxy, and put the media units and PCs on the same LAN as the media servers. It's not like I'd run out of space on the subnet; I've got what, 20 units total? At the very most.
I'd do just fine with a single segment, but I don't want to. This way I can keep the neighbors off my personal files, while still being able to access them from anywhere, and provide anyone in range with free WiFi. I don't even limit the bandwidth on the WiFi; I do however use QoS to give certain members of the network priority. Yeah. Me and my toys. He has somewhat of a point though: an OpenWRT device would be what, 5-15 watts depending on what base hardware you manage to obtain and what services/loads you put on it. I've set up a few for others.
Bottom line: I run Smoothwall because I want to. For the same reason I'm building a "storage server". I could buy something off the shelf and stick a couple of large drives in it, but I want to design a system that is a tad more aimed at my specific needs. I need something that is a bit faster than the NV+ but, primarily, allows for a lot more storage. If I start off with the 4 2TB drives it has and add another 4 to a newly built system, I get double the capacity, at least.

If I decide to run RAID5 over 8 drives, I get a bonus drive's worth of extra storage, but I'm leaning more towards RAID5 over 4 drives x2, or RAID5 over 7 drives with 1 hot spare (rough numbers in the sketch below). As yet undecided, pending the decision on exactly what I'm buying. With the PERC 6/i, I think I'd go RAID6 over all 8 drives, and I'd still have space to add 2 more controllers in the future. The more drives I add, the better the power-to-storage ratio gets. And in 5 years, if it lives that long, I'll have some newer hardware to start over with and can migrate the stuff out of storage from this system to the new one. (Or we'll all have given up on storage servers at home and have a port in the neck.)
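For what it's worth, here's a minimal back-of-the-envelope sketch of how those layouts compare, assuming the eight 2TB drives mentioned above and ignoring filesystem/formatting overhead; the raid5/raid6 helpers and the numbers are purely illustrative, not a recommendation:

```python
# Rough usable-capacity comparison for the layouts discussed above.
# Assumes eight 2 TB drives (as in the post) and ignores formatting overhead.

DRIVE_TB = 2

def raid5(drives, drive_tb=DRIVE_TB):
    """RAID5: usable space of (n - 1) drives, tolerates 1 failure per array."""
    return (drives - 1) * drive_tb

def raid6(drives, drive_tb=DRIVE_TB):
    """RAID6: usable space of (n - 2) drives, tolerates 2 failures."""
    return (drives - 2) * drive_tb

layouts = {
    "RAID5 over 8 drives":             raid5(8),
    "2x RAID5 over 4 drives":          2 * raid5(4),
    "RAID5 over 7 drives + hot spare": raid5(7),  # 8th drive idles as spare
    "RAID6 over 8 drives":             raid6(8),
}

for name, tb in layouts.items():
    print(f"{name:33s} ~{tb} TB usable")
```

The single RAID5 over all 8 drives comes out roughly 2TB ahead (~14TB usable vs ~12TB for the other three), which is exactly that "bonus drive"; the other layouts trade it back for a second parity drive, a hot spare, or two independent arrays.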
B!