Can FreeNAS match the performance/power figures of a dedicated NAS?


MartynW

Dabbler
Joined
Feb 23, 2014
Messages
39
Hi,

I've been a FreeNAS user for 2 years now and love it. I've been running it on an HP MicroServer N40L with 8GB RAM and 3TB WD drives.

I've now been lured away to the realm of dedicated NAS boxes. What is drawing me there is the performance figures and the power consumption, not to mention the price.

I've started running many VM instances and I want to offload the disk space from my PC to the NAS, but it's just too slow. The box I'm eyeing up is the Synology DS-412+ (or the 414+ when it comes out).

They claim the following figures:
  • 205.68 MB/sec Reading, 182.66 MB/sec Writing
  • 2 LAN with Failover and Link Aggregation Support
  • Features SuperSpeed USB 3.0
  • CPU Passive Cooling Technology & System Fan Redundancy
  • Hot-swappable Hard Drive Design
  • Windows® AD and ACL Support
  • VMware® / Citrix® / Microsoft® Hyper-V® Compliance
  • Running on Synology DiskStation Manager (DSM)

All running at 44W (access) or 15W (HDD hibernation), at 19.3 dB(A), and for only $500-odd USD.

Now I'm only getting 102.1 MB/sec read and 90.08 MB/sec write from my FreeNAS box, and it's drawing 100-odd watts of power.

I'm not giving up on FreeNAS, and if someone could recommend a spec that beats or equals these figures, I may just stick with it.

Is it possible?
 

ftpmonkey

Dabbler
Joined
Aug 2, 2013
Messages
13
On either setup, most of the power usage will be from the hard disks. I would be surprised if the overall usage varied much between them.

Those read/write speeds are pretty respectable (I get a little better from my N54L). Remember that your network will start to become the bottleneck. Folks on here are far more knowledgeable than me, but I gather that if you want to make use of aggregated NICs, everything else on your network must support it too before you'll see the speed gains.

I have had a couple of older Synologys (a 107 and a 209), so I know their kit is solid and the OS is very well implemented. (My older ones certainly weren't powerhouses, though; CIFS was very slow.)
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Not sure if this is of any help; however, I notice you mention you are starting to run a number of virtual machines on your system. This is something I do too (but for development/testing purposes only, not in a production environment). I can quite happily run 3 to 4 VMs; I haven't tried running more than that, as I have never had the need to.

In summary, I have two systems. One runs VMware ESXi 5 with no hard drives; it just boots from USB and uses iSCSI for its storage. The other system runs FreeNAS with two pools, TANK and TANKBACKUP. TANK stores all sorts of stuff (movies, music, backups of personal "home" folders from laptops, etc.), plus it has a 1TB zvol as an iSCSI target for virtual machines. I use a mixture of rsync and snapshots to keep TANKBACKUP as close to TANK as possible (a rough sketch of the idea is below).
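
Just to illustrate the general idea, here's a minimal Python sketch of the snapshot-then-rsync approach. It isn't my actual script; the snapshot naming, the /mnt paths, and the rsync flags are just assumptions for the example.

    # Rough sketch (not my exact script): snapshot TANK, then rsync the
    # read-only snapshot contents over to TANKBACKUP.
    import subprocess
    from datetime import datetime

    def replicate(src="TANK", dst="TANKBACKUP"):
        name = datetime.now().strftime("backup-%Y%m%d-%H%M%S")
        # Take a point-in-time snapshot so the copy is consistent.
        subprocess.run(["zfs", "snapshot", f"{src}@{name}"], check=True)
        # FreeNAS mounts pools under /mnt; snapshots appear read-only
        # under the hidden .zfs/snapshot directory.
        snap_dir = f"/mnt/{src}/.zfs/snapshot/{name}/"
        # Mirror the snapshot contents onto the backup pool.
        subprocess.run(["rsync", "-a", "--delete", snap_dir, f"/mnt/{dst}/"],
                       check=True)

    replicate()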

The FreeNAS system is on 24X7, the ESXi server only gets switched on when I need it.

Don't get me wrong, it's not perfect and I'm already looking at ways of tweaking the networking side of things (Link Aggregation for example), however, I have been running this setup for over two years now and am very pleased with it.

If you would like more specific information please let me know.
 

MartynW

Dabbler
Joined
Feb 23, 2014
Messages
39
Hi guys, thanks for the responses.

On the power consumption, the Synology's figure of 44W is with WD drives installed, and 22W when they are asleep. So as you can appreciate, these dedicated boxes run on significantly less power. My NAS is on 24x7 and I pay the power bill, and power prices are ridiculous here in Australia.
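
To put rough numbers on it (the AU$0.30/kWh rate is just an assumption for the sake of the example, not my actual tariff):

    # Back-of-the-envelope yearly running cost for a box that is on 24x7.
    def yearly_cost_aud(watts, rate_per_kwh=0.30):  # assumed tariff
        kwh_per_year = watts / 1000 * 24 * 365
        return kwh_per_year * rate_per_kwh

    print(round(yearly_cost_aud(100)))  # ~263 AUD/yr for my ~100W FreeNAS box
    print(round(yearly_cost_aud(44)))   # ~116 AUD/yr for the DS-412+ under load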

As for my current system being OK for VMs, it probably is, and maybe I've just been spoilt by running them off a 500MB/s SSD and 200MB/s USB 3.0 sticks. But when I start up a VM from the network, I may as well go and get a cup of tea, compared to the 5-second start time I've grown accustomed to.

Also, with Link Aggregation, buying a NIC for my N40L is a third of the way to the price of the dedicated NAS?! $180 USD here.

The Gen8 MicroServer comes with dual ports, but it's more costly than a DS-214+, probably has higher power consumption, and is unlikely to beat those transfer speeds?

Are there other hardware specs out there that will compete on power consumption and data transfer?
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
The Gen8 has been reported by many as drawing under 50W. Mine is metered together with a bunch of other items, so I can't give you my own power usage.

I was able to saturate both LAN ports with FreeNAS and 4x4TB drives.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Quite frankly, you're comparing apples and oranges. ZFS is an extremely heavy duty filesystem that requires significant resources to run but provides lots of features like data integrity and repair. Any vendor-sourced consumer NAS appliance is probably built out of the cheapest SoC out there, or maybe if you're lucky an Atom chip, and usually built on top of a Busybox Linux and an ext3 filesystem, which has the saving grace of being a very lightweight filesystem. Data integrity validation and repair are not features you'll get with such a device. You can get redundancy, which might seem "almost as good," but it really isn't.

Go ahead and compare a Ford F-350 with a VW Bug. It is really that sort of comparison. Yes, the VW Bug gets great gas mileage relative to the Ford, but it is not good for heavy hauling, off-road, or overall comfort if you're a big guy. You just kind of have to decide whether the features you're losing (potentially lots) are offset by the modest gains (lower power).
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
MartynW,

I understand exactly where you're coming from when you talk about costs; electricity is sooooooo expensive in the UK now. This is one of the advantages of running a diskless ESXi server: switching the system on when needed and off when finished doesn't cause any storage problems, as the VMDKs are all located inside a 1TB iSCSI zvol on the FreeNAS server.

When you say "But when I start up a VM from the network I may as well go get a cup of tea", do you mean you have the VM stored on a network drive (via CIFS for example) and have your virtual machine software running locally? Also, what is your VM software?
 

MartynW

Dabbler
Joined
Feb 23, 2014
Messages
39
Thanks again for the responses.

@Leenux_tux
Maybe you've hit on something there: I am trying to run them over CIFS shares. The VM software I'm running is VMware Player (for a Mac OS VM), Oracle VirtualBox, and recently Hyper-V, all on top of a super-quick Win 8 desktop. As I say, running the VMs locally works a dream; it's just that I'm running out of disk space on the SSD and the VMs aren't protected from failure at all.

I might try iSCSI and see how that goes?

@jgreco
I get your point, and it's a good one about ZFS integrity versus standard RAID, but you're being a bit hard on the DS-412+/VW Bug. It's a very capable unit with a similarly specced CPU (benchmark-wise) to my N40L, even though it's the 2012 model. It transcodes videos in Plex fine and runs all sorts of add-ons like CrashPlan.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Don't be ridiculous. The 412+ is Little League. It is an Atom-based system. The G8 MicroServer could go a lot farther given the right networking and disk setup.

ZFS hurts you in that scenario, because I'd expect a G8 with some SSDs and a UFS- or ext3-based NAS to be pushing much more than the 2 gigabits its built-in connectivity allows for.

ZFS, however, can shine compared to a craptacular NAS toaster, because you can actually throw resources at the problem to make it fast. Your Synology will never have 16GB of RAM and a 60GB L2ARC to cache the VM working set; ZFS *can*.
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
MartynW,

I've never tried running any of my VMs that way (via CIFS), so I can't comment on a performance comparison. However, when I boot up a Windows 7 image in ESXi 5 (as I have already mentioned, the VMs are stored on FreeNAS and accessed via iSCSI), it takes around 20 seconds to get to the logon screen. Linux Mint takes around 15 seconds, Windows Server 2008 around 20 seconds, and CentOS around 25 seconds.

Also, I have tried running VMs over NFS, which seems to work (for me, anyway) just as well, though I haven't tested it as much as I have iSCSI.

There is an alternative to what you are looking at doing, and it might be more cost-effective. You could just use FreeNAS as a dumping area for your VMs and get a big USB3 device for running the VMs locally on your PC. You can then back up your VMs regularly to FreeNAS. That way you get the "nice" features/advantages of FreeNAS (fault tolerance, stability, easy management, etc.), plus you have your "fast" virtual machines running locally on your PC.

One other item to think about as well: I use FreeNAS for storing ISO images. When I want to install a fresh version of (for example) Linux Mint, or install SQL Server or Oracle, all I need to do is "attach" the ISO image to the new VM; no messing around with CDs/DVDs. Having everything stored on FreeNAS makes things sooooooo easy.
 

joelmusicman

Patron
Joined
Feb 20, 2014
Messages
249
As others have mentioned, you're already maxing out your Gigabit Ethernet hardware (rough math below). The only way to improve from there would be to upgrade the NAS, the client, and any networking gear in between. I'd advise you to check eBay for used Intel server NICs and switches... much better bang for your buck than totally rebuilding your NAS box.
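
A quick sanity check on that (the 94% payload efficiency is just my rough allowance for Ethernet/IP/TCP overhead, not an exact figure):

    # Practical ceiling for a single gigabit Ethernet link.
    line_rate_bps = 1_000_000_000       # 1 Gb/s
    efficiency = 0.94                   # assumed framing/protocol overhead allowance
    max_mb_per_s = line_rate_bps * efficiency / 8 / 1_000_000
    print(f"~{max_mb_per_s:.0f} MB/s")  # ~118 MB/s; the OP's 102 MB/s is already close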

As for the power consumption side, 100W from a MicroServer sounds pretty high. Is that actually measured? Are you running 7200rpm hard drives?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
As a reference point:

On hardware with sufficient capacity to "do it well," Windows Server 2008 came up the first time in about 15 seconds. The second time, it was largely cached and managed 11 seconds. The third and subsequent times, 10 seconds. That's over NFS, with sync writes going through a competent SLOG device.

The same VM comes in at a consistent 23 seconds from an iomega StorCenter ix2.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And for comparison, my Linux Mint 13 (circa April 2012) boots up in about 3 minutes over a setup that has plenty of processing power and RAM, but the pool was never optimized for NFS or iSCSI. ;)

So there you have a perfect example of how screwing up one item can ruin all the other hard work (and money) you put into getting good performance.

You either do it ALL perfectly correct or nothing works. ;)
 

MartynW

Dabbler
Joined
Feb 23, 2014
Messages
39
Thanks, all, for your responses. Now you've really got me thinking of sticking with FreeNAS, so I've been weighing up my options.

FYI, I can spin up a Windows 8 VM on my desktop in 8 seconds, but I'll never compete with a directly connected SSD.
 

HarryE

Cadet
Joined
May 27, 2011
Messages
6
My HP MicroServer Gen8 (Xeon 1230v2 @ 3.3GHz, 16GB ECC RAM, 4x 1TB HDDs, 1 SSD, and an IBM M1015 HBA card, running ESXi 5.1/FreeNAS/pfSense) draws 65W at idle, 105W at maximum, and 45W with the disks spun down. No way the N40L draws more than 50W! I can't imagine my NAS without ZFS...
The N40L is fully capable of saturating a 1Gb connection from ZFS (zfs send over netcat). A ZFS scrub runs at 250MB/s on a 3x3TB RAIDZ1 (N40L). The Samba and iSCSI implementations may decrease transfer speed by up to 50%, depending on configuration parameters.
 

joelmusicman

Patron
Joined
Feb 20, 2014
Messages
249
Also, I was doing some more research on Link Aggregation (my motherboard has dual NICs as well), and I found out that point-to-point connections still max out at 1Gb/s; the only benefit is that two different client PCs can get 1Gb/s simultaneously...

So no free lunch apparently, and that would be true regardless of which NAS the OP decides on.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Let me fix this...
Also, I was doing some more research on Link Aggregation (my motherboard has dual NICs as well), and I found out that point-to-point connections still max out at 1Gb/s; the only benefit is that two different client PCs *may* get 1Gb/s simultaneously...

There is no guarantee the two clients won't both end up on the same 1Gb link. This is why, with Link Aggregation, we tell people that if you don't have ten independent workstations running at the same time, it's not recommended. It's a combination of making sure you will actually benefit and the fact that adding complexity adds more failure modes. You need to be 100% sure that the benefits will outweigh any losses. ;)
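
The reason is that the switch/driver hashes each conversation's addresses to pick one physical link, and every packet of that conversation then rides that same link. Hash policies vary by vendor and mode, so this is just a toy illustration of the idea, not any particular implementation:

    # Toy example of how a LAG might pick a member link for a given flow.
    def pick_link(src_mac: int, dst_mac: int, num_links: int = 2) -> int:
        # One hash result per address pair, so every packet of that
        # conversation uses the same physical 1Gb link.
        return (src_mac ^ dst_mac) % num_links

    print(pick_link(0x001122334455, 0xAABBCCDDEEFF))  # client A -> always this link
    print(pick_link(0x001122334466, 0xAABBCCDDEEFF))  # client B *may* land on the other one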
 

joelmusicman

Patron
Joined
Feb 20, 2014
Messages
249
So in conclusion, for most SOHO/home scenarios, the only way to get point-to-point transfer speeds faster than 1Gb (about 100 MB/s) is to get 10Gb hardware, which is prohibitively expensive ($300 per NIC and $800-ish for a switch). Otherwise, it's DAS using USB3 or Thunderbolt.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Unless you do iSCSI, which supports multipath, you are basically correct.

There are other drawbacks to using USB3 and Thunderbolt too. Reliability, for one: USB was not intended for long-term connections. But that's a different argument, and one I've discussed too many times around here. ;)
 