New Build HP ProLiant N40L Ultra Micro Tower Server


VeGeTa-X

Dabbler
Joined
Mar 25, 2012
Messages
12
Hi, I am planning to build my first FreeNAS 8 server using ZFS. Below is the hardware I am planning to buy for my little NAS server. I have a few questions.

1. I am planning to buy two Seagate Barracuda ST3000DM001 3TB 7200 RPM drives. Will I have any issues with this model of hard drive, or with the 3TB drive size?

2. I will be using 8GB of memory for ZFS. I am planning to start out with 2 drives and will be adding more 2TB and 3TB drives in the future. Will I be able to add new drives to my pool to grow my storage size?


1 -HP ProLiant N40L Ultra Micro Tower Server
http://www.newegg.com/Product/Product.aspx?Item=N82E16859107052


2 - Kingston 4GB 240-Pin DDR3
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139466&Tpk=KTH-PL313E/4G


2 - Seagate Barracuda ST3000DM001 3TB 7200 RPM
http://www.newegg.com/Product/Product.aspx?Item=N82E16822148844


1 - Patriot Xporter XT Boost 16GB Flash Drive
http://www.newegg.com/Product/Product.aspx?Item=N82E16820220253
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
There are some threads on that microserver here and elsewhere that you should search for.

I researched it some time ago and I don't have the threads bookmarked, but from my recollection, FreeNAS will run on the N40L but is very processor limited, meaning it will peg the processor under heavy loads when running RAIDZ and stall transfers. I seem to recall that FreeNAS mirrors ran without that sort of processor-induced slowdown.
 

peterh

Patron
Joined
Oct 19, 2011
Messages
315
There is always a limitation on any computer system.
I'd say that the N36L/N40L is memory limited due to its maximum supported RAM.

There is no problem running RAIDZ on an N36L/N40L system, so do not let that influence your choice of ZFS mirror vs. RAIDZ (although I find RAIDZ2 less useful on these systems due to the four-disk limit).
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
There is always a limitation on any computer system.
Of course some hardware component is always the limiter, but in this case it's not the network interface that is limiting transfer speed; it is the processor.
I'd say that the N36L/N40L is memory limited due to its maximum supported RAM.
I doubt it; 8GB is more than enough memory to run a 4- or 5-disk FreeNAS 8 RAIDZ-1 on modern Intel processors. There's no reason this system should require more than 8GB to run 4 or 5 disks.

There have been abundant reports that the processor in this system induces severe transfer stalls. I considered this system and researched it heavily, and I could not recommend it for a RAIDZ install. I built my own Intel Sandy Bridge system instead.
 

peterh

Patron
Joined
Oct 19, 2011
Messages
315
I do not speak from "abundant reports"; I speak as a user of several of these boxes.
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
I do not speak from "abundant reports"; I speak as a user of several of these boxes.

Interesting. I defer to you on this. As I said, I don't have this box, though I have read a great many reports of dissatisfaction with this box and FreeNAS 8, specifically complaints of regularly spaced, persistent transfer stalls.

Did you tune your systems to fix the transfer stalls, or did they just never occur in your builds? Perhaps your use case is not terribly speed-dependent and they never bothered you?
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
I have an N40L with 8GB RAM, running FreeNAS 8 with 6x2TB drives... outlined here --
http://forums.freenas.org/showthrea...amarks-and-Cache&p=24532&viewfull=1#post24532

I've tested FreeNAS, FreeBSD, PC-BSD and OpenSolaris/OpenIndiana, and I have to concur with Trianian about the transfer stalls... the peak transfer rate can reach around 90MB/s -- however, from what I can see, the CPU tops out at 100% on a single core during large transfers and the network speed stalls to 0 for 0.5-1.0 seconds every few seconds -- resulting in a mean transfer rate of around 45-50MB/s.

I too have several of these boxes, and am actively using them as NAS servers and WHS servers... I haven't been able to get them (using ZFS) to sustain a consistent transfer rate.
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
I have an N40L with 8GB RAM, running FreeNAS 8 with 6x2TB drives... outlined here --
http://forums.freenas.org/showthrea...amarks-and-Cache&p=24532&viewfull=1#post24532

I've tested FreeNAS, FreeBSD, PC-BSD and OpenSolaris/OpenIndiana, and I have to concur with Trianian about the transfer stalls... the peak transfer rate can reach around 90MB/s -- however, from what I can see, the CPU tops out at 100% on a single core during large transfers and the network speed stalls to 0 for 0.5-1.0 seconds every few seconds -- resulting in a mean transfer rate of around 45-50MB/s.

I too have several of these boxes, and am actively using them as NAS servers and WHS servers... I haven't been able to get them (using ZFS) to sustain a consistent transfer rate.

Very interesting insight. From reading the thread you've linked to, you believe the lack of 4K formatting is part of the problem?

I recall a "Force 4K" checkbox when I set up my system. I take it you're referring to something else entirely?
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Very interesting insight. From reading the thread you've linked to, you believe the lack of 4K formatting is part of the problem?

I recall a "Force 4K" checkbox when I set up my system. I take it you're referring to something else entirely?

Certainly, 4K formatting had a major performance effect, because 2 of the drives in the vdev were WD EARS drives (which have known problems)... Unfortunately, I couldn't test OpenIndiana with 4K formatting, since the current version (151a) doesn't have a valid patched zpool command to deal with the issue.

However, the recurring problem (which forced me to test all the different configurations) was that I simply could not get a consistent transfer speed, no matter which combination of protocol (CIFS, NFS, iSCSI), network adapter, block formatting, or operating system I chose.

The only common variable between all of those that I saw was that the CPU was topping out during heavy transfers.

ZFS RAIDZ2 must certainly be a contributor to that... testing with a ZFS stripe resulted in much faster network transfer rates, but was still 'peaky' and inconsistent -- resulting in the same 'two seconds of shit-fast, followed by a 1/2 second of nothing'. I also tested with jumbo frames (not documented there) with a slight increase in speed, but the same 'peaky' results.

Unfortunately, I don't have any spare hardware to test the same disks, controller, and NIC with a larger CPU.
 

peterh

Patron
Joined
Oct 19, 2011
Messages
315
Interesting. I defer to you on this. As I said, I don't have this box, though I have read a great many reports of dissatisfaction with this box and FreeNAS 8, specifically complaints of regularly spaced, persistent transfer stalls.

Did you tune your systems to fix the transfer stalls, or did they just never occur in your builds? Perhaps your use case is not terribly speed-dependent and they never bothered you?
These are used as "home storage" with a mix of protocols (mostly AFP, where Time Machine is one of the apps).
Bear in mind that these are cheap, low-end systems, and as such I find NFS speeds of 30-40MB/s acceptable; AFP will peak at 25MB/s and settle at 10-15MB/s.
I have not done any tweaking as I find the performance OK (it's faster than a USB disk, it's substantially larger, and it may be shared among several computers).
I think that the largest obstacle to better speed is the four-disk limitation: as a RAIDZ group it will be only one vdev, and as such limited to roughly the speed of a single disk.
If I were to opt for speed I would use at least 3 vdevs (9-12 disks to start with), possibly more, and much more memory.
I have been impressed by a Thumper with 48 disks, but stability issues made that system use Sun's OS instead.
 

topping

Dabbler
Joined
Mar 29, 2012
Messages
14
I just built one of these last weekend, but did so with the base box, a 2GB memory stick, 2x 2TB Hitachi disks, and 8GB of really cheap RAM. It's running as a mirror for now; when I need more space I'll degrade the mirror, build a RAIDZ pool on the old mirror disk plus two additional 2TB disks, move the data, then add the original disk to the pool and let it resilver. That will take my 1.8TB usable to 5.4TB usable (or more, depending on compression).

I've noticed some of these hiccups every so often on sustained writes, but that was before I had 8GB of RAM. I'm not sure whether it still does that, and I don't really care since it's just a home server. Since I had the disks and the memory stick sitting around, I feel like the $300 I spent on this project was a spectacular value, and I tell any geek friend who will listen about the box. It's way cheaper than a Drobo, uses a non-proprietary filesystem, and is very flexible and robust.

If one were using this as a workgroup server, it would still probably do fine for up to 10-20 users with light workgroup filer needs.
 

b1ghen

Contributor
Joined
Oct 19, 2011
Messages
113
I just built one of these last weekend, but did so with the base box, a 2GB memory stick, 2x 2TB Hitachi disks, and 8GB of really cheap RAM. It's running as a mirror for now; when I need more space I'll degrade the mirror, build a RAIDZ pool on the old mirror disk plus two additional 2TB disks, move the data, then add the original disk to the pool and let it resilver. That will take my 1.8TB usable to 5.4TB usable (or more, depending on compression).

If I understand what you want to do correctly, you can't add a single drive to an existing RAIDZ vdev; you need to add another vdev to your pool. So growing a RAIDZ one disk at a time is a no-no with ZFS.

Edit:

Well, technically you could add a single drive to your pool as its own vdev, but if that drive dies your whole pool is screwed.
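
To illustrate the difference, something like this (pool and device names here are purely hypothetical examples):

Code:
# Grow the pool by adding a whole new vdev -- here a two-disk mirror.
# The pool keeps its redundancy and gains one disk's worth of capacity.
zpool add tank mirror ada2 ada3

# Adding a bare single disk also "works", but it becomes a vdev with no
# redundancy -- if ada4 dies, the entire pool is lost.
zpool add tank ada4

# Check the resulting layout
zpool status tank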
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
The only common variable between all of those that I saw was that the CPU was topping out during heavy transfers.

ZFS RAIDZ2 must certainly be a contributor to that... testing with a ZFS stripe resulted in much faster network transfer rates, but was still 'peaky' and inconsistent -- resulting in the same 'two seconds of shit-fast, followed by a 1/2 second of nothing'. I also tested with jumbo frames (not documented there) with a slight increase in speed, but the same 'peaky' results.

Unfortunately, I don't have any spare hardware to test the same disks, controller, and NIC with a larger CPU.

So no idea as to what exactly is causing the spiking and stalling?

I just picked up 3 of these today at a crazy bargain, less than $400 for the trio.

I'll probably set them up as ZFS mirrors until the RAIDZ-1 hesitation issue is solved. Given that FreeNAS 7 RAIDZ-1 reportedly works well on this same hardware, I have to suspect it is a solvable issue.

I'll do what I can to help nail it down. The sooner the issue can be identified, the sooner a bug report can be filed and it can be fixed.
 

topping

Dabbler
Joined
Mar 29, 2012
Messages
14
If I understand what you want to do correctly, you can't add a single drive to an existing RAIDZ vdev; you need to add another vdev to your pool. So growing a RAIDZ one disk at a time is a no-no with ZFS.

Edit:

Well, technically you could add a single drive to your pool as its own vdev, but if that drive dies your whole pool is screwed.

By breaking the mirror, I have three out of four drive slots available. There is supposed to be a way to build a degraded RAIDZ pool, so I would do that as a new vdev in the remaining three slots, then copy everything over. With everything moved to the new vdev, I would destroy the old pool (which was once part of the mirror), then "replace" the drive that never existed in the first place. After resilvering, everything should be good.
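
Roughly, the sparse-file trick would look something like this (device names and sizes are hypothetical; the placeholder just has to be at least as large as the real disks):

Code:
# Create a sparse file to stand in for the fourth, not-yet-available disk
truncate -s 2T /tmp/fake-disk

# Build the RAIDZ vdev from the three real disks plus the placeholder
zpool create newpool raidz ada1 ada2 ada3 /tmp/fake-disk

# Immediately offline the placeholder so nothing gets written to it;
# the pool now runs degraded but is usable for copying the data over
zpool offline newpool /tmp/fake-disk

# ...after the old pool is destroyed, swap the freed disk in for the
# placeholder and let it resilver
zpool replace newpool /tmp/fake-disk ada0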
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
So no idea as to what exactly is causing the spiking and stalling?

I'm 90% positive that it's an issue with the CPU being underpowered...

There appear to be a couple of threads popping up across this forum now with similar "on-off" or inconsistent transfer rates -- the common factor appears to be low-end CPUs...

http://forums.freenas.org/showthread.php?6666-Strange-problem-with-ZFS
http://forums.freenas.org/showthread.php?3117-slow-network-throughput-on-HP-microserver


I'm going to try playing around with making a striped mirror instead of RAIDZ2, and turning off some of the more CPU-intensive operations (like checksumming)...
I've got a spare SSD sitting around; I might try moving the log (ZIL) there too -- see if that does anything.
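
For reference, both experiments are single commands (the pool and device names below are just placeholders):

Code:
# Turn off checksumming on the pool to see whether it is eating CPU
# (not something to leave off permanently -- you lose ZFS's ability
# to detect silent corruption)
zfs set checksum=off tank

# Add the spare SSD as a dedicated log (ZIL) device
zpool add tank log ada6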
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
I'm 90% positive that it's an issue with the CPU being underpowered...

I'm going to try playing around with making a striped mirror instead of RAIDZ2, and turning off some of the more CPU-intensive operations (like checksumming)...
I've got a spare SSD sitting around; I might try moving the log (ZIL) there too -- see if that does anything.

I've read that checksumming has a very low processor overhead, at least for the default level of checksumming.
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Further testing

OK, some last testing before I go to bed...

The hardware is as follows
-------------------------

FreeNAS 8.2-Beta-2
HP Microserver N40L
8GB RAM
Intel Pro/1000 NIC
6 x 2TB Drives (1 x WD-EARS, 2 x Hitachi, 3 x Seagate)

WHS 2011
HP Microserver N40L
4GB RAM
Rosewill 10/100/1000 NIC
1 x 250GB Drive (stock)

Integrated NIC on both Microservers is connected to the house network
PCIe NIC on both Microservers is DIRECTLY connected with a brand new crossover cable 1.5ft long.
Both iSCSI Initiator and Target are configured to ONLY direct traffic through the dedicated NICs


Code:
Test 1
ZFS, all drives formatted with 4096-byte sectors (gnop)
ZFS striped mirror (3 x (1+1) x 2TB)
-----------------------------------------------------------------
dd if=/dev/zero of=test.dat count=50k bs=2048k
107374182400 bytes transferred in 499.747187 secs (214857002 bytes/sec) -- 204.90MB/s

dd if=test.dat of=/dev/null count=50k bs=2048k
107374182400 bytes transferred in 232.819067 secs (461191533 bytes/sec) -- 439.82MB/s

iSCSI         -- UP (35MB/s) peaky
iSCSI (Jumbo) -- UP (63MB/s) peaky

[Attachment: iSCSI_Jumbo_ZFS_StripedMirror.png]



Code:
Test 2 --

ZFS, all drives formatted with 4096-byte sectors (gnop)
ZFS RAIDZ2, 6 x 2TB (gnop)
Checksum=OFF
-----------------------------------------------------------------
dd if=/dev/zero of=test.dat count=50k bs=2048k
107374182400 bytes transferred in 579.061135 secs (185428059 bytes/sec) -- 176.83MB/s

dd if=test.dat of=/dev/null bs=2048k count=50k
107374182400 bytes transferred in 319.719823 secs (335838364 bytes/sec) -- 320.28MB/s

iSCSI (Jumbo) -- UP (64MB/s) peaky

[Attachment: iSCSI_Jumbo_ZFS_RAIDZ2.png]



And finally

Code:
Test 3 --

UFS Formatted
6 x 2TB stripe set
-----------------------------------------------------------------
dd if=/dev/zero of=test.dat bs=2048k count=50k
107374182400 bytes transferred in 277.500740 secs (386932959 bytes/sec) -- 369.00MB/s

dd if=test.dat of=/dev/null bs=2048k count=50k
107374182400 bytes transferred in 360.924159 secs (297497908 bytes/sec) -- 283.71MB/s

iSCSI (Jumbo) -- UP (87MB/s) (no peaking)  (file extent -- couldn't work out how to do a device extent with UFS)

[Attachment: iSCSI_Jumbo_UFS_Stripe.png]



Things to take away from this...


Enabling jumbo frames (MTU 9000) nearly doubled the iSCSI throughput in the first test, so I left it on for Tests 2 and 3.
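
(On FreeNAS this can be set in the interface options; from the shell it would be something like the following, where the interface name em0 is just an example for the dedicated NIC.)

Code:
# Enable jumbo frames on the dedicated iSCSI interface; the NIC on the
# other end (and any switch in between) must be set to MTU 9000 as well
ifconfig em0 mtu 9000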

Disabling ZFS checksumming had no discernible effect.

Somewhere along the pipeline, ZFS is causing these network stalls. UFS just blasts through 8GB of data without a stutter, whereas ZFS appears to stall network activity while it does something (cache flushing or something?).
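
If the stalls really are ZFS flushing its transaction groups (just a hunch on my part), it should show up by watching pool I/O alongside the network graph during a transfer, e.g. (pool name is hypothetical):

Code:
# Print per-second I/O statistics for the pool; bursts of writes that
# line up with the network stalls would point at txg flushing
zpool iostat -v tank 1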

No additional tuning was done, except for what was generated by the Autotune option in this version of FreeNAS.
 