New Build HP ProLiant N40L Ultra Micro Tower Server


Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Unfortunately, I was unable to confirm gswa's results with a 3-drive RAID-Z1 array...

Code:
ZFS, all drives aligned to 4096-byte sectors (gnop)
ZFS RAIDZ1, 3x2TB (gnop)
-----------------------------------------------------------------
dd if=/dev/zero of=test.dat count=50k bs=2048k
107374182400 bytes transferred in 666.079431 secs (161203270 bytes/sec) -- 153.73MB/s

dd of=/dev/null if=test.dat count=50k bs=2048k
107374182400 bytes transferred in 529.579634 secs (202753610 bytes/sec) -- 193.36MB/s

iSCSI (Jumbo) -- UP (Avg 26MB/s, Pk 90MB/s) -- Peaky

[Attached graph: ZFS_iSCSI_3Drives_RAIDZ1.png]
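For reference, the "(gnop)" above refers to the usual 4K-alignment trick done before creating the pool -- roughly the following, with device and pool names as examples only:

Code:
# create fake 4096-byte-sector providers on top of each disk
gnop create -S 4096 ada0
gnop create -S 4096 ada1
gnop create -S 4096 ada2
# build the pool on the .nop devices so ZFS picks ashift=12
zpool create tank raidz1 ada0.nop ada1.nop ada2.nop
# export, drop the nop layer, and re-import on the raw disks
zpool export tank
gnop destroy ada0.nop ada1.nop ada2.nop
zpool import tank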
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
There are some threads on that microserver here and elsewhere that you should search for.

I researched it some time ago and I don't have the threads bookmarked, but from my recollection, FreeNAS will run on the N40L but is very processor-limited -- meaning it will peg the processor under heavy loads when running RAIDZ and stall transfers. I seem to recall that FreeNAS mirrors ran without that sort of processor-induced slowdown.

I have the N36L, which is a slightly older model, and I can say without a doubt that it handles 4-disk ZFS RAID "5" (RAIDZ1) without issue... I've not tried ZFS RAID "6" (RAIDZ2)... but it will saturate and sustain a gigabit NIC without pegging the processor on large files. With smaller files it does slow down some, but not much. Generally the bottleneck has been my desktop (which has a hell of a lot better specs, except for not having RAID)...
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Well, just for shits and giggles -- I decided to try something a bit different... I made the 60GB SSD bootable and tried a couple of other OSes:

Code:
Windows XP
6 x 2TB formatted as an NTFS stripe
-------------------------------
Avg SMB throughput -- 48MB/s -- Consistent and smooth


Code:
Windows Server 8
5 x 2TB in an NTFS-formatted virtual disk with 2TB of parity through the new Windows Storage Spaces system
-------------------------------------------------------------------------------------------------
Avg SMB throughput -- 44MB/s -- Smooth and consistent


Code:
Windows Home Server 2011
(took a bit of fiddling to get it to install to a 60GB drive)
6 x 2TB drives formatted NTFS, snapshot-RAIDed with 1 drive as a PPU through FlexRAID
--------------------------------------------------------------------------------------------
Avg SMB throughput -- 43MB/s -- Smooth and consistent




I think for the next test I'm going to swap out all of those 2TB drives -- I have 4 or 5 older identical 750GB drives from an old ReadyNAS kicking around somewhere -- I want to see if the mix of 5400 RPM, 7200 RPM, and 4K-sector drives is causing the stalling problem.
 

DWZ

Cadet
Joined
Apr 6, 2012
Messages
3
Hexland: Presumably you're using the Intel NIC rather than the onboard one?

(Sorry, I'm just stalking your progress. I'm thinking of buying a Microserver but haven't pulled the trigger yet, mostly because of this network issue.)
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Yes, I'm using the Intel NIC. I'm not convinced it's a network issue though, because I get pretty good throughput from FreeNAS with a UFS stripe set.

I noticed a thread earlier today about CIFS throughput when using a registered copy handler -- I'm using TeraCopy instead of the native Windows copy to transfer files, and measuring the iSCSI throughput of the read/write operation. If others are experiencing buffering issues with 3rd-party copy handlers, then this might also be part of my issue.

There appear to be plenty of other people with N40L and the older/slower N36L-based Microservers who aren't seeing this issue -- so I suspect it's something funny with my combination of hardware/software/cables... I just need to narrow down which components are causing problems.

I thought perhaps it might be an underpowered CPU -- but there are threads in the hardware section where people are using dual-core Atom based systems (which appear to have about 1/2 - 2/3 the power of the N40L) and are getting > 100MB/s to a RAIDZ2.

Next steps are:
1 - Test with 4 identical 750GB drives
2 - I take delivery of an M1015 8-port SAS controller at the end of the week... going to see if it makes a difference
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
So, having tested with 4 x 750GB Barracuda 7200.10 drives... I can report: no difference :(

One of the drives had some bad sectors, so there was a bit of log spam -- but that shouldn't have affected things too much.

The overall effect was the same though... 2-3 seconds of 80-100MB/s write speeds, followed by about 2 seconds of nothing.

What was interesting was that with these older drives you can hear them seeking during big writes... so it was interesting watching the dips in the network monitor line up with the clicking/grinding noise from the drives (and the solid 2 seconds of the HDD light staying ON)... the HDD light would only pulse during the 2-3 seconds of optimal writing.
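(As a side note, the same bursts can be watched from the FreeNAS side instead of the HDD light -- pool name below is just an example:)

Code:
# per-vdev throughput, refreshed every second
zpool iostat -v tank 1
# GEOM-level view of each physical disk (ops/s, kBps, %busy)
gstat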

So it doesn't appear to be a problem related to the drives -- spin speeds, capacities, mix of manufacturers, or mix of 512-byte/4K sector alignments.

Next thing to test, I suppose, is restoring the Microserver back to the stock HP BIOS (it's currently running the Russian hacked one, in order to enable AHCI on ports 5/6).

After that, my M1015 SAS controller should arrive -- and then it's all rinse-and-repeat.
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
So, having tested with 4 x 750GB Barracuda 7200.10 drives... I can report: no difference :(

After that, my M1015 SAS controller should arrive -- and then it's all rinse-and-repeat.

Thanks for the update.

I'm guessing your Intel NIC is PCIe x1? The SAS controller will need the x16 slot.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
I've got a dual-core Atom D510 with 8GB of RAM, which I just bumped up a couple of days ago. Up until this point I hadn't paid much attention to performance, but I've decided to see what I can push it to. I did have some settings tweaked for loader.conf AND auxiliary parameters set for CIFS.

Last night I upgraded to 8.0.4 to play with the new Python utils to look at memory etc. I started out with stock settings and my performance SUCKED. I was getting the much-complained-about slow directory loading etc., and about 40MB/s writes and 20-25MB/s reads with CIFS.

I've slowly been restoring some of my tweaks as well as changing others. Right now I just got a decent jump with these auxiliary CIFS parameters along with the following loader/sysctl settings on 8.0.4 x64. I'm still getting about 40MB/s writes with an 8GB file, but I'm getting 80MB/s reads with a 20GB file, which is 4x faster. (Oh, and no delay opening folders.)

CIFS Auxiliary Parameters: (a couple of these are commented out for testing)
Code:
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=98304 SO_SNDBUF=98304
# 65536 seems to work better for above values
read raw = yes
write raw = yes
max xmit = 65536
getwd cache = yes


EDIT: The following settings were on version 8.0.3-p1

Loaders/Sysctls (I just put them all in the Loaders section of the GUI)
Code:
	
vfs.zfs.prefetch_disable            0
vfs.zfs.zil_disable                 0
vfs.zfs.txg.timeout                 5
vfs.zfs.txg.synctime                1
vfs.zfs.vdev.max_pending            35
vfs.zfs.vdev.min_pending            4
vfs.zfs.txg.write_limit_override    1073741824
vfs.nfsrv.async                     0
vfs.zfs.vdev.cache.size             8M
kern.maxvnodes                      250000
net.inet.tcp.sendbuf_max            16777216
net.inet.tcp.recvbuf_max            16777216

#THESE ARE FROM MY loader.conf

kern.coredump=0
kern.dirdelay=4
kern.filedelay=5
kern.ipc.maxsockbuf=16777216
kern.ipc.nmbclusters=32768
kern.ipc.shmall=32768
kern.ipc.shmmax=67108864
kern.ipc.somaxconn=8192
kern.maxfiles=65536
kern.maxfilesperproc=32768
kern.metadelay=3
#net.inet.tcp.cc.default.algorithm=htcp
net.inet.tcp.delayed_ack=0
net.inet.tcp.inflight.enable=0
net.inet.tcp.path_mtu_discovery=0
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.recvspace=131072
net.inet.tcp.rfc1323=1
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.sendspace=65536
net.inet.udp.maxdgram=57344
net.inet.udp.recvspace=65536
net.local.stream.recvspace=65536
net.local.stream.sendspace=65536


I need to mention I'm still tweaking stuff, and while running "top", my available memory does drop quite low, ~230MB. The last group of tweaks is from my actual loader.conf and hasn't been entered into the GUI yet. Also, I don't remember what most of these do, as I found a post about some of them over 1.5 years ago when I installed 8.0.

EDIT: I think I borked something up; now my reads are back around 40MB/s and I'm not sure what changed.
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Thanks... I'll look into playing with some of these tuning values. I think I may have already put some of them in; there are a few that definitely look familiar.

I'm not using CIFS right now -- I'm testing the read/write speeds from a Windows Home Server 2011 machine which has the ZFS volume mounted through iSCSI.
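(For reference, the iSCSI extent in this kind of setup is just a chunk of the pool -- one way to back it is a zvol along these lines; pool/zvol names and size are examples only:)

Code:
# create a zvol on the pool to use as the iSCSI device extent
zfs create -V 2T tank/iscsi0
# it then appears as /dev/zvol/tank/iscsi0 and gets attached to an
# istgt target through the FreeNAS GUI (Services > iSCSI)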
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
So much for the short-lived excitement; after upgrading to 8.0.4-p1 things hosed up again. If I find the magic recipe I'll post back.
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
So, a little pottering about on Sunday morning while I wait for my M1015 SAS controller to arrive... It strikes me that the one thing that's different about my Microserver config is that I'm using an older hacked Russian BIOS from back in October when I first bought the unit.

So I googled a bit, and came across this link over at homeservershow.com and bios-mods.com

http://homeservershow.com/hp-microserver-n40l-build-and-bios-modification.html

I downloaded a newer modified version of the AMI BIOS for the N40L and changed the settings as recommended...

On the 'dd' read/write test -- write speeds were a little faster while read speeds remained the same

Code:
ZFS, all drives aligned to 4096-byte sectors (gnop)
ZFS RAIDZ2, 6x2TB (gnop) -- new BIOS with SB settings
----------------------------------------------------------------------------------
dd if=/dev/zero of=test.dat bs=2048k count=50k
107374182400 bytes transferred in 628.539118 secs (170831344 bytes/sec) -- 162.92MB/s

dd if=test.dat of=/dev/null bs=2048k count=50k
107374182400 bytes transferred in 316.648436 secs (339095887 bytes/sec) -- 323.39MB/s



However, on the iSCSI / network speed test (reading/writing from Windows Home Server 2011 via a dedicated Intel NIC connected with a crossover cable and MTU 9000 set):

[Attached graph: iSCSI_Jumbo_ZFS_RAIDZ2_NewBios.png]


Network read/write performance still showed the same ON/OFF/ON/OFF stalling as before, but transfer speeds were MUCH improved, and the overall average evened out to something a bit more acceptable (~75MB/s).
I'm still not happy with the fact that it writes, stalls, writes again, etc... but I'll keep playing.
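(For completeness, the jumbo-frame part of this is just MTU 9000 on the dedicated Intel interface at both ends of the crossover cable -- e.g. on the FreeNAS side, with the interface name as an example:)

Code:
# set MTU 9000 on the dedicated Intel NIC
# (the same can normally be set via the interface's Options field in the GUI)
ifconfig em1 mtu 9000
# the Windows end needs the matching jumbo-frame setting on its adapter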
 

DWZ

Cadet
Joined
Apr 6, 2012
Messages
3
Out of interest, had you done any tests with an official HP, non-hacked BIOS?
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Not yet. I have 6 x 2TB drives in there, hooked up to the main board. I wouldn't be able to use the official BIOS because of the speed restrictions on 2 of the SATA channels.

However, my IBM M1015 SAS controller should arrive any day now -- I was planning on moving all 6 drives over to the M1015 and not using the onboard controller at all -- I'll just have to see if that has any effect.

In the meantime, if I have any time to spare, I think I'm going to try installing Ubuntu and see what kind of performance I get from the native ZFS on Linux project... just for laughs, I guess.

Failing all of this, I might just go back to Windows Home Server and try FlexRAID with 2 PPUs for parity (and just get it to do the parity update once or twice a day... for the most part my data is pretty much static -- it's mostly pictures, music and project archives).
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
Same here.

I'm having very similar problems with my Microserver N40L.

Writing large files to the volume causes lots of stalls. I get 3-5s peaks of ~60Mbps and 5-10s troughs of 0 if I copy to a CIFS share, and similar 3-5s peaks of ~130Mbps followed by 5-10s troughs if I copy to an NFS share.

My server has two drives: the 250GB it shipped with and an additional 70GB that I found kicking about. I upgraded the memory to 8GB. I have not changed the BIOS.

I started off running under VMware ESXi 5, as I wanted to use this box for other things as well as a NAS, but I dropped back to a native install of FreeNAS on a little 2GB USB stick installed internally.

When running under ESXi I had the same issue, except the peaks were only ~30Mbps. I only tested with CIFS under ESXi.

Looking forward to a solution!

NB - has anybody seen similar problems with previous versions of FreeNAS? I'm using 8.0.4-multimedia-amd64.
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
It seems that throughput is directly related to the nature of the disks. I created a single-disk ZFS volume and was able to complete a ~13GB copy at an overall throughput of ~70Mbps to an NFS share.

That's ten times as fast as my previous NAS (a Drobo and DroboShare). It was still bursty, but this time I'd see 5-8s peaks of ~120Mbps with troughs of 1-10s.

I'm getting tired of this now - I need my NAS up and running today really. It's just a home project. I've gotta go back to work tomorrow :)
Is there any ZFS tuning I should be doing? Or is FreeNAS handling that for me?
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
I haven't found any way around it... it would appear to be related to ZFS, though. If I create a UFS stripe set instead of a ZFS stripe set, I can get an almost constant 100MB/s transfer rate with no dropouts.
Which sucks balls, because the whole reason for choosing FreeNAS was that I wanted ZFS RAIDZ2 for redundancy and resiliency. Otherwise I would have stuck with Windows.
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
I would have stuck with Windows.

Agreed. I'm thinking about trying a Linux distro instead - I don't have access to any Windows Server releases.

Although, I am seeing pretty good real-world performance with just a single-disk ZFS volume. I've got a copy job and two movies running simultaneously to and from three boxes plus the FreeNAS.
The Drobo would have exploded by now.

Oh - as I was typing that both of the movies stalled :(
Has anybody had better luck with the older FreeNAS releases?
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
A few of us are having the same issues with our Atom systems. I've been playing around with tuning like a madman and haven't found any silver bullet. Someone else said they think something changed (Samba) between 8.0.2 and 8.0.4, and that they were getting 90+MB/s before that. I'm certain there is a solution, I just haven't found it yet.

Before throwing in the towel and leaving FreeNAS, you could always create a pool of 3 sets of mirrors, those should give the best performance.
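Something along these lines for the 6 x 2TB drives (device and pool names are examples only):

Code:
# three mirrored pairs striped together -- roughly 6TB usable from 6 x 2TB,
# generally the best-performing layout per the suggestion above
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5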

I was watching a Sun/Oracle video given by one of the ZFS developers and he was talking about "ZFS breathing", which is what we seem to be experiencing. He said it was "completely normal" and supposed to happen, but if that's the case it would seem that there is some buffer that could be tuned to even things out. It's suggested to reduce vfs.zfs.txg.timeout from 30 to 5, but this hasn't helped me at all.

My transfers start out at 120-150MB/s, quickly drop to 90, and then slowly fall off to 30-40MB/s. I've been able to get steady 50/50 MB/s reads/writes, but I know it can be better. I haven't reached my saturation point for messing with this yet, but it is pretty discouraging. If I come up with anything I'll let people know, but in the meantime, simple observations like single disks working better are meaningful too. There's a clue somewhere.
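(For anyone who wants to try that txg change themselves, it's the same tunable shown in my list above -- as a loader entry it looks like this:)

Code:
# /boot/loader.conf (or a Tunable entry in the GUI) -- flush transaction groups
# every 5 seconds instead of the default 30; takes effect after a reboot
vfs.zfs.txg.timeout="5"
# the current value can be checked with: sysctl vfs.zfs.txg.timeout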
 