HPE ProLiant DL380 performance questions

eRJe

Cadet
Joined
Oct 3, 2019
Messages
7
Dear community,

For a few weeks now I have been running all my VMs on an HPE ProLiant DL380 Gen9 machine with ESXi 6.7 U2 as the hypervisor. On my previous server I had built a Linux-based file server with 4x 3TB HDDs, configured in RAID5 with mdadm. This time I decided to give FreeNAS a shot, since the ZFS filesystem should (or can) be superior in terms of data safety. I've been testing for a few weeks now, some questions have come up, and I hope you will be able to help me with them.

HPE ProLiant DL380 Gen9
2x Intel(R) Xeon(R) CPU E5-2630L v3 @ 1.80GHz
64GB DDR4 ECC
Smart Array P440ar Controller
Supermicro AOC-USAS2-L8i 8-port SAS2 HBA
HP Ethernet 1Gb 4-port 331i Adapter (Broadcom)

The P440ar HW RAID controller hosts 4x 500GB Samsung 840 EVO SSDs in RAID10, which holds all the VMs
The Supermicro HBA hosts 4x 12TB WD Gold drives in RAIDZ1
ESXi boots from an internal USB key

FreeNAS VM setup
FreeNAS-11.2-U6
4x vCPU
16GB memory
Hard disk 1: 25GB
Network adapter: VMXNET3
Supermicro HBA in PCI passthrough
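
As a sanity check, the HBA and disks can be verified from the FreeNAS shell; something along these lines should work (the USAS2-L8i is an LSI SAS2008-based card, so it should attach to the mps driver):
Code:
# the HBA should appear as a native PCI device attached to mps(4)
pciconf -lv | grep -A4 mps
# the four WD Gold drives should show up as direct-attached da devices
camcontrol devlist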

Code:
root@Freenas[/mnt/RAIDz-4HDD]# zpool status
  pool: RAIDz-4HDD
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        RAIDz-4HDD                                      ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/e74c9924-df18-11e9-9fef-00505684703c  ONLINE       0     0     0
            gptid/e8114859-df18-11e9-9fef-00505684703c  ONLINE       0     0     0
            gptid/e8d4c816-df18-11e9-9fef-00505684703c  ONLINE       0     0     0
            gptid/e999a3d1-df18-11e9-9fef-00505684703c  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:06 with 0 errors on Wed Oct  2 03:45:06 2019
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2     ONLINE       0     0     0


After building the RAIDZ storage in the FreeNAS web interface, I created SMB (Samba) shares and connected to them from several VMs running on the same host. The first speed comparison I did was copying a 14GB file to and from the RAIDz storage between Windows 10, Ubuntu 19 and FreeNAS 11.2.
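
For completeness, this is roughly how the share gets mounted on the Ubuntu side (server, share and user names here are just placeholders; on Windows I simply mapped a network drive in Explorer):
Code:
sudo apt install cifs-utils
sudo mount -t cifs //freenas/RAIDz-4HDD /mnt/share -o username=robbert,vers=3.0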

Write to the FreeNAS: about 130MiB/s.
Read from the FreeNAS: after a 180MiB/s burst it dropped to 30MiB/s or less, averaging 20-50MiB/s, although Ubuntu was more stable and faster at 80MiB/s.

iperf3 tests
FreeNAS to Slackware: 19.6Gb/s
Slackware to FreeNAS: 6.65Gb/s
FreeNAS to Win10: 2.43Gb/s
Win10 to FreeNAS: 1.64Gb/s
Win10 to Slackware: 3.65Gb/s
Slackware to Win10: 2.44Gb/s
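
All iperf3 numbers above come from a plain, untuned invocation: one side runs as server, the other as client. Something like this (the IP address is just an example):
Code:
# on the receiving machine
iperf3 -s
# on the sending machine
iperf3 -c 192.168.1.10 -t 30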

I also ran dd tests
Code:
# write test
dd if=/dev/zero of=/mnt/RAIDz-4HDD/testfile bs=4M count=10000
# read test
dd if=/mnt/RAIDz-4HDD/testfile of=/dev/null bs=4M


Unfortunately I lost the notes of the results, but they were roughly in line with 220MiB/s write and 60MiB/s read.
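
In hindsight, /dev/zero is a questionable payload for this: with the default LZ4 compression the zeros hardly touch the disks at all. A less misleading sketch would stage incompressible data first, along these lines (paths are examples, and /dev/urandom itself can be the bottleneck while generating the file):
Code:
# stage ~10GiB of incompressible data somewhere with enough room
dd if=/dev/urandom of=/tmp/randfile bs=4M count=2500
# write test
dd if=/tmp/randfile of=/mnt/RAIDz-4HDD/randfile bs=4M
# read test
dd if=/mnt/RAIDz-4HDD/randfile of=/dev/null bs=4M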

After reading some posts about performance, I disabled compression. This made no measurable difference. I then changed the network adapter of the Win10 VM from VMXNET3 to E1000. iperf3 improved significantly in one direction:

FreeNAS to Win10: 2.54Gb/s
Win10 to FreeNAS: 4.32Gb/s

Going back to VMXNET3 reduced the speed again.
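
For reference, the compression toggle I mentioned above corresponds to these commands on the CLI (I did it through the web UI; the dataset name is mine):
Code:
zfs set compression=off RAIDz-4HDD
# and back to the default afterwards
zfs set compression=lz4 RAIDz-4HDD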

Sorry for dragging this out, but now it gets interesting. Due to lack of time I didn't test for a few days, and only today did I continue, in order to write this post.

iperf3
FreeNAS to Slackware: 15.3Gb/s
Slackware to FreeNAS: 20.5Gb/s
FreeNAS to Win10: 2.50Gb/s
Win10 to FreeNAS: 4.20Gb/s
Win10 to Slackware: 3.71Gb/s
Slackware to Win10: 2.12Gb/s
Ubuntu to Slackware: 22.2Gb/s
Slackware to Ubuntu: 20.5Gb/s

dd test on the FreeNAS RAIDZ pool
Code:
# write test
dd if=/dev/zero of=/mnt/RAIDz-4HDD/testfile bs=4M count=10000
# read test
dd if=/mnt/RAIDz-4HDD/testfile of=/dev/null bs=4M


write: 248MiB/s
read: 375MiB/s

Is this even possible? The RAIDZ pool should only perform as fast as a single (the slowest) disk in the array, right? That would be 255MiB/s. And how is it possible that, after several consistent speed tests, the server suddenly performs much faster? I did not change anything in the configuration.

File copy using the same file from the dd test above:
From an Ubuntu shell
RAIDz to local, 39GB file: fails. The file name is written, 0 bytes copied.
Local to RAIDz, 39GB file: 253MiB/s

From Win10 Explorer (VM)
RAIDz to local, 39GB file: 217MiB/s
Local to RAIDz, 39GB file: 341MiB/s

I've run most tests several times today and all results are consistent. Did I do something wrong previously, could something have changed (improved) on the zpool, or is my server on steroids?

And why can't I copy a 39GB file from the RAIDz to another disk, locally or via the network? I did this successfully a few days ago.

Initially, when I was copying test files, I used a 14GB mkv. Would this have made a difference in terms of file content (compressibility) and memory buffers?

The network speed between Windows and any (L)UNIX machine is much slower than between (L)UNIX machines. Any thoughts about this? I guess I need to start looking at VMware for this.
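
One thing still on my list is to test whether this is a single-stream limit; iperf3 can run parallel streams (four is an arbitrary pick, and the IP is again an example):
Code:
iperf3 -c 192.168.1.10 -P 4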

Other than hosting some movies and music on the RAIDz, I am planning to host multiple users' data and a large photo catalog on it. The photos (~40MiB each) are edited in Lightroom on a VM. With the latest test results I think this will be just fine, even without an L2ARC? Alternatively, I am considering a second zpool with 2x or 4x 4TB HDDs in a (striped) mirror, in case the single RAIDZ doesn't perform well with multiple users accessing it. Thoughts are welcome!
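
For that alternative pool, the layout I have in mind would look roughly like this on the CLI (device names are hypothetical; in practice I would build it through the FreeNAS UI):
Code:
# two mirrored pairs striped together (RAID10-style)
zpool create tank2 mirror da1 da2 mirror da3 da4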

Can someone recommend a replacement for the Supermicro HBA that is supported by HPE?

By the way, all critical data is synced to a second NAS locally and to a cloud backup service.

Thanks for any help!

Robbert
 

eRJe

Cadet
Joined
Oct 3, 2019
Messages
7
bump
 

Pitfrr

Wizard
Joined
Feb 10, 2014
Messages
1,531
About the HBA: I'm using a similar setup (HP DL380 G6) and I use the integrated controller for the VMware datastore.
Then I added an LSI 9211 (actually a Dell H200 reflashed, if I remember correctly) and use it with 6 SATA drives; it works great.

Although, since I have a home setup, I didn't look into performance much; for a single user (or almost) it's more than enough (I have a 1Gb/s network and I get transfer rates in the 100MB/s range, so that's fine).
I didn't do any tests from VMs to FreeNAS, only from a Windows client (on the network) to FreeNAS.
But I did tests from my backup server to FreeNAS (so over the network, with iperf) and I saturate the Gb link.

I'm not very familiar with L2ARC (never tried/used it), but my understanding is that for most common uses you won't see any benefit. Only under some very intensive use (i.e. high performance demands or lots of users) might it be useful. But I'd rather let other forum members expand on that.
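If you want a quick feel for whether the ARC already covers your working set before buying an L2ARC device, the kernel exports hit/miss counters; these are plain FreeBSD sysctls, so they should work on FreeNAS as well:
Code:
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
sysctl kstat.zfs.misc.arcstats.size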
And about your pool structure: it depends on the performance you need, e.g. whether you want to increase IOPS. I won't be much help here either, because I never dug into that (I should, but since I don't have a need to, I'm a bit lazy! :-D).

To give you a comparison, I did some quick tests (using 2GB files) between a Windows VM and FreeNAS, and here are the results:
FreeNAS -> Win: around 70MB/s -> not so terrific, a bit surprised here... :smile:
Win -> FreeNAS: around 110MB/s

Here is the reporting on FreeNAS's side:
[attached screenshot: FreeNAS network reporting graph]

What FreeNAS reports is a bit different from what I measured, but still within the same range.
 