Poor performance with better hardware

neohhector

Cadet
Joined
Mar 16, 2020
Messages
6
Hi.

Currently I have a SUPERMICRO server running FreeNAS, and because it has performed so well, I acquired two more servers, in this case DELL, to expand the infrastructure.

On paper, the DELL servers are more powerful, but I am finding that they perform much worse than the SUPERMICRO.

Can you help me find the cause of this performance gap?

I'll show you all the data:

---

Hardware:

SUPERMICRO:

SUPERSERVER SUPERMICRO 2U SYS-5X9FSAS-BCR922U
INTEL E3-1240v5
2 x 16 GB DDR4 2133
10 x 3.5" SATA 4TB ST4000NM0024 7.2K
REAR HOT SWAP DRIVE BAY FOR 2x2.5" DRIVES
2 x 2.5" SSD INTEL DC S3510 80GB (OS)
LSI HBA 9207-8i
CHELSIO T520-CR

DELL:

DELL POWEREDGE R730XD
INTEL E5-2637v3
4 x 16 GB DDR4 2133
10 x 3.5" SAS 6TB ST6000NM0034 7.2K
2 x 2.5" SSD INTEL 520 240GB (OS)
DELL HBA330
INTEL X520

---

OS and Drivers:

SUPERMICRO:

FreeNAS 11.2

mps0: <Avago Technologies (LSI) SAS2308> port 0xe000-0xe0ff mem 0xdf340000-0xdf34ffff,0xdf300000-0xdf33ffff irq 17 at device 0.0 on pci2
mps0: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
mps0: IOCCapabilities: 5a85c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,MSIXIndex,HostDisc>

DELL:

FreeNAS 11.3

mpr0: <Avago Technologies (LSI) SAS3008> port 0x3000-0x30ff mem 0x92000000-0x9200ffff,0x91f00000-0x91ffffff irq 26 at device 0.0 numa-domain 0 on pci2
mpr0: Firmware: 16.00.08.00, Driver: 18.03.00.00-fbsd
mpr0: IOCCapabilities: 7a85c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,MSIXIndex,HostDisc,FastPath,RDPQArray>

---

Performance:

SUPERMICRO:

root@freenas:/mnt/zfs-raidz2 # dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 20.868347 secs (5145313144 bytes/sec)

root@freenas:/mnt/zfs-raidz2 # dd of=/dev/zero if=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 8.124405 secs (13216252470 bytes/sec)

DELL#1:

root@nas01-r730[/mnt/nas01-r730]# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 51.209646 secs (2096756983 bytes/sec)

root@nas01-r730[/mnt/nas01-r730]# dd of=/dev/zero if=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 14.377715 secs (7468097893 bytes/sec)

DELL#2:

root@nas02-r730[/mnt/nas02-r730]# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 50.512694 secs (2125687089 bytes/sec)

root@nas02-r730[/mnt/nas02-r730]# dd of=/dev/zero if=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 17.853459 secs (6014194971 bytes/sec)

---

Notes:

The DELL servers have a replication task configured between them.
Of course, the performance tests were run while that task was not active.
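A quick way to double-check that nothing is replicating while a test runs (just a rough sketch, using DELL#1's pool name) is:

Code:
# Confirm the pool is idle before benchmarking (numbers should be near zero);
# syntax is: zpool iostat <pool> <interval> <count>
zpool iostat NAS01-R730 1 5

# Optionally, look for a replication stream in flight (the exact process name
# depends on how the replication job runs, so this is only a rough check)
ps ax | grep -E "zfs (send|receive)" | grep -v grep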

---

Thanks for everything.

Regards,
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
First, we should look at the pool layout on all of the servers. (zpool status -v)
 

neohhector

Cadet
Joined
Mar 16, 2020
Messages
6
Hi.

SUPERMICRO:

root@freenas:~ # zpool status -v
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:37 with 0 errors on Tue Mar 17 03:45:45 2020
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            ada0p2      ONLINE       0     0     0
            ada1p2      ONLINE       0     0     0

errors: No known data errors

  pool: zfs-raidz2
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 0 days 19:40:29 with 0 errors on Sun Mar 8 19:40:32 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        zfs-raidz2                                      ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/935d5068-595f-11e6-b233-000743361210  ONLINE       0     0     0
            gptid/93fde7f5-595f-11e6-b233-000743361210  ONLINE       0     0     0
            gptid/94a3b181-595f-11e6-b233-000743361210  ONLINE       0     0     0
            gptid/95472e62-595f-11e6-b233-000743361210  ONLINE       0     0     0
            gptid/95e66f61-595f-11e6-b233-000743361210  ONLINE       0     0     0
            gptid/967d176b-595f-11e6-b233-000743361210  ONLINE       0     0     0
            gptid/971d15b4-595f-11e6-b233-000743361210  ONLINE       0     0     0
            gptid/97be6f4d-595f-11e6-b233-000743361210  ONLINE       0     0     0
            gptid/98617e91-595f-11e6-b233-000743361210  ONLINE       0     0     0
            gptid/99046699-595f-11e6-b233-000743361210  ONLINE       0     0     0

errors: No known data errors

DELL#1:

root@nas01-r730[~]# zpool status -v
  pool: NAS01-R730
 state: ONLINE
  scan: scrub repaired 0 in 0 days 03:17:11 with 0 errors on Sun Feb 23 03:17:14 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS01-R730                                      ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/1ee7e0b4-0d26-11ea-97a1-ecf4bbe1d410  ONLINE       0     0     0
            gptid/22715647-0d26-11ea-97a1-ecf4bbe1d410  ONLINE       0     0     0
            gptid/263abf86-0d26-11ea-97a1-ecf4bbe1d410  ONLINE       0     0     0
            gptid/29c78551-0d26-11ea-97a1-ecf4bbe1d410  ONLINE       0     0     0
            gptid/2d531595-0d26-11ea-97a1-ecf4bbe1d410  ONLINE       0     0     0
            gptid/30f40b71-0d26-11ea-97a1-ecf4bbe1d410  ONLINE       0     0     0
            gptid/3490c178-0d26-11ea-97a1-ecf4bbe1d410  ONLINE       0     0     0
            gptid/3832196c-0d26-11ea-97a1-ecf4bbe1d410  ONLINE       0     0     0
            gptid/3bd27895-0d26-11ea-97a1-ecf4bbe1d410  ONLINE       0     0     0
            gptid/3f78b8ec-0d26-11ea-97a1-ecf4bbe1d410  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:13 with 0 errors on Fri Mar 13 03:45:13 2020
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            da10p2      ONLINE       0     0     0
            da11p2      ONLINE       0     0     0

errors: No known data errors

DELL#2:


root@nas02-r730[~]# zpool status -v
  pool: NAS02-R730
 state: ONLINE
  scan: scrub repaired 0 in 0 days 02:14:25 with 0 errors on Sun Mar 1 02:14:27 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS02-R730                                      ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/2165116b-103b-11ea-9555-246e96117890  ONLINE       0     0     0
            gptid/24e19dcb-103b-11ea-9555-246e96117890  ONLINE       0     0     0
            gptid/286ca93a-103b-11ea-9555-246e96117890  ONLINE       0     0     0
            gptid/2bfbca24-103b-11ea-9555-246e96117890  ONLINE       0     0     0
            gptid/2f8f1db0-103b-11ea-9555-246e96117890  ONLINE       0     0     0
            gptid/3316f670-103b-11ea-9555-246e96117890  ONLINE       0     0     0
            gptid/36c3ba6e-103b-11ea-9555-246e96117890  ONLINE       0     0     0
            gptid/3a522127-103b-11ea-9555-246e96117890  ONLINE       0     0     0
            gptid/3de1a4f4-103b-11ea-9555-246e96117890  ONLINE       0     0     0
            gptid/417366e3-103b-11ea-9555-246e96117890  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:13 with 0 errors on Sun Mar 15 03:45:13 2020
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            da10p2      ONLINE       0     0     0
            da11p2      ONLINE       0     0     0

errors: No known data errors

---

Thanks.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
OK, so nothing surprising in terms of different layouts. All three are RAIDZ2 pools, 10 wide.

Writing zeroes gives you highly compressible data. Do you have compression turned on for some or all of the datasets you are writing to?
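To check, something like this (a minimal sketch, substituting your actual pool and dataset names) will show the setting and the achieved ratio on each system:

Code:
# Show the compression setting and the achieved ratio for every dataset in the pool
zfs get -r compression,compressratio zfs-raidz2     # SUPERMICRO
zfs get -r compression,compressratio NAS01-R730     # DELL#1
zfs get -r compression,compressratio NAS02-R730     # DELL#2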
 

craig51

Dabbler
Joined
Oct 29, 2017
Messages
19
Hi,

Did you ever find out what was contributing to the difference in performance?
 

dirtyfreebooter

Explorer
Joined
Oct 3, 2020
Messages
72
dd is probably the worst possible benchmark for IO. It runs at a queue depth of 1, so unless that is exactly what you want to benchmark... try using fio instead (installed by default on TrueNAS RC1; not sure about FreeNAS, as I only recently started using FreeNAS/TrueNAS and began with the RC1 since it's for a homelab).

There are plenty of websites that show a variety of fio benchmarks.

Another thing about your tests: they are not using direct IO, so it's all going through the ZFS ARC cache, which greatly skews your numbers... And direct IO on ZFS isn't really a thing (yet); I think it got into OpenZFS 8.0.0 (maybe).
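If you want to see how much of a read test is actually being served from RAM rather than the disks, you can watch the ARC counters while it runs, something like this (a rough check only; arc_summary.py may or may not be in your PATH depending on the FreeNAS/TrueNAS build):

Code:
# ARC size and hit/miss summary, if the script ships with your build
arc_summary.py | head -40

# Or sample the raw FreeBSD counters before and after a read test
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.size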

Again, fio is a lot more complex than dd, but something basic could look like this:

Code:
root@behemoth[/mnt/vol0/tmp/fio]# fio \
  --name=seq-write \
  --ioengine=posixaio \
  --rw=write \
  --bs=1m \
  --iodepth=32 \
  --numjobs=1 \
  --size=512m \
  --fsync=64 \
  --end_fsync=1 \
  --runtime=10 \
  --time_based


That will run a sequential write with 1 MB blocks and a queue depth of 32, syncing the file every 64 writes and again at the end (I think this is the best you can do with ZFS not supporting direct IO).
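And if you want a matching read test, the same thing in the other direction could look like this (again just a sketch; a file this small will mostly be read back from the ARC, so bump --size well past the machine's RAM if you want to measure the disks themselves):

Code:
# Sequential read companion to the write test above
fio \
  --name=seq-read \
  --ioengine=posixaio \
  --rw=read \
  --bs=1m \
  --iodepth=32 \
  --numjobs=1 \
  --size=512m \
  --runtime=10 \
  --time_based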
 