Similar FreeNAS servers -- Vastly different Performance; Appropriate?

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
The units have many identical components:
8x 7200 RPM HGST UltraStar HDDs (tested & 'known to be good')
The exact same model of Chelsio 10GbE SFP+ NIC.
The same RAIDz2 array configuration.

There's no extra hardware 'accelerating' the Dell T320's performance (no L2ARC / SLOG, etc.)

Write Perf.    | System      | CPUs       | Cores x Clock | RAM       | NIC                | HDDs             | Config (same on both) | Source Data
350 - 650 MB/s | Dell T320   | 1x E5-2403 | 4c @ 1.80 GHz | 32GB DDR3 | Chelsio 10GbE SFP+ | 8x HGST HUS 7.2K | HP SAS H220 in RAIDz2 | Same data
120 - 200 MB/s | Mac Pro 3,1 | 2x X5355   | 8c @ 2.66 GHz | 16GB DDR2 | Chelsio 10GbE SFP+ | 8x HGST HUS 7.2K | HP SAS H220 in RAIDz2 | Same data

Given the specs & performance details above ... what explains the Mac Pro's slow write speeds?

Components which are identical:
UltraStar HDs, HP H220 HBA, Chelsio SFP+ NIC, RAIDz2, identical source data from identical SSD.

Variables:
System      | RAM (amount + type) | PCIe Gen             | CPU Model
Mac Pro 3,1 | 16GB DDR2           | PCIe 2.0 (HBA = x16) | X5355
Dell T320   | 32GB DDR3           | PCIe 3.0             | E5-2403


Does it make sense to you guys that the Mac Pro should be this much slower?
Or could there be another cause? Or would increasing the RAM potentially make a difference (it seems irrelevant to me)?

Has a DDR2 machine never exceeded 200 MB/s in FreeNAS?
Has an X5355 never exceeded 200 MB/s in FreeNAS? If clock speed is the "all-important" aspect of the CPU ... sure, it's two generations older, but it's also clocked faster.

Does RAM actually even affect the write speed (especially of an empty array)?
Synthetic benchmarking of the Mac Pro with these 8 HGST UltraStars yielded ~700 MB/s write // ~800 MB/s read (under OS X).
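
One way to separate raw pool throughput from the network path would be a local sequential-write test run directly on each box. Below is a minimal sketch of that idea; the /mnt/tank/benchtest path and the 4 GiB size are assumptions (adjust to your pool), and random data is used so lz4 compression can't flatter the numbers.

#!/usr/bin/env python3
# Rough local sequential-write throughput test, run on the FreeNAS box itself.
# Dataset path and test size below are assumptions -- adjust to your pool.
import os
import time

TEST_FILE = "/mnt/tank/benchtest/writetest.bin"   # hypothetical dataset path
BLOCK = 1024 * 1024                               # 1 MiB per write
TOTAL = 4 * 1024 * 1024 * 1024                    # 4 GiB total

# Random data so lz4 on the dataset can't inflate the result.
chunk = os.urandom(BLOCK)

start = time.time()
with open(TEST_FILE, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(chunk)
        written += BLOCK
    f.flush()
    os.fsync(f.fileno())                          # make sure the data reached the disks
elapsed = time.time() - start

print(f"{written / elapsed / 1e6:.0f} MB/s sequential write")
os.remove(TEST_FILE)

Running the same script on both machines would show whether the gap is still there with the NIC taken out of the picture.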


Thanks...


Attached screenshots: ARC requests (prefetch data + metadata; demand data), ARC size + hit ratio, CPU 1 & 2, MP da0 - da4, MP31 memory, networking info, MP31 ada1 - ada4, physical memory + swap utilization.
 
Last edited:

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
Hi,

Just a quick thought - what generation of PCIe bus are the systems using? And how are the PCIe slots for the NICs wired? It might be something as simple as not enough bandwidth on the bus to the NICs.
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
Hi,

Just a quick thought - what generation of PCIe bus are the systems using? And how are the PCIe slots for the NICs wired? It might be something as simple as not enough bandwidth on the bus to the NICs.

Smart question!
Dell: PCIe 3.0
MP: PCIe 2.0
Do you think that's enough to limit it?

The HBA is actually in an x16 slot ...
-- and --
The other 4 drives are connected directly to the motherboard, but I don't know how it's sharing those resources.
 

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
Hmmm, looking at Wikipedia, even a single x1 lane on PCIe 2.0 should be able to do more than what you're seeing, so it might not be that problem - unless Apple has done some weird stuff with the lanes, which I can't rule out.
Right now I'm out of ideas (too early, and no coffee yet).
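
For reference, the spec numbers behind that: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding (standard spec figures, not taken from this thread), so

\[
5\,\text{GT/s} \times \tfrac{8}{10} = 4\,\text{Gbit/s} \approx 500\,\text{MB/s per lane, each direction}
\]

Even a single lane is well above the 120 - 200 MB/s being observed, and an x16 slot is roughly 8 GB/s.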
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
@TrumanHW, I would have said it's a Mac, so there is your problem.
Jokes aside, I think the Xeon X5355 is quite old and uses a front-side bus (I recall someone saying the FSB would be the bottleneck).
There's also no advertised support for hardware-accelerated encryption, so if encryption is in use it has to be done in software, I think, which eats into CPU performance.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I don't see any evidence that the network is running at 10Gb speeds... verify the network first, then read from cache, read from disk, then write.
How are you running the write tests? Sync or not?
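
To make that last question concrete: whether each write has to reach stable storage before the next one is issued makes a large difference on a pool with no SLOG. Below is a rough local sketch of the two behaviours; the path and sizes are assumptions, and per-block fsync is only an approximation of what sync=always or a sync-writing client would do, not a rigorous benchmark.

#!/usr/bin/env python3
# Quick illustration of the cost of sync-style writes, run locally on the pool.
# Path and sizes are assumptions; this is a sketch, not a rigorous benchmark.
import os
import time

PATH  = "/mnt/tank/benchtest/sync_test.bin"   # hypothetical dataset path
BLOCK = 128 * 1024                            # 128 KiB records
COUNT = 2048                                  # ~256 MiB total
chunk = os.urandom(BLOCK)

def run(sync_every_block: bool) -> float:
    """Write COUNT blocks, optionally fsync'ing after each one; return MB/s."""
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(chunk)
            if sync_every_block:
                os.fsync(f.fileno())          # force each record to stable storage
        os.fsync(f.fileno())
    os.remove(PATH)
    return BLOCK * COUNT / (time.time() - start) / 1e6

print(f"async-style: {run(False):.0f} MB/s")
print(f"sync-style:  {run(True):.0f} MB/s")

Comparing the two numbers on each box shows how much a sync-write path alone can cost, independent of the network.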
 