Is ~75MB/s write speed expected on 3 RAIDZ1 vdevs of 4 HDDs each?


Han Sooloo

Dabbler
Joined
Dec 23, 2015
Messages
16
Have a Cisco UCS C240 with 12x 2TB 6G SAS HDDs, connected through a Cisco 12G SAS HBA (no RAID, LSI-based).
64 GB RAM, and 2x 24-core (hyper-threaded) Xeon E5-2670 v3 CPUs at 2.3 GHz.

1 SSD (16 GB reserved) serving as the SLOG/ZIL device for the pool.

The pool is set up with LZ4 compression and no dedup, laid out as below:

Code:
tank
  tank
    raidz1-0
      4x HDDs
    raidz1-1
      4x HDDs
    raidz1-2
      4x HDDs
logs
  stripe
    1x SSD


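For reference, the same layout expressed as a zpool create command would look roughly like this (the daX device names below are placeholders, not the actual disks in this box):

Code:
# rough sketch of the layout above; device names are placeholders
zpool create tank \
  raidz1 da0 da1 da2 da3 \
  raidz1 da4 da5 da6 da7 \
  raidz1 da8 da9 da10 da11 \
  log da12p1
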
I have 2 datasets on this pool. When I try to copy a large file (~25 GB) from one dataset to the other, I get around 75 MB/s, as reported by rsync.

Code:
rsync -a --human-readable --progress /mnt/tank/Dataset1/Folder /mnt/tank/Dataset2/Folder


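While the copy runs, pool-level throughput can also be watched with zpool iostat (the 5 is just a sampling interval in seconds):

Code:
# in a second shell while rsync is running
zpool iostat tank 5
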
Is this expected performance? Should I not be seeing higher throughput?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
A couple things could be in play here.

One option, assuming you have sync=always set on the destination dataset, is that 75 MB/s is the maximum write speed of your SLOG device. What model is that SSD?

The other possible option is that copying from and to the same set of spindles causes too much seeking, and 75 MB/s is simply the limit of the underlying vdevs when servicing simultaneous reads and writes.
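
A quick way to tell which of those it is (dataset names below assumed from the rsync command earlier) is to check the sync property, and optionally relax it for a single test copy:

Code:
# show the current sync setting on source and destination
zfs get sync tank/Dataset1 tank/Dataset2
# for a test only: disable sync on the destination, re-run the copy, then restore it
zfs set sync=disabled tank/Dataset2
zfs set sync=standard tank/Dataset2

If the copy gets dramatically faster with sync=disabled, the sync-write path (i.e. the SLOG) is the limiting factor.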
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
By the way, you seem like a prime candidate for a 12-wide RAIDZ3 vdev, or even RAIDZ2.

It would also be nice to know the fragmentation number for that pool, as well as its fill percentage; post the output of zpool list.
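
Something like the following shows exactly those two numbers using the long property names zpool list accepts, though plain zpool list works just as well:

Code:
zpool list -o name,size,allocated,free,fragmentation,capacity tank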
 

Han Sooloo

Dabbler
Joined
Dec 23, 2015
Messages
16
HoneyBadger said:
A couple things could be in play here.

One option, assuming you have sync=always set on the destination dataset, is that 75 MB/s is the maximum write speed of your SLOG device. What model is that SSD?

The other possible option is that copying from and to the same set of spindles causes too much seeking, and 75 MB/s is simply the limit of the underlying vdevs when servicing simultaneous reads and writes.
The SLOG is an Intel S3500 (SSDSC2BB120G4) 120 GB 2.5" SATA drive. Is there a way to do a performance test without destroying the entire pool?
 

Han Sooloo

Dabbler
Joined
Dec 23, 2015
Messages
16
Ericloewe said:
By the way, you seem like a prime candidate for a 12-wide RAIDZ3 vdev, or even RAIDZ2.

It would also be nice to know the fragmentation number for that pool, as well as its fill percentage; post the output of zpool list.
Code:
NAME          SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  15.9G  6.99G  8.89G  -         -     44%  1.00x  ONLINE  -
tank          21.8T  19.1T  2.61T  -         32%   87%  1.00x  ONLINE  /mnt
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
That's an okayish SLOG device, but why do you need it? What would be issuing sync writes? What's your workload?
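
For what it's worth, one way to see whether sync writes are actually landing on the log device is to watch per-vdev I/O while a copy is running; the log vdev shows up as its own row:

Code:
# per-vdev statistics every second; watch the write bandwidth on the logs row
zpool iostat -v tank 1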
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Han Sooloo said:
Code:
NAME          SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  15.9G  6.99G  8.89G  -         -     44%  1.00x  ONLINE  -
tank          21.8T  19.1T  2.61T  -         32%   87%  1.00x  ONLINE  /mnt
Oh yeah, expect crap performance from a pool that is as full as yours is. In fact, it's getting close to "dangerously full". 80% is where you should add more storage, at the latest. 50% if you're doing block devices.
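
To put rough numbers on that: 80% of the 21.8T pool is about 17.4T, so with 19.1T currently allocated you would need to free (or add) on the order of 1.7T just to get back to the 80% line, and roughly 8T to get down to 50%.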
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You need to maintain a minimum of 50% free space on the pool if you are doing block storage. I am guessing it is block storage because of the SLOG device. ZFS performance degrades as the pool fills, and the space-allocation algorithm changes once the pool goes over about 90% full, making it even slower.
 

Han Sooloo

Dabbler
Joined
Dec 23, 2015
Messages
16
Thank you for the insights and recommendations RE: pool utilization %. I am thinking about replacing all the 2TB HDDs with 4K-sector 10TB ones.

RE: the workload of this system:
1. NFS datastore for ESXi VMs.
2. NFS shares for various things, e.g., Movies, Photos, Downloads.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Han Sooloo said:
Thank you for the insights and recommendations RE: pool utilization %. I am thinking about replacing all the 2TB HDDs with 4K-sector 10TB ones.

RE: the workload of this system:
1. NFS datastore for ESXi VMs.
2. NFS shares for various things, e.g., Movies, Photos, Downloads.

Those are two very different workloads with very different performance needs. VMs need low-latency random access, while media downloads just need big sequential transfer speeds. You at least have them on separate datasets, but they're still sharing vdevs.

Unfortunately that C240 only has twelve internal bays, so you don't really have an easy avenue to create a new pool while keeping yours intact.
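
If it helps, the properties most relevant to that split can be inspected per dataset in one go (dataset names below assumed from the rsync paths earlier in the thread; random VM I/O generally favors a smaller recordsize than large sequential media files):

Code:
zfs get recordsize,sync,compression tank/Dataset1 tank/Dataset2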
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626

Han Sooloo

Dabbler
Joined
Dec 23, 2015
Messages
16
Quote:
All C240s are 2-socket servers - the C4xx series is the quad-socket. I'm inferring that it's an LFF model, since the OP is considering 10TB HDDs, which don't yet come in 2.5" sizes.

But I could be wrong.

Typo on original post corrected ... 2x 24 core Xeons.

Also, this is the 12-bay LFF / 3.5" HDD version.
 