Request for Benchmarking Guidance / Best Practices [New Pool Deployment]

Status
Not open for further replies.

svtkobra7

Patron
Joined
Jan 12, 2017
Messages
202
I have two ESXi hosts, both running FreeNAS-11.1-U6, with bulk storage as follows: (1) ESXi-01 has 12 x WDC WD100EMAZ - 10TB; (2) ESXi-02 has HGST Deskstar NAS HDD - 6TB drives [raidz2, 6x2]. Both hosts have 2 x Optane 900p - 280GB: one of those drives is used for ESXi boot + a local datastore, and the other is dedicated to FreeNAS, i.e. FreeNAS boot + 2 x 16 GB vDisks ("Opt1log1" & "Opt1log2") serving as a mirrored SLOG. Note: I can't pass through the Optane 900p as the workaround doesn't work for me (possibly because of the two cards?). [My signature was just updated for this post.]
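For context, attaching those two vDisks as a mirrored SLOG amounts to something like the following (pool name and device nodes are placeholders for whatever the vDisks enumerate as on your system):

Code:
zpool add Tank1 log mirror /dev/da2 /dev/da3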

I just completed burn-in of the 12 x WDC WD100EMAZ - 10TB drives and wanted to take the opportunity to revisit benchmarking, learn a bit more, and ask the community how best to conduct it. When previously determining how to configure the existing pool on the other host (raidz2 6x2), I used the commands shown below under "Prior Benchmarking Approach."

I'm thinking I will deploy those 12 x WDC WD100EMAZ - 10TB either as another raidz2 6x2 pool or as 6 mirrors, but I'd like to incorporate random IO and IOPS testing this time around, and of course get an understanding of baseline performance for an empty pool.

My questions:

  • Are those the correct dd commands to use for sync write / read? (see "Prior Benchmarking Approach" below)
  • How can I test random IO?
  • How can I test IOPS? (I suppose it can be backed into from the dd commands used before; see the rough arithmetic after the code block below)
  • How can I test latency?
In an attempt to answer my own question, I thought of the following solutions (a sketch of the sort of fio run I have in mind follows this list):
  • Create a VM via NFS / iSCSI and install fio, etc. But since I'd be testing the same pool where the dataset/zvol lives, would this solution produce a "clean" result?
  • Create a jail and install the fio port. I would expect some impact from that jail existing on the same pool, but less than with the first solution.
  • I like using dd to test sync write / read as it incurs very little overhead, but AFAIK it can't bench all of the items I'm looking for, and I'd prefer a suite that can handle everything.
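For concreteness, here's a minimal sketch of the kind of fio run I'm picturing (from inside a jail or VM; the dataset path, block size, job count, and queue depth below are just example values, not recommendations):

Code:
pkg install fio

# 70/30 random read/write mix at 4k, 4 jobs, queue depth 16, 60 second run
fio --name=randrw-test --directory=/mnt/Tank1/fiotest \
    --rw=randrw --rwmixread=70 --bs=4k --size=4g \
    --numjobs=4 --iodepth=16 --ioengine=posixaio \
    --runtime=60 --time_based --group_reporting

fio's summary output includes IOPS and completion-latency percentiles per job, so a single run would cover random IO, IOPS, and latency in one tool.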
Anyway, I look forward to your comments. Since I'll likely be learning any proposed solution from the ground up, a link to a tutorial or similar would be appreciated if you have one handy and it's no trouble to pass along.

Prior Benchmarking Approach:
  • Reduce RAM provided to FreeNAS to 8GB
  • Execute the commands in the code tags below
  • Results @ 128k recordsize (encrypted) RaidZ2 [6x2x6TB] (no SLOG): sync always = 59 MB/s | sync disabled = 568 MB/s
  • Results @ 128k recordsize (encrypted) RaidZ2 [6x2x6TB] (mirrored SLOG): sync always = 622 MB/s | sync disabled = 566 MB/s
  • Results @ 1M recordsize (encrypted) RaidZ2 [6x2x6TB] (no SLOG): sync always = 61 MB/s | sync disabled = 735 MB/s
  • Results @ 1M recordsize (encrypted) RaidZ2 [6x2x6TB] (mirrored SLOG): sync always = 645 MB/s | sync disabled = 788 MB/s
  • [I tested at 1M recordsize in addition to the default 128k since I have a number of datasets consisting exclusively of multi-GB files]
Code:
# Create three test datasets, identical except for the sync setting
zfs create Tank1/disabled
zfs set recordsize=128k compression=off sync=disabled Tank1/disabled

zfs create Tank1/standard
zfs set recordsize=128k compression=off sync=standard Tank1/standard

zfs create Tank1/always
zfs set recordsize=128k compression=off sync=always Tank1/always

# Sequential write, then read back, of a 50 GiB file (25k x 2 MiB blocks) per dataset
dd if=/dev/zero of=/mnt/Tank1/disabled/tmp.dat bs=2048k count=25k
dd of=/dev/null if=/mnt/Tank1/disabled/tmp.dat bs=2048k count=25k

dd if=/dev/zero of=/mnt/Tank1/standard/tmp.dat bs=2048k count=25k
dd of=/dev/null if=/mnt/Tank1/standard/tmp.dat bs=2048k count=25k

dd if=/dev/zero of=/mnt/Tank1/always/tmp.dat bs=2048k count=25k
dd of=/dev/null if=/mnt/Tank1/always/tmp.dat bs=2048k count=25k

# Clean up; use && (run sequentially) rather than & (background) so the destroys complete in order
zfs destroy Tank1/disabled && zfs destroy Tank1/standard && zfs destroy Tank1/always
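
As a rough illustration of backing IOPS out of the dd runs (taking the mirrored-SLOG sync=always result above as the example): each block is 2048k ≈ 2.1 MB, so 622 MB/s works out to about 622 / 2.1 ≈ 296 blocks per second, i.e. roughly 300 IOPS at that block size. And because dd issues one request at a time, average per-op latency is simply the reciprocal: 1 / 296 ≈ 3.4 ms per block. That's sequential large-block IO only, of course, which is why something like fio is still needed for random 4k numbers.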
 

svtkobra7

Patron
Joined
Jan 12, 2017
Messages
202
Did you find any recommendations for this?

No new information beyond what I presented initially ... the sync write and read commands should be in order, but I'm looking for a veteran (ideally) to weigh in with their knowledge so I can memorialize some additional benchmarks and learn a thing or two (it isn't every day we have a 120 TB raw pool, zero-filled, to poke and prod ;)).

Thanks for the follow-up, sir!
 