Expected performance 12 bay Mirrored vs RaidZ2

wdp

Explorer
Joined
Apr 16, 2021
Messages
52
So I'm inching closer to my first TrueNAS build, and I just wanted to get an idea of the baseline performance and bottlenecks I should expect.

SuperStorage 6028R-E1CR12L
Dual Xeon E5-2603v3 1.6GHz
LSI / Broadcom 3008 IT
64GB RAM
12x 18TB WD SAS
Intel 10GbE

The unit is for video editing, so 98% of its purpose is reads to edit bays.

Mirrored vdevs seem to be highly recommended, so I wanted to test that out, as well as a 6x RaidZ2 layout, and then try some form of caching just out of morbid curiosity, although most ZFS people I know are telling me not to bother. This community appears to be very polarized when it comes to configurations and opinions.

Last night I seated the drives and everything is working fine. I built out the pool with mirrored vdevs, set up the datastore, configured the networking, and ran read/write tests, which level out at about 630 MB/s write and 670 MB/s read. Not horrible, but slower than I expected; I see higher reads out of the box on a shelf-bought Synology with 12 drives. I haven't tested simultaneous reads yet. If it can hold higher speeds across multiple clients, then overall peak speed is less critical than sustained reads on 3-4 edit bays, but I don't currently have a way to test across multiple devices.

Problem 1: Slow CPU?
Problem 2: Older HBA?
Problem 3: Tuning TrueNas for better performance?

What is a reasonable expectation for a 12-drive server running TrueNAS/ZFS in a no-frills, out-of-the-box basic deployment?
 

wdp

Explorer
Joined
Apr 16, 2021
Messages
52
I apologize in advance for the long-winded ramble with test samples...

Well, to come back and answer this, I've been attempting some benchmarking, with results that caught me by surprise. I'm sure there's a high probability of flaws in my testing method.

Over 10GbE, I get 600/600 MB/s out of the box from the client/edit bay, which is pretty common for any Aquantia ports I have on a computer without tuning. iperf3 shows a solid 10GbE connection though, nothing alarming there. I've never had much luck with tuning these; jumbo frames seem less and less common as a solution these days, and buffer sizes never seem to fix it.
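(For anyone repeating the network check, it amounts to the usual iperf3 pair run in both directions; the exact flags below are just an example and the hostname is a placeholder.)

Code:
# on the TrueNAS box
iperf3 -s

# on the edit bay: 30-second run with 4 parallel streams, then again with -R to reverse direction
iperf3 -c truenas.example -t 30 -P 4
iperf3 -c truenas.example -t 30 -P 4 -R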

But with dd and bonnie++, I expected mirrored vdevs to f'n smoke a 4+2 RaidZ2 setup. This wasn't the case though.

4+2 RaidZ2

Code:
root@anton[~]# dd if=/dev/zero of=/mnt/tank/Share1/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 153.184751 secs (700945632 bytes/sec)

root@anton[~]# dd if=/mnt/tank/Share1/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 122.100346 secs (879392941 bytes/sec)


Mirrored vdevs (6)

Code:
root@anton[~]# dd if=/dev/zero of=/mnt/tank/Share1/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 107.400316 secs (999756671 bytes/sec)

root@anton[~]# dd if=/mnt/tank/Share1/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 251.934330 secs (426199091 bytes/sec)


So I figured that can't be right... maybe dd between /dev/zero and /dev/null just isn't an accurate test even with compression off, or maybe that's simply what you get from a low number of spindles.

So I fired up Bonnie++ and just watched zpool iostat for a bit.
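(For reference, the per-vdev snapshots below come from something like the following; the 5 is just a sampling interval in seconds.)

Code:
zpool iostat -v tank 5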

Mirrored Write...

Code:
----------------------------------------------  -----  -----  -----  -----  -----  -----
boot-pool                                       2.32G   474G      0      0      0      0
  mirror                                        2.32G   474G      0      0      0      0
    ada0p2                                          -      -      0      0      0      0
    ada1p2                                          -      -      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                             330G  97.8T      0  1.10K  3.19K   980M
  mirror                                        55.3G  16.3T      0    190    408   164M
    gptid/9605c4ce-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     96      0  81.9M
    gptid/981b5f6b-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     93    408  81.9M
  mirror                                        54.4G  16.3T      0    182    408   163M
    gptid/9743487f-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     92    408  81.4M
    gptid/98cf3dfb-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     90      0  81.4M
  mirror                                        55.4G  16.3T      0    184    408   166M
    gptid/9837e6f4-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     93    408  82.8M
    gptid/99438322-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     91      0  82.8M
  mirror                                        55.3G  16.3T      0    184  1.20K   164M
    gptid/99dfa9ee-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     95    816  81.8M
    gptid/9a03a75d-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     89    408  81.8M
  mirror                                        54.9G  16.3T      0    192    408   160M
    gptid/99cc02d0-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     97    408  80.1M
    gptid/99aeeb5b-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     95      0  80.1M
  mirror                                        54.9G  16.3T      0    187    408   164M
    gptid/9a206a5b-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     93    408  81.9M
    gptid/9a4a60f1-ad76-11eb-ad0b-0cc47a6ea7ec      -      -      0     94      0  81.9M
----------------------------------------------  -----  -----  -----  -----  -----  -----


Mirrored Read...

Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
boot-pool                                       2.32G   474G      0      0      0      0
  mirror                                        2.32G   474G      0      0      0      0
    ada0p2                                          -      -      0      0      0      0
    ada1p2                                          -      -      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                             357G  97.8T  3.69K     49   476M  5.03M
  mirror                                        59.9G  16.3T    630      1  79.3M   814K
    gptid/9605c4ce-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    343      0  43.1M   407K
    gptid/981b5f6b-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    286      0  36.2M   407K
  mirror                                        58.9G  16.3T    630      4  79.5M   837K
    gptid/9743487f-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    256      2  32.4M   419K
    gptid/98cf3dfb-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    374      2  47.0M   419K
  mirror                                        59.7G  16.3T    629      8  79.3M   849K
    gptid/9837e6f4-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    271      3  34.1M   425K
    gptid/99438322-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    358      4  45.2M   425K
  mirror                                        59.7G  16.3T    628     14  79.0M   903K
    gptid/99dfa9ee-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    263      7  32.7M   452K
    gptid/9a03a75d-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    365      7  46.3M   452K
  mirror                                        59.3G  16.3T    626     11  78.6M   893K
    gptid/99cc02d0-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    255      5  32.1M   446K
    gptid/99aeeb5b-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    370      6  46.5M   446K
  mirror                                        59.3G  16.3T    635      8  80.0M   855K
    gptid/9a206a5b-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    310      4  39.1M   427K
    gptid/9a4a60f1-ad76-11eb-ad0b-0cc47a6ea7ec      -      -    325      4  40.9M   427K
----------------------------------------------  -----  -----  -----  -----  -----  -----



Pretty stock setup, fresh install, 12.0-U3, same hardware as listed in the first post. No tuning done to the pool, compression off.

So why do mirrored vdevs read at half the speed they're writing?

As a sanity check, I drained the pool, set it back to RaidZ2, and will set up a jail to test Bonnie++ again, roughly as sketched below.
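(The invocation would be along these lines; the sizes are just picked so the file set is twice RAM and the path is the share dataset from before, so treat it as a sketch rather than the exact command.)

Code:
# -d target directory, -s total file size in MB (128GiB, twice the 64GB of RAM),
# -r RAM size in MB, -u user to run as, -f skips the slow per-character tests
bonnie++ -d /mnt/tank/Share1 -s 131072 -r 65536 -u root -f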

4+2 Rz2 Write
Code:
boot-pool   2.32G   474G      1      0  1.54K      0
tank         257G   196T      3  2.41K  79.6K   734M
----------  -----  -----  -----  -----  -----  -----
boot-pool   2.32G   474G      0      0      0      0
tank         278G   196T      0  2.45K  2.80K   730M
----------  -----  -----  -----  -----  -----  -----
boot-pool   2.32G   474G      0      0      0      0
tank         285G   196T      0  2.11K  1.59K   618M
----------  -----  -----  -----  -----  -----  -----
boot-pool   2.32G   474G      0      0      0      0
tank         291G   196T      0  2.43K    811   730M
----------  -----  -----  -----  -----  -----  -----
boot-pool   2.32G   474G      1      0  1.65K      0
tank         298G   196T      0  2.26K  1.20K   687M
----------  -----  -----  -----  -----  -----  -----


Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
boot-pool                                       2.32G   474G      0      0    170      0
  mirror                                        2.32G   474G      0      0    170      0
    ada0p2                                          -      -      0      0    170      0
    ada1p2                                          -      -      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                             357G   196T      0  2.35K    818   696M
  raidz2                                         179G  98.0T      0  1.17K    409   344M
    gptid/16315a51-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    199      0  57.4M
    gptid/17ff83d0-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    201    136  57.4M
    gptid/1afa8f61-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    200      0  57.4M
    gptid/1ac0a335-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    202    136  57.4M
    gptid/1b734393-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    199    136  57.4M
    gptid/1b3e29cf-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    198      0  57.4M
  raidz2                                         178G  98.0T      0  1.18K    409   352M
    gptid/1943bc9b-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    202    136  58.6M
    gptid/19a8a6a1-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    207    136  58.6M
    gptid/1c33b775-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    199      0  58.6M
    gptid/1cd76937-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    199      0  58.6M
    gptid/1c787e4c-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    202      0  58.6M
    gptid/1cc48b52-ae2d-11eb-93dc-0cc47a6ea7ec      -      -      0    199    136  58.6M
----------------------------------------------  -----  -----  -----  -----  -----  -----


4+2 Rz2 Read...

Code:
boot-pool   2.32G   474G      0      0      0      0
tank         535G   196T  16.7K     30   553M   158K
----------  -----  -----  -----  -----  -----  -----
boot-pool   2.32G   474G      0      0      0      0
tank         535G   196T  18.0K     24   595M   133K
----------  -----  -----  -----  -----  -----  -----
boot-pool   2.32G   474G      0      0      0      0
tank         535G   196T  18.2K     60   601M   331K
----------  -----  -----  -----  -----  -----  -----
boot-pool   2.32G   474G      1      0  1.90K      0
tank         535G   196T  17.9K     56   593M   739K
----------  -----  -----  -----  -----  -----  -----
boot-pool   2.32G   474G      0      0      0      0
tank         535G   196T  17.5K     31   581M   155K
----------  -----  -----  -----  -----  -----  -----


Code:
----------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                             535G   196T  18.0K     35   598M  1.04M
  raidz2                                         269G  97.9T  9.02K     20   300M   566K
    gptid/16315a51-ae2d-11eb-93dc-0cc47a6ea7ec      -      -  2.02K      3  67.0M  93.9K
    gptid/17ff83d0-ae2d-11eb-93dc-0cc47a6ea7ec      -      -  2.05K      3  67.8M  94.1K
    gptid/1afa8f61-ae2d-11eb-93dc-0cc47a6ea7ec      -      -  1.36K      3  46.2M  94.0K
    gptid/1ac0a335-ae2d-11eb-93dc-0cc47a6ea7ec      -      -  1.60K      3  53.5M  94.7K
    gptid/1b734393-ae2d-11eb-93dc-0cc47a6ea7ec      -      -  1.12K      3  36.9M  94.4K
    gptid/1b3e29cf-ae2d-11eb-93dc-0cc47a6ea7ec      -      -    887      3  28.9M  94.5K
  raidz2                                         267G  97.9T  8.99K     14   298M   501K
    gptid/1943bc9b-ae2d-11eb-93dc-0cc47a6ea7ec      -      -  1.98K      2  66.4M  83.4K
    gptid/19a8a6a1-ae2d-11eb-93dc-0cc47a6ea7ec      -      -  1.81K      2  60.3M  84.3K
    gptid/1c33b775-ae2d-11eb-93dc-0cc47a6ea7ec      -      -    704      2  23.0M  84.2K
    gptid/1cd76937-ae2d-11eb-93dc-0cc47a6ea7ec      -      -  1.12K      2  37.2M  83.5K
    gptid/1c787e4c-ae2d-11eb-93dc-0cc47a6ea7ec      -      -  1.82K      2  59.6M  82.6K
    gptid/1cc48b52-ae2d-11eb-93dc-0cc47a6ea7ec      -      -  1.56K      2  51.5M  82.7K
----------------------------------------------  -----  -----  -----  -----  -----  -----



The only reason it caught me by surprise is that the vast majority of the forum seems so pro-mirrored-vdev for general performance, and almost every professional enterprise video production system I've seen is a larger array of multiple RaidZ2 vdevs, like 4+2 or 16+2.

Are these tests remotely accurate? Did I do something wrong, or are my expectations of mirrored vdev reads completely off? I can't find anywhere that doesn't say mirrored vdevs should yield stronger read results.
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Mirrored vdevs (similar to a RAID 10 array) are supposed to be better for VM hosting performance, as reads can be serviced by every disk while writes get striped across the vdevs and land on half the disks. Generally, with VM workloads you hit death by IOPS long before you get anywhere near the limits of throughput.

For your workload, which would generally be large-file IO, Z2 or multiple Z2 vdevs should work well, as that's generally a throughput question more than an IOPS one. A rough back-of-envelope comparison is below.
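(Assuming, purely as a guess, something like 180 MB/s sustained per 18TB SAS drive, the theoretical streaming ceilings work out roughly like this; real numbers land well below them, and a single sequential reader often won't hit the mirror ceiling anyway.)

Code:
# Hypothetical back-of-envelope ceilings; the 180 MB/s per-drive figure is an assumption.
per_drive = 180  # MB/s, assumed sustained sequential rate of one 18TB SAS drive

# 6x 2-way mirrors: reads can be serviced by all 12 disks, writes by 6 (one copy per mirror)
mirror_read = 12 * per_drive     # ~2160 MB/s
mirror_write = 6 * per_drive     # ~1080 MB/s

# 2x 6-wide RAIDZ2 (4 data + 2 parity per vdev): 8 data disks carry the payload either way
raidz2_read = 2 * 4 * per_drive   # ~1440 MB/s
raidz2_write = 2 * 4 * per_drive  # ~1440 MB/s

print(f"mirrors: read ~{mirror_read} MB/s, write ~{mirror_write} MB/s")
print(f"2x raidz2: read ~{raidz2_read} MB/s, write ~{raidz2_write} MB/s")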
 

wdp

Explorer
Joined
Apr 16, 2021
Messages
52
Okay, so it's all up and running, iperf is good, and read/write tests are good. But I can't seem to play back high-resolution video files at all: very stuttery and inconsistent until they're loaded into cache, it seems, and then they play back fine.

So I assume I'm moving on to tunables next? Time to do some research, but this was relatively unexpected behavior. We have a stock Synology on the network that plays the same files back fine.
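(One thing on the list to check, purely as a guess, is the dataset record size, since these are large sequential video files; 1M records are commonly suggested for media, and the change only affects newly written files.)

Code:
# check the current value on the share dataset
zfs get recordsize tank/Share1

# commonly suggested for large sequential media; only newly written files pick it up
zfs set recordsize=1M tank/Share1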
 

ClimbingKId

Cadet
Joined
Aug 25, 2021
Messages
6
@wdp I know this is an older thread, but I'm having the same issue. Going from 2 to 4 drives in a dual-vdev mirror, my reads are half my writes: from 4x WD Gold drives I'm only getting 101 MB/s across a 10Gb network. I had to go with RAIDZ2 to get 250+ MB/s on reads. zpool iostat shows that the reads on each drive are really low.

Did you find out what the issue was here with your setup?

Thanks
CC
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
dd if=/dev/zero of=/mnt/tank/Share1/tmp.dat bs=2048k count=50k
Not a valid or useful test unless you turn off compression on the target.

dd if=/mnt/tank/Share1/tmp.dat of=/dev/null bs=2048k count=50k
Only semi-valid as a test if you have rebooted between writing and the test and didn't have compression on.
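In practice that means something like this, using the dataset from the earlier posts:

Code:
# confirm the target dataset isn't compressing the zeros away before the write test
zfs get compression tank/Share1
zfs set compression=off tank/Share1

# then reboot (or export/import the pool) before the read pass so it isn't served from ARC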

You really should look into fio
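A rough starting point might look like the following; the path is the share from earlier, and the sizes are just examples chosen to comfortably exceed RAM so ARC doesn't flatter the read numbers.

Code:
# sequential write of a 128G test file at 1M blocks
fio --name=seqtest --directory=/mnt/tank/Share1 --rw=write --bs=1M --size=128G --ioengine=posixaio --end_fsync=1

# read the same file back (same --name so fio reuses it); reboot or export/import first to take ARC out of the picture
fio --name=seqtest --directory=/mnt/tank/Share1 --rw=read --bs=1M --size=128G --ioengine=posixaio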
 