H200 performance concern

enka

Cadet
Joined
Jun 27, 2022
Messages
9
I have a Dell H200 performance issue. My HW is:
Poweredge T710 - PCIe gen2
CPU 2x Intel(R) Xeon(R) CPU X5675 @ 3.07GHz
RAM 128GB ECC
H200 in IT mode in the dedicated storage slot (identifies as PERC with NO BIOS) -> pool0: a single RAIDZ2 vdev of 6x ST6000NM0034
H200 in IT mode in slot 4 (PCIe x8, x8 routing; identifies as 2008 with NO BIOS) -> pool1: a single RAIDZ2 vdev of 8x ST6000NM0034

When I replicate snapshots from pool0 to pool1 I am getting ~53MB/s read on pool0 per drive and ~57MB/s write on pool1 per drive in the TrueNAS Disk I/O report.
When copying files (e.g. a 120GB file) with Midnight Commander directly in the shell from pool0/pool1 to pool1/pool0 I am getting ~250MB/s.

I expected higher rates: since multiple devices each read or write a part of the file, it should transfer faster than a single drive, as it does in e.g. a classic RAID stripe set.

I have tested single-drive operation in Windows, and one drive gave me around 240MB/s, so a single drive in either pool is able to read or write around 240MB/s.
It looks very similar to https://forums.servethehome.com/index.php?threads/help-with-h200-itmode-r510-disk-performance.28169/

My concern is: why is the aggregate throughput not higher than a single drive's?


*consolidated my posts
 
Last edited:

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
I expected higher rates: since multiple devices each read or write a part of the file, it should transfer faster than a single drive, as it does in e.g. a classic RAID stripe set.
Well, the first problem is that this assumption is not true. First, you said RAID stripe set: RAIDZ2 is, first and foremost, not traditional RAID. Second, it's not a stripe. To have a RAIDZ2 stripe you would need multiple RAIDZ2 vdevs. A single RAIDZ2 vdev, and from your OP you have one of those in each pool, will generally perform at up to the speed of a single disk, the slowest disk in that vdev.

Replication does more than simply copy.

Other factors include encryption on one or both pools, what ZFS properties are set, etc.

Speeds should be monitored via zpool iostat with intervals.
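For example (a minimal sketch, using the pool names from your OP), run this in a second shell while the replication or copy is in progress:
Code:
# sample both pools every 5 seconds, 10 samples; the first sample is a
# cumulative average, so ignore it
zpool iostat pool0 pool1 5 10

# per-disk breakdown for a single pool
zpool iostat -v pool0 5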
 

enka

Cadet
Joined
Jun 27, 2022
Messages
9
Well, the first problem is that this assumption is not true. First, you said RAID stripe set: RAIDZ2 is, first and foremost, not traditional RAID.
I meant that I have created software RAID0 in Windows on the same HW to check throughput increase
Replication does more than simply copy.
So shouldn't I expect to have ALL I/Os reflected in a report?
Other factors include encryption on one or both pools, what ZFS properties are set, etc.

Speeds should be monitored via zpool iostat with intervals.

No compression or encryption. I will try iostat
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
I meant that I have created software RAID0 in Windows on the same HW to check throughput increase

So shouldn't I expect to have ALL I/Os reflected in a report?


No compression or encryption. I will try iostat
zpool iostat, not plain iostat; so run a command such as "zpool iostat 5 5" while your copy is in progress.

You cannot really compare a software RAID0 with a RAIDZ2; well, you can, I guess, but they are not even remotely similar. As mentioned, a RAIDZ2 vdev will have the IOPS of a single disk, at most.

But you should check all the properties; there are many more that matter, so I'd post them here: "zfs get all poolname".
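If you want a shorter list to start with, something like this (just a suggested subset, using your pool names) pulls the properties that most often affect throughput:
Code:
zfs get compression,recordsize,sync,atime,dedup,primarycache,logbias pool0 pool1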
 

enka

Cadet
Joined
Jun 27, 2022
Messages
9
As mentioned, a RAIDZ2 vdev will have the IOPS of a single disk, at most.
Really? So there is no benefit from data distribution across more drives?


While copying from pool1 -> pool0
Code:
zpool iostat 5 5
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      3     35  55.1K   603K
pool0       15.7T  17.0T      0    212  2.36K  31.8M
pool1       14.0T  29.6T    968     83  22.2M   735K
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      0     31  2.39K   506K
pool0       15.7T  17.0T      4  2.09K  20.0K   307M
pool1       14.0T  29.6T  9.62K     71   231M   465K
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      0     36      0   529K
pool0       15.7T  17.0T      0  2.07K  7.98K   315M
pool1       14.0T  29.6T  9.29K     77   219M   552K
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      0     28    817   524K
pool0       15.7T  17.0T      0  2.11K      0   331M
pool1       14.0T  29.6T  9.47K     75   223M   605K
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      0     34      0   530K
pool0       15.7T  17.0T      0  2.20K      0   338M
pool1       14.0T  29.6T  9.67K     74   222M   490K
----------  -----  -----  -----  -----  -----  -----


While copying from pool0 -> pool1
Code:
zpool iostat 5 5
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      3     35  54.5K   603K
pool0       15.7T  17.0T      9    275   345K  41.5M
pool1       14.0T  29.6T  1.22K     83  28.4M   846K
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      0     35      0   486K
pool0       15.7T  17.0T  12.5K      0   405M      0
pool1       14.0T  29.6T      0  2.79K      0   311M
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      0     31      0   487K
pool0       15.7T  17.0T  7.94K      0   257M      0
pool1       14.0T  29.6T      0  3.30K      0   326M
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      0     35      0   500K
pool0       15.7T  17.0T  7.38K      0   239M      0
pool1       14.0T  29.6T      0  3.33K      0   334M
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      3     36  19.2K   545K
pool0       15.7T  17.0T  6.95K      0   225M      0
pool1       14.0T  29.6T      0  3.09K      0   304M
----------  -----  -----  -----  -----  -----  -----

Code:
zpool iostat -v pool0 1
                                            capacity     operations     bandwidth
pool                                      alloc   free   read  write   read  write
----------------------------------------  -----  -----  -----  -----  -----  -----
pool0                                     15.7T  17.0T     55    249  1.88M  37.5M
  raidz2-0                                15.7T  17.0T     55    249  1.88M  37.5M
    5d77d1e5-4192-4d98-96e9-ccba4ea431e2      -      -      7     42   253K  6.24M
    8a488c27-f20e-4cd1-9f97-c1428c4c7bae      -      -     10     40   389K  6.25M
    f01be43e-a10c-45d4-a5be-f7c57f93d39e      -      -      9     41   308K  6.25M
    0554b2e5-c28f-46e4-9f30-b824ad0f175f      -      -      6     42   230K  6.24M
    740dc24d-5338-4a01-9e90-ffa91f5dceb6      -      -     11     41   403K  6.25M
    425b30c5-3875-4368-bcda-c48c3146ff5b      -      -     10     41   346K  6.25M
----------------------------------------  -----  -----  -----  -----  -----  -----


Code:
zpool iostat -v pool1 1
                                            capacity     operations     bandwidth
pool                                      alloc   free   read  write   read  write
----------------------------------------  -----  -----  -----  -----  -----  -----
pool1                                     14.0T  29.6T      0  3.35K      0   342M
  raidz2-0                                14.0T  29.6T      0  3.35K      0   342M
    4c78b9b1-1aaa-4631-aee2-d10cf3fe6b2b      -      -      0    442      0  43.5M
    35a33480-36eb-41a1-8114-89a945ba5fff      -      -      0    423      0  41.4M
    2b9d952f-be3d-4f22-8e59-5ea017200c09      -      -      0    432      0  42.6M
    d53b7900-32b0-498e-aca4-fc59f69ddeb8      -      -      0    411      0  41.2M
    a7d43421-5445-4e9f-9590-b8c2378ae1e4      -      -      0    440      0  42.7M
    bf7f13c2-deda-432f-82d2-fa339c9e346e      -      -      0    470      0  43.1M
    34df02c8-8d99-4c32-9d76-c30b5c693e51      -      -      0    392      0  43.7M
    717ee099-6c03-4e75-8d7e-9d6a19fae6e4      -      -      0    422      0  43.4M
----------------------------------------  -----  -----  -----  -----  -----  -----


Pool info

Code:
zfs get all pool0
NAME   PROPERTY              VALUE                  SOURCE
pool0  type                  filesystem             -
pool0  creation              Tue May 23  6:50 2023  -
pool0  used                  10.5T                  -
pool0  available             11.2T                  -
pool0  referenced            240K                   -
pool0  compressratio         1.00x                  -
pool0  mounted               yes                    -
pool0  quota                 none                   local
pool0  reservation           none                   local
pool0  recordsize            128K                   local
pool0  mountpoint            /mnt/pool0             default
pool0  sharenfs              off                    default
pool0  checksum              on                     local
pool0  compression           off                    local
pool0  atime                 off                    local
pool0  devices               on                     default
pool0  exec                  on                     local
pool0  setuid                on                     default
pool0  readonly              off                    local
pool0  zoned                 off                    default
pool0  snapdir               hidden                 local
pool0  aclmode               discard                local
pool0  aclinherit            passthrough            local
pool0  createtxg             1                      -
pool0  canmount              on                     default
pool0  xattr                 on                     default
pool0  copies                1                      local
pool0  version               5                      -
pool0  utf8only              off                    -
pool0  normalization         none                   -
pool0  casesensitivity       sensitive              -
pool0  vscan                 off                    default
pool0  nbmand                off                    default
pool0  sharesmb              off                    default
pool0  refquota              none                   local
pool0  refreservation        none                   local
pool0  guid                  3818826936774409018    -
pool0  primarycache          all                    default
pool0  secondarycache        all                    default
pool0  usedbysnapshots       0B                     -
pool0  usedbydataset         240K                   -
pool0  usedbychildren        10.5T                  -
pool0  usedbyrefreservation  0B                     -
pool0  logbias               latency                default
pool0  objsetid              54                     -
pool0  dedup                 off                    local
pool0  mlslabel              none                   default
pool0  sync                  standard               local
pool0  dnodesize             legacy                 default
pool0  refcompressratio      1.00x                  -
pool0  written               240K                   -
pool0  logicalused           10.5T                  -
pool0  logicalreferenced     50.5K                  -
pool0  volmode               default                default
pool0  filesystem_limit      none                   default
pool0  snapshot_limit        none                   default
pool0  filesystem_count      none                   default
pool0  snapshot_count        none                   default
pool0  snapdev               hidden                 local
pool0  acltype               posix                  local
pool0  context               none                   default
pool0  fscontext             none                   default
pool0  defcontext            none                   default
pool0  rootcontext           none                   default
pool0  relatime              off                    default
pool0  redundant_metadata    all                    default
pool0  overlay               on                     default
pool0  encryption            off                    default
pool0  keylocation           none                   default
pool0  keyformat             none                   default
pool0  pbkdf2iters           0                      default
pool0  special_small_blocks  0                      local



Code:
root@store1[~]# zfs get all pool1
NAME   PROPERTY              VALUE                  SOURCE
pool1  type                  filesystem             -
pool1  creation              Thu Jun 29  6:34 2023  -
pool1  used                  9.95T                  -
pool1  available             21.0T                  -
pool1  referenced            256K                   -
pool1  compressratio         1.03x                  -
pool1  mounted               yes                    -
pool1  quota                 none                   local
pool1  reservation           none                   local
pool1  recordsize            128K                   local
pool1  mountpoint            /mnt/pool1             default
pool1  sharenfs              off                    default
pool1  checksum              on                     local
pool1  compression           off                    local
pool1  atime                 off                    local
pool1  devices               on                     default
pool1  exec                  on                     local
pool1  setuid                on                     default
pool1  readonly              off                    local
pool1  zoned                 off                    default
pool1  snapdir               hidden                 local
pool1  aclmode               discard                local
pool1  aclinherit            passthrough            local
pool1  createtxg             1                      -
pool1  canmount              on                     default
pool1  xattr                 on                     default
pool1  copies                1                      local
pool1  version               5                      -
pool1  utf8only              off                    -
pool1  normalization         none                   -
pool1  casesensitivity       sensitive              -
pool1  vscan                 off                    default
pool1  nbmand                off                    default
pool1  sharesmb              off                    default
pool1  refquota              none                   local
pool1  refreservation        none                   local
pool1  guid                  14781240144893158555   -
pool1  primarycache          all                    default
pool1  secondarycache        all                    default
pool1  usedbysnapshots       0B                     -
pool1  usedbydataset         256K                   -
pool1  usedbychildren        9.95T                  -
pool1  usedbyrefreservation  0B                     -
pool1  logbias               latency                default
pool1  objsetid              54                     -
pool1  dedup                 off                    local
pool1  mlslabel              none                   default
pool1  sync                  standard               local
pool1  dnodesize             legacy                 default
pool1  refcompressratio      1.00x                  -
pool1  written               256K                   -
pool1  logicalused           10.3T                  -
pool1  logicalreferenced     50.5K                  -
pool1  volmode               default                default
pool1  filesystem_limit      none                   default
pool1  snapshot_limit        none                   default
pool1  filesystem_count      none                   default
pool1  snapshot_count        none                   default
pool1  snapdev               hidden                 local
pool1  acltype               posix                  local
pool1  context               none                   default
pool1  fscontext             none                   default
pool1  defcontext            none                   default
pool1  rootcontext           none                   default
pool1  relatime              off                    default
pool1  redundant_metadata    all                    default
pool1  overlay               on                     default
pool1  encryption            off                    default
pool1  keylocation           none                   default
pool1  keyformat             none                   default
pool1  pbkdf2iters           0                      default
pool1  special_small_blocks  0                      local
 

Attachments

  • T710HW.txt
    21.4 KB · Views: 48
Last edited:

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
Here's some reading for you:


Download the PDF and read it for tips on pool layout and performance. It is from iXsystems.

Your zpool iostat 5 5 shows pretty believable speeds. Perhaps other things were running at the same time, or perhaps the way you were measuring the speed was off. zpool iostat is the definitive measure of how much I/O ZFS is actually doing.

If you require more speed, that is a use case for multiple vdevs and mirrors, since data is striped across the mirror vdevs.

Note, you can somewhat exceed single-drive performance for a time on newer pools with file copies, but eventually it degrades unless you create a new pool and copy the data over.

And if you want no parity, you can make a stripe with ZFS; just don't use RAIDZ or mirrors. It will be fast.
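For reference, the vdev layouts look roughly like this at the command line (hypothetical device names; on TrueNAS you would normally build the pool through the GUI, this is only to illustrate the layouts):
Code:
# striped mirrors: IOPS and streaming scale with the number of mirror vdevs,
# at the cost of half the raw capacity
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf

# plain stripe, no redundancy: fastest, but losing any one drive loses the pool
zpool create scratch sda sdb sdc sdd sde sdf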
 
Last edited:

enka

Cadet
Joined
Jun 27, 2022
Messages
9
Thanks @sfatula for your answer.

I meant e.g. this one:
Code:
    4c78b9b1-1aaa-4631-aee2-d10cf3fe6b2b      -      -      0    442      0  43.5M
    35a33480-36eb-41a1-8114-89a945ba5fff      -      -      0    423      0  41.4M
    2b9d952f-be3d-4f22-8e59-5ea017200c09      -      -      0    432      0  42.6M
    d53b7900-32b0-498e-aca4-fc59f69ddeb8      -      -      0    411      0  41.2M
    a7d43421-5445-4e9f-9590-b8c2378ae1e4      -      -      0    440      0  42.7M
    bf7f13c2-deda-432f-82d2-fa339c9e346e      -      -      0    470      0  43.1M
    34df02c8-8d99-4c32-9d76-c30b5c693e51      -      -      0    392      0  43.7M
    717ee099-6c03-4e75-8d7e-9d6a19fae6e4      -      -      0    422      0  43.4M


Each drive writes around 40MB/s, which is well below the drive's ability, as from my observations a single drive can provide ~240MB/s.
There was no other activity at the time.
From some benchmarks I know that this drive can deliver around 240 IOPS read and write. I don't know exactly how to convert that to MB/s yet, so now I will focus on this and on the H200's IOPS. From lspci I know that the H200 DevCap MaxPayload is 4096 bytes. Now I have to figure out how that affects pool performance.
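As a rough sanity check (assuming throughput is simply IOPS times the I/O size, one record per I/O):
Code:
# throughput ~ IOPS * I/O size
echo $((240 * 128)) KiB/s    # 128 KiB I/Os:  30720 KiB/s, roughly 30 MB/s
echo $((240 * 1024)) KiB/s   # 1 MiB I/Os:   245760 KiB/s, roughly 240 MB/s

So at ~240 IOPS a drive only reaches its ~240MB/s streaming figure if each I/O is large; with small I/Os the same IOPS budget yields only tens of MB/s.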
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
This is a migrating thread. You started by comparing totally different things, but it's turning into something else. I will just ask whether you read the document; your OP was totally off base when comparing performance, and now you're talking about theoretical sequential write speeds.

ZFS replication is not a file copy and will not likely produce sequential writes (especially on a COW filesystem that has been in use for a while and has fragmentation); they will be random. You can't just compare copying a single large file with replication, which deals with all changes. Replication works in transaction order, which is more likely to be essentially random, and you've provided no data on what exactly is being replicated, whether it is a first-time replication, etc. It can depend on so many factors. For example, are the actual files in the replication set mostly small or large? Lots of small files will be slow, and there is metadata to update, parity to calculate and write; replication is single threaded and does extra integrity checks, etc.

What commands were used to do the replication (both send and receive, any buffer in the middle, what options, etc.)? That matters too. How much data was being replicated (just the changes, or an initial load)? What is the ashift of the pools?

While a copy of a single file may show higher throughput, replication generally doesn't copy that much data; it only copies changed blocks and doesn't have to walk files that are unchanged. So a send/recv showing a lower MB/s will likely still finish sooner than, say, an rsync, which copies whole files at a higher rate.
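For the ashift question, something like this will show it (just a suggestion; adjust the pool names as needed):
Code:
zpool get ashift pool0 pool1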

I'll let someone else chase this down the rabbit hole, but a lot more details are needed to do more than guess. The main point I was making is this: it will never be as fast as the numbers you reported in the OP for other systems / filesystem types / pool types, because those are not comparable to ZFS RAIDZ2. The original question I was answering was "My concern is: why is the aggregate throughput not higher than a single drive's?", and the document answers that question for you. RAIDZ2 is not a stripe, and replication is not a file copy on a simple stripe. I'm just saying I've seen "low" numbers like this on replications; it is not surprising to me.
 
Last edited:

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Also please use proper ZFS terminology.

The following reading is also suggested:

As @sfatula wrote, there are other things to consider besides the pool's (and vdevs') layout and a possible HBA bottleneck (I really doubt that's the case, by the way; to me this looks like an IOPS bottleneck): network limitations, CPU limitations and, mostly, recordsize/IOPS limitations. I haven't looked deep into this thread, but it's pretty normal to get 40MB/s per drive with small files due to the low IOPS of HDDs: basically, the 240MB/s figure refers to streaming reads of large files. This can be somewhat improved by setting the dataset's recordsize property to a value adequate for the files you intend to store, lowering the IOPS required for each file.
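For example (a hypothetical dataset name; the new recordsize only applies to blocks written after the change):
Code:
zfs set recordsize=1M pool0/archive
zfs get recordsize pool0/archive
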
Tuning the recordsize, alongside a pool layout that matches your needs (e.g. for database use mirrors are suggested because, again, of better IOPS performance), is the only way to improve performance without touching the ARC side of ZFS: increasing the RAM (and by extension, especially on CORE, the ARC) is a good way to improve reads; depending on your requirements and workloads, L2ARC or metadata vdevs might be solutions... but those are specialized tools for professional use.

The point is, you have to carefully study your needs in order to get the performance you want, since ZFS is not forgiving from a layout point of view (once you set something, you rarely get the option of backpedalling without data loss).

About how to get the IOPS value if not stated by the manufacturer:
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
No compression
The default LZ4 compression should actually increase performance in most conditions, even on relatively "older" systems like your T710. Is there a specific reason you disabled it?
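If there's no particular reason, you can turn it back on at the pool or dataset level with something like the following (only new writes get compressed; existing data stays as it is):
Code:
zfs set compression=lz4 pool0
zfs set compression=lz4 pool1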
 

enka

Cadet
Joined
Jun 27, 2022
Messages
9
My first observation was that replication of the source pool0/datasetx to the destination pool1/datasetx was slow. So I created another dataset under a temporary name on each pool, separate from the snapshots of datasetx, copied a 200GB file to both temp datasets, and then copied it from one pool to the other under a changed name. There was nothing running in the background and only one copy at a time. As the copy operations were conducted on space separate from the other datasets in both pools, my assumption is that the copying goes to contiguous free space in the pools and drives.
I know about the additional activity that takes place when operations run on an existing dataset with data while something else is running in the background, so I minimised all interference.
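In commands, the test was roughly this (dataset and file names here are just placeholders):
Code:
zfs create pool0/tmptest
zfs create pool1/tmptest
# seed both temp datasets with the same 200GB file
cp /mnt/pool0/datasetx/bigfile.bin /mnt/pool0/tmptest/
cp /mnt/pool0/tmptest/bigfile.bin /mnt/pool1/tmptest/
# then copy between the pools under a new name while watching zpool iostat
cp /mnt/pool0/tmptest/bigfile.bin /mnt/pool1/tmptest/bigfile_copy.bin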

Is it possible that ZFS adds so much overhead that only 50MB/s is possible on a drive capable of over 200MB/s, or is there a bottleneck?
My concern is: why is the aggregate throughput not higher than a single drive's?
To be more precise: why is the sum of the bandwidth not much higher than what a single drive can achieve?

PS
COW should not affect my test scenario, as I create a new file, NOT modify an existing one.
My box is NOT used by users.

smartctl -x /dev/sdb
Logical block size: 512 bytes
Physical block size: 4096 bytes

all drives are the same
 
Last edited:

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
To be more precise: why is the sum of the bandwidth not much higher than what a single drive can achieve?
As written in the first resource that was linked to you:
I/O operations on a RAIDZ vdev need to work with a full block, so each disk in the vdev needs to be synchronized and operating on the sectors that make up that block. No other operation can take place on that vdev until all the disks have finished reading from or writing to those sectors. Thus, IOPS on a RAIDZ vdev will be that of a single disk. While the number of IOPS is limited, the streaming speeds (both read and write) will scale with the number of data disks. Each disk needs to be synchronized in its operations, but each disk is still reading/writing unique data and will thus add to the streaming speeds, minus the parity level as reading/writing this data doesn’t add anything new to the data stream.​

Add in what you have already been told about recordsize & co., and you get your lower speeds.
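As a rough back-of-the-envelope for your 6-wide RAIDZ2 (4 data disks, default 128K recordsize), not an exact model:
Code:
# each 128 KiB record is spread over the 4 data disks
echo $((128 / 4)) KiB per data disk per record     # 32
# best-case streaming scales with the number of data disks
echo $((4 * 240)) MB/s theoretical streaming       # ~960 MB/s
# but random/small-record IOPS stay at roughly one disk's worth (~240)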
 

enka

Cadet
Joined
Jun 27, 2022
Messages
9
I changed the recordsize on both test datasets and am now getting reasonable throughput:
Code:
                                            capacity     operations     bandwidth
pool                                      alloc   free   read  write   read  write
----------------------------------------  -----  -----  -----  -----  -----  -----
pool0                                     15.3T  17.5T    966      0   716M      0
  raidz2-0                                15.3T  17.5T    966      0   716M      0
    5d77d1e5-4192-4d98-96e9-ccba4ea431e2      -      -    239      0   179M      0
    8a488c27-f20e-4cd1-9f97-c1428c4c7bae      -      -      4      0  39.9K      0
    f01be43e-a10c-45d4-a5be-f7c57f93d39e      -      -      0      0      0      0
    0554b2e5-c28f-46e4-9f30-b824ad0f175f      -      -    234      0   176M      0
    740dc24d-5338-4a01-9e90-ffa91f5dceb6      -      -    234      0   181M      0
    425b30c5-3875-4368-bcda-c48c3146ff5b      -      -    253      0   180M      0
----------------------------------------  -----  -----  -----  -----  -----  -----
                                            capacity     operations     bandwidth
pool                                      alloc   free   read  write   read  write
----------------------------------------  -----  -----  -----  -----  -----  -----
pool0                                     15.3T  17.5T  1.21K      0   691M      0
  raidz2-0                                15.3T  17.5T  1.21K      0   692M      0
    5d77d1e5-4192-4d98-96e9-ccba4ea431e2      -      -    354      0   173M      0
    8a488c27-f20e-4cd1-9f97-c1428c4c7bae      -      -      5      0  47.9K      0
    f01be43e-a10c-45d4-a5be-f7c57f93d39e      -      -      0      0      0      0
    0554b2e5-c28f-46e4-9f30-b824ad0f175f      -      -    265      0   174M      0
    740dc24d-5338-4a01-9e90-ffa91f5dceb6      -      -    342      0   170M      0
    425b30c5-3875-4368-bcda-c48c3146ff5b      -      -    274      0   174M      0
----------------------------------------  -----  -----  -----  -----  -----  -----
                                            capacity     operations     bandwidth
pool                                      alloc   free   read  write   read  write
----------------------------------------  -----  -----  -----  -----  -----  -----
pool0                                     15.3T  17.5T    939      0   693M      0
  raidz2-0                                15.3T  17.5T    938      0   692M      0
    5d77d1e5-4192-4d98-96e9-ccba4ea431e2      -      -    214      0   174M      0
    8a488c27-f20e-4cd1-9f97-c1428c4c7bae      -      -    157      0   102M      0
    f01be43e-a10c-45d4-a5be-f7c57f93d39e      -      -    173      0   104M      0
    0554b2e5-c28f-46e4-9f30-b824ad0f175f      -      -    222      0   174M      0
    740dc24d-5338-4a01-9e90-ffa91f5dceb6      -      -     87      0  69.9M      0
    425b30c5-3875-4368-bcda-c48c3146ff5b      -      -     81      0  68.8M      0
----------------------------------------  -----  -----  -----  -----  -----  -----
                                            capacity     operations     bandwidth
pool                                      alloc   free   read  write   read  write
----------------------------------------  -----  -----  -----  -----  -----  -----
pool0                                     15.3T  17.5T  1.04K      0   698M      0
  raidz2-0                                15.3T  17.5T  1.04K      0   698M      0
    5d77d1e5-4192-4d98-96e9-ccba4ea431e2      -      -    272      0   173M      0
    8a488c27-f20e-4cd1-9f97-c1428c4c7bae      -      -    252      0   176M      0
    f01be43e-a10c-45d4-a5be-f7c57f93d39e      -      -    266      0   175M      0
    0554b2e5-c28f-46e4-9f30-b824ad0f175f      -      -    268      0   174M      0
    740dc24d-5338-4a01-9e90-ffa91f5dceb6      -      -      5      0  47.9K      0
    425b30c5-3875-4368-bcda-c48c3146ff5b      -      -      0      0      0      0
----------------------------------------  -----  -----  -----  -----  -----  -----


Code:
zpool iostat 5 5
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      7     35   139K   593K
pool0       15.3T  17.5T  1.82K    252  75.9M  11.2M
pool1       13.8T  29.8T    679  1.79K  7.88M   109M
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      0     30      0   487K
pool0       15.3T  17.5T  10.1K      0   629M      0
pool1       13.8T  29.8T      1  1.17K  14.4K   766M
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      0     31      0   505K
pool0       15.3T  17.5T  9.71K      0   597M      0
pool1       13.8T  29.8T      7  1.29K   153K   794M
----------  -----  -----  -----  -----  -----  -----
boot-pool   14.8G  24.2G      0     27      0   470K
pool0       15.3T  17.5T  9.00K      0   574M      0
pool1       13.8T  29.8T      2  1.07K  51.9K   784M


recordsize_1M.JPG


A 1M recordsize on datasets that serve as archive-like storage should serve the purpose.
Thanks for your help.
 


enka

Cadet
Joined
Jun 27, 2022
Messages
9
I/O operations on a RAIDZ vdev need to work with a full block, so each disk in the vdev needs to be synchronized and operating on the sectors that make up that block.
Shouldn't it state:
I/O operations on a RAIDZ vdev need to work with a full record, so each disk in the vdev needs to be synchronized and operating on the sectors that make up that record.
 