RAIDTester
Dabbler
- Joined: Jan 23, 2017
- Messages: 45
Hi all,
I've been racking my brain over this case, researching heavily for a good 40+ hours. I finally decided to reach out for help.
TL;DR: Getting 60-100 MB/s reads over iSCSI on dual direct-wired Intel 10GbE links, when we're expecting 500+ MB/s read performance.
I inherited this [unique?] case on a "Storinator"
Current Config
- 128 GB RAM
- 2x HighPoint Rocket 750 HBAs
- 30x 6TB WD Se drives (30 more on hand, not used yet)
- ZPool of 3 vdevs x 10 disks in RAIDZ2
- 1 SSD for ZIL (SLOG)
- 1 SSD for L2ARC
- 2x Intel 10GbE NICs, direct wired to the server
- Exported as a single 64TB iSCSI target to Windows Server 2012, NTFS with a 64k cluster size
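For reference, here's a quick way to sanity-check the layout and watch per-disk load while a job runs (I'm assuming the pool is literally named "zpool", matching the zfs commands further down):
zpool status zpool   # confirm the 3x 10-disk RAIDZ2 vdevs plus the log and cache devices
zpool iostat -v zpool 5   # per-vdev bandwidth and IOPS, refreshed every 5 seconds
gstat -p   # live per-disk busy % while a consolidation job runs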
We are storing and processing backups for ~60 servers.
Files consist of a base file (usually 100GB to over 1TB) plus daily incremental backups.
Backups are verified every night.
Every Saturday the nightly backups are rolled up into a weekly backup, and at the end of each month everything is consolidated further, clearing out old data per the retention policy.
This is a really disk-intensive process, since the software has to read the base files, walk the chain of past monthly/weekly/daily backups, and write out a new consolidated file.
We process up to 10 backups like this simultaneously.
We were seeing dismal performance: 150 MB/s peak reads at best.
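(For completeness: to rule out the raw network path, a plain iperf run between the two boxes should show whether the dual 10GbE links can hit line rate. This assumes iperf is available on both ends, and the IP below is just a placeholder.)
iperf -s   # on the FreeNAS box
iperf -c 10.0.0.10 -t 30 -P 4   # on the Windows host: 30-second test, 4 parallel streams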
After tons of research, tweaking, and watching, over and over, I concluded that the RAM can't effectively cache such a huge iSCSI target, and I made the changes below.
I don't know if throwing more RAM at the machine will help, but we're ready to spend the money if it will deliver.
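If it helps anyone weigh in on the RAM question, the ARC counters can be checked like this (sysctl names from stock FreeBSD ZFS; they may vary slightly between FreeNAS versions):
sysctl kstat.zfs.misc.arcstats.size   # current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses   # cumulative hit/miss counters
sysctl vfs.zfs.arc_max   # configured ARC ceiling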
Prevent too many queued commands to disk
sysctl vfs.zfs.vdev.max_active=10
I played with this one a LOT: tried 1, then 2, and finally left it at 10.
https://forums.freenas.org/index.ph...ve-previously-vfs-zfs-vdex-max_pending.19212/
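If the single global knob turns out to be too blunt, the same I/O scheduler also exposes per-class limits (assuming this FreeNAS build has them alongside max_active; the values here are only examples, not what we're running):
sysctl vfs.zfs.vdev.sync_read_max_active=10
sysctl vfs.zfs.vdev.async_read_max_active=3
sysctl vfs.zfs.vdev.async_write_max_active=10
To survive a reboot, any of these would go in as Sysctl-type Tunables in the FreeNAS GUI rather than one-off sysctl calls.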
Only cache metadata
zfs set primarycache=metadata zpool
zfs set secondarycache=metadata zpool
I think this is the only thing that made a real difference so far
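For anyone following along, the current values can be confirmed (and reverted to the default of "all" if this turns out to hurt) with:
zfs get primarycache,secondarycache zpool
zfs set primarycache=all zpool   # revert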
Allow prefetch
sysctl vfs.zfs.l2arc_noprefetch=0
Let L2ARC fill up faster
sysctl vfs.zfs.l2arc_write_max=67108864
sysctl vfs.zfs.l2arc_write_boost=67108864
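(67108864 bytes is 64 MiB per L2ARC feed interval, up from the 8 MiB default.) Whether the L2ARC is actually filling and getting hits can be watched with:
sysctl kstat.zfs.misc.arcstats.l2_size   # bytes currently held in L2ARC
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses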
Increase prefetch streams to match the total CPU cores (24)
sysctl vfs.zfs.zfetch.max_streams=24
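Whether prefetch is actually paying off should show up in the zfetch counters (names assumed from stock FreeBSD ZFS):
sysctl kstat.zfs.misc.zfetchstats.hits
sysctl kstat.zfs.misc.zfetchstats.misses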
Since making these changes (mostly the primary/secondary cache=metadata ones), we've been seeing a very slow rise in performance, which I attribute to the cache gradually filling with metadata.
I don't know if it will continue to get faster as metadata gets cached.
Am I doing this right? Should we be using CIFS instead, so that ZFS can track metadata on the actual files, instead of a giant iSCSI target that it has no idea about?
If we can't get 500-600 MB/s reads out of this machine, we're going to have to scrap it.
Anything would help at this point.
Thanks!