tryingtoflash
Cadet
Joined: Oct 29, 2017
Messages: 3
I have a newer HP ML10 server with a couple of LSI SAS2008 cards connected to an external case full of newer 2TB to 6TB drives. The pool is organized as 5 vdevs of mirrored pairs. I have a much older IBM rack server at another site with two of the same LSI SAS2008 cards, and I use ZFS replication to back up the primary server to it. In fact, I rotate disks out of the primary server when they pass their warranty date, so the older server is actually running disks that were retired from the primary server.
The problem I'm having is that the primary server takes too long to complete a zfs scrub. It's reporting scrub speeds like "60.0M/s" and takes several days to finish a scrub. The much older IBM server can scrub its pool (which is organized in the same 5x2 pattern) in 8-10 hours.
Can someone give me some ideas on what to look at to try to determine why the new server is so much slower at zfs scrubs?
Code:
>>>> Some key information about the primary server <<<<

FreeBSD 11.1-RELEASE-p4 #0: Tue Nov 14 06:12:40 UTC 2017
CPU: Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz (3092.91-MHz K8-class CPU)
real memory  = 17179869184 (16384 MB)
avail memory = 16569171968 (15801 MB)
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
FreeBSD/SMP: 1 package(s) x 4 core(s)
mps0: <Avago Technologies (LSI) SAS2008> port 0x4000-0x40ff mem 0xfbef0000-0xfbef3fff,0xfbe80000-0xfbebffff irq 16 at device 0.0 on pci1
mps0: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
mps0: IOCCapabilities: 1285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>
mps1: <Avago Technologies (LSI) SAS2008> port 0x5000-0x50ff mem 0xfbff0000-0xfbff3fff,0xfbf80000-0xfbfbffff irq 17 at device 0.0 on pci2
mps1: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
mps1: IOCCapabilities: 1285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>
ses0 at ahciem0 bus 0 scbus8 target 0 lun 0
ses0: <AHCI SGPIO Enclosure 1.00 0001> SEMB S-E-S 2.00 device
ses0: SEMB SES Device

dmesg shows all the disks are connected at SATA III speeds:
da0: 600.000MB/s transfers
da1: 600.000MB/s transfers
da2: 600.000MB/s transfers
da3: 600.000MB/s transfers
---and so on

and ashift of all disks in the pool is 12

>>>> Same information about the backup server <<<<

FreeBSD 11.1-RELEASE-p4 #0: Tue Nov 14 06:12:40 UTC 2017
CPU: Intel(R) Xeon(R) CPU E5530 @ 2.40GHz (2400.13-MHz K8-class CPU)
real memory  = 21474836480 (20480 MB)
avail memory = 20724740096 (19764 MB)
FreeBSD/SMP: Multiprocessor System Detected: 16 CPUs
FreeBSD/SMP: 2 package(s) x 4 core(s) x 2 hardware threads
mps0: <Avago Technologies (LSI) SAS2008> port 0x3000-0x30ff mem 0x97b40000-0x97b43fff,0x97b00000-0x97b3ffff irq 26 at device 0.0 on pci4
mps0: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
mps0: IOCCapabilities: 1285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>
mps1: <Avago Technologies (LSI) SAS2008> port 0x2000-0x20ff mem 0x97a40000-0x97a43fff,0x97a00000-0x97a3ffff irq 32 at device 0.0 on pci6
mps1: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
mps1: IOCCapabilities: 1285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>
ses0 at mps0 bus 0 scbus0 target 9 lun 0
ses0: <LSILOGIC SASX28 A.1 7016> Fixed Enclosure Services SCSI-3 device
ses0: 300.000MB/s transfers
ses0: Command Queueing enabled
ses0: SCSI-3 ENC Device

dmesg shows a range of speeds, which makes sense because some of the disks are much older:
da0: 300.000MB/s transfers
da1: 300.000MB/s transfers
da2: 150.000MB/s transfers
da3: 150.000MB/s transfers
---and so on

and ashift of all disks in the pool is 12
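For scale, here is the back-of-envelope math I'm using to judge "too long". The 10 TiB allocated figure below is just a placeholder assumption, not my actual pool size; substitute the ALLOC value from `zpool list` for your pool:

```shell
#!/bin/sh
# Rough scrub-time estimate: allocated data divided by observed scrub rate.
# NOTE: alloc_tib=10 is an assumed placeholder -- use your pool's ALLOC value.
alloc_tib=10      # allocated data in TiB (see: zpool list -o name,alloc)
speed_mibs=60     # observed scrub rate in MiB/s, from zpool status

# TiB -> MiB is * 1024 * 1024; divide by MiB/s for seconds, then to hours.
seconds=$(( alloc_tib * 1024 * 1024 / speed_mibs ))
hours=$(( seconds / 3600 ))
echo "At ${speed_mibs} MiB/s, scrubbing ${alloc_tib} TiB takes ~${hours} hours"
```

At 60 MiB/s that works out to roughly two days per 10 TiB, which matches the "several days" I'm seeing; the backup server finishing in 8-10 hours implies its effective scrub rate is several times higher.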