I have a ZFS server with 2 x mirrored/striped (aka RAID10) pools.
One pool is 8 x 3TB 7200rpm Toshiba drives and holds an iSCSI volume. The other pool is 12 x 4TB drives and is used for SMB shares. (I put a zpool status dump at the bottom of the post)
Currently I'm using 2 x HGST HUSSL4010BSS600 SSDs as my SLOG devices. Each drive has 2 x 10GB partitions, with one partition assigned to each pool. As a result, each pool gets a mirrored SLOG made up of one 10GB partition from each SSD.
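For clarity, this is roughly how that layout maps onto commands (reconstructed from memory, so treat it as a sketch rather than an exact history; da20/da21 are the two HGST SSDs as they appear in the zpool status below):
Code:
# Two 10GB freebsd-zfs partitions on each HUSSL4010BSS600
gpart create -s gpt da20
gpart add -t freebsd-zfs -s 10G da20    # -> da20p1 (mainzpool SLOG)
gpart add -t freebsd-zfs -s 10G da20    # -> da20p2 (iscsizpool SLOG)
gpart create -s gpt da21
gpart add -t freebsd-zfs -s 10G da21    # -> da21p1
gpart add -t freebsd-zfs -s 10G da21    # -> da21p2

# One mirrored log vdev per pool, one partition from each SSD
zpool add mainzpool log mirror da20p1 da21p1
zpool add iscsizpool log mirror da20p2 da21p2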
I'm planning to upgrade to 10GbE as soon as possible, and my concern is that the HGST drives will be a bottleneck for sync writes. I have a single Dell PERC H200 (flashed to IT mode) as the only HBA for all the drives.
So my initial questions are as follows...
1. Would the 2 x HGST drives be a bottleneck on a 10GbE network?
2. Would adding another 2 x HGST drives, maybe on a separate HBA, help? (There's a sketch of what I mean after this list.)
3. Assuming 4 SLOG drives won't help, what's the next step? I'm considering the Intel Optane 800P. This is a home system, so the write load isn't too intense and the lifespan should be decent.
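To make question 2 concrete: as I understand it, a second pair of HGST SSDs would become a second mirrored log vdev in each pool, and ZFS would stripe sync writes across the two log mirrors. Hypothetically, assuming the new drives showed up as da22/da23 and were partitioned the same way as the current pair:
Code:
# Hypothetical second SLOG mirror per pool; da22/da23 are made-up device names
zpool add mainzpool log mirror da22p1 da23p1
zpool add iscsizpool log mirror da22p2 da23p2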
One possible option I'm considering is a QNAP QM2-4P-384 (they work fine in a regular PC) loaded with 4 x Optane 800P 58GB drives. That would let me mix and match the Optanes as SLOG or L2ARC to my heart's content while only using a single PCIe slot. As far as I can tell, it also works out cheaper than any other option (I think...).
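One way that mix could look (the device names are just a guess at what FreeBSD would call the NVMe drives, e.g. nvd0-nvd3, and the split between SLOG and L2ARC is only an example):
Code:
# Hypothetical: 4 x Optane 800P 58GB on the QM2-4P-384, enumerated as nvd0-nvd3
zpool remove iscsizpool mirror-4            # drop the old HGST log mirror
zpool add iscsizpool log mirror nvd0 nvd1   # Optane SLOG mirror for the iSCSI pool
zpool add mainzpool cache nvd2 nvd3         # remaining pair as L2ARC (no mirror needed)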
Anyway... I'd appreciate any advice people can provide! Thank you :)
Here's the zpool status of my current pools:
Code:
  pool: iscsizpool
 state: ONLINE
  scan: scrub repaired 0 in 0 days 02:43:28 with 0 errors on Sun Jul 22 02:43:28 2018
config:

	NAME                                            STATE     READ WRITE CKSUM
	iscsizpool                                      ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/a1be2608-6d6d-11e8-88ef-bc5ff457ff46  ONLINE       0     0     0
	    gptid/a4bb11af-6d6d-11e8-88ef-bc5ff457ff46  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/a7b5889c-6d6d-11e8-88ef-bc5ff457ff46  ONLINE       0     0     0
	    gptid/aa99e2dc-6d6d-11e8-88ef-bc5ff457ff46  ONLINE       0     0     0
	  mirror-2                                      ONLINE       0     0     0
	    gptid/ad789ffc-6d6d-11e8-88ef-bc5ff457ff46  ONLINE       0     0     0
	    gptid/b07707c9-6d6d-11e8-88ef-bc5ff457ff46  ONLINE       0     0     0
	  mirror-3                                      ONLINE       0     0     0
	    gptid/b385731d-6d6d-11e8-88ef-bc5ff457ff46  ONLINE       0     0     0
	    gptid/b6834739-6d6d-11e8-88ef-bc5ff457ff46  ONLINE       0     0     0
	logs
	  mirror-4                                      ONLINE       0     0     0
	    da20p2                                      ONLINE       0     0     0
	    da21p2                                      ONLINE       0     0     0

errors: No known data errors

  pool: mainzpool
 state: ONLINE
  scan: scrub repaired 0 in 0 days 13:04:52 with 0 errors on Sun Aug 12 13:04:53 2018
config:

	NAME                                            STATE     READ WRITE CKSUM
	mainzpool                                       ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/bfac677c-0366-11e8-9403-d850e6c2dc19  ONLINE       0     0     0
	    gptid/c0c26f20-0366-11e8-9403-d850e6c2dc19  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/64b27c2c-0388-11e8-9c5b-d850e6c2dc19  ONLINE       0     0     0
	    gptid/65f3d003-0388-11e8-9c5b-d850e6c2dc19  ONLINE       0     0     0
	  mirror-2                                      ONLINE       0     0     0
	    gptid/2c8df379-0389-11e8-9c5b-d850e6c2dc19  ONLINE       0     0     0
	    gptid/2d659037-0389-11e8-9c5b-d850e6c2dc19  ONLINE       0     0     0
	  mirror-3                                      ONLINE       0     0     0
	    gptid/4d203a23-0389-11e8-9c5b-d850e6c2dc19  ONLINE       0     0     0
	    gptid/4e21bbb5-0389-11e8-9c5b-d850e6c2dc19  ONLINE       0     0     0
	  mirror-4                                      ONLINE       0     0     0
	    gptid/24be0f82-1d31-11e8-b705-d850e6c2dc19  ONLINE       0     0     0
	    gptid/25df158d-1d31-11e8-b705-d850e6c2dc19  ONLINE       0     0     0
	  mirror-5                                      ONLINE       0     0     0
	    gptid/87c25849-1e6a-11e8-81c2-d850e6c2dc19  ONLINE       0     0     0
	    gptid/89cd92c0-1e6a-11e8-81c2-d850e6c2dc19  ONLINE       0     0     0
	logs
	  mirror-6                                      ONLINE       0     0     0
	    da20p1                                      ONLINE       0     0     0
	    da21p1                                      ONLINE       0     0     0

errors: No known data errors