mpyusko
Dabbler
Joined: Jul 5, 2019
Messages: 49
This is my current build:
Dell R710
2x Intel Xeon L5640 CPUs (dual hexa-core, 24 threads)
96 GB DDR3-1333 Registered ECC DIMMs
250 GB Samsung 850 EVO (L2ARC)
1 TB Intel 660p (ZIL/SLOG) PCIe x4 bus
6x 3 TB WD RED NAS 5400 RPM (raidz2)
16 GB Sandisk Ultra USB 3.0 (internal USB 2.0 port)
LSI SAS 9211-8i HBA, PCIe x8 @ 5 GT/s
4x Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet
In the coming weeks I plan to
- Upgrade the CPUs to X5660s
- Increase RAM significantly (up to 160 GB)
- Replace the L2ARC with a 500 GB 860 EVO
- Replace the boot flash drive with the 250 GB 850 EVO
Now the hardware upgrades are pretty basic, but I'm not clear on upgrading the L2ARC and then reusing the old SSD for the boot device. As it sits, everything runs off the LSI card except the ZIL, which sits on an NVMe-to-PCIe adapter. While I can use that as additional storage, the BIOS doesn't allow me to boot from it, which leaves me with the more traditional SATA or USB boot methods. Seven of the HBA's eight ports are populated, so I don't want to put the boot device there, as it would prevent me from adding a mirror for the cache later. That leaves the on-board SATA-II controller that was previously occupied by the optical drive. (I pulled the optical drive and replaced it with a 2.5" drive adapter for the L2ARC.) My experience has always been to image or mirror the USB flash drive when swapping to a newer device.
Code:
root@cygnus[~]# zpool status ZFSvol
  pool: ZFSvol
 state: ONLINE
  scan: scrub repaired 0 in 0 days 07:05:52 with 0 errors on Sun May 10 07:05:53 2020
config:

	NAME                                            STATE     READ WRITE CKSUM
	ZFSvol                                          ONLINE       0     0     0
	  raidz2-0                                      ONLINE       0     0     0
	    gptid/0018b084-d4be-11e9-a222-782bcb3282a1  ONLINE       0     0     0
	    gptid/01ed4943-d4be-11e9-a222-782bcb3282a1  ONLINE       0     0     0
	    gptid/0324d9c9-d4be-11e9-a222-782bcb3282a1  ONLINE       0     0     0
	    gptid/044f8747-d4be-11e9-a222-782bcb3282a1  ONLINE       0     0     0
	    gptid/05515586-d4be-11e9-a222-782bcb3282a1  ONLINE       0     0     0
	    gptid/0659db94-d4be-11e9-a222-782bcb3282a1  ONLINE       0     0     0
	logs
	  gptid/076fa604-d4be-11e9-a222-782bcb3282a1    ONLINE       0     0     0
	cache
	  gptid/0702d603-d4be-11e9-a222-782bcb3282a1    ONLINE       0     0     0

errors: No known data errors
root@cygnus[~]#
Code:
root@cygnus[~]# zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
ZFSvol        16.2T  9.78T  6.47T        -         -     9%    60%  1.00x  ONLINE  /mnt
freenas-boot    14G  3.54G  10.5G        -         -      -    25%  1.00x  ONLINE  -
root@cygnus[~]#
So here are my questions...
- To replace the L2ARC device:
- Remove the cache in the WebGUI
- Replace the hardware
- Assign the new SSD to the cache in the WebGUI
- To replace the Boot device:
- Can I then add the old SSD to the integrated SATA-II controller,
- boot normally
- add it as a mirror to the Boot volume in the WebGUI
- once the resilvering is complete, remove the USB flashdrive from the mirror set in the WebGUI
- reboot
- configure BIOS to boot from the SSD instead
- boot to FreeNAS?
- Since the 2 VMs are always running, should I reduce the ARC maximum by 16 GB?
- If yes, how?
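For context, here is a rough sketch of what those steps amount to at the `zpool` level. This is illustrative only: on FreeNAS the WebGUI is the supported path (it handles partitioning and bootcode for you), and every device name below other than the gptid and `da7p2` from my output is an assumption, not taken from this system.

```shell
# --- L2ARC swap: cache vdevs can be removed and re-added without risk to data ---
zpool remove ZFSvol gptid/0702d603-d4be-11e9-a222-782bcb3282a1  # old 250 GB 850 EVO
# ...swap the hardware, then add the new SSD (device name assumed):
zpool add ZFSvol cache ada1p1                                   # new 500 GB 860 EVO

# --- Boot migration: attach the SSD as a mirror, then detach the USB stick ---
zpool attach freenas-boot da7p2 ada0p2  # ada0p2 assumed; da7p2 is the USB stick
zpool status freenas-boot               # wait until the resilver completes
zpool detach freenas-boot da7p2         # drop the USB flash drive from the mirror

# --- ARC cap: set the vfs.zfs.arc_max loader tunable (System -> Tunables) ---
# e.g. with the current 96 GiB of RAM, reserving 16 GiB for the VMs:
#   vfs.zfs.arc_max = 85899345920      # (96 - 16) * 1024^3 bytes
```

The ARC tunable takes effect at boot; recompute the value after the RAM upgrade.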
Code:
root@cygnus[~]# zpool iostat -v
                                                  capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
ZFSvol                                          9.78T  6.47T    489     97  43.6M  1.28M
  raidz2                                        9.78T  6.47T    489     84  43.6M   838K
    gptid/0018b084-d4be-11e9-a222-782bcb3282a1      -      -     30     23  11.0M   289K
    gptid/01ed4943-d4be-11e9-a222-782bcb3282a1      -      -     34     23  11.0M   291K
    gptid/0324d9c9-d4be-11e9-a222-782bcb3282a1      -      -     28     22  11.1M   285K
    gptid/044f8747-d4be-11e9-a222-782bcb3282a1      -      -     34     23  11.0M   289K
    gptid/05515586-d4be-11e9-a222-782bcb3282a1      -      -     58     24  11.0M   291K
    gptid/0659db94-d4be-11e9-a222-782bcb3282a1      -      -     58     23  11.0M   285K
logs                                                -      -      -      -      -      -
  gptid/076fa604-d4be-11e9-a222-782bcb3282a1    3.05M   952G      0     12      6   468K
cache                                               -      -      -      -      -      -
  gptid/0702d603-d4be-11e9-a222-782bcb3282a1     182G  51.0G      0     11  22.9K  1.22M
----------------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                                    3.54G  10.5G      0     12  4.42K   189K
  da7p2                                         3.54G  10.5G      0     12  4.42K   189K
----------------------------------------------  -----  -----  -----  -----  -----  -----
root@cygnus[~]#
Code:
root@cygnus[~]# zpool get all
NAME          PROPERTY                       VALUE                          SOURCE
ZFSvol        size                           16.2T                          -
ZFSvol        capacity                       60%                            -
ZFSvol        altroot                        /mnt                           local
ZFSvol        health                         ONLINE                         -
ZFSvol        guid                           16569247781209315036           default
ZFSvol        version                        -                              default
ZFSvol        bootfs                         -                              default
ZFSvol        delegation                     on                             default
ZFSvol        autoreplace                    off                            default
ZFSvol        cachefile                      /data/zfs/zpool.cache          local
ZFSvol        failmode                       continue                       local
ZFSvol        listsnapshots                  off                            default
ZFSvol        autoexpand                     on                             local
ZFSvol        dedupditto                     0                              default
ZFSvol        dedupratio                     1.00x                          -
ZFSvol        free                           6.47T                          -
ZFSvol        allocated                      9.78T                          -
ZFSvol        readonly                       off                            -
ZFSvol        comment                        -                              default
ZFSvol        expandsize                     -                              -
ZFSvol        freeing                        0                              default
ZFSvol        fragmentation                  9%                             -
ZFSvol        leaked                         0                              default
ZFSvol        bootsize                       -                              default
ZFSvol        checkpoint                     -                              -
ZFSvol        feature@async_destroy          enabled                        local
ZFSvol        feature@empty_bpobj            active                         local
ZFSvol        feature@lz4_compress           active                         local
ZFSvol        feature@multi_vdev_crash_dump  enabled                        local
ZFSvol        feature@spacemap_histogram     active                         local
ZFSvol        feature@enabled_txg            active                         local
ZFSvol        feature@hole_birth             active                         local
ZFSvol        feature@extensible_dataset     enabled                        local
ZFSvol        feature@embedded_data          active                         local
ZFSvol        feature@bookmarks              enabled                        local
ZFSvol        feature@filesystem_limits      enabled                        local
ZFSvol        feature@large_blocks           enabled                        local
ZFSvol        feature@sha512                 enabled                        local
ZFSvol        feature@skein                  enabled                        local
ZFSvol        feature@device_removal         enabled                        local
ZFSvol        feature@obsolete_counts        enabled                        local
ZFSvol        feature@zpool_checkpoint       enabled                        local
freenas-boot  size                           14G                            -
freenas-boot  capacity                       25%                            -
freenas-boot  altroot                        -                              default
freenas-boot  health                         ONLINE                         -
freenas-boot  guid                           1652143396639454651            default
freenas-boot  version                        -                              default
freenas-boot  bootfs                         freenas-boot/ROOT/11.2-U8      local
freenas-boot  delegation                     on                             default
freenas-boot  autoreplace                    off                            default
freenas-boot  cachefile                      -                              default
freenas-boot  failmode                       wait                           default
freenas-boot  listsnapshots                  off                            default
freenas-boot  autoexpand                     off                            default
freenas-boot  dedupditto                     0                              default
freenas-boot  dedupratio                     1.00x                          -
freenas-boot  free                           10.5G                          -
freenas-boot  allocated                      3.54G                          -
freenas-boot  readonly                       off                            -
freenas-boot  comment                        -                              default
freenas-boot  expandsize                     -                              -
freenas-boot  freeing                        0                              default
freenas-boot  fragmentation                  -                              -
freenas-boot  leaked                         0                              default
freenas-boot  bootsize                       -                              default
freenas-boot  checkpoint                     -                              -
freenas-boot  feature@async_destroy          enabled                        local
freenas-boot  feature@empty_bpobj            active                         local
freenas-boot  feature@lz4_compress           active                         local
freenas-boot  feature@multi_vdev_crash_dump  disabled                       local
freenas-boot  feature@spacemap_histogram     disabled                       local
freenas-boot  feature@enabled_txg            disabled                       local
freenas-boot  feature@hole_birth             disabled                       local
freenas-boot  feature@extensible_dataset     disabled                       local
freenas-boot  feature@embedded_data          disabled                       local
freenas-boot  feature@bookmarks              disabled                       local
freenas-boot  feature@filesystem_limits      disabled                       local
freenas-boot  feature@large_blocks           disabled                       local
freenas-boot  feature@sha512                 disabled                       local
freenas-boot  feature@skein                  disabled                       local
freenas-boot  feature@device_removal         disabled                       local
freenas-boot  feature@obsolete_counts        disabled                       local
freenas-boot  feature@zpool_checkpoint       disabled                       local
root@cygnus[~]#
Thanks.