Sanity check on a new build, or rather a conversion from old build to new.

NugentS

Background:

I have been running a TrueNAS box (Signature = Primary TrueNAS) for a while, but I have been unhappy with the HBA setup and unable to fix it, primarily due to PCIe lane availability and slot constraints on a single-CPU NAS, so I am having a do-over.

The new NAS has dual CPUs, which provides lots of lanes for me to play with. The motherboard is a Supermicro X10DRH-CLN4 with an onboard SAS3008 (8 ports; needs to be flashed to IT mode), and I am adding an LSI 9305-16i for the remaining ports. This is in an entirely new case, and I am going to have to transplant a number of the drives. I have 10Gb fibre. The new board also lets me play with some extra stuff I have lying around, as it will now fit.
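
For my own notes, checking whether the onboard SAS3008 is already in IT mode (and flashing it if not) should look roughly like the below with Broadcom's sas3flash utility. The firmware and BIOS filenames are just placeholders for whatever Supermicro supplies for this board, so treat this as a sketch rather than a recipe.

# list adapters; the firmware version / product ID line shows whether it is IT or IR
sas3flash -list
# if it is still IR, flash the board-specific IT firmware (placeholder filenames)
sas3flash -o -f SAS3008_IT.bin -b mptsas3.rom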

I have backups of everything: one snapshot backup replicated to QNAS and one file & VM backup on a Synology, plus a remote backup of almost everything that matters. I also have enough swing storage to move the running VMs somewhere they can continue to run without the NAS (and some slow storage for the rest).

Currently I have three pools, of which only two count; the third is a scratch pool, so we can safely ignore it.

Pool – BigPool, SMB, NFS, iSCSI
6 * Exos 12TB in 3 mirrored vdevs
2 * Intel DC in mirror as special vdev
1 * Optane SLOG Partition & L2ARC Partition

Pool – SSDPool (VMs, NFS, iSCSI)
6 * Crucial MX500 in 3 mirrored vdevs
1 * Optane SLOG Partition & L2ARC Partition

New Build - NewNAS

Pool – SSD Pool, NFS, iSCSI
This will be built with 6 * 1.6TB Intel DC 3610 SSDs
SLOG required (transplant from old NAS)
The L2ARC does nothing (it is essentially unused: 77.2GB allocated, 72.8GB free) so won't be repeated

Pool – BigPool, SMB, NFS, iSCSI
The same 6 Seagate Exos 12TB
SLOG required (transplant from old NAS)
The same special vdev (transplant from old NAS)
The L2ARC is used, but I suspect it isn't achieving much, so I won't use it, at least initially. My ARC hit rate mean is 99.87% (checked as shown below).
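
For anyone wanting to check the same things on their own box, something like the below should show the ARC hit rate and L2ARC utilisation:

# ARC and L2ARC sizes and hit rates
arc_summary
# per-device allocation, including the cache and log partitions
zpool list -v BigPool
zpool list -v SSDPool
# live view of whether the cache device is actually being read
zpool iostat -v BigPool 5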

Thoughts

I could (in theory at least) remove the SLOG and cache from BigPool, leaving just the base disks and the special vdev, which won't change; that would make for a really easy transfer. However, it occurs to me: should I be looking at formatting the drives 4Kn? I do not mind a complete do-over with the pool as I have good, tested backups (and I need to redo the way I am using the Optanes anyway, so removing them from the pool seems sensible). I am, however, confused.
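
If I do go the easy-transfer route first, my understanding is that log and cache vdevs can be removed from a live pool before the export; something like the below, with the gptids as placeholders for the actual partitions:

# cache (L2ARC) and log (SLOG) devices can be removed while the pool is online
zpool remove BigPool gptid/<l2arc-partition>
zpool remove BigPool gptid/<slog-partition>
# confirm they are gone before exporting the pool
zpool status BigPool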

According to zdb -U /data/zfs/zpool.cache

BigPool
The 3 main mirrors are ashift: 12
The special vdev is ashift: 12
The SLOG (Optane) is ashift: 9
I thought ashift=12 meant 4Kn, but I have never done anything special.
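
For completeness, the check itself was nothing more exotic than dumping the cached config and filtering for ashift, something like:

# show the ashift recorded for each vdev in the pool config
zdb -U /data/zfs/zpool.cache BigPool | grep ashift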

smartctl -i /dev/<Seagate drive> says:
Sector Sizes: 512 bytes logical, 4096 bytes physical

The Optane & the special vdev devices say the same.
This tells me the drives are formatted with 512-byte logical sectors (512e), and not the 4Kn that I thought ashift=12 meant.

So my solution is to completely break the pool, format the Seagates with SeaChest_Lite under Windows (I can use a different machine for that), and I suspect format the other drives (Optane & SLOG) under Ubuntu, which has nvme format (from nvme-cli) either built in or installable. I could do the Seagates under Ubuntu as well, but running under Windows I get to check their firmware at the same time. I plan on checking every drive for firmware updates.
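
For my own reference, my current understanding of the actual commands involved is below. Device names and the LBA format index are examples only, both operations destroy everything on the drive, and I still need to confirm the Optanes actually expose a 4K LBA format at all.

# Seagates under Windows with SeaChest (PD0 is an example handle; Linux would use /dev/sgX)
SeaChest_Lite --device PD0 --setSectorSize 4096
# note: SeaChest also wants an explicit confirmation flag before it will actually run this

# Optane / NVMe under Ubuntu with nvme-cli: first list the LBA formats the drive exposes
nvme id-ns /dev/nvme0n1 -H
# then reformat to the 4K format, if one is listed (the index 1 here is just an example)
nvme format /dev/nvme0n1 --lbaf=1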

As for the eventual OS: I can see Scale on the horizon in the medium term, but I think this will be Core to start with, at least until Scale is out of beta and into full release. I might use the old NAS as a proper Scale test bed rather than the current VMware guest.

Does this sound at least moderately sensible?
 