Upgrade questions.

pr1malr8ge

Dabbler
Joined
Dec 10, 2017
Messages
14
All, I've been thinking about upgrading for some time. Honestly, I'm still on FreeNAS 11.3-U5, and I only moved to that because 9.3 and Plex were having issues. That upgrade also forced me to leave SickRage in favor of Sonarr. At any rate, I'm glad I left SickRage and now actually prefer Sonarr. However, the fact that SABnzbd wasn't even offered in the official plugins, forcing me to make blank jails for it and a few other things I wanted, kind of irritated me. All of that is now working, but the thought of upgrading to TrueNAS has been cringe-worthy.

With all of the above said, this is what I'm considering doing.
My current machine:
SM826 chassis with a BPN-SAS2-EL1 expander
X9DRi-LN4F+ with a single E5-2640 and 8x 8GB 2Rx4 DIMMs (64GB)
LSI 9211-8i HBA in IT mode
2x 6-wide 3TB HGST SATA RAIDZ2 vdevs
Single OS SSD

I just ordered a BPN-SAS3-EL1 along with a Supermicro AOC-S3008L-L8e and two SFF-8643 cables to convert over to SAS3, as I'm considering getting 10TB HGST SAS3 4Kn drives.

I'm also strongly considering dumping the X9 board in favor of an H11 with a 32-core EPYC CPU, or at the very least upgrading the X9 to dual 10-core v2 L-series CPUs, doubling the RAM to 128GB, installing ESXi, and virtualizing FreeNAS. That would let me consolidate my current underpowered ESXi machine running on a C2000 CPU. The current FreeNAS machine would be converted to TrueNAS SCALE and would do nothing but serve SMB and iSCSI for ESXi VM storage. I'd move the jails (err, create VMs) for Plex, Sonarr, Radarr, etc. in ESXi, along with migrating my current ESXi VMs over. I could also then get a Quadro P600 for hardware transcoding in Plex with PCI passthrough. My goal is to beef up my ESXi setup, since the current machine just isn't powerful enough for what I'm doing with it. Plus, having a VM rather than a jail for my media stack will let me keep it up to date much more easily than how it works with FreeNAS/TrueNAS, not to mention being able to update TrueNAS without worrying whether it will break any of the plugins/jails.

I know many are going to argue against virtualizing TrueNAS, but really I don't care that I'll lose hardware awareness on TrueNAS; since it will have full HBA and drive access, it should still be able to run SMART testing on the drives. As for going SCALE on bare metal and using it as a hypervisor... I know ESXi and I'm comfortable with it. I haven't a clue how to use VMs with SCALE's KVM, and I don't have any bare metal to test with.

Now for the main question I came to ask, aside from input on what others think of the above upgrades:
Since I'm planning on converting to SAS3 at the very least (both HBA and expander), and the current 12 drives are SATA in a 2x6 layout, am I able to replace one drive at a time, going from SATA 512e to SAS3 4Kn?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
SAS expanders work just fine with SATA drives; it's part of the SAS specification. They simply use the SATA Tunneling Protocol (STP) over SAS. (When a SAS HBA port is wired directly to a SATA drive, the port switches to SATA.)

However, ZFS vDevs made with 512-byte blocks are "stuck" using 512-byte blocks. If the vDev was made with 4096-byte blocks, even though the disks emulated 512-byte blocks, then you are good to replace them with native 4096-byte block disks.

I don't have the FreeBSD / TrueNAS method of getting the vDev's low-level block size handy. In later versions of ZFS (and in ZFS on Linux), you use this:
Code:
zpool get all POOL | grep ashift

ashift 9 = 512-byte blocks
ashift 12 = 4096-byte blocks
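
To double-check what the drives themselves report, something like this should work on FreeBSD (the /dev/da0 device name is just a placeholder):
Code:
# logical sector size vs. physical ("stripe") size as reported by the drive
diskinfo -v /dev/da0 | egrep 'sectorsize|stripesize'
# smartctl shows the same thing; a 512e disk reports
# "Sector Sizes: 512 bytes logical, 4096 bytes physical"
smartctl -i /dev/da0 | grep 'Sector Size'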
 

pr1malr8ge

Dabbler
Joined
Dec 10, 2017
Messages
14
I tried zpool get all dataset1 | grep ashift and, of course, it returns nothing on this version.
Running zdb -C | grep ashift reports 9.
zdb -C dataset1 | grep ashift reports "no such file or directory", which led me to believe the first command was probably polling the boot pool. Some slight research shows I was correct.
Running zdb -U /data/zfs/zpool.cache instead polls the data pool, which appears to be ashift 12.
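For what it's worth, combining the two gives a one-liner that should pull just the ashift lines out of the cache-file output (full dump below):
Code:
zdb -U /data/zfs/zpool.cache dataset1 | grep ashift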

Also, when I put this current setup into production it was built using a SAS HBA through a SAS expander, but with HGST HUS724030ALE641 drives, which are 512e formatted. I'm assuming they are actually 4K physical but 512-byte emulated? My remaining concern: will FreeNAS be OK with a single drive at a time being swapped from SATA to SAS in the same port on the expander?
Code:
root@aeronas:/mnt/dataset1 # zdb -U /data/zfs/zpool.cache
dataset1:
    version: 5000
    name: 'dataset1'
    state: 0
    txg: 25175537
    pool_guid: 1067470466735444340
    hostid: 297451536
    hostname: ''
    com.delphix:has_per_vdev_zaps
    vdev_children: 2
    vdev_tree:
        type: 'root'
        id: 0
        guid: 1067470466735444340
        children[0]:
            type: 'raidz'
            id: 0
            guid: 8376081832086237354
            nparity: 2
            metaslab_array: 43
            metaslab_shift: 37
            ashift: 12
            asize: 17990643351552
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 36
            children[0]:
                type: 'disk'
                id: 0
                guid: 14548056729702173660
                path: '/dev/gptid/2dc7d13f-06f6-11e8-95b7-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@1/elmdesc@000/p2'
                whole_disk: 1
                DTL: 3662
                create_txg: 4
                com.delphix:vdev_zap_leaf: 37
            children[1]:
                type: 'disk'
                id: 1
                guid: 16384147264816076366
                path: '/dev/gptid/3036edf1-06f6-11e8-95b7-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@2/elmdesc@001/p2'
                whole_disk: 1
                DTL: 3661
                create_txg: 4
                com.delphix:vdev_zap_leaf: 38
            children[2]:
                type: 'disk'
                id: 2
                guid: 11052204991308789346
                path: '/dev/gptid/32c2dd07-06f6-11e8-95b7-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@3/elmdesc@002/p2'
                whole_disk: 1
                DTL: 3660
                create_txg: 4
                com.delphix:vdev_zap_leaf: 39
            children[3]:
                type: 'disk'
                id: 3
                guid: 11632999534636192173
                path: '/dev/gptid/353dd12c-06f6-11e8-95b7-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@4/elmdesc@003/p2'
                whole_disk: 1
                DTL: 3659
                create_txg: 4
                com.delphix:vdev_zap_leaf: 40
            children[4]:
                type: 'disk'
                id: 4
                guid: 7173745563506291239
                path: '/dev/gptid/379353da-06f6-11e8-95b7-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@5/elmdesc@004/p2'
                whole_disk: 1
                DTL: 3658
                create_txg: 4
                com.delphix:vdev_zap_leaf: 41
            children[5]:
                type: 'disk'
                id: 5
                guid: 2453385842323970920
                path: '/dev/gptid/3a023982-06f6-11e8-95b7-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@6/elmdesc@005/p2'
                whole_disk: 1
                DTL: 3657
                create_txg: 4
                com.delphix:vdev_zap_leaf: 42
        children[1]:
            type: 'raidz'
            id: 1
            guid: 7347150624876858310
            nparity: 2
            metaslab_array: 531
            metaslab_shift: 37
            ashift: 12
            asize: 17990643351552
            is_log: 0
            create_txg: 19255579
            com.delphix:vdev_zap_top: 103
            children[0]:
                type: 'disk'
                id: 0
                guid: 11867915104075655070
                path: '/dev/gptid/3f05372a-63f4-11eb-8177-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@8/elmdesc@007/p2'
                whole_disk: 1
                DTL: 3668
                create_txg: 19255579
                com.delphix:vdev_zap_leaf: 110
            children[1]:
                type: 'disk'
                id: 1
                guid: 17768939401613121488
                path: '/dev/gptid/41b0f368-63f4-11eb-8177-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@7/elmdesc@006/p2'
                whole_disk: 1
                DTL: 3667
                create_txg: 19255579
                com.delphix:vdev_zap_leaf: 116
            children[2]:
                type: 'disk'
                id: 2
                guid: 18383175137910154983
                path: '/dev/gptid/44868003-63f4-11eb-8177-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@9/elmdesc@008/p2'
                whole_disk: 1
                DTL: 3666
                create_txg: 19255579
                com.delphix:vdev_zap_leaf: 117
            children[3]:
                type: 'disk'
                id: 3
                guid: 8837091796454359946
                path: '/dev/gptid/474f1794-63f4-11eb-8177-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@b/elmdesc@010/p2'
                whole_disk: 1
                DTL: 3665
                create_txg: 19255579
                com.delphix:vdev_zap_leaf: 123
            children[4]:
                type: 'disk'
                id: 4
                guid: 1416460290215669581
                path: '/dev/gptid/4a10252b-63f4-11eb-8177-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@c/elmdesc@011/p2'
                whole_disk: 1
                DTL: 3664
                create_txg: 19255579
                com.delphix:vdev_zap_leaf: 529
            children[5]:
                type: 'disk'
                id: 5
                guid: 12207365459443583373
                path: '/dev/gptid/4cccce44-63f4-11eb-8177-0025902f6d10'
                phys_path: 'id1,enc@n500304800104007f/type@0/slot@a/elmdesc@009/p2'
                whole_disk: 1
                DTL: 3663
                create_txg: 19255579
                com.delphix:vdev_zap_leaf: 530
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data



Code:
root@aeronas:/mnt/dataset1 # zpool status
  pool: dataset1
 state: ONLINE
  scan: scrub repaired 0 in 0 days 11:23:07 with 0 errors on Sat Jan 15 15:23:09 2022
config:

        NAME                                            STATE     READ WRITE CKSUM
        dataset1                                        ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/2dc7d13f-06f6-11e8-95b7-0025902f6d10  ONLINE       0     0     0
            gptid/3036edf1-06f6-11e8-95b7-0025902f6d10  ONLINE       0     0     0
            gptid/32c2dd07-06f6-11e8-95b7-0025902f6d10  ONLINE       0     0     0
            gptid/353dd12c-06f6-11e8-95b7-0025902f6d10  ONLINE       0     0     0
            gptid/379353da-06f6-11e8-95b7-0025902f6d10  ONLINE       0     0     0
            gptid/3a023982-06f6-11e8-95b7-0025902f6d10  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/3f05372a-63f4-11eb-8177-0025902f6d10  ONLINE       0     0     0
            gptid/41b0f368-63f4-11eb-8177-0025902f6d10  ONLINE       0     0     0
            gptid/44868003-63f4-11eb-8177-0025902f6d10  ONLINE       0     0     0
            gptid/474f1794-63f4-11eb-8177-0025902f6d10  ONLINE       0     0     0
            gptid/4a10252b-63f4-11eb-8177-0025902f6d10  ONLINE       0     0     0
            gptid/4cccce44-63f4-11eb-8177-0025902f6d10  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:43 with 0 errors on Fri Jan 21 03:45:43 2022
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada0p2    ONLINE       0     0     0

errors: No known data errors

 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
From your ZFS debug output, both your vDevs are 4K. So, you should be good to go on replacement.
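
If you do the swaps from the command line rather than the GUI, the per-disk sequence would look roughly like the sketch below; the gptid is taken from your output above, and da12 is just a placeholder for the new SAS disk. Note that the FreeNAS/TrueNAS GUI replace (Storage -> Pools -> Status -> Replace) also handles partitioning and swap for you, which a raw zpool replace skips, so the GUI is the usual route:
Code:
# one disk at a time: take the old disk offline, resilver onto the new one
zpool offline dataset1 gptid/2dc7d13f-06f6-11e8-95b7-0025902f6d10
zpool replace dataset1 gptid/2dc7d13f-06f6-11e8-95b7-0025902f6d10 da12
# wait for the resilver to finish before touching the next disk
zpool status dataset1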
 