New ZFS user

MasterCATZ

Cadet
Joined
Oct 17, 2018
Messages
4
I am currently trying to decide how to lay out a 16-disk array for
docker / mysql / steam games
(I decided to pull them off my MergerFS / btrfs array because the files change too often, which makes my SnapRAID syncs take longer)

4x 4-disk raidz1 vdevs, 36 TB (highest IO, fast resilvering, but if a disk fails during a resilver I lose it all)
2x 8-disk raidz2 vdevs, 36 TB (medium IO, slow resilvering)
3x 5-disk raidz2 vdevs and 1 hot spare, 27 TB (high IO, medium resilvering)
1x 8-disk raidz1 mirrored with another 8 disks, 21 TB, 9-disk redundancy? (medium IO; how does resilvering go in a mirror?)
It might be overkill, but since I am using suspect disks it might be the better way (see the sketch of the third option below).
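For reference, a rough sketch of what that third option (3x 5-disk raidz2 vdevs plus a hot spare) would look like from the command line; the pool name and device names below are only placeholders:

# 3x 5-disk raidz2 vdevs plus one hot spare = 16 disks (placeholder names)
sudo zpool create tank \
    raidz2 sda sdb sdc sdd sde \
    raidz2 sdf sdg sdh sdi sdj \
    raidz2 sdk sdl sdm sdn sdo \
    spare sdp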

The plan is to keep thrashing the ZFS pool until I use up all the spare 3 TB disks, then upgrade the disks in my SnapRAID/btrfs pool (since it can take disks of any size) and repurpose the old 3 TB disks for the ZFS pool,

then back up the ZFS pool to the SnapRAID pool and rebuild a fresh ZFS pool,
possibly starting with a 5-disk raidz2 and adding the other vdevs as needed.
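A minimal sketch of that backup step, assuming the ZFS pool is named tank and the SnapRAID pool is mounted at /mnt/storage (both names are placeholders):

# recursive snapshot, then dump the whole pool as one compressed stream onto the SnapRAID pool
sudo zfs snapshot -r tank@migrate
sudo zfs send -R tank@migrate | gzip > /mnt/storage/tank-migrate.zfs.gz

# or just copy the live files instead of keeping a replication stream
sudo rsync -aHAX /tank/ /mnt/storage/tank-backup/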



I am making use of 5-year-old 3 TB SAS disks (I still have about 20 spare drives).
These are not playing along with my btrfs pool: 4~9 bad sectors when reading.
Since these drives were designed to have their sectors manually reallocated, they are a PITA,
because the disks have their auto-reallocation feature disabled and it will not re-enable:

sudo sdparm --set ARRE=1 /dev/mapper/SRD3NA1B2
/dev/mapper/SRD3NA1B2: SEAGATE ST33000650NS SM NA01
change_mode_page: failed fetching page: Read write error recovery

sudo sdparm --long /dev/mapper/SRD3NA1B2
/dev/mapper/SRD3NA1B2: SEAGATE ST33000650NS SM NA01
Direct access device specific parameters: WP=0 DPOFUA=0
Read write error recovery [rw] mode page:
AWRE 1 [cha: n, def: 1, sav: 1] Automatic write reallocation enabled
ARRE 0 [cha: n, def: 0, sav: 0] Automatic read reallocation enabled
PER 1 [cha: n, def: 0, sav: 1] Post error
Caching (SBC) [ca] mode page:
WCE 0 [cha: y, def: 0, sav: 0] Write cache enable
RCD 0 [cha: n, def: 0, sav: 0] Read cache disable
Control [co] mode page:
SWP 0 [cha: n, def: 0, sav: 0] Software write protect
Informational exceptions control [ie] mode page:
EWASC 1 [cha: n, def: 0, sav: 1] Enable warning
DEXCPT 0 [cha: n, def: 0, sav: 0] Disable exceptions
MRIE 4 [cha: y, def: 6, sav: 4] Method of reporting informational exceptions
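The [cha: n] next to ARRE suggests the drive reports that field as not changeable, which would line up with the --set attempt failing. For tracking how many sectors these drives have actually remapped, smartctl's grown defect list and error counters should still work (the device path is a placeholder; it needs the underlying SAS device rather than the dm name):

# on SAS drives this prints "Elements in grown defect list" plus the read/write error counters
sudo smartctl -x /dev/sdX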
 

MasterCATZ

Cadet
Joined
Oct 17, 2018
Messages
4
I could not find the edit option. Apparently raidz1/2/3 vdevs cannot be mirrored?

so
5x 3-way mirror vdevs and 1 hot spare?
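Roughly, that layout would be created something like this (placeholder pool and device names):

# five 3-way mirror vdevs plus one hot spare = 16 disks (placeholder names)
sudo zpool create tank \
    mirror sda sdb sdc \
    mirror sdd sde sdf \
    mirror sdg sdh sdi \
    mirror sdj sdk sdl \
    mirror sdm sdn sdo \
    spare sdp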
 

MasterCATZ

Cadet
Joined
Oct 17, 2018
Messages
4
I think mirror vdevs will be best for me, since I can upgrade a vdev's capacity in place by swapping out old disks for larger ones later on (replace/autoexpand sketch after the list below),
and there is less strain on all the disks during a resilver.

4x 4-way mirror vdevs (16 disks), 12 TB, high IO (capacity is on the low side for me; possibly overkill redundancy)
5x 3-way mirror vdevs and 1 hot spare (16 disks), 15 TB, medium IO (capacity is bearable, redundancy should be good)
8x 2-way mirror vdevs (16 disks), 24 TB (capacity is about right, but if 2 disks of the same mirror fail at the same time ...)
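The in-place capacity upgrade mentioned above would go roughly like this: with autoexpand on, replace each disk of a mirror vdev with a larger one, let it resilver, and the vdev grows once every member has been swapped (pool and device names are placeholders):

sudo zpool set autoexpand=on tank
# swap one disk of a mirror for a larger one and wait for the resilver to finish
sudo zpool replace tank sda sdq
sudo zpool status tank
# repeat for the remaining disk(s) in that mirror; the vdev only expands once all of them are larger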
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Reading for you:

The 'Hidden' Cost of Using ZFS for Your Home NAS
https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://www.ixsystems.com/community...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Overview of ZFS Pools in FreeNAS from the iXsystems blog:
https://www.ixsystems.com/blog/zfs-pools-in-freenas/

Terminology and Abbreviations Primer
https://www.ixsystems.com/community/threads/terminology-and-abbreviations-primer.28174/

Why not to use RAID-5 or RAIDz1
https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
 