Best option for Pool & VDEV

Selassie

Dabbler
Joined
Jun 22, 2018
Messages
46
I am building a server with two chassis.
The purpose of this system is storage: I have over 100TB of data (mostly video media) and need fast access for streaming and content editing at home. The most important requirements are data integrity, fast access, and the ability to add extra storage chassis later on.
First Chassis (Main chassis)
This has the motherboard and the following components;
  • A 16-bay 3U hot-swap chassis
  • Supermicro X11SPH-NCTF server motherboard
  • Processor: Intel Xeon Silver 4114
  • 96GB RAM (plan to add another 64GB)
  • Samsung EVO Plus 1TB NVMe drive
  • Intel DC S3610 Series 480GB SSD
  • Intel DC S3700 Series 400GB SSD
    • The Intel DC drives may be used as ZIL/SLOG devices (see the sketch after this list)
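If the Intel DC drives do end up as a SLOG, attaching one to an existing pool is a single command; a minimal sketch, assuming the pool is named tank and the SSDs appear as ada1 and ada2 (both names hypothetical):

Code:
# attach a dedicated log (SLOG) device to an existing pool
zpool add tank log ada1
# or mirror the two Intel DC drives for log redundancy
zpool add tank log mirror ada1 ada2

Note that a SLOG only accelerates synchronous writes (e.g. NFS or iSCSI); for plain SMB streaming it may make little difference.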
Second chassis (JBOD)
  • External 12-bay 2U hot-swap chassis (JBOD) with a connector to the main chassis
Hard drives in hand
12 Seagate 10TB IronWolf NAS drives
Since I already have the 12 Seagate drives, I would prefer to populate the JBOD first, as I would like to put larger drives in the main chassis later (can this work?).

Proposed pools and vdevs
Pool Option 1

  • VDEV 1: Main chassis (16 bays) – populate half with 8 × 14TB drives (RAIDZ2)
  • VDEV 2: JBOD (12 bays) – populate all bays (RAIDZ3)
  • VDEV 3: Main chassis – purchase another 8 × 14TB drives in 6-9 months (RAIDZ2)
Pool Option 2 (a creation sketch follows below)
  • VDEV 1: Main chassis (16 bays) – 8 × 14TB drives (RAIDZ2)
  • VDEV 2: JBOD – 6 drives (RAIDZ2)
  • VDEV 3: JBOD – 6 drives (RAIDZ2)
  • VDEV 4: Main chassis – purchase another 8 × 14TB drives in 6-9 months (RAIDZ2)
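Since the JBOD would be populated first, here is a minimal sketch of how Pool Option 2 would start out, assuming the pool is named tank and the twelve IronWolf drives show up as da0 through da11 (names are hypothetical):

Code:
# start the pool with the 12 drives in hand: two 6-disk RAIDZ2 vdevs
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11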
Questions
Would putting all the drives into the JBOD first still work if I populate the main chassis at a later date?
Would there be any performance difference between the two proposed vdev layouts?
 

Attachments

  • chassis.jpg

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Either would work fine, and yes, you can populate the external chassis first.
Option 2 would offer more IOPS if all the vdevs are in the same pool, since ZFS stripes across vdevs and four vdevs give more parallel I/O than three.
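To illustrate populating the main chassis later, a hedged sketch of the expansion step, reusing the hypothetical pool and device names from the post above:

Code:
# when the 14TB drives arrive, append another RAIDZ2 vdev to the live pool
zpool add tank raidz2 da12 da13 da14 da15 da16 da17 da18 da19

Existing data is not rebalanced; ZFS simply favours the emptier vdev for new writes.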
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
I don't think you can mix raidz2 & 3 in the same pool
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I don't think you can mix raidz2 & 3 in the same pool
You can mix vdevs of any topology in a single pool. The question is whether that makes sense. ZFS will spread the data more or less evenly and not take the different I/O characteristics into consideration. So you will have inconsistent performance on seemingly random occasions. This is an area actively being worked on, because it is very desirable to have the most accessed files on a "fast" vdev and the rest on a "slower" one, much like e.g. Apple's "Fusion Drive" solution.

Kind regards,
Patrick
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
You can't mix raidz2 & 3 in 11.3, that's for sure (at least with the GUI).
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Code:
# create ten 10GB sparse files to act as test "disks"
cd /mnt/zfs
for i in `jot 10 0`
do
  truncate -s 10g disk${i}
done
# one pool from a 4-disk raidz2 vdev plus a 6-disk raidz3 vdev
zpool create -f testpool raidz2 /mnt/zfs/disk[0-3] raidz3 /mnt/zfs/disk[4-9]
zpool status testpool

  pool: testpool
state: ONLINE
  scan: none requested
config:

    NAME                STATE     READ WRITE CKSUM
    testpool            ONLINE       0     0     0
      raidz2-0          ONLINE       0     0     0
        /mnt/zfs/disk0  ONLINE       0     0     0
        /mnt/zfs/disk1  ONLINE       0     0     0
        /mnt/zfs/disk2  ONLINE       0     0     0
        /mnt/zfs/disk3  ONLINE       0     0     0
      raidz3-1          ONLINE       0     0     0
        /mnt/zfs/disk4  ONLINE       0     0     0
        /mnt/zfs/disk5  ONLINE       0     0     0
        /mnt/zfs/disk6  ONLINE       0     0     0
        /mnt/zfs/disk7  ONLINE       0     0     0
        /mnt/zfs/disk8  ONLINE       0     0     0
        /mnt/zfs/disk9  ONLINE       0     0     0
 

Selassie

Dabbler
Joined
Jun 22, 2018
Messages
46
Thanks for the feedback. Based on your comments above, what do you recommend: stick to one RAID level for all the vdevs in the pool (in this case RAIDZ2)?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Yes! By all means use the same topology for all your vdevs. (SLOG and L2ARC aside)
Or use different pools. I have one pool of two mirrored SSDs for VMs and jails and another one made of 4 spinning disks in RAIDZ2 for "storage".
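For illustration, a minimal sketch of that two-pool layout (pool and device names are hypothetical):

Code:
# fast pool: two mirrored SSDs for VMs and jails
zpool create fast mirror ada0 ada1
# bulk pool: four spinning disks in RAIDZ2 for storage
zpool create storage raidz2 da0 da1 da2 da3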
 

Selassie

Dabbler
Joined
Jun 22, 2018
Messages
46
Yes! By all means use the same topology for all your vdevs. (SLOG and L2ARC aside)
Or use different pools. I have one pool of two mirrored SSDs for VMs and jails and another one made of 4 spinning disks in RAIDZ2 for "storage".
Excellent, that makes it a lot clearer.
 