[SOLVED] 2x RAIDZ1 or 1x RAIDZ2?


angelus249 (Dabbler) · Joined Dec 19, 2014 · Messages: 41
Hello everyone,

just a few details to explain what I'm up to.

I'm currently running a small setup with 3+1 4TB disks in RAIDZ1. Since it started as a test two years ago, I built it small and cheap, and therefore can't expand now. I'm going to change that though, so I can grow to up to 22 disks.
The new system will get an additional 4TB drive, and I will make a 4+1 RAIDZ1 vdev (a new one, I know I can't expand the old one) as well as a mirrored SSD volume for jails/plugins/.system. I will also add another 5x8TB, likewise in RAIDZ1. After that I'll have space for 10 more drives.

I'm aware of all the differences between RAIDZ1/Z2, buying from different batches, vendors, etc., but in terms of future growth I've come to a design question I can't find a satisfying answer to.

Hence my question:

Would you rather add 4+1 disks in RAIDZ1 twice, or go straight for a big 8+2 RAIDZ2 pool? The total number of redundancy disks is the same in both setups. The pro of 2x RAIDZ1 is that I don't have to buy 10 disks at the same time. But the cons?

Since the total of 10 disks is the same in both cases, any of them could fail. Without putting my math cap on, I'd think that from a statistical point of view it's probably safer to run one big 8+2 RAIDZ2, since any two drives could fail and the vdev would still work. If two random drives happened to fail in one of the RAIDZ1 pools, I'd be f*cked. If we go as far as assuming three drives fail more or less simultaneously, I'm doomed either way, but in the RAIDZ1 scenario I would at least lose only half of my data, while the second pool would still be alive. I guess it's a tradeoff. But assuming Murphy doesn't destroy three drives at once, RAIDZ2 is probably the better choice.
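
For the sake of argument, here's a quick combinatorial sketch of that hunch (my own back-of-the-envelope numbers, assuming failures are independent and equally likely on any disk, and treating the two RAIDZ1 setups as separate pools each holding half the data):

```python
from itertools import combinations

def surviving_data_two_z1_pools(failed):
    # Two separate 5-disk RAIDZ1 pools: disks 0-4 and 5-9.
    # Each pool tolerates at most one failed disk and holds half the data.
    a = sum(1 for d in failed if d < 5)
    b = len(failed) - a
    return (a <= 1) * 0.5 + (b <= 1) * 0.5

def surviving_data_one_z2_pool(failed):
    # One 10-disk RAIDZ2 pool: tolerates any two failed disks.
    return 1.0 if len(failed) <= 2 else 0.0

for n in (2, 3):
    combos = list(combinations(range(10), n))
    z1 = sum(surviving_data_two_z1_pools(c) for c in combos) / len(combos)
    z2 = sum(surviving_data_one_z2_pool(c) for c in combos) / len(combos)
    print(f"{n} failed drives: 2x RAIDZ1 keeps {z1:.0%} of the data on average, "
          f"1x RAIDZ2 keeps {z2:.0%}")
```

That prints 78% vs. 100% for two failures and 42% vs. 0% for three, which matches my gut feeling: RAIDZ2 wins outright at two failures, and the split layout only pays off in the three-failure doomsday case.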

I'd appreciate some input.

Cheers
 

Jailer (Not strong, but bad) · Joined Sep 12, 2014 · Messages: 4,977
You're going to have to clarify your terminology a bit so we can understand what you're asking. Please explain what you mean by 4+1 and 8+2; you keep mixing that in with terms like RAIDZ1 and RAIDZ2, so it's confusing what you're trying to describe.

As far as redundancy per vdev goes, that's a personal choice based on how valuable your data is to you. A description of your intended usage would also be helpful in determining your pool layout.

Note to mods: What happened to Cyberjock's slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs? Was going to recommend it but can't find it.

Edit: found it. OP, take a look at this thread for a good read on vdevs and pools.
 

angelus249 (Dabbler) · Joined Dec 19, 2014 · Messages: 41
Ok, let me be more specific.

When I learned classic RAID years ago, the following applied.

RAIDZ1 is basically RAID5 and supposedly works best with a multiple of four data disks plus one disk for redundancy; e.g. 5x4TB gives 4+1 disks and 16TB of raw usable space.
RAIDZ2 is basically RAID6 and supposedly works best with a multiple of four data disks plus two disks for redundancy; e.g. 10x8TB gives 8+2 disks and 64TB of raw usable space.

Currently I have a non-optimal RAIDZ1 of four disks (4x4TB), giving 12TB of raw usable space.

I will build a new RAIDZ1 with 5x8TB, move all data from my old NAS to the new volume, then destroy my old 4x4TB setup, buy an additional 4TB drive and make a second RAIDZ1 of 5 disks as well. The new setup will have 5x4TB and 5x8TB, both configured as RAIDZ1.

In the future I can add 10 more disks. My thoughts: buy them step by step and make 2x 5x8TB in RAIDZ1, or buy 10 drives at the same time and make one RAIDZ2 of all 10?
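
To put raw numbers on the options (just TB-counting, before any ZFS overhead or TB/TiB conversion):

```python
def usable_tb(n_disks, tb_each, parity):
    # Raw usable capacity: data disks times disk size.
    return (n_disks - parity) * tb_each

print(usable_tb(5, 4, 1))      # 5x4TB RAIDZ1     -> 16 TB
print(usable_tb(5, 8, 1))      # 5x8TB RAIDZ1     -> 32 TB
print(2 * usable_tb(5, 8, 1))  # 2x 5x8TB RAIDZ1  -> 64 TB
print(usable_tb(10, 8, 2))     # 1x 10x8TB RAIDZ2 -> 64 TB
```

So both expansion paths end up at the same 64TB raw; the question is purely about redundancy layout and purchase timing.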


Edit: @Jailer Why would I read that? I know how RAIDZ1, Z2, etc. work and what the ups and downs are in direct comparison. I'm not comparing one RAIDZ1 vs. one RAIDZ2 though, but the setups I explained above... Reading it again still won't help me with my design question. Anyway, thanks.
 

Arwen (MVP) · Joined May 17, 2014 · Messages: 3,611
Modern ZFS installations use compression for most, if not all, datasets (aka file systems), even when the data is already compressed, like media files. Thus, there is no longer a recommendation to have a specific count of disks in each vDev for a RAID-Zx. In the past, without compression, certain disk counts worked better for each of the Zx levels. Today, there is a recommended minimum and maximum, something like this:

RAID-Z2 - 4 disks to 10 disks per vDev
RAID-Z3 - 6 disks to 12 disks per vDev

I've left off RAID-Z1 as it's not recommended with larger disks like yours (4TB & 8TB). But if you can live with the risk (if you have good backups, or can re-create the data), then RAID-Z1 will work. You can then add a second vDev of 5 disks to increase storage.

Note also that while some installations (business, VMware, etc.) do benefit from regular vDevs of same-sized disks, there is nothing stopping a home or small business from using irregular configurations. For example:

Start - 5 x 4TB in RAID-Z1
Expand - 5 x 6TB in RAID-Z2
Or
Start - 5 x 4TB in RAID-Z1
Expand - 6 x 6TB in RAID-Z2

As long as you understand the differences in performance and reliability, it WILL WORK.
 

Stux (MVP) · Joined Jun 2, 2016 · Messages: 4,419
'Optimal sizing' doesn't matter anymore.

I'd personally not stretch the vdev as wide as 10.

Why not consider 7 or 8 wide vdevs? Going 10 wide only gains you an extra 5% of usable space (parity drops from 25% at 8 wide to 20% at 10 wide; quick arithmetic below).

Skip the raidz1 options.
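
To spell that out (plain arithmetic, nothing ZFS-specific):

```python
# Parity share of a RAID-Z2 vdev at different widths.
for width in (7, 8, 10):
    print(f"{width} wide: {2 / width:.0%} parity, {1 - 2 / width:.0%} usable")
```

Going from 8 wide (75% usable) to 10 wide (80% usable) buys you only 5 percentage points of capacity for a noticeably wider failure domain.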
 

angelus249 (Dabbler) · Joined Dec 19, 2014 · Messages: 41
Thanks guys, that was some valuable insight.
Of course I'm aware you can deviate from optimal sizing for home installations; I already do that with my current setup and have no issues... apart from running out of space. I wasn't aware, however, that optimal sizing is mostly obsolete by now.

Anyway, I ordered those new parts now.
  • Lian Li PC-D8000
  • Intel Xeon E3-1220 v5, 4x 3.00GHz, boxed (BX80662E31220V5)
  • Supermicro X11SSM retail (MBD-X11SSM-O)
  • 2x Samsung DIMM 16GB, DDR4-2133, CL15, ECC (M391A2K43BB1-CPB)
  • LSI SAS 9207-8i, PCIe 3.0 x8 (LSI00301)
  • 2x Samsung SSD 750 Evo 250GB, SATA (MZ-750250BW)
  • 6x Western Digital WD Red 8TB, 3.5", SATA 6Gb/s (WD80EFZX)
  • Corsair RMi Series RM1000i 1000W ATX 2.4 (CP-9020084-EU)
  • 2x SanDisk Ultra Fit V2 32GB, USB 3.0 (SDCZ43-032G-GAM46)
This should do for quite a while, and I have plenty of options to expand further on demand.

One last question though. If I recall correctly, I once read about a maximum of 10 or 11 disks per vdev, which shouldn't be exceeded under any circumstances. Arwen is now "recommending" up to 12 disks for RAIDZ3. Is the former limit obsolete with Z3 setups?

Anyway, I agree on not necessarily going for 10 disks. Long term, I think I'll go for 8 disks max per vdev.

Cheers.
 

Arwen (MVP) · Joined May 17, 2014 · Messages: 3,611
...
One last question though. If I recall correctly, I once read about a maximum of 10 or 11 disks per vdev, which shouldn't be exceeded under any circumstances. Arwen is now "recommending" up to 12 disks for RAIDZ3. Is the former limit obsolete with Z3 setups?
...
No, as far as I know there is no hard limit on the number of disks in a vDev. But I pulled 12 from the fact that many 2U servers have 12 x 3.5" drive bays. Thus, if someone has to get the most storage, a single 12-disk RAID-Z3 vDev gives one more disk's worth of storage than 2 x 6-disk RAID-Z2 vDevs.

Basically, it's a judgement call when exceeding recommended limits: more redundancy and IOPS with 2 vDevs, or a little more storage with 1 vDev.
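
In numbers, for a 12-bay chassis (simple disk counting, nothing more):

```python
# Data disks from 12 bays, sliced two ways.
z3_single = 12 - 3     # one 12-disk RAID-Z3 vDev  -> 9 data disks
z2_pair = 2 * (6 - 2)  # two 6-disk RAID-Z2 vDevs  -> 8 data disks
print(z3_single, z2_pair)
```

One extra disk's worth of storage for the single vDev, at the cost of the second vDev's IOPS.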
 

nojohnny101 (Wizard) · Joined Dec 3, 2015 · Messages: 1,478
As per the FreeNAS manual, vdevs wider than 12 disks are technically possible but not recommended:
  • Using more than 12 disks per vdev is not recommended. The recommended number of disks per vdev is between 3 and 9. If you have more disks, use multiple vdevs.
  • Some older ZFS documentation recommends that a certain number of disks is needed for each type of RAIDZ in order to achieve optimal performance. On systems using LZ4 compression, which is the default for FreeNAS® 9.2.1 and higher, this is no longer true. See ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ for details.
ZFS Primer

Regarding your hardware choices, your PSU seems to be overkill and thus you're killing its efficiency. How quickly do you plan on expanding your storage? If you're going to significantly increase your storage in the next year, then of course you wouldn't want to buy a smaller PSU now and then have to get a larger one to support your growth such a short time from now. I can see your case supports 20 drives, so maybe that is your plan.

If you are not aware, there is a very helpful PSU sizing guide that will help you determine the optimal wattage for the number of drives you plan on installing: PSU Sizing Guide
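
As a rough illustration of the kind of math that guide walks you through (all per-component figures below are ballpark assumptions of mine, not measured values; check the guide and your datasheets for real numbers):

```python
# Ballpark PSU sizing; every constant here is an assumed round number.
SPINUP_W_PER_HDD = 30  # assumed peak draw of a 3.5" drive at spin-up
CPU_W = 80             # Xeon E3-1220 v5 TDP
BASE_W = 60            # assumed board, RAM, HBA, SSDs, fans

def psu_watts(n_hdd, headroom=1.25):
    # Size for the worst case (all drives spinning up) plus some margin.
    peak = BASE_W + CPU_W + n_hdd * SPINUP_W_PER_HDD
    return peak * headroom

for drives in (11, 20):
    print(f"{drives} drives: ~{psu_watts(drives):.0f} W")
```

With these assumptions, ~590W covers your initial 11 spinners and ~925W covers a full 20-drive case, so 1000W is only defensible if you actually plan to fill the chassis.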
 

Ericloewe (Server Wrangler, Moderator) · Joined Feb 15, 2014 · Messages: 20,194
Woah, why did you buy the crap version that doesn't have IPMI?

angelus249 (Dabbler) · Joined Dec 19, 2014 · Messages: 41
Regarding your hardware choices, your PSU seems to be overkill and thus you're killing its efficiency. [...] I can see your case supports 20 drives, so maybe that is your plan.

If you are not aware, there is a very helpful PSU sizing guide that will help you determine the optimal wattage for the number of drives you plan on installing: PSU Sizing Guide
Thanks. I'm actually aware of the oversized PSU. I first went with an 850W model, but then decided I don't want to buy another PSU when I expand. On various occasions I've read comments and example calculations of how much an oversized (and therefore less efficient) PSU will cost you; it amounted to maybe 15-20 bucks a year, with the bottom-line comment that "when it comes to power, better to have too much than too little". That's basically the main reason I went with the 1000W unit: I'm equipped for future growth at the cost of lower efficiency and a slightly higher energy bill.
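
For what it's worth, here's roughly how those example calculations go (the load, efficiency and price figures are assumptions I'm plugging in, not measurements):

```python
# Extra yearly cost of running a lightly loaded, oversized PSU 24/7.
LOAD_W = 150      # assumed average system draw
EFF_SIZED = 0.92  # assumed efficiency of a well-sized PSU at this load
EFF_BIG = 0.87    # assumed efficiency of the oversized PSU at low load
PRICE_KWH = 0.25  # assumed electricity price per kWh

def kwh_per_year(load_w, eff):
    # Energy drawn at the wall over a year.
    return load_w / eff * 24 * 365 / 1000

extra = (kwh_per_year(LOAD_W, EFF_BIG) - kwh_per_year(LOAD_W, EFF_SIZED)) * PRICE_KWH
print(f"~{extra:.0f} per year extra")
```

That lands at roughly 20 a year with my numbers, so the same ballpark as the comments I read.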

And to answer your other question: I will also move the 5x 4TB drives to the new case, so I'll be at 6x8TB, 5x4TB and 2x SSD. Overall I think the PSU is sized accordingly, with room for growth.

Woah, why did you buy the crap version that doesn't have IPMI?
Oh, a simple copy/paste mistake on my end. I actually ordered the X11SSM-F, which does come with IPMI 2.0. But to be honest, it's gonna be a home installation, so I won't make the most use of it anyway.
 

Scharbag (Guru) · Joined Feb 1, 2012 · Messages: 620
IPMI is the bomb diggity, even at home :)
 

angelus249 (Dabbler) · Joined Dec 19, 2014 · Messages: 41
Well, since I'll have it now regardless, I can give it a shot :)
 

Ericloewe (Server Wrangler, Moderator) · Joined Feb 15, 2014 · Messages: 20,194
it's gonna be a home installation, so I won't make the most use of it anyway.
Most people here are home users or close to it, and IPMI is still very popular. Saves a lot of trouble even if you're sitting next to the server (no messing around with USB drives for OS installs is a big one).
 

Scharbag (Guru) · Joined Feb 1, 2012 · Messages: 620
Saves my phat a$$ from walking up and down two flights of stairs to see what's going on on my server!!

Seriously, it is very handy. Especially if you are away from home and your server needs your help.

I hope your Supermicro system rocks for ya. I'm looking at picking up a used SM system based on the X8 platform.

Cheers,
 

angelus249 (Dabbler) · Joined Dec 19, 2014 · Messages: 41
Ok ok ok. I'm sold :D
I never really had a use for it and honestly I'm not aware of all the functionality that comes with IPMI, but I'ma take a look at it.
Obviously I do enjoy tinkering with stuff, or I'd own a Synology... :D
 


angelus249 (Dabbler) · Joined Dec 19, 2014 · Messages: 41
Ok, I got all my hardware and just started it up for the first time. IPMI is pretty nice (: ... if only it wasn't Java ;)
 

depasseg (FreeNAS Replicant) · Joined Sep 16, 2014 · Messages: 2,874
2x Samsung SSD 750 Evo 250GB
What's your plan for these devices? I hope it's not a SLOG. https://www.servethehome.com/project-kenko-01-update-samsung-750-evos-pass-2tb-written
The particular NAS went from running over 800MB/s using an Intel DC P3700 to around 200MB/s for the first few minutes using the Samsung 750 EVO. By the 30 minute mark, the Samsung 750 EVOs brought writes down to under 16MB/s. By one hour in this figure fell to 12MB/s. Given, we did pay about ten times the amount for the Intel DC P3700 400GB SSD, and it has features like power-loss protection making it a top pick for that type of application. It does show that all SSDs are not created equal and even a several-year-old SSD can perform 50x faster if selected properly for the workload. The Samsung 750 EVO will likely never be used as a ZIL device in a NAS server since it is client focused. It is still an interesting real-world performance data point around how read-optimized these drives are.
 

angelus249 (Dabbler) · Joined Dec 19, 2014 · Messages: 41
No, they're for my jails and system dataset. On my current/old setup I ran into the issue that some jails (e.g. Plex) and syslog events write constantly to the disk pool, so it never gets to rest even if no other access is happening for hours. An SSD also comes in handy for jail performance (I'm thinking of running a VM) and transcoding. And since my jails no longer have the protection of a Z1 or Z2 pool, I got two SSDs, mirrored.
 