ZFS Pool Design for 8 x 2TB Drives


mbalsam

Explorer
Joined
Oct 9, 2015
Messages
85
Hi all,

I'm in the process of migrating from a

Nexenta SAN - 7 x 750GB SATA in RAIDZ1

to

FreeNAS - 8 x 2TB WD SATA Black drives

In this configuration:

Code:
        pool                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/8c015721-6eff-11e5-b80a-0030488f201a  ONLINE       0     0     0
            gptid/8c534cbc-6eff-11e5-b80a-0030488f201a  ONLINE       0     0     0
            gptid/8ca191a6-6eff-11e5-b80a-0030488f201a  ONLINE       0     0     0
            gptid/8cedbecc-6eff-11e5-b80a-0030488f201a  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/8d40fdf4-6eff-11e5-b80a-0030488f201a  ONLINE       0     0     0
            gptid/8d8ff3ec-6eff-11e5-b80a-0030488f201a  ONLINE       0     0     0
            gptid/8de8f85b-6eff-11e5-b80a-0030488f201a  ONLINE       0     0     0
            gptid/8e359218-6eff-11e5-b80a-0030488f201a  ONLINE       0     0     0
        logs
          gptid/e58b867b-6a41-11e5-8fcb-0030488f201a    ONLINE       0     0     0
        cache
          gptid/d1787140-6a41-11e5-8fcb-0030488f201a    ONLINE       0     0     0


I was reading this blog

http://nex7.blogspot.com/2013/03/readme1st.html

and the experienced author made these suggestions.

9. Pool Design Rules
  • Do not use raidz1 for disks 1TB or greater in size.
  • For raidz2, do not use less than 6 disks, nor more than 10 disks in each vdev (8 is a typical average)
If I were to follow the author's recommendations, I can't use RAIDZ1.

When using RAIDZ2, should I not have striped two 4-disk RAIDZ2 vdevs, but instead used a single 8-disk RAIDZ2 vdev?

I know his comments are not gospel, and are most likely aimed at very large environments, but I'm just trying to understand his logic.

For his first edict, do not use RAIDZ1 for disks 1TB or greater in size: I'm assuming this is because the time it takes to resilver a replacement drive becomes too long for disks larger than 1TB?

For his second edict, RAIDZ2 with no fewer than 6 and no more than 10 disks per vdev, I'm guessing these are his considerations:

As my current configuration shows, using 2 of the 4 disks in each vdev for parity is too wasteful. This is why he's saying use at least 6 disks, so you get 4 data disks and 2 parity disks in each vdev. The limit of no more than 10 must be due to the overhead of calculating parity across a wider vdev??
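
For reference, here's a minimal sketch of how the two layouts compare at pool-creation time. The pool name (tank) and device names (da0-da7) are placeholders; on FreeNAS the pool would normally be built from the WebGUI using gptid labels.

Code:
# Option A: a single 8-disk RAIDZ2 vdev (6 data + 2 parity)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Option B: my current layout, two striped 4-disk RAIDZ2 vdevs (2 data + 2 parity each)
zpool create tank raidz2 da0 da1 da2 da3 raidz2 da4 da5 da6 da7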
 

mbalsam

Explorer
Joined
Oct 9, 2015
Messages
85
Another question: I'm trying to determine the amount of space that is available.

Code:
[root@freenas] ~# zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  29.8G  1.03G  28.7G         -      -     3%  1.00x  ONLINE  -
pool          14.5T  4.46T  10.0T         -    21%    30%  1.04x  ONLINE  /mnt


Code:
[root@freenas] ~# zfs list
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                           1.03G  27.8G    31K  none
freenas-boot/ROOT                                      1.02G  27.8G    25K  none
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511040813      1.02G  27.8G   528M  /
freenas-boot/ROOT/Initial-Install                         1K  27.8G   508M  legacy
freenas-boot/ROOT/default                                36K  27.8G   517M  legacy
freenas-boot/grub                                      13.6M  27.8G  6.79M  legacy
pool                                                   3.53T  3.27T   140K  /mnt/pool
pool/

.....


From what I understand, zpool list shows you the amount of raw storage in the vdevs, without subtracting parity.

So:

  • zpool SIZE - 14.5T - the total amount of raw storage presented by BSD
  • zpool ALLOC - 4.46T - the amount of raw space taken up by my existing data?
  • zpool FREE - 10.0T - zpool SIZE minus zpool ALLOC

But now when I look at the zfs list results I'm confused.

The line for pool shows:

  • zfs USED - 3.53T
  • zfs AVAIL - 3.27T

I would expect zpool ALLOC (4.46T) and zfs USED (3.53T) to be the same, but there's roughly 1T missing?
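
If it helps to dig into where the difference goes, these commands break the numbers down further (zpool list counts raw space including parity, while zfs list reports usable space after parity, snapshots, and reservations):

Code:
# Raw space per vdev, parity included
zpool list -v pool

# Per-dataset breakdown: snapshots, child datasets, refreservations
zfs list -o space -r pool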
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Let me ask you a question: how much RAM do you have?
What applications are you using the FreeNAS for? (Your answer could dictate the pool format)

I'm going to make some generalizations here for you...

1) If you need high-speed access, such as hosting VMs, then mirroring is a good idea.
2) If you do not need high-speed access and are only backing up your computers, hosting some video streaming, just normal home NAS type work, an 8-drive RAIDZ2 is fine.
3) The size of your hard drives will not determine the RAIDZ level you can use. The number of drives and your usage will determine it.
4) When it comes to figuring out how much storage space you have on a ZFS system, as long as the value reported is somewhat close (+/- 1TB), then don't kill yourself trying to figure ZFS out; you will drive yourself nuts.
 

mbalsam

Explorer
Joined
Oct 9, 2015
Messages
85
RAM: 18GB

I would like high-speed access, since I'm running 20+ VMs.
They're used for customer demonstrations and some testing.

But how much space remains? I assume it's the zfs AVAIL = 3.27T??

I'm thinking of upgrading the Supermicro chassis from a 2U 8-drive to a 4U 32-drive, with a motherboard that accepts more memory.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
20+ VMs? You better do some reading. You should be running striped mirrors, adding a dedicated SLOG device (SSD), and adding substantially more memory.
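
For a rough idea of what that pool layout looks like (a sketch only, with placeholder pool and device names; in practice the FreeNAS WebGUI builds the equivalent when you choose the Mirror layout):

Code:
# Four striped 2-way mirrors plus a dedicated SLOG on an SSD (da8) -- placeholder names
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    mirror da6 da7 \
    log da8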
 

mbalsam

Explorer
Joined
Oct 9, 2015
Messages
85
Tvsjr,

How about this for a configuration:

http://www.ebay.com/itm/SuperMicro-...520-Quad-Core-36GB-ASR-5805-2PS-/221945284142

X8DTE-F - 2 x 2.26GHz E5520 quad-core
SuperMicro 4U 24-bay X8DTE-F
96GB RAM
SLOG - Samsung 500GB SSD, mirrored
10G Ethernet
3 x IBM M1015 controllers, reflashed

Question: If I need striped mirrors, should that be something like this:

Code:
Pool
    raid-z1-0
         d1
         d2
         d3
         d4
         d5
    raid-z1-1
         d6
         d7
         d8
         d9
         d10


10 x Hitachi Ultrastar HUS723030ALS640 3TB SAS 7.2k 6Gb/s 3.5"
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The first eBay link doesn't specify the type of backplane present. The second does. However, the processors associated with the X8 series (basically, the E/X series with a front-side bus) are known for providing comparatively poor performance at high power consumption. An X9 or X10 series MB and associated processor (the E3/E5 lines) are recommended. There are some stickied threads about which MBs are most recommended. I've been pretty happy with my X9 configuration, although I admittedly have far more CPU than I need.

With an expander backplane, you only need one M1015 card, cross-flashed to IT mode. On the 10G card, do some reading and ensure you pick a card that's well supported (that seems to be specific Chelsio cards).

There is no such thing as too much memory in this sort of configuration. You also need to have a properly sized UPS and configure it to safely shut the system down before battery exhaustion.

Striped mirrors implies mirroring, not RAID-Z-anything. You'll create this by selecting the "Mirror" layout in the WebGUI. Here's the current configuration of my Tier2 pool, a 12 x 450GB 15K SAS array:
Code:
[root@freenas] ~# zpool status Tier2
  pool: Tier2
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Mon Dec 28 06:47:15 2015
config:

        NAME                                            STATE     READ WRITE CKSUM
        Tier2                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/94877276-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
            gptid/9527e22a-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/95bcade0-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
            gptid/96516c4a-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/96e9eda8-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
            gptid/97804227-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/981e3ef1-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
            gptid/98b2265b-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/994c310e-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
            gptid/99e4fad2-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
          mirror-5                                      ONLINE       0     0     0
            gptid/9a72b042-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
            gptid/9af63227-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
        logs
          mirror-6                                      ONLINE       0     0     0
            gptid/9b4fd993-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
            gptid/9b94bf28-a9f5-11e5-b3e9-002590869c3c  ONLINE       0     0     0
        spares
          gptid/93f3cae3-a9f5-11e5-b3e9-002590869c3c    AVAIL

errors: No known data errors


Mirror-0 through mirror-5 are the data disks, mirror-6 is the mirrored SLOG, and there's a single global hot spare.
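
If you build the data mirrors first, the log mirror and the hot spare can be added to an existing pool afterwards; roughly (placeholder device names):

Code:
# Add a mirrored SLOG and a global hot spare to an existing pool
zpool add Tier2 log mirror da12 da13
zpool add Tier2 spare da14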
 

mbalsam

Explorer
Joined
Oct 9, 2015
Messages
85
Got it on RAIDZ vs. mirrors. Thanks.

I thought that expander backplanes were bad?

The vendor has a variety of cases and can put in the type of backplane I want.

What should I ask for?

Expander with 1 IBM M1015 card

or

Non-expander with 3 IBM M1015 cards?
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
The Server Store is a good seller; the other looks a bit shady, as their info about the chassis is wrong. Mr. Rackables and Certified Servers are 2 great sellers. You want a SAS2 backplane and rails included; otherwise it is not a good deal.

EDIT: You should look for an SC846E16-1200B (or -900B). That is the 24-bay chassis with a SAS2 backplane. You can search for that on the web or eBay.

How much data is tied up in your VMs? If only your OS needs to be zippy, then pick up 2 x SSDs for that and have a separate datastore for your data. (What are you using for hosts, and how many?)

Get an LSI 9211-8i instead of the M1015. The cables come out of the back of the card, so cabling is cleaner, and they are easier to flash IMO. (I see WAY too many threads on here about people having trouble crossflashing the M1015.) And you only need 1 of these cards, not 3. They are $100 on eBay.
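
Either way, once the card is flashed you can sanity-check that it's actually running IT firmware. A rough sketch using LSI's sas2flash utility (assuming the tool is available and the HBA shows up as controller 0):

Code:
# List all detected LSI SAS2 controllers and their firmware version/type
sas2flash -listall

# Show full details for controller 0
sas2flash -c 0 -list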

I agree with tvsjr, get an X9 or X10 system instead of the X8.
 