Larger system configuration questions and observations

Status
Not open for further replies.

weecho

Dabbler
Joined
Jan 28, 2015
Messages
11
Hi,

I've been dabbling with FreeNAS for nearly 2 years. My current system is a 6-core i7, 32GB of RAM (non-ECC, I know), 12×2TB SATA drives in RAIDZ2, 2 SSDs mirrored for jails and 1 SSD for the OS. I have a bunch of jails running Plex, PlexConnect, SABnzbd, CouchPotato, SickBeard and ownCloud. I have not seen more than 8 streams off of this box yet. Most people watching are throttled to 4Mb/s, so there's lots of transcoding. (Almost) never a hiccup. I tried an iSCSI connection to a VMware server, but the connection would randomly drop, so I gave that up. Since I'm almost out of capacity, I thought that in the interim I would do an iSCSI connection between a QNAP box and FreeNAS (direct GigE connection). That setup eventually hangs and I need to do a hard reboot of FreeNAS, so I gave it up and am just using an SMB connection for now.

I had settled on a new server: 16×4TB WD drives, and I have flashed two IBM M1015 controllers. Currently it's on a server-class MB with a dual-core Xeon and 16GB of RAM. For testing purposes, I have one volume (data1) set up as RAIDZ2 on one controller using 4 drives, and another volume (data2) set up as RAIDZ2 on a separate controller card.

My observations: over the network between the QNAP and this new server I can transfer 10GB files at roughly 330Mb/s (peaking at 450Mb/s); I believe this to be a limitation of the QNAP. While copying from the QNAP to data1, I get ~330Mb/s. During that copy, if I also copy a 10GB file from data1 to data2, I get 1.63Gb/s. Without the transfer from the QNAP to data1 running, I still copy from data1 to data2 at 1.63Gb/s. I noticed compression was enabled by default and read that there should be no performance hit. I disabled compression and copying from data1 to data2 went up to 1.85Gb/s. CPU and RAM usage went up drastically, but there was still headroom.
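(For reference, I flipped compression through the dataset's Compression property in the GUI; in raw ZFS terms that boils down to something like the following on my test volume:
zfs get compression data1    # check what's currently set
zfs set compression=off data1    # what I used for the test; I'll likely turn it back on, since lz4 is the usual recommendation
)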

I originally wanted to merge the old volume with the new volume to create one large volume. After playing around with it last night, it looks like this does not work well from a storage perspective.
Q1: Is there a way to merge 2 volumes together, "scrub" them and create 1 large volume with RAIDZ3 without losing any data? (From what I have read, it looks like a no.)
Q2: Is there a limit on the number of drives I can put into the same volume? I was thinking of just going out and getting a Supermicro chassis that can take 24 drives and putting all 24 drives into a single RAIDZ3 volume (this is strictly for storage; the jail drives are mirrored SSDs and the boot drive is a single SSD).
Q3: Is there a way to build multiple FreeNAS devices into a "storage cloud", so that when I run out of space and capacity on a system I can just add another system to the cloud to increase the volume size?
Q4: My first system is taking a couple of days to resilver a 2TB drive onto its 4TB replacement. The system is running at 80% capacity. Should it take this long? Have I done anything wrong in the "simple process" of taking the drive offline, replacing it, then resilvering (a rough sketch of the steps I've been following is after Q5)? Is there a more effective way to do this?
Q5: Aside from moving my jails onto remote servers (VMware) and connecting via iSCSI, any better ideas for a solution?
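(For Q4, the replacement steps I've been following, shown with placeholder pool/disk names rather than my real ones, boil down to roughly this in raw ZFS terms (I normally drive it through the GUI):
zpool offline tank da3    # take the old 2TB drive offline
# physically swap in the 4TB drive
zpool replace tank da3 da12    # start the resilver onto the new drive
zpool status -v tank    # watch resilver progress and the estimated time remaining
)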

This box is not a commercial box; it's strictly for my own personal use (clearly an expensive but rewarding hobby).
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Q1: Is there a way to merge 2 volumes together, "scrub" them and create 1 large volume with RAIDZ3 without losing any data? (From what I have read, it looks like a no.)
Your understanding is correct. If there's room on one of the volumes (call it vol1), you can copy the data from the other (call it vol2) to vol1, then destroy vol2 and add its member disks to vol1, but there's no way to combine two pools into one.
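As a rough CLI sketch of that migration (placeholder names; on FreeNAS you'd normally do the copy with snapshots/replication and the extend step through the Volume Manager rather than typing this):
zfs snapshot -r vol2@migrate    # snapshot everything on vol2
zfs send -R vol2@migrate | zfs recv vol1/from-vol2    # replicate it into a dataset on vol1
zpool destroy vol2    # only after you've verified the copy
zpool add vol1 raidz2 da8 da9 da10 da11 da12 da13    # re-use vol2's member disks as a new vdev in vol1
The disk list and vdev type on the last line are placeholders for whatever vol2 was actually built from.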
Q2: Is there a limit on the number of drives I can put into the same volume? I was thinking of just going out and getting a Supermicro chassis that can take 24 drives and putting all 24 drives into a single RAIDZ3 volume (this is strictly for storage; the jail drives are mirrored SSDs and the boot drive is a single SSD).
The recommendation I most commonly see is not to exceed 10-11 disks in a vdev, but you can have multiple vdevs in a pool (volume). Your hypothetical 24 disks could be in three 8-disk RAIDZ2 vdevs (or even, less optimally, two 12-disk RAIDZ2 vdevs), combined into a single pool.
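In raw zpool terms (placeholder pool/disk names; on FreeNAS you'd build this in the Volume Manager rather than at the CLI), that three-vdev layout looks roughly like:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 raidz2 da8 da9 da10 da11 da12 da13 da14 da15 raidz2 da16 da17 da18 da19 da20 da21 da22 da23
Each raidz2 group is its own vdev, and ZFS stripes writes across all three.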
Q3: Is there a way to build multiple FreeNAS devices into a "storage cloud", so that when I run out of space and capacity on a system I can just add another system to the cloud to increase the volume size?
No way to do that with separate systems, but you could add a drive shelf/JBOD with a SAS link to your server.
 

weecho

Dabbler
Joined
Jan 28, 2015
Messages
11
Your understanding is correct. If there's room on one of the volumes (call it vol1), you can copy the data from the other (call it vol2) to vol1, then destroy vol2 and add its member disks to vol1, but there's no way to combine two pools into one.

The recommendation I most commonly see is not to exceed 10-11 disks in a vdev, but you can have multiple vdevs in a pool (volume). Your hypothetical 24 disks could be in three 8-disk RAIDZ2 vdevs (or even, less optimally, two 12-disk RAIDZ2 vdevs), combined into a single pool.

No way to do that with separate systems, but you could add a drive shelf/JBOD with a SAS link to your server.

Thanks for your response. I saw the 12-drive limitation quite a while ago, but have yet to find it again. I was hoping it was some sort of made-up number. No one has said what the potential issues are with more than 12 drives. Maybe it's just a made-up number, as I have also seen people state the number of controller cards and ports required to build larger systems.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Actually, the limit is a 9-disk RAIDZ1, 10-disk RAIDZ2, and 11-disk RAIDZ3. The problem is that your data has to be stored in an ever-wider vdev (writes are handled on a per-vdev basis). If you spread your data out over too many disks, performance goes bad and you end up with this really wide but horribly performing pool. It sucks because you'll put all of your data on the pool, and as you fill the pool performance will plummet over time. I did it with an 18-disk RAIDZ3 and used it for over a year just to see how bad it was. It was pretty crappy, and there's a reason I'm now on a 10-disk RAIDZ2. ;)

If you do it, expect performance to be pretty bad (there's limited I/O when you are that wide, so doing more than 1 or 2 things simultaneously will totally stall your pool). Given that you said you use Plex with no more than 8 streams, you might not even be able to support 3 streams if you go really wide.

In short, obey the limit I mentioned in the beginning or expect me to ignore your gripes and complaints later when you say that your zpool can't do basic things like streaming 2 movies simultaneously. ;)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I hear you loud and clear. 100 drives it is :)

I'd pay to see a 100-drive RAIDZ2 vdev. Not more than 3 bucks, though, I'm not that interested in seeing it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'd pay to see a 100-drive RAIDZ2 vdev. Not more than 3 bucks, though, I'm not that interested in seeing it.

Will a screenshot satisfy this? :D
 

weecho

Dabbler
Joined
Jan 28, 2015
Messages
11
I hear you loud and clear. 100 drives it is :)
Then how do people build NAS systems that are 60->100->200TB in size?
Will a screenshot satisfy this? :D


OK, doing some further reading and digging, I think I may have come up with a solution that should work for me and may be worthy enough of cyberjock's time in the future (not 100 drives... yet...).

6 onboard SATA ports (assume om0/1/2/3/4/5) and 8 ports on each M1015 (IT mode) (assume da0/1/2/3/4/5/6/7 and db0/1/2/3/4/5/6/7).
zpool create stuff1 raidz da0 da1 da2 da3 da4 da5 da6 da7 om0 raidz db0 db1 db2 db3 db4 db5 db6 db7 om1
This gives me the ideal configuration of two RAIDZ vdevs striped together. I can tolerate one drive failing in either group.
Next step is to have a hot-standby spare, set up on om2. And yes, I know that if I lose a controller I lose access to all the data until I replace that controller.
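In raw ZFS terms I believe the spare would just be added pool-wide with something like:
zpool add stuff1 spare om2    # a spare attaches to the pool, so it should cover both vdevs
(whether FreeNAS will actually manage that spare for me is basically Q2 below)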
Q1: Am I on the right track? If so, any suggested tweaks?
Q2: Is having a hot-standby spare on a motherboard port, shared across both vdevs, doable?
Q3: In the future I can just add another RAIDZ vdev to the pool. Any issues with this?

Ideally I would love to build something similar to what Backblaze brags about. I want a box that will give me 150-200TB of storage space (with some redundancy) and still be able to provide the performance to handle a couple dozen streams, downloads, and backups from computers (Time Machine). My friends and family members have 25 and 50 meg connections, so network bandwidth will not be an issue (for now).
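Rough math on my end (before any ZFS overhead): an 8-disk RAIDZ2 vdev of 4TB drives gives about (8 - 2) x 4TB = 24TB usable, so a 24-bay box with three of those vdevs tops out around 72TB. Getting to 150-200TB with 4TB drives would mean something like 50-70 drives, or fewer, larger drives.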

Thanks in advance for your help.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Then how do people build NAS systems that are 60->100->200TB in size?
Multiple vdevs per pool, SAS expanders, perhaps multiple HBAs. Not too difficult.

You're not really on the right track, in that you shouldn't be thinking about using the CLI to create pools. The GUI will handle it for you nicely, including striping with as many vdevs as you choose. FreeNAS doesn't support hot spares, though (because FreeBSD doesn't). But as to Q3, yes, you can in the future add another RAIDZ (or RAIDZ2, or RAIDZ3) vdev to the pool, as many times as you choose. You can't, however, remove a vdev once it's been added to the pool. Data on the pool will be striped across all vdevs.
 

weecho

Dabbler
Joined
Jan 28, 2015
Messages
11
Multiple vdevs per pool, SAS expanders, perhaps multiple HBAs. Not too difficult.

You're not really on the right track, in that you shouldn't be thinking about using the CLI to create pools. The GUI will handle it for you nicely, including striping with as many vdevs as you choose. FreeNAS doesn't support hot spares, though (because FreeBSD doesn't). But as to Q3, yes, you can in the future add another RAIDZ (or RAIDZ2, or RAIDZ3) vdev to the pool, as many times as you choose. You can't, however, remove a vdev once it's been added to the pool. Data on the pool will be striped across all vdevs.

I have no problem sticking with the GUI; I just didn't feel comfortable with what I was doing. There isn't a simple button (that I could find) that says "create vdev" and then lets you select specifically which drives you want in that vdev or the RAID level.

My (limited) knowledge of expanders is that their performance can become a limitation quite quickly (please correct me if I'm wrong). My understanding is that you can connect an expander and gain drive count, but you have just cut your throughput by the expander's fan-out. If drives run at 1.2Gb/s and my controller ports run at 3Gb/s, I can split each of my 8 ports twice, giving me 16 drives while maximizing throughput. Any more and my controller cards become a bottleneck. I could also be over-engineering this project. But then again, it all started with driving to a friend's place with a few USB drives to grab some content and leave my off-site backup, and now nearly everything is centralized in a datacenter.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
My (limited) knowledge of expanders is that their performance can become a limitation quite quickly (please correct me if I'm wrong). My understanding is that you can connect an expander and gain drive count, but you have just cut your throughput by the expander's fan-out. If drives run at 1.2Gb/s and my controller ports run at 3Gb/s, I can split each of my 8 ports twice, giving me 16 drives while maximizing throughput. Any more and my controller cards become a bottleneck. I could also be over-engineering this project. But then again, it all started with driving to a friend's place with a few USB drives to grab some content and leave my off-site backup, and now nearly everything is centralized in a datacenter.

Well, RAM bandwidth can be a bottleneck too. The reality is that you can't judge what will be a limit without looking at all of the bandwidth limitations and then figuring out where the weakest point is. For 99% of home users, that 1Gb LAN port *is* going to be your bottleneck.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Will a screenshot satisfy this? :D

I dunno, it loses some of its circus freakshow charm...

I have no problem sticking with the GUI; I just didn't feel comfortable with what I was doing. There isn't a simple button (that I could find) that says "create vdev" and then lets you select specifically which drives you want in that vdev or the RAID level.

My (limited) knowledge of expanders is that their performance can become a limitation quite quickly (please correct me if I'm wrong). My understanding is that you can connect an expander and gain drive count, but you have just cut your throughput by the expander's fan-out. If drives run at 1.2Gb/s and my controller ports run at 3Gb/s, I can split each of my 8 ports twice, giving me 16 drives while maximizing throughput. Any more and my controller cards become a bottleneck. I could also be over-engineering this project. But then again, it all started with driving to a friend's place with a few USB drives to grab some content and leave my off-site backup, and now nearly everything is centralized in a datacenter.

Of course, you don't magically get performance out of thin air. However, SAS2 has plenty of spare capacity when you're dealing with mechanical drives. If a single drive manages something like 1.5Gb/s, a single 8-port SAS2 HBA has enough bandwidth for 32 drives at their full speeds; in practice, you can probably have a few more drives before you'd notice a bottleneck in the storage subsystem.
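Back-of-the-envelope: a SAS2 lane is 6Gb/s, so an 8-lane HBA has roughly 8 x 6Gb/s = 48Gb/s to work with, and 48Gb/s / 1.5Gb/s per drive = 32 drives before the HBA itself becomes the limit. In practice drives rarely all stream at full speed at once, which is where the "few more" comes from.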

Then there's the rest of the system - this is a useless conversation if there's a bigger bottleneck somewhere else.
 

weecho

Dabbler
Joined
Jan 28, 2015
Messages
11
I agree wholeheartedly. Unfortunately I'm the 1%.

I found the following link, https://forums.freenas.org/index.php?threads/160-tb-component-discussion.12748/

Q1: What is the GUI way to create vdevs and then add them to an existing volume? I see that under advanced mode you can do this when creating the volume initially. Is it specific enough that you can select which drive goes into each vdev, just like the freedom you have at the command line?
Q2: Any known issues with adding vdevs (groups of RAIDZ drives) to an existing volume?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I agree wholeheartedly. Unfortunately I'm the 1%.

And what makes you think that? Nothing here makes me think you are the 1%, except for your choice to use desktop hardware in a server. ;)
 

weecho

Dabbler
Joined
Jan 28, 2015
Messages
11
And what makes you think that? Nothing here makes me think you are the 1%, except for your choice to use desktop hardware in a server. ;)

Nice... :)
My requirements seem to be a little larger than what I'm seeing on the board (3-5 drives, up to 10TB, etc.): at-home boxes, no real high-speed network, and so on. But you are correct, the MB, RAM and CPU are desktop grade. It's all a learning process; the next box is server grade. I was now thinking of buying two 24-port Supermicro backplanes from eBay... get out my tin snips and welder... a little duct tape, voila, more storage :)
 

weecho

Dabbler
Joined
Jan 28, 2015
Messages
11
And what makes you think that? Nothing here makes me think you are the 1%, except for your choice to use desktop hardware in a server. ;)
And after my last update on this new box I'm playing with, I realized I used the wrong firmware for the M1015 controller cards. I should be running P16, not P19.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Q1: What is the GUI way to create vdevs and then add them to an existing volume?
Go to Storage -> Volume Manager, select your pool under "Volume to extend", select the disks you want to add to a new vdev, select the RAIDZ level from the drop-down, click "Extend Volume".
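(Under the hood that's roughly the equivalent of a zpool add of a new RAIDZ2 vdev, e.g. zpool add tank raidz2 da8 da9 da10 da11 da12 da13 with placeholder names, but stick with the GUI so FreeNAS partitions and labels the disks the way it expects.)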
Q2: Any known issues with adding vdevs (groups of RAIDZ drives) to an existing volume?
Not that I've seen.
 