Greetings, how's my plan? :)


ghostt

Cadet
Joined
Jun 18, 2015
Messages
5
Hello!

I have been lurking and studying for quite some time, waiting for the timing to be right for me to invest in a higher-quality file server. For over a decade I have had a centralized file server in my home serving up anything from shared media to NFS-mounted root file systems for servers. My architecture has always been disproportionate to my usage, in that I would often cut corners to save money or time even when using the file server in ways that justified much less corner cutting.

I knew the folly of my ways, so thankfully I always maintained solid LTO3 tape backups thanks to work-supplied employee giveaways of old hardware (working at a data center has its perks!).

The time became right in recent months to pull the trigger and redo everything with a well-built, well-thought-out FreeNAS machine. FreeNAS was the obvious choice for me because ZFS is the obvious choice. I have years of experience as a Solaris admin working with ZFS, so it was a logical thing for me.

So, I've gone ahead and set up a prosumer-grade AMD CPU+motherboard with a whopping 4GB of RAM I found at Radio Shack in the discount bin, 24 hard drives of varying size I'll do as RAIDZ1, and a power supply I had handy from my old 486 DX/2 from college (using power splitters and converters, of course). I'll be running all of this in a VirtualBox VM on Windows XP as the host...... :)

I kid, I kid. I found a pretty sweet deal on eBay for a fully built SuperMicro setup with 32GB of ECC memory, that gorgeous SuperMicro 24-bay chassis with redundant power supplies, and dual hex-core Xeons. I have a pretty slick power conditioner/surge protector (another sweet hand-me-down from work), and a nice enterprise-grade UPS (thanks again, work!) that should modestly run things for about 3 hours on battery.

So hardware I think is under control. I'm running memtest86-a-plenty and doing lots of stress testing on the hardware.

My main concern at this point is how to set up the storage I have at my disposal. I have the following:
8x 2TB
8x 1TB
8x 750GB

My general plan was to set up 3 vdevs, each as RAIDZ2, which I think is pretty obvious given the disks available.

Initially my thoughts were to create one giant zpool with the 3 vdevs. I understand my main 'gain' by doing this is performance improvement. I will have 2Gbit of throughput (good because I have many clients connecting, and yes, I have the requisite network hardware to support LACP), so the performance gain might be worthwhile.
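For illustration, the single-pool layout I'm picturing would look roughly like this from the command line (device names are placeholders; in practice I'd build it through the FreeNAS volume manager so the disks get partitioned and labelled properly):

Code:
# one pool, three RAIDZ2 vdevs
# da0-da7 = 2TB, da8-da15 = 1TB, da16-da23 = 750GB (placeholder device names)
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
  raidz2 da16 da17 da18 da19 da20 da21 da22 da23
zpool status tank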

However, the idea of any one vdev going bad taking the whole pool with it has led me to consider setting up three separate pools instead.

1 pool for production use (2tb drives)
1 pool for replication of critical data from the 1st pool (git repository, financial data, pictures, whatever the wife deems important, etc)
1 pool for ?? (this led me to consider...)

Or perhaps two pools: the 2TB drives as one pool, and the 1TB + 750GB drives as another pool for replication of important data.
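If I go a multi-pool route, the replication between pools would just be the usual snapshot + send/receive dance, something like the sketch below (pool and dataset names are examples only; FreeNAS can also schedule this through its replication tasks):

Code:
# 'tank' = 2TB pool, 'backup' = 1TB/750GB pool (example names)
zfs snapshot -r tank/important@2015-06-18
zfs send -R tank/important@2015-06-18 | zfs receive -F backup/important
# later runs send incrementals between snapshots
zfs send -R -i tank/important@2015-06-18 tank/important@2015-06-19 | zfs receive -F backup/important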

Plan beyond that is to synchronize "super important" stuff to an off-site S3 bucket or something.

My usage is quite varied. I boot a cluster of 14 Raspberry Pis using the central file server as their root file system storage location (so no more failing SD cards!), and it's used to store a ton of DVDs (legal rips of my collection that I've accumulated), music, photos, the usual kind of stuff. It also stores centralized "desktops" for all Windows and Linux machines in the house, so, for example, all Windows machines have the same desktop folder, which is a mount from the file server, so desktop icons/items are the same on all machines, etc.

So, were you to start from scratch with this setup, would you go with 1 big volume, 2 medium size, or 3 average size?

I do intend to carve out datasets, and the DVD collection is the big one that should remain entirely within a single volume (around 11TB); presumably this would go on whichever volume has the 2TB-disk vdev.
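(For the dataset layout I'm picturing something simple along these lines; the names are just placeholders:)

Code:
zfs create tank/dvds
zfs create tank/music
zfs create tank/photos
zfs create tank/pi_roots
zfs set compression=lz4 tank/pi_roots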

Appreciate any input (or schooling on things I got wrong in my plan), and thanks to all the folks who spend hours and hours reading and replying to forum threads answering questions. Really cool of you all!

I'm only just starting the burn-in testing of the hardware so I've got time to consider final production configuration, and would prefer to follow best practices out of the gate. :)
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Very nice 1st post. You had me when I read about the "AMD prosumer build with 4GB RAM". I was already putting on my 'angry eyes'. I'd probably just do a single pool with 3 RAIDz2 vdevs.

If you want to make a second post, do a how-to on network booting raspberry pi's with the root file system on a FreeNAS server. :D
 

ghostt

Cadet
Joined
Jun 18, 2015
Messages
5
Very nice 1st post. You had me when I read about the "AMD prosumer build with 4GB RAM". I was already putting on my 'angry eyes'. I'd probably just do a single pool with 3 RAIDz2 vdevs.

If you want to make a second post, do a how-to on network booting raspberry pi's with the root file system on a FreeNAS server. :D

Thanks for the reply, glad I could inject some humor with my 'fake build'. :)

So going with a single pool was my original plan. Is the primary benefit here performance (with simplicity of organization as a secondary benefit, I suppose)?

Something keeps leading me back to wanting to keep at least 1 vdev separate as a replication pool for further redundancy of 'important' data. The DVD collection is the lion's share of the disk used (10-11TB), but it's also the least important data, since I don't care about its redundancy as much (I still have the original DVDs and can re-rip if need be).

Am I simply being too paranoid? I do have a ZFS background, and most of it was good luck, but I did have my share of "mishaps" too, which keeps me on edge about a single pool. :)

Will be happy to share the process of having the Pis booting from the NAS; it's actually pretty quick and easy to do. Once I migrate everything over to FreeNAS I'll do a quick guide, take screenshots as I 'rebuild' it, and share. :)

Cheers!
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Am I simply being too paranoid? I do have a ZFS background, and most of it was good luck, but I did have my share of "mishaps" too, which keeps me on edge about a single pool. :)

I know that feeling. That's why you need to build a second FreeNAS box as a replication target. I assume you have a few spare U's to fill up. :)
 

ghostt

Cadet
Joined
Jun 18, 2015
Messages
5
I know that feeling. That's why you build a second FreeNAS box to use as a replication target. :D

Not a bad idea. Do most people who do so go with something a bit more budget-friendly (aka cheaper)? I think most of the data I would need replicated for safekeeping comes in at under 2TB, so I could in theory set up a rather small replication environment with a mirror of two 2TB drives and call it a day.

Feasible?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Not a bad idea. Do most people who do so go with something a bit more budget-friendly (aka cheaper)? I think most of the data I would need replicated for safekeeping comes in at under 2TB, so I could in theory set up a rather small replication environment with a mirror of two 2TB drives and call it a day.

Feasible?
Totally feasible. I'd still try to stay within the umbrella of recommended hardware. The cost delta isn't really that large. I think I saw a TS140 on Amazon the other day for $275. You'd just need to add a bit more ECC RAM.
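The backup box itself can be dead simple, roughly along these lines (device and host names are placeholders; FreeNAS can drive the send over SSH with a scheduled replication task):

Code:
# on the backup box: a plain two-disk mirror (placeholder device names)
zpool create backup mirror ada1 ada2
# from the main box: push snapshots of the important datasets over SSH (hostname is a placeholder)
zfs send -R tank/important@nightly | ssh backupnas zfs receive -F backup/important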
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
If you want to make a second post, do a how-to on network booting raspberry pi's with the root file system on a FreeNAS server. :D

Create an iSCSI target.

Install Berryboot on the Pi.

Configure Berryboot to connect to the iSCSI target for its disk.

A little different from PXE boot, but it accomplishes the same thing.
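Roughly, on the FreeNAS side you back each target with a zvol and then point a device extent at it in the GUI (Sharing -> Block (iSCSI)); the names and size below are just examples:

Code:
zfs create tank/iscsi
zfs create -V 8G tank/iscsi/pi-webserver01
# target + extent are then configured in the FreeNAS GUI, not from the shell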
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
LOL.. that's not the kind of thoroughness I think anodos was looking for.
 

ghostt

Cadet
Joined
Jun 18, 2015
Messages
5
Create an iSCSI target.

Install Berryboot on the Pi.

Configure Berryboot to connect to the iSCSI target for its disk.

A little different from PXE boot, but it accomplishes the same thing.

I went the other route: a small SD card with just a boot partition, and the kernel command line booting the root off NFS.

My main reason for going this route was to keep all the Pi root filesystems inside a common directory structure (e.g., /fileshare/raspberry_pi_nfs_roots/webserver01). What this afforded me (and this might sound nuts/stupid) was the ability to put each of those NFS roots into my git repository, which allows for some fun stuff (it helps out a ton for testing: set up a branch for testing a new web app, roll back easily, etc.). The downside is that you're storing a bunch of binary data in git, which is counterintuitive, but overall it's been really useful for testing, moving Pis around, and making them more like 'resources' than 'computers'.
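For the curious, the boot side is basically just the SD card's cmdline.txt pointing the kernel at the NFS export, all on one line appended to the usual console options (the server IP here is a placeholder for the file server):

Code:
root=/dev/nfs nfsroot=192.168.1.10:/fileshare/raspberry_pi_nfs_roots/webserver01,vers=3 rw ip=dhcp rootwait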

I never considered doing iSCSI, only because my file server has been a pile of junk all this time and introducing another protocol never seemed like a swell idea. I may revisit and consider iSCSI once I get the FreeNAS environment operational.
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219

nice.
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Not a bad idea. Do most people who do so go with something a bit more budget-friendly (aka cheaper)? I think most of the data I would need replicated for safekeeping comes in at under 2TB, so I could in theory set up a rather small replication environment with a mirror of two 2TB drives and call it a day.

Feasible?
You don't need an entire system for a 2TB backup.
A USB external drive might also work if you really want it cheap.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
You don't need an entire system for a 2TB backup.
A USB external drive might also work if you really want it cheap.
It depends. My experience is that you want to automate backups as much as possible. A second box is more expensive than an external drive, but it scales better, protects against bitrot, can run replication automatically, and is faster than a USB hard drive.

It's a matter of priorities. Some people want to pay more for a more convenient or better solution.
 

ghostt

Cadet
Joined
Jun 18, 2015
Messages
5
Thanks for all the great feedback, I appreciate it! :)

During setup/burn-in testing I found that some of the drives I intended to use are indeed having read errors, so I need to re-plan how my vdevs will be laid out and how my final volume will look.

My original plan was to do as follows:

I currently have:
8x 750GB drives (at least 2, possibly 3 disks with SMART errors)
6x 1TB drives
6x 2TB drives

- start with the 8x 750GB drives, as they are already freed up, and build the first RAIDZ2 vdev from them
- start migrating data from the old environment to this new volume, slowly peeling away data until all drives of a given type/size are freed up (e.g., the 1TB drives next)
- purchase 2 additional 1TB drives (this is where I start to be unclear whether I'm focusing too much on symmetric vdevs)
- create a new vdev with the 6 original and 2 new 1TB drives
- integrate the new vdev into the existing pool
- begin transferring data from the 2TB drives in the old environment to the FreeNAS pool
- eventually, once all data is moved, purchase 2 more 2TB drives
- create an 8x 2TB-disk vdev
- integrate the new vdev into the existing pool

So, with this layout I was putting a lot of focus on keeping vdev disk counts the same. Partially because of organization (a 24-bay chassis and 8x 750GB disks made 3 sets of 8 disks seem like a good round number), and partially because I wanted to maximize each vdev's storage without sacrificing performance/redundancy (i.e., RAIDZ2 seems to me to scale nicely past 5 disks).

My primary question is: how important is it to keep vdevs symmetrical? I know the volume manager wizard will only append a new vdev to an existing pool if its RAIDZ type and disk count match the existing vdevs, but with 'advanced' mode it seems to work fine (plus it can be done fine on the command line as well, of course).
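For reference, the command-line version of appending a mismatched-width vdev looks roughly like this; zpool balks at the mismatched replication level and needs -f to go through (device names are placeholders):

Code:
# appending an 8-disk RAIDZ2 vdev to a pool whose existing vdev is 6 disks wide
zpool add tank raidz2 da10 da11 da12 da13 da14 da15 da16 da17
# -> warns about a mismatched replication level; -f overrides the check
zpool add -f tank raidz2 da10 da11 da12 da13 da14 da15 da16 da17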

I am trying to stick with best practices here, though, and I've read many times in the past that it's 'bad' to have vdevs with different numbers of physical disks.

What do you all think about this? Basically, as of now, I expect to throw out at least two of the 750s (two are definitely showing SMART issues, and a third had issues early on that have since stopped appearing).
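(For the record, 'having SMART issues' here means mainly reallocated/pending/uncorrectable sectors creeping up during the long tests; the device name is just an example:)

Code:
smartctl -t long /dev/da2
smartctl -a /dev/da2 | egrep "Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable"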

I suspect I will encounter similar issues with the 1TB drives, as they're fairly old like the 750s. I'm wondering if at this point I should toss the 750s out of the equation entirely, start with the 2TB drives, and slowly work in the 1TB drives (and, if those 1TB drives encounter significant problems, replace them with 2TB disks and eventually grow the vdev once all of its disks are 2TB).

Curious how the experts would proceed. I think I'm fairly well informed on the subject, but I'm not entirely clear on the seriousness of asymmetric vdev disk counts.

* edit *
* Oh, by the way, in my long-windedness I failed to mention that I really do need to push the 1TB and 2TB vdevs to 8 disks to meet my capacity requirements. If I kept all of them at 6 disks per vdev, I'd be sitting at around 80% utilization, which is way too high for my taste.

The alternative I see here is to stick with 6-disk vdevs and ditch the 750s entirely due to their age and likelihood to fail soon anyway, which would leave 6 free SAS slots in the chassis, and eventually purchase some 3TB or 4TB disks to create a 3rd vdev. *
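(Rough math behind that ~80% figure, counting raw capacity only, two parity disks per RAIDZ2 vdev, and ignoring ZFS overhead:)

Code:
6-disk vdevs: (4 x 0.75) + (4 x 1) + (4 x 2) = 15 TB usable   -> ~12 TB of data = ~80%
8-disk vdevs: (6 x 0.75) + (6 x 1) + (6 x 2) = 22.5 TB usable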

Thanks!
 