Hello!
I have been lurking and studying for quite some time, waiting for the timing to be right for me to invest in a higher-quality file server. For over a decade I have had a centralized file server in my home serving up everything from shared media to NFS-mounted root filesystems for servers. My architecture has always been disproportionate to my usage, in that I would often cut corners to save money or time even though the way I was using the file server justified much less corner cutting.
I knew the folly of my ways, so thankfully I always maintained solid LTO3 tape backups, courtesy of employee giveaways of old hardware at work (working at a data center has its perks!).
The time became right in recent months to pull the trigger and redo everything with a well-built, well-thought-out FreeNAS machine. FreeNAS was the obvious choice for me because ZFS is the obvious choice; I have years of experience as a Solaris admin working with ZFS, so it was a logical fit.
So, I've gone ahead and set up a prosumer-grade AMD CPU + motherboard with a whopping 4 GB of RAM I found at Radio Shack in the discount bin, 24 hard drives of varying size that I'll do as RAIDZ1, and a power supply I had handy from my old 486 DX/2 from college (using power splitters and converters, of course). I'll be running all of this in a VirtualBox VM with Windows XP as the host...... :)
I kid, I kid. I found a pretty sweet deal on eBay for a fully built SuperMicro setup with 32 GB of ECC memory, that gorgeous SuperMicro 24-bay chassis with redundant power supplies, and dual hex-core Xeons. I have a pretty slick power conditioner/surge protector (another sweet hand-me-down from work) and a nice enterprise-grade UPS (thanks again, work!) that should run things for about 3 hours on battery.
So the hardware, I think, is under control. I'm running memtest86 aplenty and doing lots of stress testing on the machine.
My main concern at this point is how to set up the storage I have at my disposal. I have the following:
8x 2 TB
8x 1 TB
8x 750 GB
My general plan is to set up 3 vdevs, each as RAIDZ2, which I think is the obvious grouping given the disks available.
Initially my thought was to create one giant zpool with the 3 vdevs. I understand the main gain from doing this is performance, since I/O gets striped across all three vdevs. I will have 2 Gbit of network throughput (good, because I have many clients connecting, and yes, I have the requisite network hardware to support LACP), so the performance gain might be worthwhile.
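Roughly, the single-pool layout I'm picturing looks like this (pool name and device names are placeholders, and in practice I'd build it through the FreeNAS GUI rather than by hand):

    # One pool, three 8-disk RAIDZ2 vdevs (device names are placeholders)
    zpool create tank \
      raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
      raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
      raidz2 da16 da17 da18 da19 da20 da21 da22 da23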
However, the idea of any one vdev going bad and taking the whole pool with it has led me to consider setting up three separate pools instead:
1 pool for production use (2 TB drives)
1 pool for replication of critical data from the 1st pool (git repositories, financial data, pictures, whatever the wife deems important, etc.)
1 pool for ?? (which led me to consider...)
Or perhaps two pools: the 2 TB drives as one pool, and the 1 TB + 750 GB drives as another pool for replication of important data (a rough sketch of the replication follows).
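For either multi-pool variant, the replication itself would just be periodic snapshots plus send/receive, along these lines (pool, dataset, and snapshot names are placeholders; FreeNAS's periodic snapshot and replication tasks do the same thing from the GUI):

    # Initial full replication of a critical dataset (names are placeholders)
    zfs snapshot tank/critical@snap1
    zfs send tank/critical@snap1 | zfs recv -F backup/critical
    # Later runs only send the incremental delta between snapshots
    zfs snapshot tank/critical@snap2
    zfs send -i tank/critical@snap1 tank/critical@snap2 | zfs recv backup/critical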
The plan beyond that is to synchronize the "super important" stuff to an off-site S3 bucket or something similar.
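I haven't picked tooling for the off-site piece yet; I'm imagining something as simple as a nightly cron job using the AWS CLI (the bucket name and path below are made up):

    # Nightly one-way sync of the most critical data to S3 (bucket/path are placeholders)
    0 3 * * * /usr/local/bin/aws s3 sync /mnt/tank/critical s3://example-offsite-bucket/critical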
My usage is quite varied. I boot a cluster of 14 Raspberry Pis using the central file server as their root filesystem storage location (so no more failing SD cards!), and it's used to store a ton of DVDs (legal rips of the collection I've accumulated), music, photos, the usual kind of stuff. It also stores centralized "desktops" for all Windows and Linux machines in the house; for example, all Windows machines share the same Desktop folder, which is a mount from the file server, so desktop icons/items are the same on all machines.
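For reference, the Pi side of that is nothing exotic; each Pi's kernel just points at an NFS export on the file server, roughly like this (the server IP and export path are placeholders for my setup):

    # /boot/cmdline.txt on one of the Pis (IP and path are placeholders)
    root=/dev/nfs nfsroot=192.168.1.10:/mnt/tank/pi/node01,vers=3 rw ip=dhcp rootwait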
So, were you to start from scratch with this setup, would you go with 1 big volume, 2 medium-sized, or 3 average-sized?
I do intend to carve out datasets, and the DVD collection is the big one that should remain entirely within a single volume (around 11 TB); presumably this would go on whichever volume has the 2 TB-disk vdev.
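For what it's worth, the dataset layout I'm picturing is roughly this (names are just illustrative, on whichever pool ends up as "tank"):

    # Dataset sketch (names are illustrative)
    zfs create tank/dvds       # DVD rips, ~11 TB, needs to stay in one volume
    zfs create tank/media      # music, photos, etc.
    zfs create tank/desktops   # centralized Windows/Linux desktop shares
    zfs create tank/pi-roots   # NFS root filesystems for the Pi cluster
    zfs create tank/critical   # git repos, financial data, pictures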
Appreciate any input (or schooling on things I got wrong in my plan), and thanks to all the folks who spend hours and hours reading and replying to forum threads answering questions. Really cool of you all!
I'm only just starting the burn-in testing of the hardware so I've got time to consider final production configuration, and would prefer to follow best practices out of the gate. :)