Old Parts or New Build

Status
Not open for further replies.

i_max2k2

Cadet
Joined
Mar 31, 2018
Messages
8
Hello Guys,

Just a few weeks ago, I started thinking about getting my first NAS, so I have been doing some reading, and at this point I want to do a FreeNAS build. To figure out what parts would work for my ideal build, here are the use cases for what I would like to do:

1. Back up our iPhones, pictures, documents etc.
2. Store my media backup, which would take the majority of the space.
a. Have the ability to perhaps run a VM with a disc drive connected to the setup, to be able to back up my discs using MakeMKV etc. Would I need direct I/O ability for this?
3. Have some redundancy, at least with my personal data and some for the media.

For starting out, I've got 2x 10TB HGST 7200 RPM NAS drives, and I'd eventually like to add as many 10TB NAS drives as I can.

Now I have an old setup which I have upgraded over time and which I'm considering converting into part of this build:
2600K (HT enabled) quad-core Sandy Bridge on an Asus Maximus IV Extreme-Z motherboard,
w/ 32GB (8x4) DDR3 2133MHz RAM. The motherboard has 4 native SATA ports + 2 SATA ports controlled by a Marvell controller.
4 x PCIe 2.0 x16 (x16, or dual x8, or x8/x16/x16)
1 x PCIe 2.0 x4
1 x PCIe 2.0 x1.
I also have 2 x 120GB SSDs which could be used in RAID 0 for cache or as an OS drive, with a backup to one of the NAS drives?

Now, the memory is not ECC, which is probably not good; I've not read that much about FreeNAS yet (hence the questions).
Could I expand the storage with some PCIe-based SATA expansion cards?

Now my biggest worry would be whether I can do the Blu-ray rips without VT-d, which the 2600K doesn't support (only the 2600 does).

I'm also open to building a new setup; I'm going to get a new case for this, and do a new PC build for my primary PC if this can be worked into a NAS.

Thanks for your help!
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Welcome to FreeNAS and the Forums. Please make sure you read our Resources area for recommended hardware.

I'll be completely honest with you: your old computer system may actually work for you, up to the point of running a VM, since there is no VT-d.

It appears you do not yet understand how FreeNAS/FreeBSD works; the RAID 0 cache you mentioned, nope. You need to take this one step at a time and realize up front that FreeNAS is not a standard NAS with RAID 5 or other RAID setups; ZFS is way different.

Here is my advice and I'm being serious as it will help you either understand and like FreeNAS or it will turn you off completely.

1) Build your system up using all your RAM, one SSD as the boot device, and the two 10TB NAS drives in a mirror for storage. Install FreeNAS (current version) and configure your system. Play with your system and do not commit your data to it. You need to play with it: destroy the data/pool, recreate it, encrypt it if you want, pull a drive, then erase it and add it back as a new drive; put FreeNAS through its paces. You MUST understand FreeNAS if you value your data, or one day you could lose all your data due to a simple mistake. Many people have made simple mistakes here and it has cost them all their data.

2) Create a VM on say your Windoze PC using VMWare Workstation Player and test drive FreeNAS. It will need a minimum of 4GB RAM to run in a VM for basic use, 8GB if you plan to create jails and VMs within FreeNAS. This is the alternative to option 1 above.

3) After about 2 months, if you have done what I've asked and FreeNAS does what you want, rebuild it from scratch. During this rebuild you MUST figure out how much storage capacity you need and build the pool properly. By this time hopefully you have done enough research to figure this out. Remember that you should be keeping 20% free space minimum for normal NAS use, 50% for iSCSI use. We can go over the calculations later; it's not too difficult.

And read the user manual from cover to cover, twice! Trust me, it helps.

I hope this helps you out and Good Luck!
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I did both 2 then 1 first time round :)

My 1 box is still my backup system ;)

And my 2 VM is used for burning USB boot drives ;)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I used #2 option myself the first time around. It also allowed me to justify the money I was about to spend. And at the time, FreeNAS 8.0 was just becoming a reality so it was a risk to spend over $800 on a NAS, but still much cheaper than a store bought NAS with similar performance. I think I got into FreeNAS because I enjoyed the software development and testing aspect of it.

Same here, my first FreeNAS box is now my second ESXi box which sports my firewall and my backup FreeNAS.
 

i_max2k2

Cadet
Joined
Mar 31, 2018
Messages
8
Thank you for the detailed reply. I think it's a great idea to do this in a VM first and get to know FreeNAS a bit more before putting any real data on it. I definitely need to understand how FreeNAS works a little more to figure out the details first. And I can also try through the VM to use the disc drive to see if that works.
 
Last edited by a moderator:

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
There is a lot you can do with ESXi and VMWare Workstation, but you need to understand the limitations as well if you end up using ESXi as your main platform for the long run. For evaluation purposes, though, you can't beat it.

And I can also try through the VM to use the disc drive to see if that works.
Typically the way ESXi and VMWare Workstation work is that they create a VMDK file that becomes the new drive you created, and all the data is stored in this one large file. You can pass through the drive so it's recognized as a real hard drive, but for your initial tests and getting used to FreeNAS, I wouldn't bother with that advanced feature. The good thing about pass-through is that the hard drives could be moved to any other computer and run as a bare-metal FreeNAS; you wouldn't need ESXi. This is the preferred way of doing things when you are building up your final machine. We can discuss this later if this is the path you go down, but FreeNAS on a bare-metal machine is the way most users go, forgoing ESXi use.

Good luck.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I'll offer OP an additional piece of advice.
It is highly beneficial to really think through how to migrate your data, since drives will need to be formatted to ZFS and there are restrictions on how a ZFS pool handles additions of new drives. Coming from other environments, it is quite unforgiving!

As you'll find out down the line of research, RAIDZ2 will at some point emerge as an alternative.
If you choose to go that route, you'll have to be aware of the requirement to find enough drives to temporarily store your data during the "remodel" to FreeNAS.
I, for example, had started collecting drives and filling them with data prior to settling on a RAIDZ2 of 7 drives. That would require me to supply 7 empty drives at once.
I could only muster up so much space among friends and backups that I still had to get rid of substantial amounts of data just to make the migration happen.

Cheers,
 

i_max2k2

Cadet
Joined
Mar 31, 2018
Messages
8
So I have been reading some and trying to learn more. Since I started by buying just 2 NAS drives (2 x 10TB HGST Deskstar 7200 RPM drives), hoping to get a few more down the road, I'm in a bit of a fix. As I understand it, when creating a vdev, if I use hard disks of different sizes (not sure if that's recommended), say starting with a 6-drive RAIDZ2 config using 2 of the 10TB drives and some other ones, the effective storage space per hard drive is limited to the smallest drive. Initially I'd also not have that much data to store. Until I can replace the rest of the drives with all 10TB ones, I'll be limited to the 1 or 2TB of the smallest drive I start with, is that right?

I am working on creating the VM. I'll start with a 450GB allocation to it, but the way I wanted FreeNAS to see it was as 75GB x 6 (so looking at a RAIDZ2 with 6 drives in theory). Also realized not having ECC RAM will be a bad idea, since RAM failure will result in losing all data. Is there any way around it? Otherwise, I'd look to build something around a Supermicro motherboard and perhaps start with 16GB of ECC RAM.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Also realized not having ECC RAM will be a bad idea, since RAM failure will result in losing all data.
Not completely true, but it does become a minor risk. I say this because there has been some clarification on how ZFS works, and ECC RAM is not required; the added risk is minimal, or so we are told. However, you will find that most of the users here will still highly recommend ECC RAM, mostly for peace of mind.

I'll be limited to the 1 or 2 tb I start the lowest drive with, is that right?
What you wrote, I'm not sure I understand your question. To clarify, let's say you have six hard drives configured as a RAIDZ2: two are 10TB, two are 4TB, and two are 1TB. This means that your system will be able to store a maximum capacity of 3.6TB. If you replace the two 1TB drives with, say, 10TB drives, then you are now limited by the 4TB drive size and you would have a maximum capacity of 14.6TB.

Think of it this way: in a RAIDZ2 you are writing data across all six hard drives evenly. You must maintain the even writing, so you are limited to the smallest drive size. In reality all the space on your 10TB drive is there, but ZFS will only use the space that can be written across all drives evenly. Hope this makes sense.
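If it helps, the arithmetic can be sketched as a quick back-of-the-envelope calculator (a rough sketch: the only adjustment is converting marketing TB to TiB; real ZFS accounting also subtracts metadata and slop overhead, so actual numbers come out a bit lower):

```python
def raidz_usable_tib(drive_sizes_tb, parity):
    """Rough usable capacity of a single RAIDZ vdev.

    ZFS stripes evenly, so every drive counts as the smallest one,
    and `parity` drives' worth of space goes to redundancy.
    Converts marketing TB (10^12 bytes) to TiB (2^40 bytes);
    ignores ZFS metadata/slop overhead.
    """
    n = len(drive_sizes_tb)
    usable_tb = (n - parity) * min(drive_sizes_tb)
    return usable_tb * 1e12 / 2**40

# Six-drive RAIDZ2: two 10TB, two 4TB, two 1TB -> limited by the 1TB drives
print(round(raidz_usable_tib([10, 10, 4, 4, 1, 1], parity=2), 1))   # ~3.6

# Replace the 1TB drives with 10TB: now limited by the 4TB drives
print(round(raidz_usable_tib([10, 10, 4, 4, 10, 10], parity=2), 1))  # ~14.6
```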

So this is why it is best to plan ahead for the structure of your pool. You could start with two 10TB drives in a mirror and then add two more at a later date, and then two more, and two more. The problem here is the space lost to the extra mirror drives, but some people do this.

So the best thing you can do is figure out what you will want to do with FreeNAS. If you want it to support iSCSI then you need to think differently; if you only want to back up your computer files (true backups, not off-computer storage) then that is different too. If it's just off-computer storage then that also has a typical pool storage layout. Most people will run 5 to 8 hard drives in a RAIDZ2 configuration. Done properly it will provide great performance. Also, you need to use proper NAS-type hard drives, not "Archive" drives.

Hope this helps some.
 
Last edited by a moderator:

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
So this is why it is best to plan ahead for the structure of your pool. You could start with two 10TB drives in a mirror and then add two more at a later date, and then two more, and two more.
Adding to this, you'll come to realize how the very first pool setup has implications not only for the present pool, but also for future drives added to the pool.

Here is a perspective to entertain: you are effectively trading off space efficiency against capital expenditure. A RAIDZ2 of 6 drives is a lot less space efficient than a 12-drive-wide RAIDZ2. Yet when it comes to upgrading, it is certainly a steeper task to go out and buy twice as many drives at once. That should bring your thoughts back to mirrors: while each usable TB is far less capex efficient, far less is spent for each upgrade.
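To put numbers on that trade-off (a sketch; "efficiency" here is simply (drives - parity) / drives, ignoring ZFS overhead):

```python
def space_efficiency(width, parity=2):
    """Fraction of raw capacity left after parity in one RAIDZ vdev or mirror."""
    return (width - parity) / width

# Wider vdevs waste less space on parity...
print(f"{space_efficiency(6):.0%}")   # 6-wide RAIDZ2: 67% usable
print(f"{space_efficiency(12):.0%}")  # 12-wide RAIDZ2: 83% usable

# ...but upgrading means buying the whole width at once.
# Mirrors are only 50% efficient, yet upgrades come two drives at a time.
print(f"{space_efficiency(2, parity=1):.0%}")  # mirror pair: 50%
```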
 

i_max2k2

Cadet
Joined
Mar 31, 2018
Messages
8
Thank you!

I'm starting to understand this. I have a bunch of external and internal drives ranging from 1TB to 4TB lying around. I was thinking that perhaps I could consolidate all the critical data, for a few hours, onto the drives I won't use in the NAS, and use as many as I can in the vdev. I could start with a single mixed-size vdev, and then keep swapping drives out for 10TB ones over time, say one a month. I think I'd end up in the 9-10 drive region to start the RAIDZ2 at. Would there be any issues with this approach? It would be a bit of work to take them all out of their enclosures and plug them in, but it's the cheapest option in the short term.

I also looked at some NAS enclosures, Synology specifically. Perhaps this is not the right place to ask, but they do allow expansion of volumes via their Synology Hybrid RAID config. From what I understand FreeNAS doesn't allow that, but is there a downside to their approach?
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I could start with a single mixed-size vdev, and then keep swapping drives out for 10TB ones over time, say one a month. I think I'd end up in the 9-10 drive region to start the RAIDZ2 at.
This is not clear enough for me to assess whether it falls within the capabilities of FreeNAS.

- You'll need X number of drives right from the initiation of your RAIDZ2, where X is the <final form> of your vdev.
- You can add another RAIDZ2 vdev to your pool at a later date, preferably with the same number of drives as used in your first vdev.
- Once the drive-count criterion is satisfied, drive sizes can differ, yet capacity is limited by the smallest current drive.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
keep swapping drives out for 10TB ones over time, say one a month. I think I'd end up in the 9-10 drive region to start the RAIDZ2 at.
Yes, this could be a serious problem. There is another thing called "resilvering": when you replace a hard drive (normally due to failure), the system will rebuild (aka resilver) the drive using the data on the other working drives. The issue comes in when you have lots of data on your drives. Let's say you are 70% full and all your drives are 10TB; this would mean you have 44TB to rebuild. 10TB drives are slow at rebuilding, and the problem is that if you have two more drive failures during the resilvering process then all your data is gone. So when choosing the capacity of your hard drives, you also must consider the resilvering times. If I had 9 or 10 10TB drives then I'd use RAIDZ3. Sure, I lose one more disk of capacity, but I gain one more drive of fault tolerance.

So you must factor this in as well.

I recently replaced my six 2TB WD Red drives with four 6TB HGST drives. Both pools were RAIDZ2. While I reduced my overall drive count, I also increased my capacity a little bit, but the real concern was that I increased my resilver time considerably. The 2TB drives took something like 3 hours max to resilver; the 6TB drives take about 13 to 16 hours (two drives are 13 hours, one is 15 hours, one is 16 hours; different batches, I guess). So was it really a win for me to make this change? Only time will tell. I reduced power consumption a little, and the weight of the tower case.
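A rough way to see why bigger drives resilver so much longer (a sketch only: the 150 MB/s sustained write rate is an assumed ballpark for a 7200 RPM disk, and real resilvers run slower under load, so treat the result as a floor):

```python
def resilver_hours(used_fraction, drive_tb, write_mb_s=150):
    """Very rough lower bound on resilver time for one replaced drive.

    A resilver must write roughly the used share of the new drive.
    `write_mb_s` is an assumed sustained write speed, not a measured one.
    """
    bytes_to_write = used_fraction * drive_tb * 1e12
    return bytes_to_write / (write_mb_s * 1e6) / 3600

print(round(resilver_hours(0.70, 2), 1))   # 2TB drive, 70% full: ~2.6 h
print(round(resilver_hours(0.70, 10), 1))  # 10TB drive, 70% full: ~13.0 h
```

Those floors line up loosely with the 3-hour and 13-to-16-hour figures above; the gap is the load and fragmentation the formula ignores.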
 

i_max2k2

Cadet
Joined
Mar 31, 2018
Messages
8

Although I understand that it takes a while for a full drive, resilvering probably requires a bunch of read operations, right? Not write operations on the other drives? The reason I think it's still an okay approach is that I don't expect my disk usage to grow fast: I expect less than 400GB added to total usage every month, and I'd add and replace a drive every month. In the beginning I'd probably have at most 4-5TB of data, and then by the end of a year I'd expect my total usage to be <10TB. At that stage I'd probably replace the last drive. I did find these Seagate 2TB drives going for $37 as an Amazon Warehouse deal, as a stopgap until I can replace all the non-10TB drives; then perhaps I'd use them in another RAIDZ2 vdev.

Another thing: my motherboard has 4 SATA 3 ports. I'd disable the 2 Marvell SATA 3 ports, since they always act wonky, and increase the port count using a PCIe card. Any suggestions on that? Lastly, I'm thinking of starting the vdev with 12 drives. That gives about ~60% usable space. Thoughts on that?

EDIT: Just saw this RAIDZ expansion coming to ZFS. I understand it's probably a year or two away, probably to be introduced in FreeNAS 12. But I could potentially start with that in mind: start with a RAIDZ2 setup with 4 drives (or perhaps RAIDZ3), with 4x 10TB in RAIDZ2 giving 13TB of effective free space (much less in Z3), and hopefully whenever the feature lands in FreeBSD, I can expand the vdev. A bit far-fetched, but it could be doable since I won't have an immediate need to fill the array quickly.
 
Last edited:

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I did find these Seagate 2TB drives going for $37 as an Amazon Warehouse deal, as a stopgap
Be careful, you might not get what you think. Also, this drive is not meant to be used in a NAS environment, though it's typically a minor risk from that perspective.

I don't think you understood my comment about resilvering. I was not talking about the period when you are upgrading all drives to 10TB, but about the times you need to replace a failing 10TB drive with a new drive. Yes, there is a lot of reading, but it's the writing that takes all the time: the writing to the new drive.

EDIT: Just saw this RAIDZ expansion coming to ZFS. I understand it's probably a year or two away, probably to be introduced in FreeNAS 12.
I think you need to take a realistic look at what amount of storage you need and how fast it must be accessed in order to design your pool. The new expansion technology is likely over 2 years away from prime time, and that is actually a long period of time, so you should not plan on using that technology for the life of your first set of hard drives (3 years). Other people will be here to work out all the bugs of this new stuff, painfully I'm sure.

and increase the ports using a PCIe cards, Any suggestions on that?
There are many good cards out there; most are 8-port models. This forum is full of that data, and in the Resources section is the Recommended Hardware Guide, which provides examples.

Lastly I'm thinking of starting the vdev with 12 drives. That gives about ~60% usable space. Thoughts on that?
12 drives in a vdev is not typically a smart thing. The link you provided did not tell me the capacity or RAIDZ level of your proposal.

You are bouncing around between 4 drives and 12 drives, and I'm not sure you know what you really want or need. Your use case is very important, so figure out exactly what you want to do. Next, figure out how much storage you will need for 3 years minimum. Most hard drive warranties are 3 years; if you purchase 5-year drives then factor in storage for 5 years. Many times, if someone is not sure what they want, I'll tell them to double the capacity, because most people will store a ton of data even if they don't need it, just because the space is available. Going through at a later date to thin out what you stored is very time consuming, and most people refuse to do that and then just buy larger hard drives.

So if you think you will be at 10TB of storage by the end of the year, and you need 20% free space for a healthy pool, that puts you up to about 12.5TB of usable storage just for this year. I'm not sure what you are storing, but 10TB drives are not cheap.

You also stated that your expected growth was 400GB per month; while that may not be a lot for a commercial company, for home use that is a huge amount. You need to find a maximum capacity, because by the end of year 2 you will have added another 5TB. If you look at 3 years then you would need about 32TB of storage + 20% (8TB) = ~40TB, or seven 10TB drives in a RAIDZ2. Since they are 10TB drives and that is a lot of storage, I'd recommend RAIDZ3, so that makes it eight 10TB drives. I hope this makes sense.
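The sizing logic can be sketched like this (a rough sketch: it projects growth, reserves the 20% free space mentioned earlier, and sizes a single RAIDZ vdev; it ignores TB/TiB conversion and ZFS overhead and does not include the "double the capacity" safety margin, so treat the answer as a floor):

```python
import math

def drives_needed(start_tb, growth_tb_per_month, months,
                  drive_tb=10, parity=2, free_fraction=0.20):
    """Minimum drive count for one RAIDZ vdev to hold projected data.

    Projects data growth, inflates it so `free_fraction` of the pool
    stays free, then adds `parity` drives on top of the data drives.
    """
    data_tb = start_tb + growth_tb_per_month * months
    usable_needed = data_tb / (1 - free_fraction)  # keep 20% free
    return math.ceil(usable_needed / drive_tb) + parity

# 10TB now, 0.4TB/month for 2 more years, RAIDZ2 of 10TB drives
print(drives_needed(10, 0.4, 24))  # prints 5

# Same projection with RAIDZ3's extra parity drive
print(drives_needed(10, 0.4, 24, parity=3))  # prints 6
```

With the doubled-capacity margin recommended above, the count comes out closer to the seven-or-eight-drive figure in the post.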
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
If I had 9 or 10 10TB drives then I'd use RAIDZ3.

You see, I wouldn’t. But I also have a full backup ;)

Because as we all know Raid is not a backup ;)

Anyway, FreeNAS has no problem with 32TB+ pools or 400GB/month growth, but your wallet might.

So let’s work it out first :)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
You see, I wouldn’t. But I also have a full backup ;)

Because as we all know Raid is not a backup ;)
I fully agree with those statements, but in my mind RAIDZ3 addresses a convenience issue, because restoring that much data is a pain in the rear and can take considerable time. Otherwise we would all just use striped pools, no RAIDZ, and simply restore our backups after a failure, right :D
 