BUILD Sanity check and drive strategy questions for 846 build

Status
Not open for further replies.

gregnostic

Dabbler
Joined
May 19, 2016
Messages
17
Expected Usage:

The server's primary role will be streaming media via Plex and (maybe) Emby. The vast majority of the content is Blu-ray rips. I've got a gigabit connection at home and I'll be letting some friends and family stream from my server. My initial goal is to support transcoding six simultaneous 1080p streams, and to eventually support transcoding four to six 4K streams by adding a second CPU. (Hence the otherwise-overkill CPU specs.)

I'll be doing backups of a handful of computers across multiple platforms (Mac, Windows, and Linux), usually three or four clients at any given time. Time Machine for the Mac and, er, something else for the rest. (I'm still pondering my options here.)

And finally, a few Samba shares. One for adding new content to the media library (by far the largest share), and a few others for miscellaneous other files. It's unlikely that there will ever be more than a few people accessing the file shares at any given time. In fact, with rare exception, it'll just be me.

Specs:

Case: SuperMicro CSE-846BE16-R1K28B
Motherboard: SuperMicro X10DRi
CPU: 1x Intel Xeon E5-2630 v4
Heatsink: 1x SuperMicro SNK-P0048PS
Memory: 4x SuperMicro MEM-DR480L-SL01-ER21 (8GB)
OS Storage: SuperMicro SATA DOM (16GB)
HBA: LSI SAS 9211-4i

I've done a fair bit of research on the hardware and I think it should be solid for this purpose, but if anyone has any feedback, I'd appreciate it.

Thoughts and Questions:

I'll be starting with one CPU and 32GB of RAM, but it's a dual-socket board and I'm planning to add a second CPU and another 32GB of RAM sometime down the road as needed. I can bump the RAM up to 64GB right now, but from what I've read, 32GB will be plenty for a while, and that 64GB should probably be sufficient to cover the final goal of 24x 6TB disks.

I'm planning to use six-drive RAID-Z2 vdevs. I'll start with either one or two vdevs and gradually add one vdev to the zpool at a time until the chassis is full. (Much as I'd love to fill the thing with 24 drives all at once, my budget doesn't permit that.) I'm hoping you all can help me make up my mind as to what disks to use at the start.
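From the command line, growing a pool one RAID-Z2 vdev at a time looks roughly like this. A sketch only: the pool name and `da*` device names are placeholders, and on FreeNAS the GUI is the recommended way to do this, since it also handles GPT partitioning and swap for you.

```shell
# Hypothetical pool/device names; illustrative sketch, not a FreeNAS how-to.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5    # first six-disk vdev
# ...later, once the next batch of disks is burned in:
zpool add tank raidz2 da6 da7 da8 da9 da10 da11     # stripe in a second vdev
zpool status tank                                    # confirm both vdevs are online
```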

I've got an existing NAS with eight 3TB drives—Toshiba DT01ACA300—in a RAID-6 array. They've been running solid for the better part of a year now but I'm outgrowing the existing system, both in storage and in CPU capacity.

Given that, I'm considering two main strategies for populating the drive bays:

First is the easiest, but also most expensive, especially upfront. Start with one vdev of six 6TB drives (probably WD Reds). Then I'd copy everything over from the old NAS and decommission it, drives and all. Grow the zpool one vdev at a time with more 6TB disks. Job done.

Second is to pick up six more of the same 3TB Toshiba drives I'm already using, set up the first vdev, copy over the data from my old NAS, then take six of the eight drives from the old NAS and use them for a second vdev. For future expansion, I'd go for 6TB disks for the remaining two vdevs. Then if I needed to expand beyond that, I'd swap out the 3TB disks and expand the vdevs. It would probably be quite some time before I hit the point where I'd need to start swapping disks and expanding vdevs, though.

Both options would result in the same net amount of storage in the beginning (roughly 16TB when leaving 20% free) but the second option would cost about $900 less. I haven't really seen much in the way of information saying whether or not to avoid mismatched drive capacities and speeds (the Toshibas are 7200 RPM) in a RAID-Z2 setup to know if it's a bad thing or just a "not ideal" thing. While I'd like to take advantage of hardware I already have, I also don't want to do anything that will cause me a lot of pain down the road. (I know that upgrading vdevs can be laborious, but I'm OK with that.)

Is it worth using mismatched drives to save $900 upfront or would I be better served just sticking with the larger capacity drives from the get-go?
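As a back-of-the-envelope check (my own arithmetic, not from the post): both starting options have the same raw data capacity, and the estimate lands a bit above the ~16TB figure quoted because ZFS's own metadata overhead is ignored here.

```shell
# Rough usable space for a RAID-Z2 layout.
usable_tib() {  # args: vdevs drives_per_vdev tb_per_drive
  awk -v v="$1" -v d="$2" -v t="$3" \
    'BEGIN { printf "%.1f", v * (d - 2) * t * 0.909 * 0.8 }'
  # (d - 2): RAID-Z2 spends two disks per vdev on parity
  # 0.909:   vendor TB -> TiB;  0.8: keep 20% free
}
echo "one vdev of 6x6TB:  $(usable_tib 1 6 6) TiB"
echo "two vdevs of 6x3TB: $(usable_tib 2 6 3) TiB"
```

Either way the starting point is identical on paper; the only differences are cost, spindle count, and the upgrade path.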


EDIT: Updated to clarify my intention with the Samba shares.
 
Last edited:

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
First is the easiest, but also most expensive, especially upfront. Start with one vdev of six 6TB drives (probably WD Reds). Then I'd copy everything over from the old NAS and decommission it, drives and all. Grow the zpool one vdev at a time with more 6TB disks. Job done.
This is the route I would choose. It may seem more expensive now, but when you add in the cost of eventually replacing the 3TB drives with larger ones this is actually the cheaper option.

I'd also start with 2x16GB sticks of RAM. Why limit yourself to 8GB sticks if you don't have to?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
a few Samba shares. One to provide access to the media
You don't need a Samba share, or any other kind of share, to provide access to the media for Plex or Emby. Both of those access the media locally by attaching the media directory/dataset to the plugin jail as storage.

Aside from that, it's your call. A third option would be to buy the 6 x 6 TB disks, move your data to them, then take six disks out of your existing NAS and add them to the pool as a second vdev. As you can see in my signature, I have mismatched vdevs in my pool. It's actually even worse than it appears in the sig; the 2 TB drives are 7200 RPM, while the 3 TB and 4 TB are 5400 RPM. I've not noticed any problems from this configuration, but I'll readily accept that it isn't ideal. Once I get my 6 TB drives satisfactorily burned in, I'll be adding a vdev of 6 x 6 TB disks to my pool.
 

gregnostic

Dabbler
Joined
May 19, 2016
Messages
17
I'd also start with 2x16GB sticks of RAM. Why limit yourself to 8GB sticks if you don't' have to?

The board has a total of 16 RAM slots, so even with 8GB sticks, it has room for 128GB if I need it. And with four sticks, I can take advantage of all four memory channels per CPU.

You don't need a Samba share, or any other kind of share, to provide access to the media for Plex or Emby. Both of those access the media locally by attaching the media directory/dataset to the plugin jail as storage.

Sorry, I should clarify. I meant access for me, not for Plex/Emby. This would be used for adding new Blu-ray rips (etc.) to the media library.

Aside from that, it's your call. A third option would be to buy the 6 x 6 TB disks, move your data to them, then take six disks out of your existing NAS and add them to the pool as a second vdev. As you can see in my signature, I have mismatched vdevs in my pool.

That's certainly giving me something to think about. How long have you been running that mismatched configuration?
 
Last edited:

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Welcome to the Forums @gregnostic !

A third option would be to buy the 6 x 6 TB disks, move your data to them, then take six disks out of your existing NAS and add them to the pool as a second vdev.
I was about to suggest this route too.

IIRC, there is one additional aspect to keep in mind on this topic beyond mixing drive sizes within a particular vdev (I've run mismatched drive sizes too without problems): mismatched vdev sizes. As far as I remember, ZFS cannot redistribute data already on the zpool so that the vdevs are evenly/proportionally filled; it only balances newly added files.
This means that the increased zpool performance from an additional vdev (the vdevs are effectively striped) will only apply to files added or modified after the second vdev has been added to the zpool.
In the scenario where you have 6x3TB and 6x6TB drives AND data that needs to be 'cleared off the smaller drives first', you'll be better off starting with the 6TB drives.
Another point for future reference: avoid waiting until the last minute for space upgrades, since files already on the pool won't benefit from the upgrade in terms of speed.
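One way to see this imbalance on a live system (pool name is an assumption):

```shell
# Per-vdev capacity and allocation. After adding a vdev, the old one stays
# nearly full while the new one is empty; only new writes even things out.
zpool list -v tank
# Rewriting a file (e.g. copying it and replacing the original) makes ZFS
# allocate fresh blocks, which will favor the emptier vdev.
```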

Cheers / Dice
 

gregnostic

Dabbler
Joined
May 19, 2016
Messages
17
Welcome to the Forums @gregnostic !

Thanks!

Good points about the unbalanced writes. And given that most of the files I'm working with will basically never change once written, I'll be relying on that weighted bias for balancing data across vdevs. Probably not a great idea to fill up a vdev (even just to 80%) straight away in that case. Hm...
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Another option is to not worry about getting 6TB drives yet. Instead, get enough 3TB drives to build two six-drive RAID-Z2 vdevs.
See if that storage is enough for a while. 6TB drives were king of the hill until recently, when 8TB and 10TB drives were released, so we might see some changes in prices in the not-too-distant future.

My own path into FreeNAS turned into an expensive nightmare because I had two 6TB WD60EFRX drives from a Windows environment. Just a month before getting into FreeNAS, I bought two additional 6TB drives. When I realized I needed three more drives to get any sort of decent storage efficiency out of a RAID-Z2... I tell you, I've wished many times that I hadn't gone the route of high-capacity drives. Budget-wise, it is a nightmare. After all, FreeNAS gets its storage efficiency from a larger number of physical drives rather than a few big ones. Since you're already looking at a 24-bay box and have no more than 8 drives up front, you'll still have some room to expand.

I guess I'm trying to push you to reconsider the fetish for 6TB drives in favor of smaller ones, so you don't detonate your budget beyond recognition. Perhaps the next upgrade's viable size will be 8TB?

Cheers /
 
Last edited:

AVB

Contributor
Joined
Apr 29, 2012
Messages
174
Don't forget that you'll need drives, and something to put them in, to back up your server too. That, for me, has been the most expensive part, literally doubling the storage needed to account for backups.
 

gregnostic

Dabbler
Joined
May 19, 2016
Messages
17
I guess I'm trying to push you in the direction to reconsider the fetish for 6TB drives in favor of smaller drives to not detonate budget beyond recognition. Perhaps the next upgrade's viable size will be 8TB's?

Yeah, that's been one of the big concerns for me. I could fill all 24 bays with 3TB disks (including the eight I already have) for almost the exact same cost as just one vdev of 6x 6TB disks would cost me. It would fit into my current budget and I could always just upgrade each vdev going forward. While that would result in having lots of drives to deal with as I upgrade (and extra power usage and heat generation for the same net capacity) there's certainly something to be said about pushing out the purchase of larger disks for a while and getting them for less money.

I wish I could say I've made up my mind already, but you all have given me a lot of additional interesting options to think about, and I'm going to play around with some numbers to see what my costs and upgrade paths would look like.

Don't forget that you'll need drives and something to put them in to back up your server too. That, for me, has been the most expensive part literally doubling the storage needed to account for backups.

I haven't figured out exactly what I'll do for backup, but I can say that the vast, vast majority of my data is replaceable. It's mostly Blu-ray rips. And while there's a nontrivial amount of time and effort involved, I don't necessarily need to "back up" dozens of terabytes of data that still exists on the Blu-rays I have access to.

I'll mostly be concerned with a few dozen gigs of various files that I truly care about. And for that, I'll probably use an online backup service.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I wish I could say I've made up my mind already, but you all have given me a lot of additional interesting options to think about, and I'm going to play around with some numbers to see what my costs and upgrade paths would look like.
Excellent!
 

gregnostic

Dabbler
Joined
May 19, 2016
Messages
17
Today I ordered everything for the server. With any luck, it should all arrive by the end of the week and I'll have myself a fun project for next weekend.

Ultimately, I decided to order another 18 of the same 3TB Toshiba drives I already have. I'm going to start with three six-disk vdevs comprising the new disks. I'll transfer the data from the old NAS then take six of the eight disks to form the fourth and final vdev. That will leave me with two known-good spares on hand. After parity, formatting, and keeping 20% space free, that'll give me a net capacity of roughly 32TB. Had I spent the same amount on 6TB drives, I would have been able to populate just one vdev and I would have just 24TB of usable capacity once I brought over the existing 3TB disks.
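Running those figures through a rough formula (two parity disks per RAID-Z2 vdev, vendor TB to TiB, 20% kept free; ZFS metadata overhead is ignored, so real numbers land a little lower). These are my own estimates, not from the post:

```shell
# Rough usable space for the final layout vs. the 6TB alternative.
usable_tib() {  # args: vdevs drives_per_vdev tb_per_drive
  awk -v v="$1" -v d="$2" -v t="$3" \
    'BEGIN { printf "%.1f", v * (d - 2) * t * 0.909 * 0.8 }'
}
echo "four vdevs of 6x3TB:         $(usable_tib 4 6 3) TiB"   # chosen plan
echo "one 6TB vdev + one 3TB vdev: $(usable_tib 1 6 6) + $(usable_tib 1 6 3) TiB"
```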

In the future, I'll either upgrade the vdevs one at a time or pick up a JBOD chassis and just start adding more vdevs. In the meantime, though, I did a bit of estimation of expected usage and the 32TB of capacity should last a couple of years, so I have plenty of time for prices of larger disks to come down and to consider an upgrade path based on where things stand.

I wanted to thank you for your input on drive selection @Jailer, @danb35, and @Dice. You all made very good points and helped me finally make a decision that I've been waffling on for far too long.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Last advice:
- Make sure you follow the burn-in procedures carefully.
- Arrange a list of which slot contains which HDD serial number.
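A minimal sketch of both steps, assuming FreeBSD-style `da*` device names (smartctl comes with smartmontools; badblocks is destructive, so run it before creating any pool):

```shell
# Serial-number inventory: pull each drive's serial so it can be matched
# to a labeled bay in a spreadsheet.
for d in /dev/da0 /dev/da1; do       # extend the list to cover every bay
  printf '%s: ' "$d"
  smartctl -i "$d" | awk -F': *' '/Serial Number/ {print $2}'
done

# Burn-in: SMART long self-test plus a full destructive write/read pass.
smartctl -t long /dev/da0
badblocks -ws /dev/da0               # WARNING: erases the disk
smartctl -A /dev/da0 | egrep 'Reallocated|Pending|Uncorrect'
```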
 

gregnostic

Dabbler
Joined
May 19, 2016
Messages
17
Yeah, definitely. This server is going to be in constant use for quite some time, so I'm going to test things pretty thoroughly before I install FreeNAS or set up any vdevs. Don't want to take any chances on this one!

And I've got a P-Touch label maker, so I'll be putting a label on each drive sled and maintaining a spreadsheet for serial numbers and drive slots. Don't want to take my chances when the need arises to remove a disk.
 
Joined
Apr 9, 2015
Messages
1,258
Going to toss a couple things in that may help. From my experience with Plex, you will want to make sure that you encode to MP4 rather than MKV unless you don't mind ALWAYS transcoding; even the native Plex apps seem to have a preference for MP4s. This will be even more important when you start working with 4K content along with multiple streams. It would also be good to research what settings each of the clients prefers, as this will also help limit your need to transcode.

I know it may throw a wrench in the works, but you may want to add some hot spares when you get up to four vdevs, or possibly plan for RAID-Z3 if you upgrade to larger drives. I know RAID-Z2 is not very likely to have a problem with data loss from drive failure, but at a certain point, with very large drives and multiple vdevs, you run a higher risk of data loss. When 8TB drives become the cost-effective large drive in a few years, you are going to be running a much higher risk of a second and possibly third drive failure during a rebuild. One vdev goes down and the pool is gone. It's an increased cost for the same capacity, and it's 100% your choice, but there is a big reason why RAID-Z1 is no longer recommended: as drive capacity goes up, so does the chance of failure during a rebuild. http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/
 

gregnostic

Dabbler
Joined
May 19, 2016
Messages
17
From my experience with Plex, you will want to make sure that you encode to MP4 rather than MKV unless you don't mind ALWAYS transcoding; even the native Plex apps seem to have a preference for MP4s. This will be even more important when you start working with 4K content along with multiple streams. It would also be good to research what settings each of the clients prefers, as this will also help limit your need to transcode.

In my own current usage of Plex, the MKV container doesn't itself trigger transcoding. Plex will demux and remux the video and audio streams if the target doesn't support the container. That uses very little CPU compared to a full transcode and it happens quite frequently with the client devices I use.
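For reference, remuxing from MKV to MP4 outside of Plex is just a container change with the streams copied as-is, which is why it uses so little CPU. An ffmpeg illustration (filenames are placeholders):

```shell
# Copy the video/audio streams into an MP4 container without re-encoding.
ffmpeg -i movie.mkv -c copy movie.mp4
# Some Blu-ray audio codecs (e.g. TrueHD) aren't allowed in MP4, in which
# case only the audio needs re-encoding:
# ffmpeg -i movie.mkv -c:v copy -c:a aac movie.mp4
```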

That said, I currently have a compatibility issue (a known issue by Plex devs) where certain videos with particular keyframe settings will cause video to skip and drop frames on the new Apple TV. It makes those videos unwatchable. The workaround, unfortunately, is transcoding. So right now, everything I watch is transcoded until the bug is worked out. I also expect that some of my friends will need transcoding because they may not have the bandwidth to stream full Blu-ray quality. I'm going to recommend that they do direct streaming from my server if they have the bandwidth, but I'm not going to worry about it if they don't. Given the efficiency of the CPU, it'll cost me about one cent for the electricity needed to transcode a two-hour movie. It's not ideal, but even with heavy usage, that's a negligible amount of electricity.

I haven't looked into 4K with Plex yet, but I don't expect to integrate 4K video into my setup for some time. To my knowledge, there's not even a way to rip 4K Blu-rays onto a computer anyway. I figure I'll get there eventually, but I'll have some time to work through all those issues by the time that happens.

I know it may throw a wrench in the works, but you may want to add some hot spares when you get up to four vdevs, or possibly plan for RAID-Z3 if you upgrade to larger drives. I know RAID-Z2 is not very likely to have a problem with data loss from drive failure, but at a certain point, with very large drives and multiple vdevs, you run a higher risk of data loss. When 8TB drives become the cost-effective large drive in a few years, you are going to be running a much higher risk of a second and possibly third drive failure during a rebuild. One vdev goes down and the pool is gone. It's an increased cost for the same capacity, and it's 100% your choice, but there is a big reason why RAID-Z1 is no longer recommended: as drive capacity goes up, so does the chance of failure during a rebuild. http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/

That may be something to worry about for my next build, but I don't expect that the size of drives I'll be using in this server (even with some upgrades a couple of years down the line) will put me over the tipping point where RAID-Z2 isn't effective anymore. We may well be at that point by the time I'm looking at a new build, but I'm not going to worry about that yet. And who knows what the market will look like at that point? I'm hoping this build will last me five years or more. By the time I'm ready for a new build, I may be looking at a flash-based solution, if flash prices keep falling and capacities keep increasing as quickly as they have, and if hard drive capacities continue to increase as slowly as they have for the last five years. And I'll bet a flash-based setup will upend all these "rules" that we're used to with spinning metal, as flash has in so many other applications.

For this build, though, I'll have two known-good cold spares on hand for quick replacement if a disk goes bad. That should suffice.
 