BUILD 45 Drives - POD 4.0 with R750 Cards

Status
Not open for further replies.

BDMcGrew

Dabbler
Joined
Sep 22, 2015
Messages
49
I'm working on my first FreeNAS build with 9.3-REL using a 45 Drives (Backblaze) POD 4.0. I've read cyberjock's PowerPoint presentation with all the best practices, and I've trawled the forums for a couple of weeks before posting, so I think I have a good idea of what I should and shouldn't do...

The current system configuration is:

SuperMicro X9 motherboard
Xeon X3470 CPU
32GB DDR3-1333R ECC RAM (4x8GB)
2x HighPoint Rocket 750 Controllers.
500GB WD Boot Drive on Mobo SATA3 Port

Initially I will start with 4x 4TB WD Red or 4x 6TB WD Ae (see below) and add four at a time as needed.

HBA:

My first question is regarding the Rocket 750 card. I've read many posts saying HighPoint cards are generally bad, and I've heeded that warning. However, I haven't seen much about the R750 specifically, and what I have seen is a year or so old. Both HighPoint and 45 Drives appear to have recent drivers for FreeBSD/FreeNAS, and outside the FreeNAS community these seem to be a somewhat accepted solution. I've used the card in previous Linux installations with good results, but never with FreeBSD. I won't defend the card; I have no real attachment to whether or not I use it, save for spending a few bucks on a new card.

The R750 appears to be a JBOD controller that exposes all the real drives to the system, and as far as I can tell it does not write any of its own configuration to the drives. However, the reading I've done leads me to believe they've just taken the port multipliers out of the case and put them on the card (which is, of course, bad). So what is the consensus? Is anyone actually using these cards, and do they work? If it's a promising solution, I'm willing to spend the time on a 4 to 8 week burn-in; I'm in no hurry and have plenty of time (~6 months or so). If the card is a known waste of time, I'll scrap it rather than waste mine or yours screwing with it.

IBM ServeRAID M1015s are all over eBay, as all the posts here suggest. Easy enough, right? These cards have two mini-SAS connectors on them. The system is currently wired for 45 drives, with every four physical ports going into a mini-SAS connector that attaches to the card. Shouldn't I just be able to move a couple of the mini-SAS connectors from the R750 cards onto a ServeRAID card and still run 8 drives on each card? Or am I mistaken?

HDDs

I will use enterprise drives without argument. My debate is between WD Red 4TB drives and WD Ae (yellow label) 6TB drives. I would of course prefer the bigger drives if I can use them safely, even though they're only 5,900 RPM, and add a couple of SSDs for ZIL and L2ARC (which I will do anyway).

For my ZIL and L2ARC I'm considering a StarTech PEXMSATA3422 with 2x Samsung 120GB mSATA SSDs. These cards just work with Linux, but I don't see a whole lot about them for FreeBSD/FreeNAS. If this is not a good choice, then what would be the recommendation? I would much prefer 2x mSATA on a PCIe card over a 2.5" SSD in an adapter in a drive bay.


Again, I will do a 4 to 8 week burn-in once I have acceptable hardware!

Network:

The mobo has 2x Intel NICs plus an IPMI port. Is this a good solution for long-term sustainability, or should I just get an i340-T2 and be done with it?


Thanks in advance for any advice and comments!


-brian
 
Joined
Oct 2, 2014
Messages
925
I guess the question is: do you plan to fully populate the server with all 45 drives? This is key, because that RocketRAID card has 10 mini-SAS connectors that can connect to 4 drives each, plus say the other 5 on the motherboard. To achieve that you would need 6x 9211-8i or M1015 cards, or 4x 9211-8i/M1015 plus 2 motherboard SAS connections. An expander or two could accomplish the same task.
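The card-count math above can be sketched quickly. This is a rough back-of-the-envelope check; the 8-drives-per-HBA figure (two mini-SAS ports x 4 drives) and the 5 motherboard SATA ports are assumptions taken from this thread, not a definitive wiring plan:

```python
import math

# Rough HBA count for a fully populated 45-drive pod.
TOTAL_DRIVES = 45
DRIVES_PER_HBA = 8   # one 9211-8i / M1015: 2 mini-SAS ports x 4 drives
MOBO_SATA = 5        # drives assumed hung off the motherboard SATA ports

# Cards needed if every drive goes through an HBA:
hbas_alone = math.ceil(TOTAL_DRIVES / DRIVES_PER_HBA)

# Cards needed if 5 drives are handled by the motherboard:
hbas_with_mobo = math.ceil((TOTAL_DRIVES - MOBO_SATA) / DRIVES_PER_HBA)

print(hbas_alone)      # 6 cards
print(hbas_with_mobo)  # 5 cards
```

Either way it's a lot of PCIe slots, which is why a SAS expander is attractive at this drive count.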

I don't think the RocketRAID is a good choice, and while I see LSI does make a 16-port 9201-16i card, I'm not sure it's well supported. Maybe someone else can weigh in on that, but I found a few threads: https://forums.freenas.org/index.php?threads/install-driver-for-lsi-sas-9201-16i-hba.14881/ , https://forums.freenas.org/index.php?threads/lsi-sas-9201-16i-and-freenas-9-3.35485/
 

BDMcGrew

Dabbler
Joined
Sep 22, 2015
Messages
49
Thank you for the input; that was the response I expected regarding the R750 cards. There are 2 of them in the box, so it can support 45 drives without using the mobo controllers, but that's not important. I could put as many as 3 of the ServeRAID cards in if I needed to, but for now it would just be one.

Still looking for info on the HDDs and NIC.

Thanks!

-brian
 
Joined
Oct 2, 2014
Messages
925
Is this for production use, serving VMs or big data or anything? Or is this for home-ish use? The reason I ask is whether SSDs for ZIL and L2ARC are necessary.

As far as hard drives, if it's in the budget I'd rather have 4x 6TB drives. As for the boot drive, 2x 16GB USB flash drives will do; no need for a 500GB drive.
 

BDMcGrew

Dabbler
Joined
Sep 22, 2015
Messages
49
It's kind of a loaded, long answer... It'll be in my house, if that counts as a home environment, lol, but in full production as a replacement for data strewn across external USB drives, multiple machines, an NFS server in a VM, and even a couple of old Win2K3 "file" servers; oh, and maybe the Mac Mini 'media' server, idk yet!

1) All my daily business data (~20GB), but 'red hot' data. (This data is backed up to the cloud.)
2) NFS server for all my Linux development machines. All my Linux machines are in a constant state of 'development', and no one installation is stable enough, or lives long enough, to hold this data safely for the long term. I'm _done_ with Windows as an NFS server, and I'm running low on space on the ESXi server, so the dedicated NFS VM has to go.
3) Will probably stand up a Plex server to house the ~20TB of media the entire house has.
4) Probably no VMs; I don't care for that idea. I have 2x IBM x3650s running ESXi connected to a DS3400 SAN and it's working great, just bursting at the seams for storage. Of course, killing off the NFS server VM will free up about 4TB, so I should be good for a while.

So there you have my short-sighted vision for my new FreeNAS server. I'm sure other uses will pop up and I'll find new and creative things to do with it. I'd looked at a couple of other options as well, but FreeNAS seems to be the most mature by a long shot.

-brian
 

BDMcGrew

Dabbler
Joined
Sep 22, 2015
Messages
49
As far as hard drives, if it's in the budget I'd rather have 4x 6TB drives. As for the boot drive, 2x 16GB USB flash drives will do; no need for a 500GB drive.

I'm curious about the statement "2x 16GB USB flash for boot"... Are you referring to something like mirrored flash drives, or just having a spare in case one dies?
 
Joined
Oct 2, 2014
Messages
925
I'm curious about the statement "2x 16GB USB flash for boot"... Are you referring to something like mirrored flash drives, or just having a spare in case one dies?
Mirrored flash drives


 

BDMcGrew

Dabbler
Joined
Sep 22, 2015
Messages
49
Very interesting... so FreeNAS has the built-in capability to mirror its own boot drives?

I'm not wild about the idea of using USB, but I could easily do a pair of 60GB SSDs on the mobo SATA ports?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yes, and yes ;)
 
Joined
Oct 2, 2014
Messages
925
I'm not wild about the idea of using USB, but I could easily do a pair of 60GB SSDs on the mobo SATA ports?
I use a pair of 80GB SSDs for boot.



 

BDMcGrew

Dabbler
Joined
Sep 22, 2015
Messages
49
Righto, thank you! So far, I'm tentatively decided on the following:

2x Samsung 850 EVO 120GB SSDs for boot drives.
1x StarTech dual 2.5" to single 3.5" HDD bay for the boot drives.
2x Samsung 850 EVO 120GB mSATA on the StarTech mSATA controller for ZIL and L2ARC.
4x WD Red 4TB WD40EFRX NAS HDDs (instead of the 6TB WD Ae 5,900 RPM).

I know 120GB for the boot disk is serious overkill and I could go smaller. But I have never had a Samsung drive fail me, I trust them, and they are cheap. My lab/test FreeNAS VM uses a 4GB disk image for boot and seems happy (now 2x mirrored).

My ECC RAM and Xeon CPU will be here soon, and I'll be able to start putting it all together. For now, I will try to bring it up with the HighPoint R750 cards and do some disk testing just to see what happens; I'm sure the community would be interested in knowing how well it works (or not). At present, though, I don't plan to put any data on the system with the HighPoint cards in place, and I will almost certainly procure a couple of M1015s (which should still let me run 16 drives)... unless of course, by some random chance, the HighPoints work nicely, pass SMART data, and show zero errors (hey, it could happen).

Since I already had the server, there was no real investment there. My cost so far is about $1,100 to get ~8TB online. That makes the price per TB a bit high initially, but it will decrease as I add drives.
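That cost trajectory is easy to sketch. The $1,100 and ~8TB usable figures are from this thread; the price of the second batch of drives is a made-up placeholder just to show the trend:

```python
# Price per usable TB as vdevs are added -- rough sketch, thread figures.
spent = 1100.00      # initial outlay from this thread
usable_tb = 8.0      # ~usable from the first 4x 4TB RAIDZ2 vdev

print(round(spent / usable_tb, 2))   # 137.5 $/TB initially

# Add a second identical 4-drive vdev; $600 is an assumed drive price.
spent += 600.00
usable_tb += 8.0
print(round(spent / usable_tb, 2))   # 106.25 $/TB after expansion
```

Each added vdev dilutes the fixed up-front cost, so the $/TB keeps falling as the pod fills out.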
 
Joined
Oct 2, 2014
Messages
925
So with the 4x 4TB drives you'll have 6.2TB or so usable; that's filling to 80%, plus ZFS overhead. @Bidule0hm has a great calculator found here for usable space and such. Using only 4x 4TB drives may not be the best way to go; I might say get 6, or even double it to 8 drives total. You can always add another vdev to your current pool, and I *think* there are funny ways to add HDDs to an existing vdev, but I don't think they're recommended; it's best to move the data off, destroy the pool, and recreate it properly.

I would plan for at least a year or so of storage; even start out with something middle of the road.
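The ~6.2TB figure above can be reproduced with simple parity math. This is a back-of-the-envelope sketch, not the exact ZFS accounting (real overhead varies with recordsize and metadata); the 3% overhead factor is an assumption chosen to illustrate the idea:

```python
# Rough RAIDZ2 capacity estimate -- not exact ZFS space accounting.

def raidz2_usable_tb(drives, size_tb, fill=0.8, overhead=0.03):
    """Estimate comfortably usable space for one RAIDZ2 vdev."""
    data_drives = drives - 2                # RAIDZ2 spends 2 drives on parity
    raw = data_drives * size_tb             # parity-adjusted capacity
    after_overhead = raw * (1 - overhead)   # slice lost to ZFS metadata (assumed 3%)
    return after_overhead * fill            # stay under ~80% full

print(round(raidz2_usable_tb(4, 4.0), 1))  # 4x 4TB -> ~6.2 TB
print(round(raidz2_usable_tb(8, 4.0), 1))  # 8x 4TB -> ~18.6 TB
```

For anything precise, the calculator linked above is the better tool; this just shows why 4 drives in RAIDZ2 leaves so little usable space.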
 

BDMcGrew

Dabbler
Joined
Sep 22, 2015
Messages
49
I did end up purchasing 8x WD Red 4TB drives that I'll have in hand Thursday.

Would I be better off with 1 vdev of 8 drives or 2 vdevs of 4 drives? I was planning on using RAID-Z2.
 
Joined
Oct 2, 2014
Messages
925
I did end up purchasing 8x WD Red 4TB drives that I'll have in hand Thursday.

Would I be better off with 1 vdev of 8 drives or 2 vdevs of 4 drives? I was planning on using RAID-Z2.
I would say 1 vdev of 8 drives in RAIDZ2; personal choice.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Would I be better off with 1 vdev of 8 drives or 2 vdevs of 4 drives? I was planning on using RAID-Z2.
Depends on your priorities. One 8-drive RAIDZ2 vdev will deliver 50% more storage space than two 4-drive RAIDZ2 vdevs. The latter will deliver better IOPS and be easier to upgrade.

Either way, RAIDZ2 is a good choice.
 

BDMcGrew

Dabbler
Joined
Sep 22, 2015
Messages
49
Thanks, again!

Hard drives are cheap, and I'm looking at this as a long-term solution, so I'll definitely take reliability, stability, and performance over anything else.

I'll make a full post in the Off-Topic forum, but I'm currently looking at a Supermicro SuperServer as an added VM host to house more Linux VMs, and the FreeNAS box will serve NFS to those VMs as well (but not the VMDKs).
 