Build-out advice regarding HBAs and hard drives


Brian1974

Cadet
Joined
Oct 20, 2015
Messages
5
So I bought the following components for a nearline storage box for work.
Currently, projects are moved onto the nearline storage, archived to LTO-6 tape, and then archived again to tape before being deleted after 12 months.
Data is moved via NFS overnight from various other storage systems when it is ready to be archived.

Right now I have the following components, which were going to be just another CentOS 6.5 XFS box:

Supermicro SC847BE2C chassis with front and rear SAS3 backplanes
Supermicro X10DRi motherboard
128GB DDR4 ECC RAM
2 x Xeon E5 v3 CPUs
Intel 10Gb Ethernet NIC
2 x Samsung Pro 128GB SSDs to mirror the OS (mounted in the case)

1 x Samsung Pro 256GB SSD for cache (in the rear hot-swap bays) - is this too big for the amount of RAM I have?
30 x HGST 6TB hard drives for volumes (24 in front, 6 in rear)

I also have an additional 2 x Samsung Pro 128GB SSDs lying around that I was going to use for
the ZIL. (Is this overkill? I know it doesn't need this much space, but it's what I have - will it be detrimental?)
I would mount these two in the rear hot-swap bays if recommended.

I also have 2 x LSI 9300-8i cards, which I was going to connect to the front and rear backplanes.
I admit I'm a little confused as to how I should do this to achieve the best throughput and redundancy.
How is this usually done on a large storage device?

Coming from CentOS 6.5 machines with RAID 6 cards, I want to make sure I have as much redundancy in place as possible
if I'm going to go the ZFS/FreeNAS route.

Any help or pointers appreciated.

thanks
 
Joined
Apr 9, 2015
Messages
1,258
The SSDs for the OS are way overkill and a lot of wasted space for FreeNAS - the actual OS resides in less than 2GB. It would probably be best to pick up a couple of SATA DOMs in your case, though you could also use USB drives. Then repurpose the SSDs you had allocated for the OS as L2ARC, if someone else feels that would be useful.
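Repurposing those SSDs as L2ARC is a one-line operation once the pool exists. A rough sketch (the pool name `tank` and device names are hypothetical, not from this thread; in FreeNAS you would normally do this through the GUI when extending the volume):

```shell
# Hypothetical pool/device names. L2ARC needs no redundancy: if a
# cache device fails, reads simply fall back to the pool.
zpool add tank cache da30 da31

# Confirm the cache devices show up under the pool
zpool status tank
```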

FreeNAS uses ZFS software RAID, with the highest level (RAIDZ3) providing three-drive redundancy per vdev. https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/
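As a rough sketch of what a RAIDZ3 layout could look like with your 30 drives (device names and the pool name `tank` are assumptions; three 10-wide RAIDZ3 vdevs is only one of several reasonable layouts, and FreeNAS would normally build this via the GUI):

```shell
# Hypothetical sketch: three 10-wide RAIDZ3 vdevs, so each vdev
# survives up to three simultaneous drive failures.
zpool create tank \
  raidz3 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  \
  raidz3 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
  raidz3 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29

# Verify the vdev layout and redundancy
zpool status tank
```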

The LSI cards should be flashed to IT (initiator target) mode so that FreeNAS can access the bare drives directly, rather than having the controller doing things in between. https://forums.freenas.org/index.php?threads/confused-about-that-lsi-card-join-the-crowd.11901/
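For the 9300-series (SAS3) cards, the flashing utility is `sas3flash` rather than the older `sas2flash`. A rough sketch of the process (the firmware file name below is a placeholder, not a real file from this thread - check the link above and the vendor downloads for the package matching your card and FreeNAS's driver version):

```shell
# Placeholder firmware file name: substitute the IT-mode image
# appropriate for your card revision and driver version.
sas3flash -listall                 # confirm the controller is detected
sas3flash -o -f SAS9300_8i_IT.bin  # flash the IT-mode firmware
sas3flash -listall                 # verify the firmware version afterwards
```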

As far as the ZIL goes, it depends on a lot of things: the number of vdevs, clients, file sizes, etc. You will probably need to elaborate so someone with more knowledge can narrow that question down. https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
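If you do end up using those two spare 128GB SSDs as a SLOG, a mirrored log vdev is the usual shape. A sketch with hypothetical pool and device names:

```shell
# Hypothetical names: "tank" pool, da32/da33 as the spare SSDs.
# Mirroring the SLOG protects in-flight sync writes if one SSD dies;
# the extra capacity beyond a few GB simply goes unused, which is
# harmless rather than detrimental.
zpool add tank log mirror da32 da33
```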
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The actual OS resides in less than 2GB of space.
Additional boot environments (in other words, updates) will take up additional storage, and it makes sense to keep at least some of them.

With mirrored boot SSDs, it makes sense to offload the .system dataset to the boot pool, which has some (minor) advantages.

I also have 2 x LSI 9300 8i cards
They're not proven to be stable yet, so proceed at your own risk.
 