X9DRI-LN4F+ / PERC H310 / 826BE1C-R920LPB


qhash

Dabbler
Joined
May 17, 2018
Messages
18
Hello everyone,

I am new to the FreeNAS forums! I hope I can get some information here and finally build the NAS solution I need.

I am left with some hardware from a previous project: a SuperMicro 826BE1C-R920LPB chassis and a PERC H310 controller. I can buy an X9DRI-LN4F+ board and two 10-core Xeon processors for cheap.
EDIT: I also have an HPE 8GB USB RAID1 rev3 stick which I may use as a boot device.

What I want to achieve is a VMware box with a ZFS storage VM that has direct access to the H310 controller and all the hard drives, which will then be presented back to the hypervisor as an iSCSI pool (or optionally NFS; I do not actually need iSCSI, I just want it) for the other VMs.

My questions are:
- will the above-mentioned hardware work well with FreeNAS?
- is FreeNAS capable of doing what I need it to do?
- what would be the SAFEST pool I could build out of four 4TB SATA 512e drives? (I guess 4Kn won't work with the H310; maybe it's worth changing the controller? That is also a question :) )
- how much RAM should I have purely for FreeNAS/ZFS?
- any hints strongly appreciated

Regards,
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Look at the links under the "Useful Links" button in my signature. There is one about an ESXi AIO system that @Stux built. It should give you a lot of useful information. Lots of other good reading there.
I think that the things you want are generally workable, but there are many implementation details to work out.

 

qhash

Dabbler
Joined
May 17, 2018
Messages
18
Thank you. By the way, I plugged in some of my old SSDs and HDDs that I had been using with napp-it and FreeNAS (I had just built a lab setup), and it found all the pools. So I just tested host failure resiliency :). Although I used the same HBA card, so I do not know if that matters.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
what would be the SAFEST pool I could build out of four 4TB SATA 512e drives?
I imagine that a RAIDZ3 pool would be the "SAFEST", but it would not be a very good use of space. I would go with RAIDZ2, and it would be most advisable to scare up two more drives and run six drives in a RAIDZ2 configuration. That should give you around 13TB of usable space after allowing for overhead.
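For reference, here is a minimal sketch of that layout from the command line. The pool name "tank" and the da0..da5 device names are placeholders, and on FreeNAS you would normally build the pool through the GUI, which runs the equivalent commands for you:

zpool create tank raidz2 da0 da1 da2 da3 da4 da5   # 6 disks; any 2 can fail without data loss
zpool status tank                                  # shows the single raidz2-0 vdev and its members
# rough capacity: 4 data disks x 4TB = 16TB raw; parity, padding, and
# free-space headroom bring usable space down to roughly the 13TB mentioned above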
I guess 4Kn won't work with the H310; maybe it's worth changing the controller?
The H310 is fine once it is flashed to the correct IT-mode firmware, but that system board (if I recall correctly) has PCIe 3.0 slots, so you might want to get a slightly newer card that is also PCIe 3.0... not a requirement.
 

qhash

Dabbler
Joined
May 17, 2018
Messages
18
My H310 is flashed properly to IT mode, I think. What do you mean by "correct IT mode firmware"?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My H310 is flashed properly to IT mode, I think. What do you mean by "correct IT mode firmware"?
I have used several of the Dell H310 SAS controllers, and it can be a pain to convert them from the Dell firmware to the LSI firmware. If I recall correctly, the latest firmware revision is 20.00.07.00, and I probably have 5 or 6 of those cards in service in various systems. They usually work fine, but I have seen situations where they will overheat and fail if they are not getting enough airflow.
This card does the same job but is a "generation" newer, so it runs a bit cooler, and it is PCIe 3.0:
https://www.ebay.com/itm/HP-H220-6G...0-IT-Mode-for-ZFS-FreeNAS-unRAID/162862201664

If you have the H310, it should be fine, just be sure to keep it cool.
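If you want to check what yours is running, a quick sketch (sas2flash is LSI's flash utility; the controller index 0 is an assumption for a single-HBA system):

sas2flash -listall            # one summary line per SAS2 controller, including firmware version
sas2flash -list -c 0          # detailed report for controller 0; look for 20.00.07.00 and IT firmware
dmesg | grep -i firmware      # on FreeBSD/FreeNAS the mps driver also logs the firmware at boot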
 

qhash

Dabbler
Joined
May 17, 2018
Messages
18
I might look for one of those H220s. Do you know of any good guide I can follow (or maybe you can help me) for proper vdev/pool/filesystem creation, so that I utilize my drives properly and can expand in the future? I am planning to start with 2x 240GB SSDs and 3x 4TB NAS 7200rpm HDDs.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I might look for one of those H220s. Do you know of any good guide I can follow (or maybe you can help me) for proper vdev/pool/filesystem creation, so that I utilize my drives properly and can expand in the future? I am planning to start with 2x 240GB SSDs and 3x 4TB NAS 7200rpm HDDs.
If you have not looked at this before, you should.

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

There are so many options for how to configure your system that there is no 'one size fits all' answer. You just need to keep in mind that you can't expand a vdev by adding more disks. If you need to expand the capacity of an existing pool without destroying the data, you have to add additional vdevs. Once you add a vdev to a pool, you can't remove it without destroying the pool.
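To make that concrete, here is a sketch of the one safe expansion path, with "tank" and the device names as placeholders:

# you cannot grow an existing raidz vdev by adding disks to it;
# you expand the pool by striping a whole new vdev next to the old one
zpool add -n tank raidz2 da6 da7 da8 da9 da10 da11   # -n is a dry run that prints the resulting layout
zpool add tank raidz2 da6 da7 da8 da9 da10 da11      # permanent: a raidz vdev cannot be removed afterwards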
There are some restrictions in ZFS that don't exist in other systems, because it was intended for enterprise storage systems that were fully designed in advance. The slideshow will help you understand it better, and you probably want to read through this as well:

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/
 

qhash

Dabbler
Joined
May 17, 2018
Messages
18
I see I need to read a lot. Until now I have only worked with mirrored vdevs.
 

qhash

Dabbler
Joined
May 17, 2018
Messages
18
OK, I'm already back with a question (or a few).

1. I will have a 12-bay chassis. I will boot ESXi off the 8GB RAID1 flash.
2. I need some way to boot the FreeNAS VM itself. I guess I will need to do it from a single SSD or SATA DOM. Now, this is a SPoF: after configuring all of the vdevs, pools, etc., if I keep a backup of that FreeNAS VM (refreshed after any major changes) and the SSD fails, should simply replacing the drive and restoring the VM quickly get my RAIDZ2 back? Has anyone here had experience recovering from a failed system drive? It does not really matter whether it is a bare-metal install or a virtual one; the effect is the same, so I guess it must have happened to someone. (See the recovery sketch after this list.)
3. How is RAIDZ2/Z1 different from RAID5/6 when the rebuild process kicks in? What is the benefit of ZFS over a traditional RAID card when similar RAID levels are compared, setting all the other ZFS features like SLOG aside?
4. Which is the better approach: using some kind of not-super-fast SLOG SSD (a SATA 6Gbps SSD like the Intel DC S3520), or skipping the SLOG and creating a mirrored SSD vdev for the "hottest" shares? I am asking because I can't afford an expensive SLOG.
5. Back to 4Kn: is using 4Kn drives going to change anything? Error correction? Or only performance, since 512e is in fact 4K at the hardware layer anyway?
6. I need something like 10-12TB for the moment. I thought about 6x4TB RAIDZ2... but in a 10-bay enclosure (2 slots taken by SSDs) that may not be expandable. Also, UREs, parity, and big drives can be a problem. Maybe it is better to go with 4x6TB in a RAID10-style layout (two mirrored vdevs in one pool), and when the time to expand comes, add another 4x6TB? I can't decide what is best...

update:
7. How does SLOG interact with LZ4 compression? Or does one have no impact on the other?
8. Any advice on a backup strategy? At the moment I have come to the conclusion that my final objective is extremely complicated... too many untested unknowns everywhere ;)
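Regarding question 2, my rough understanding of the recovery path (a sketch; "tank" is a placeholder pool name, and it assumes a config backup was exported from the GUI beforehand):

# the pool and all its data live on the data disks, not on the boot device,
# so after reinstalling FreeNAS (or restoring the VM) on a fresh SSD:
zpool import          # scans the attached disks and lists importable pools
zpool import tank     # imports the pool with all datasets intact
# then upload the saved config backup (System -> General -> Upload Config)
# to get users, shares, and service settings back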
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
How is RAIDZ2/Z1 different from RAID5/6 when the rebuild process kicks in? What is the benefit of ZFS over a traditional RAID card when similar RAID levels are compared?
The easy answer is that FreeNAS will not work with your regular RAID5/6 arrays at all. You don't have a choice if you want to use FreeNAS: it's ZFS, and that's it!

I need something like 10-12TB for the moment. I thought about 6x4TB RAIDZ2... but in a 10-bay enclosure (2 slots taken by SSDs) that may not be expandable.
Couldn't you put the 2 SSDs in a drive cage by themselves? Of course, this all depends on what case you buy and what options are available.

I think you are jumping way ahead here. First figure out your exact use cases and then try to fit the components to them. From the one-liner in your OP, it seems you need to install a hypervisor on bare metal and then create a FreeNAS VM, along with other VMs.
 

qhash

Dabbler
Joined
May 17, 2018
Messages
18
The easy answer is that FreeNAS will not work with your regular RAID5/6 arrays at all. You don't have a choice if you want to use FreeNAS: it's ZFS, and that's it!
My question was not about what will work with what. Let me rephrase. All the technology details aside, all the HW RAID card failure problems aside: what is the difference between parity HW RAID and parity ZFS RAID in terms of, for example, UREs, big drives, and the tendency of drives to fail when stressed? Assume you have RAID6 and RAIDZ2. Your setup has been running for 4 years. A drive fails... and it is likely that, during the stress of the rebuild, another drive will fail, or you will encounter a URE (which is exactly why RAID5 should not be used on drives bigger than about 1.2TB: the rebuild takes so long that the odds of hitting an unrecoverable read error become significant)... and maybe a third drive as well... Is the above scenario problematic for ZFS? The more I read, the more it seems it is.

Couldn't you put the 2 SSDs in a drive cage by themselves? Of course, this all depends on what case you buy and what options are available.
It's a Supermicro server case with a backplane, so I can't. There are versions of this case that have a 2x 2.5" cage in the back, but mine is old and does not have that feature.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Is the above scenario problematic for ZFS?

It is less problematic for ZFS than for HW RAID in some respects, and just as problematic (but not more so, to my knowledge) in the others.
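One concrete example of the difference (a sketch, with "tank" as a placeholder pool name): ZFS checksums every block, so a routine scrub finds and repairs latent errors from redundancy before any rebuild starts, and a resilver only touches allocated blocks rather than the whole disk. That is exactly where HW RAID tends to trip over UREs:

zpool scrub tank       # reads every allocated block and verifies its checksum against redundancy
zpool status -v tank   # per-device read/write/checksum counters, plus any files ZFS could not repair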
 