SuperStorage 6047R-E1R24N - Will it FreeNAS?

selbs

Cadet
Joined
Sep 20, 2019
Messages
9
Hi folks!

Like most of my projects, this one has become something of its own living, breathing organism. I started off on this journey with a $1,500 budget, and I was simply going to buy a Synology with a Celeron chip so the device could handle a Plex server and one, maybe two, transcodes. Well, I started down the rabbit hole and emerged with this:

- SuperStorage 6047R-E1R24N: 24-bay storage chassis with SAS3 backplane
- Supermicro X9DRi-LN4F+ with 2x Xeon E5-2630L (60 W TDP each)
- 128GB (8x 16GB) ECC RAM
- LSI 9211-8i (IT mode)
- 2x IBM I340-T4 quad-port gigabit NICs
- 6x Seagate Constellation 4TB SAS
- 2x 128GB ADATA SSDs

I went $50 over budget.

I expect this thing to idle around 300 watts, and since it will be replacing multiple workstations in server roles plus several network devices... I think my power bill will be about the same. That being said - here is the plan:
1) Configure the 2x 128GB SSDs as a RAID mirror using the on-board Intel controller
2) Install ESXi to a USB thumb drive and boot from that critter
3) Set up ESXi to use the SSD mirror as its datastore
4) Install FreeNAS to this mirror and give the VM direct access to the LSI controller via PCIe passthrough
5) Add the Constellation drives, set up pools, turn off dedupe, etc. (rough capacity math sketched after this list)
6) ?
7) Profit!
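Since step 5 sets up the pool, here is a quick back-of-the-envelope for what six 4TB Constellations in RAIDZ2 should yield. This is my own arithmetic sketch; the figures ignore ZFS overhead and the usual "keep it under 80% full" guidance:

```python
# Rough usable-capacity estimate for a 6-disk RAIDZ2 pool of 4 TB drives.
# Assumption: two drives' worth of space goes to parity; overhead ignored.
drives = 6
parity = 2
size_tb = 4.0                              # marketing TB (10^12 bytes)

raw_tb = drives * size_tb                  # 24 TB raw
usable_tb = (drives - parity) * size_tb    # ~16 TB before overhead
usable_tib = usable_tb * 1e12 / 2**40      # ~14.6 TiB, as ZFS reports it

print(f"raw: {raw_tb} TB, usable: ~{usable_tb} TB (~{usable_tib:.1f} TiB)")
```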

Now, if it will FreeNAS - I have two additional questions.
1) I am really torn on whether to go iSCSI or NFS. This is a "home" use case, and I fully expect that 75% of the network data will never leave the chassis. I'll have a few shares (Windows destinations) and, of course, Plex, which will be served to the physical LAN.
2) Would I gain any benefit from carving that 128GB SSD mirror into a 30GB partition for the FreeNAS datastore and another 90GB (+/-) partition for caching for FreeNAS? This is another spot where I have a disconnect... whether I *need* a SLOG in a setup this small (as I understand it, the ZIL always exists; a SLOG just moves it to a dedicated device).

Clear as mud? I have attached an image that can maybe explain a little better than I have above. Additionally - would it be of any benefit to do a build thread for this project... or has that already been done to death here?

Thanks!
[Attached image: dJGyJyi.jpg]
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
I have gone the FreeNAS-under-VMware route in the past, and it worked for what I needed to do. I used a VM to facilitate the use of my LTO tape drive to back up my FreeNAS instance. My next experiment will be to use FreeNAS's built-in virtualization to do a similar thing through FreeNAS instead of VMware.

As for iSCSI vs NFS: what is the application that would greatly benefit from block storage?
As for the cache configuration, again, it comes down to the application-specific characteristics of your data.
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
If you are just going to have 6 drives in this system, I would spread them out. Don't "stack" them. They will be a lot easier to cool when they are spread out. Also, did you get any dummy drives (drive-bay blanks) with it? If you do not have any, I would suggest either buying some or making something equivalent. That too would help with cooling.

Do remember that the way you have "painted" the RAIDZ2 pool in the picture is not the way it actually works. You cannot specify that drives 1 and 2 are parity and drives 3, 4, 5, and 6 are data; RAIDZ2 distributes data and parity across all of the drives in every stripe. That is just not how it works :)
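To make that concrete, here is a toy sketch (my own illustration, not ZFS's actual on-disk layout, which uses variable-width stripes) of the idea that parity rotates across all members instead of living on two fixed drives:

```python
# Toy model only: in RAIDZ2 every disk holds both data (D) and parity (P);
# which columns hold parity varies from stripe to stripe. Real ZFS places
# parity per variable-width stripe, so this rotation is illustrative only.
disks = 6
for stripe in range(4):
    parity_cols = {stripe % disks, (stripe + 1) % disks}
    layout = ["P" if d in parity_cols else "D" for d in range(disks)]
    print(f"stripe {stripe}: {' '.join(layout)}")
```

The takeaway: losing *any* two of the six drives is survivable, because no drive is "the parity drive".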

If you have not yet looked at it, do some reading here: https://www.ixsystems.com/community/resources/links-to-useful-threads.108/
specifically: https://www.ixsystems.com/community...ide-to-not-completely-losing-your-data.12714/

I personally tried ESXi with FreeNAS as a VM but switched over to a dedicated box, mainly because I did not trust my own expertise for when the shit eventually hits the fan... I do keep my backup system as a VM, though. Merely pointing this out as something to think about: the more complex your system is, the more problematic it often gets when something goes wrong (horribly or not).
 

selbs

Cadet
Joined
Sep 20, 2019
Messages
9
As far as a use case for block storage - if anything, it would be for the VMs that will live on this machine. The VMs I have listed above are my core machines, but I fully expect to end up with a 5-6 machine Windows network. Let's say I could have *up to* 10 VMs being served from that storage. I don't think the media or file storage will benefit greatly one way or the other in this SOHO.

Regarding the 6 drives - I *actually* have 8 and was just going to keep 2 of them on the shelf as replacements. That is a good tip on spreading them out for cooling - I hadn't really thought of that... nor had I thought about the 'dummy' drives. I'll look into that.

One more thing - I painted that RAID setup 'logically', though I didn't realize that it wouldn't 'work' that way. :) I'll check out those links you posted.

Thanks!
 

selbs

Cadet
Joined
Sep 20, 2019
Messages
9
After reading a bit more in the links above - since ESXi insists on sync writes, I have ordered a PCIe M.2 adapter and a refurbished 16GB Optane for about $30. The Constellations do about 175 MB/s each, x 6 = 1,050 MB/s; five seconds of that (roughly one transaction group) is about 5,250 MB that could land on this device. That capacity should let me run *about* 2 more pools of this size before I need to upgrade the SLOG. I don't ever foresee needing more than 32TB (let alone 48TB) in my situation.
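For anyone checking my math, a hedged sketch of the sizing logic (my assumptions: the ~5 second default transaction group flush interval, and the pool's sequential throughput as the worst case for incoming sync writes):

```python
# Back-of-the-envelope SLOG sizing: the SLOG only needs to hold roughly
# one transaction group's worth of in-flight sync writes.
disk_mb_s = 175            # per-drive sequential throughput, MB/s
disks = 6
txg_seconds = 5            # assumed ZFS txg flush interval

pool_mb_s = disk_mb_s * disks                  # 1,050 MB/s
slog_per_pool_mb = pool_mb_s * txg_seconds     # ~5,250 MB per pool
optane_mb = 16 * 1000                          # 16 GB Optane, decimal MB

print(f"~{slog_per_pool_mb} MB per pool; "
      f"fits ~{optane_mb // slog_per_pool_mb} such pools on the Optane")
```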
 