Validation on intended setup

Should this work from a technical standpoint (not from a recommended deployment standpoint)?


Status
Not open for further replies.

Eds89

Contributor
Joined
Sep 16, 2017
Messages
122
Hi,

Hoping some of you guys might be able to validate my intended deployment of a virtual FreeNAS box/Plex server for home use.

My intention is a one-box-does-all approach, with ESXi running on bare metal, hosting several VMs:
  • FreeNAS for ESXi/SMB storage
  • pFsense router
  • Domain Controller
  • Plex server
  • Download server
I want to utilise most of my existing hardware, with some minor amendments to keep costs down. Hardware in use would be:
  • 6 core Xeon E5-2618L V2
  • SuperMicro X9SRL-F
  • 64GB/128GB ECC RAM
  • LSI 9207-8i HBA
  • Intel 24 port SAS expander
  • 4 Port Intel Gigabit NIC
  • 4x WD RED 2TB
  • 4x WD RED 4TB
  • 6x Hitachi 2TB
  • 2x 120GB SSD (1x ESXi datastore, 1x SLOG drive)
  • GTX 760 GPU
  • Some no name 24 bay 4U chassis with no built in expander on back planes (SFF-8087 connectors on each of the 6 backplanes)
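Since the plan further down is mirrored vdevs, here is a quick sketch of the usable-capacity arithmetic for that drive list; the pairing below is one hypothetical arrangement, not a recommendation:

```python
# Usable space in a pool of 2-way mirrors: each vdev contributes the
# size of its smallest member, and the pool is the sum of its vdevs.
def pool_usable_tb(mirrors):
    return sum(min(pair) for pair in mirrors)

# Hypothetical pairing of the drives above (sizes in TB):
mirrors = [
    (2, 2), (2, 2),          # 4x WD Red 2TB as two mirrors
    (4, 4), (4, 4),          # 4x WD Red 4TB as two mirrors
    (2, 2), (2, 2), (2, 2),  # 6x Hitachi 2TB as three mirrors
]
print(pool_usable_tb(mirrors))  # → 18 (raw TB, before ZFS overhead)
```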
The idea would be:
  • Run ESXi 6.5 off a small USB key on bare metal.
  • I would put the FreeNAS VM onto the SSD with a VMFS datastore on it, connected via Intel chipset SATA port (can do mirroring at a later date).
  • I would then pass through the LSI HBA to FreeNAS, for access to the large spinners, and the onboard LSI controller for access to the other SSD (for use as a SLOG drive. Again can mirror at a later date).
  • The 9207 would have one SFF-8087 connection direct to a backplane, with the other going to the Intel expander, which in turn connects to the remaining backplanes.
  • The FreeNAS box would then have several mirrored vdevs (so essentially in a RAID 10 style), and would act as an NFS datastore for ESXi. The hypervisor would connect to FreeNAS via virtual switch, so no need for external network to slow things down. SMB between the Plex VM and FreeNAS would also occur in a virtual switch.
  • I would then create my Windows VMs on this datastore, the Plex server having the GTX 760 passed through to allow hardware transcoding.
  • ESXi would team 3 of the 4 ports on the NIC to allow multiple clients to hit FreeNAS and Plex etc. with no bottlenecks. The fourth port would then be assigned to pfSense for WAN access.
  • Startup sequence would be set so that, on physical boot, FreeNAS starts first, with a several-minute delay before the other VMs auto-start (pfSense first, then the DC, to ensure gateway connectivity is good when the DC starts).
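The mirrored-vdev pool in the bullets above might be created along these lines from a FreeNAS shell; pool, dataset, and device names are all hypothetical (in practice the FreeNAS UI builds the pool and manages the NFS export):

```shell
# Hypothetical "RAID 10 style" pool: striped 2-way mirrors plus a SLOG.
# Device names are examples; check `camcontrol devlist` for the real ones.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  log da6

# Dataset to back the ESXi NFS datastore
zfs create tank/vmware
zpool status tank
```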
I think that's all the points I wanted to note in terms of setup. While I appreciate it probably still isn't considered a good solution for production environments, as an infrastructure engineer I am willing to accept that I may have a harder time fixing things when it goes south, especially as it is purely for my own home use.
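The start-order point maps onto ESXi's autostart manager; from the ESXi shell it can be scripted with vim-cmd (the numeric VM IDs below are hypothetical, taken from vmsvc/getallvms, and autostart itself is enabled in the vSphere UI):

```shell
# Find the numeric VM IDs (the IDs used below are examples)
vim-cmd vmsvc/getallvms

# update_autostartentry <vmid> <startAction> <startDelaySec> \
#   <stopAction> <stopDelaySec> <waitForHeartbeat> <startOrder>
vim-cmd hostsvc/autostartmanager/update_autostartentry \
  1 powerOn 300 systemDefault systemDefault no 1   # FreeNAS, ~5 min head start
vim-cmd hostsvc/autostartmanager/update_autostartentry \
  2 powerOn 60 systemDefault systemDefault no 2    # pfSense
vim-cmd hostsvc/autostartmanager/update_autostartentry \
  3 powerOn 60 systemDefault systemDefault no 3    # Domain Controller
```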

As far as I can tell, this should all work, but hopefully others can validate that.

Thanks very much
Eds
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

Eds89

Thanks Chris.

First link is basically what I came across when originally thinking of design choices for the new setup, and is where I have gathered the majority of my info.
I see this particular revision of the guide has slightly more info in it (things like resource reservations and FreeNAS-side settings), so I will go back and re-read to ensure there are no nasty 'gotchas' I need to worry about.

Second link: looks like Stux goes into a lot of step-by-step detail, which is fantastic! I will definitely have that open when going through my own deployment.

Regarding SSDs to use for SLOG, I know the recommendation seems to be Intel SATA SSDs on the low end, but is there any issue with using a more consumer off-the-shelf product, such as a Samsung 850 Evo? I see Stux suggests something on the low end of Intel's lineup like the S3700, but it doesn't seem to be as high-performance as a consumer-grade product, and it is much more expensive.
The P series SSDs are way out of my price range. I assume PLP is the key feature, but if I am willing to forego that, will an 850 be OK?

If so, think I'm good to pull the trigger and order the bits I need :D happy days.

Eds
 

Chris Moore

Eds89 said:
Regarding SSDs to use for SLOG, I know the recommendation seems to be Intel SATA SSDs on the low end, but is there any issue with using a more consumer off-the-shelf product, such as a Samsung 850 Evo? I see Stux suggests something on the low end of Intel's lineup like the S3700, but it doesn't seem to be as high-performance as a consumer-grade product, and it is much more expensive.
The P series SSDs are way out of my price range. I assume PLP is the key feature, but if I am willing to forego that, will an 850 be OK?
Power loss protection and wear endurance are the key features. I have not done a comparison between the two, but the low latency, high endurance, and power loss protection offered by the Intel drives are the reason they are more expensive. The latest-generation Samsung drives are very good; they may not be as good or last as long, but they may very well be good enough. As for the power loss protection, if you have a good UPS to keep everything online until it can be gracefully shut down, you might be able to do without it. It is a risk, but if you know about it and are willing to take it, I would just say to be sure you have good backup copies of all the data and configurations. A sudden loss of power could cause an unforeseeable amount of data loss: not that the quantity would be unpredictable, but you wouldn't know what was going to be lost.
 

Eds89

Thanks again Chris, all valuable input.

I'd be sad if I lost all the data that would be hosted on this box, but it wouldn't be the end of the world. UPS is definitely on the list too.

Might fully read through Stux's posts to try and make a final decision on SLOG.

Cheers
Eds
 

Zredwire

Explorer
Joined
Nov 7, 2017
Messages
85
I'm pretty new to FreeNAS, but my understanding is that if you are not going to use a SLOG device with PLP, then you don't need one at all. Just disable sync writes and it will be far faster than any SLOG. I could be wrong; maybe Chris can chime in.

Thought: maybe a SLOG without PLP would lessen your exposure a little if FreeNAS locks up or loses power, as the cache on the SLOG is smaller than the cache FreeNAS will use if you turn off sync writes.
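The trade-off described above corresponds to the ZFS sync property on the dataset backing the datastore (dataset name hypothetical):

```shell
# Default: honour clients' sync requests. ESXi's NFS client issues
# them constantly, so this is slow without a fast SLOG.
zfs set sync=standard tank/vmware

# The suggestion above: acknowledge everything as async. Fast, but a
# few seconds of acknowledged writes can vanish on a crash or power loss.
zfs set sync=disabled tank/vmware

# The cautious extreme: force every write through the ZIL/SLOG.
zfs set sync=always tank/vmware
```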
 

Eds89

Good point. I suppose disabling sync writes for NFS shares then gives performance equivalent to the pool you are writing to?
Another approach is to use iSCSI rather than NFS, so at least the ZFS metadata is written synchronously, with the iSCSI writes being async?

I think realistically I might start with a decent consumer-grade SSD and a UPS, and then, when funds allow, replace the SLOG with a PLP-capable SSD.
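For the iSCSI variant mentioned above, the backing store would be a zvol rather than a dataset; a hypothetical sketch (names and size are examples, and the extent/target themselves are configured in the FreeNAS UI):

```shell
# Sparse 500G zvol to export as an iSCSI extent
zfs create -s -V 500G tank/vm-iscsi

# iSCSI writes from ESXi are largely async by default; to get
# NFS-like durability, force sync on the zvol instead:
zfs set sync=always tank/vm-iscsi
```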
 