My Norco RPC-4224 build - ESXi w/ FreeNAS, Ubuntu, Windows 10


Sirius

Dabbler
Joined
Mar 1, 2018
Messages
41
G'day everyone.

I have a few questions about my current build and its planned upgrades.

For a bit of background, I want to move all my storage out of my daily rig into a NAS/server, and just have a single boot SSD in the daily rig. I also want to have all my (legal) media in a centralised location so I don't need to power up my daily rig just to watch a movie or listen to music somewhere else in the house, and I want to back up all the systems in the house to the NAS as well. Media and backups will be served via SMB.

I'd also like to put all my games on the central system, so that means stuff like my Steam and Origin libraries. Ideally I'd then be able to play all my games on any system with access to the NAS with minimal performance loss. iSCSI would of course offer the best performance, but I've read ZFS + iSCSI can be a bit wonky regarding things like zpool fragmentation. Plus, if I use SMB, it's easier to share the game drive with multiple systems if needed.

Finally, I want to be able to run VMs for whatever I feel like with full hardware pass-through capability, hence using ESXi to virtualise FreeNAS as opposed to a bare-metal install. Plus this is an interesting learning experience for me, as I haven't used ESXi before.

The current build is as follows:
  • CPU: Intel i7 4930k
  • RAM: 32GB G.Skill Ripjaws
  • Mobo: ASUS P9X79 WS
  • PSU: Seasonic Platinum 760W
  • Case: Norco RPC-4224
  • NICs: 2 x onboard Intel GigE
  • Boot: USB stick for ESXi and a 120GB Kingston SSD for the VMs
  • Hypervisor: ESXi 6.5 U1
  • OS1: FreeNAS 11.1 U2
  • OS2: Ubuntu Server 16.04 LTS for Docker and whatever else
  • OS3: Windows 10 for Steam, Origin, etc to download games
The storage hardware consists of the following:
  • SAS card: Dell H200 flashed to LSI IT mode firmware
  • SAS expander: Intel RES2CV360
  • 4 x 4TB Seagate IronWolf
  • 4 x 4TB Toshiba MD04ACA400
  • 4 x 4TB WD Red
All twelve drives are in a single pool of mirrored vdevs, giving ~24TB usable from ~48TB raw. There is no SLOG or L2ARC.
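
For anyone curious about the layout, that works out to six 2-way mirror vdevs striped together in one pool. A rough command-line equivalent of what the FreeNAS GUI builds (device names here are just placeholders, not the actual ones in my box; this is the pool I refer to as mainzpool further down):

zpool create mainzpool \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    mirror da6 da7 \
    mirror da8 da9 \
    mirror da10 da11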

I also have 4 x 1TB Samsung drives in a mirror, along with 8 x 3TB Toshiba drives I need to SMART test. Those drives will all either go into their own zpools or I won't use them, depending on their condition. Either way I don't trust them as much as my new drives, as the Toshibas are around five years old and the Samsungs are even older.

Once I get the money I'm planning to get:
  • CPUs: 2 x Intel Xeon E5 2670
  • Mobo: Intel S2600CP2J
  • RAM: 128GB ECC DDR3
  • NICs: 10GbE NIC, e.g. Chelsio 320 or Chelsio 520
Considering my goals stated at the start of this post, is there anything I'm missing to get the sort of setup and performance I'm after? I was going to dedicate 56GB of the 128GB ECC RAM just to FreeNAS, plus most of a CPU, leaving the rest of the RAM and CPU to the other VMs. Currently I can only give 16GB to FreeNAS seeing as I only have 32GB until I get that dual-CPU setup.

Would I need an L2ARC or SLOG in order to get acceptable performance over 10GbE? I was thinking of using a PCIe SSD (a consumer M.2 drive like the Samsung 960 Pro, unless I somehow find an enterprise-grade drive) as the L2ARC, and then 2 x 100GB HGST SLC SAS SSDs mirrored as the SLOG. Of course there's no point adding a SLOG and L2ARC if they won't help. Or is a SLOG always recommended?

Anyway, I hope I'm making sense. If not, feel free to ask me questions and I'll clarify things. I know this is all probably a bit over the top for a home setup, but I'm also using it as a learning experience, plus the hardware is fun to play with - even if it is a pain trying to find some of the gear in Australia without paying a fortune :(

Kind Regards,
Sirius
 

Sirius

Dabbler
Joined
Mar 1, 2018
Messages
41
Another option I've been pondering is to split the system in two - so have a dedicated ESXi box and a dedicated FreeNAS box. Not sure if I'd gain much from doing that though...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
A member, @Stux, did some research on the SLOG situation; I have a link to it in my signature.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Another option I've been pondering is to split the system in two - so have a dedicated ESXi box and a dedicated FreeNAS box. Not sure if I'd gain much from doing that though...
This is an option, but several people have done what you are suggesting. Here is the link I mentioned earlier. It is very well researched and documented. Even though he is using some different hardware than you are looking at, it should give you a lot of good information.

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
I too am on this journey. My parts are on the way (in signature). I will keep in touch to see if we can help each other out on this adventure. The one thing I worry about is the gaming. I'd always rather have local storage for games. You could use the ESXi/FreeNAS box to store the files for the game, or to run a game server, but I think the drives or network may be a bottleneck compared to a local disk. I will be interested to see how it works for you though.
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Good read. I will add that to the list of projects for the future.
 

Sirius

Dabbler
Joined
Mar 1, 2018
Messages
41
So I decided to jump on the HGST 100GB SAS SSDs before they got taken by someone else.

Hopefully they're not too worn out, but they were only ~$90 AUD on eBay so I can't complain too much :D Plus I'll be over-provisioning them and running them as a mirrored SLOG.

Now I just need an SFF-8087 to SAS SSD breakout cable and I'll be good to go.
 

Sirius

Dabbler
Joined
Mar 1, 2018
Messages
41
So... I have two arrays in the box now: the 8 x 3TB 7200rpm Toshibas, plus the array I mentioned earlier (4 x 4TB Toshibas, 4 x 4TB WD Reds, 4 x 4TB IronWolfs).

I also received the 2 x 100GB HGST SAS SSDs. I used the following commands to partition up and assign them (my two pools are called mainzpool and iscsizpool):


# Create GPT partition tables on both SSDs (da20 and da21)
gpart create -s GPT da20
gpart create -s GPT da21

# Add two 4k-aligned 16G partitions to each SSD (one partition per pool)
gpart add -t freebsd-zfs -a 4k -s 16G da20
gpart add -t freebsd-zfs -a 4k -s 16G da20
gpart add -t freebsd-zfs -a 4k -s 16G da21
gpart add -t freebsd-zfs -a 4k -s 16G da21

# Mirror the first partition of each SSD as mainzpool's SLOG,
# and the second partition of each as iscsizpool's SLOG
zpool add mainzpool log mirror da20p1 da21p1
zpool add iscsizpool log mirror da20p2 da21p2
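
To double-check that both log mirrors attached correctly, zpool status lists them under a separate "logs" section:

zpool status mainzpool
zpool status iscsizpool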


I'm not using an L2ARC (not enough ARC to bother) and I've set 'sync' to 'always' on both pools.
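
For reference, that's just the standard ZFS property set on each pool (and it can be read back with zfs get):

zfs set sync=always mainzpool
zfs set sync=always iscsizpool
zfs get sync mainzpool iscsizpool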

Now... I just thought I'd share my performance results and ask a few questions.

1. Can Q1T1 speeds be improved? If so, how?
2. What can I do to improve SMB speeds? I was hoping to avoid iSCSI and just use SMB but I really like iSCSI's performance...
3. Should using only 8TB of the ~10TB on the iSCSI pool be adequate to maintain performance? Or should I shrink it to say 6TB or so? The entire zpool will basically only be for iSCSI.
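
For question 3, I'm assuming the numbers to watch are pool capacity and fragmentation, which the standard pool commands report:

zpool list iscsizpool
zpool get capacity,fragmentation iscsizpool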

Also, these tests were performed using a Windows 10 VM talking to the FreeNAS VM via VMXNET3 interfaces, so I realise my speeds over a 'real' network won't be as good, but I am planning on 10GbE NICs. The 'D' drive is the iSCSI share, and the 'Z' drive is the SMB share.

Please let me know if any other information would be of use.

(Attached benchmark screenshots: iscsi vm with slog.png, smb vm with slog.png)
 

Sirius

Dabbler
Joined
Mar 1, 2018
Messages
41
I just thought of something...

I assumed with sync=always that using a SLOG would help offset the performance loss. I wanted to use sync=always as I assumed (maybe incorrectly?) that it's the best way to guarantee integrity of writes.

Would using sync=standard while still keeping the SLOG devices ensure the same level of protection as sync=always while also allowing improved performance?

If I've got things the wrong way around feel free to correct me, I'm still trying to learn everything as I go along so I'm definitely out of my depth with some of this x.X
 