
Status
Not open for further replies.

jaccovdzaag

Dabbler
Joined
Feb 9, 2018
Messages
22
Hi all,

As stated in my 'Hello there' post, I'm looking to build something. There is a part which has nothing to do with FreeNAS, but it will be part of the build either way.

I'll try to explain the things I would like to make happen:

- First of all, a storage system running FreeNAS. Not just for storing movies, but also personal/work/school documents, pictures, and maybe OS backups.
- A game server for some Steam games. Just for friends, maybe others; there won't be dozens of servers running at the same time, but it would be nice to have.
- Maybe a print server, since we do a lot of printing for the education/work part, and we have a great Xerox. :)
- I don't know if it would be possible to add some sort of VPN server too.

Before I start yelling weird things: all of this comes out of personal interest, and I've been in touch with some of these parts at work. In my opinion, a solution like ESXi is the way to go: make a few virtual machines and install the required software on them. So one FreeNAS VM, and another running something like Windows Server or a Linux distribution. I have followed some Windows Server / Linux courses, although they're not at masterclass/professional/hackerman level. I've also had some lessons fiddling with servers and hypervisors like Hyper-V and VMware, so I might still know some basics, but that's why I'm asking the question here.

As for the hardware I would like to use, I thought of the following:

Motherboard: SuperMicro X10SLM+-LN4F (got this already at home) > https://www.supermicro.com/products/motherboard/xeon/c220/x10slm_-ln4f.cfm
CPU: Intel Xeon E3-1230 v3
RAM: 16GB ECC DDR3 Crucial CT102472BD160B
Case: Fractal Design Node 804
PSU: Seasonic G360W / Be Quiet! Power 9 CM 400W
CPU cooler: stock, I guess.
HDD > NAS function: 4-6 3.5" WD Red / Seagate Ironwolf 3-4TB
Fans: Have a few around here

As for ESXi and the other functions, which I think will run fine in a Windows Server VM: maybe a small SSD plus a 1 or 2 TB disk for ESXi and the WS VM should do the trick.

I have one specific question regarding RAID. I've read in a lot of places that you should stay away from hardware RAID controllers. But in this article ( https://forums.freenas.org/index.php?threads/hardware-recommendations-read-this-first.23069/ ) I see the IBM M1015 is a good choice if you want to get a card.
Gotta admit, I've become a bit obsessed with the SATA2/3 specs. :)
Would it be an idea to run the disks I want to use for FreeNAS on that IBM card, and the other 2 disks for ESXi and the server on the motherboard's chipset? Or am I making a grave mistake here?

And are there other things to look out for? Things I'm missing, tips and tricks?

Thanks in advance, community.. :)
 

Linkman

Patron
Joined
Feb 19, 2015
Messages
219
Yes, you'd pass the HBA through to FreeNAS so it has direct control of, and access to, the drives; the ESXi boot disk would be on the motherboard SATA ports.
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
I don't know if you can pass individual disks through; it might have to be the whole controller. The M1015, like other LSI 9211-based controllers, can have its firmware changed to "IT" mode. That makes it a simple host bus adapter with no RAID function at all, just a solid way of connecting up to 8 SATA disks.
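For reference, the IT-mode crossflash is typically done from an EFI shell or DOS boot disk with LSI's sas2flash tool. Roughly like this (a sketch only; the firmware file name `2118it.bin` and the SAS address are example placeholders that depend on the firmware package you download):

```shell
# Sketch of a typical M1015 -> 9211-8i IT-mode crossflash.
# File and address names below are placeholders, not literal values.
sas2flash -listall                      # note the controller's SAS address first
sas2flash -o -e 6                       # erase the existing (IR/RAID) firmware
sas2flash -o -f 2118it.bin              # flash the 9211-8i IT-mode firmware
sas2flash -o -sasadd 500605bxxxxxxxxx   # restore the SAS address noted above
```

Skipping the boot ROM during the flash is common for FreeNAS builds, since the card never needs to be a boot device there.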
 

jaccovdzaag

Dabbler
Joined
Feb 9, 2018
Messages
22
Yes, you'd pass the HBA through to FreeNAS so it has direct control of, and access to, the drives; the ESXi boot disk would be on the motherboard SATA ports.

I don't know if you can pass individual disks through; it might have to be the whole controller. The M1015, like other LSI 9211-based controllers, can have its firmware changed to "IT" mode. That makes it a simple host bus adapter with no RAID function at all, just a solid way of connecting up to 8 SATA disks.


Exactly. So I would have a clear dividing line between the disks/interface for FreeNAS and the other disks for booting and the server.
As for the filesystems/RAID levels: I'm reading that the ZFS file system is used, and that the RAID functionality in FreeNAS is called RAID-Z.
RAID-Z1 - comparable to RAID 5 (single parity)
RAID-Z2 - comparable to RAID 6 (double parity)
RAID-Z3 - triple parity (no common hardware-RAID equivalent)

As I would like to use 4-6 disks, let's say 4, RAID-Z2 would be the choice, with any 2 drives able to fail. The main difference with RAID 10 would be read/write speed. Z3 would be overkill, and Z1 is quite basic.
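As a sanity check on capacity, here is the rough arithmetic for a 4 x 4 TB pool (a sketch only; real ZFS usable space comes out lower because of metadata, padding, and the commonly recommended ~80% fill limit):

```shell
# Rough usable-capacity arithmetic for a 4 x 4 TB pool
# (raw TB, ignoring ZFS overhead and TB-vs-TiB differences).
disks=4
size_tb=4

raidz2_usable=$(( (disks - 2) * size_tb ))   # 2 disks' worth go to parity
mirror_usable=$(( disks / 2 * size_tb ))     # striped mirrors halve capacity

echo "RAIDZ2 usable:          ${raidz2_usable} TB"
echo "Striped mirrors usable: ${mirror_usable} TB"
```

With 4 disks the two layouts happen to give the same raw capacity; the difference is in failure tolerance and throughput, as discussed below in the thread.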

I, simple Jack, was thinking to get that controller to use it for RAID-10. But the only way to use it is dedicating it to what wblock is saying, for connecting the disks. Or is there a way to run RAID-10 with that controller?
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
The ZFS equivalent of RAID 10 is a stripe of mirrors. Throughput on these can be good, but only one drive per mirror can fail, whereas with RAIDZ2 any two drives can fail.
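To make the two layouts concrete, pool creation at the command line looks roughly like this (a sketch: the pool name `tank` and device names `da0`-`da3` are placeholders, and FreeNAS would normally build the pool through its GUI on GPT partitions rather than raw disks):

```shell
# Stripe of mirrors (RAID 10 equivalent): two 2-way mirror vdevs.
# Survives one failure per mirror, but not two failures in the same mirror.
zpool create tank mirror da0 da1 mirror da2 da3

# RAIDZ2: one 4-disk double-parity vdev. Survives any two drive failures.
zpool create tank raidz2 da0 da1 da2 da3
```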
 

jaccovdzaag

Dabbler
Joined
Feb 9, 2018
Messages
22
Okay, so the safest way to go would be RAID-Z2 anyway. Flash the IBM to make that the dedicated storage controller.

Are there any things to watch out for? Anything to keep in mind?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,995
I don't know if you can pass individual disks through, it might have to be the whole controller.
You can pass through individual disks in ESXi; it's called RDM. However, I advise against it because it makes maintenance of the system a bit more complicated, and complicated is not what you need when you have a drive failure. The LSI card is by far the best way to go; just pass that through to the FreeNAS VM.
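For completeness, a physical RDM mapping is created from the ESXi host shell roughly like this (a sketch; the device identifier and datastore path are placeholders, and as noted above this approach is discouraged for FreeNAS):

```shell
# List the real device identifiers first.
ls /vmfs/devices/disks/

# Create a physical-mode raw device mapping (RDM) pointer file for one disk;
# the VM then attaches the resulting .vmdk as an existing disk.
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
    /vmfs/volumes/datastore1/freenas/freenas_disk0-rdm.vmdk
```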
 

jaccovdzaag

Dabbler
Joined
Feb 9, 2018
Messages
22
One question: I've read something here on the forums about ESXi and FreeNAS, some kind of sync feature, and an SLOG (SSD?) device. Is that still relevant, given that the post is from 2013?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,995
One question. I've read something here on the forums, about ESXi and FreeNAS, with some kind of sync feature, and an SLOG (SSD?) device? Is that still relevant, since the post is from 2013..
I don't know, but your question leads me to ask why you are considering an SLOG. Based on the intended use case, I wouldn't think you would need it.
 

jaccovdzaag

Dabbler
Joined
Feb 9, 2018
Messages
22
Oh well, I stumbled upon a thread over here; I'll have to find it again. There were some discussions about ZFS sync writes and bad speeds / risks to storage integrity.
I'm not considering getting one, just hoping to clarify whether I need to keep it in mind. If you say my use case doesn't need it, I believe that :)
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
SLOG is only important if you're demanding massive quantities of sync writes... the only real use case I've seen for this is a VM store, presented via NFS or iSCSI to a hypervisor like ESXi. You aren't doing that, so a SLOG is unnecessary.
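If you ever want to check how a pool handles sync writes, the relevant ZFS property can be inspected and overridden per dataset (a sketch; `tank` and `tank/media` are placeholder names, and `standard` is the default):

```shell
# Show whether sync writes are honored on the pool (standard by default).
zfs get sync tank

# Per-dataset override: 'always' forces every write through the ZIL/SLOG,
# 'disabled' skips sync entirely (unsafe for VM stores or databases).
zfs set sync=standard tank/media
```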
 

jaccovdzaag

Dabbler
Joined
Feb 9, 2018
Messages
22
Ah okay! Clear. Just want to do it right from the start, so better ask instead of running into trouble. Check. Thanks for the replies guys, appreciate it!
 

jaccovdzaag

Dabbler
Joined
Feb 9, 2018
Messages
22
So.
ESXi is up and running, and the M1015 is flashed to IT mode. The only things left to get are the 804 and the HDDs.

One thing seems weird to me, though maybe it's a beginner's error:
I see the card in the Manage > Hardware list, marked as 'passthrough enabled/active'.
But when I go to Storage, I can't see it in the Adapters listing. However, over SSH I can see it listed as vmhba1, with the Lynx Point SATA controller as vmhba0.

Is that just because passthrough is enabled? I didn't check beforehand, before I put it in passthrough... :)
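A quick way to check this from the ESXi shell would be something like the following (a sketch; output format and the "LSI" match string vary by ESXi version and card branding). Once a device is reserved for passthrough, the host's own storage stack no longer claims it, so it dropping out of the Storage > Adapters view is expected:

```shell
# List storage adapters the host itself is using; a passed-through HBA
# should no longer appear here with a host driver attached.
esxcli storage core adapter list

# List PCI devices and look at the passthrough-related fields for the HBA.
esxcli hardware pci list | grep -B 2 -A 20 -i "LSI"
```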
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

jaccovdzaag

Dabbler
Joined
Feb 9, 2018
Messages
22
Hi Chris,

Thanks. The Build Report from Stux is my main 'lead thread'; there is a lot of helpful information in there, and my build is similar in several respects. I'll look into that 9.10 guide, it looks promising!
 

jaccovdzaag

Dabbler
Joined
Feb 9, 2018
Messages
22
Okay. So, here is a recap of the last week.

Got all the parts here.

Motherboard: SuperMicro X10SLM+-LN4F
CPU: Intel Xeon E3-1231 v3
RAM: 16GB ECC DDR3 Crucial CT102472BD160B
Case: Fractal Design Node 804
PSU: Seasonic G360W
CPU cooler: Cryorig C7
HDD NAS: 4x 4TB Seagate IronWolf
SSD boot/l2arc/swap: Sandisk SSD Plus 240GB
SSD SLOG: Crucial MX300 525GB (could get it for a good price)

Firstly:
I was running an old ESXi build, version 4887370, and was not able to get it updated. So: new account, fresh install, and I'm now up to build 7388607.

Thanks to this, https://forums.servethehome.com/ind...5-lsi-9211-8i-firmware-on-uefi-systems.11462/, I got my M1015 flashed. It worked out really well.
I've been following Stux's thread, since there are a lot of similar things in there, and it's a great read.
The FreeNAS 9.10 on VMware guide is also a great read.
After that, I did the ESXi settings and created a FreeNAS VM, first to test whether the disks were visible. They were. So that worked out great as well.

Now, there are a few things I need to work/think out before going any further.

  • Networking. Since I have 5 ports, my switches support link aggregation, and there are 6 of us here, I'm thinking of making a LAN1/LAN2 aggregated link for the 'NAS' purpose, LAN3 for the Windows machine, and LAN4 for management of ESXi/FreeNAS. LAN5 is the motherboard's dedicated IPMI port.
  • NFS/iSCSI. I've read a lot about these two. I know, I might have done some more research before buying stuff, but I'm just hyped, I guess. I've read the 'Sync writes, or: Why is my ESXi NFS so slow, and why is iSCSI faster?' topic, but I can't get my head around it. The primary devices that will be connected are some Windows 10 clients and an Odroid C2 with Kodi on it. So what's useful? Just stick with NFS and share the entire volume there? I've already implemented the swap/l2arc/slog devices, and that volume is already up.
So those two things, apart from the sharing itself, are the main things I'm thinking about right now.
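For the aggregation idea, the FreeBSD side of a LACP lagg looks roughly like this (a sketch: the `igb0`/`igb1` interface names and the address are placeholders, FreeNAS would normally configure this through its Network > Link Aggregations GUI, and the switch ports must be set up for LACP as well):

```shell
# Create a LACP link aggregation from two NICs (FreeBSD/FreeNAS syntax).
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1 \
    192.168.1.10 netmask 255.255.255.0
```

Note that a single client connection still only uses one link; aggregation mainly helps when several clients hit the NAS at once.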


Regards,

Jacco
 