Help with build

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Hi All

I have been a long-time Synology NAS user and use it for media sharing/streaming, mail/contacts/calendar, cloud drive... the list goes on. I use Sonarr/Radarr/Plex Dockers and would like to explore Docker further, as I find containers more useful than full VMs.

I have experimented with Xpenology, both bare metal and as an ESXi VM on my G8 MicroServer, but went back to a DS unit when all the DSM 6 issues with Xpenology occurred, although obviously they finally got it working.

The reason for this post is that I once again find the Synology hardware very limiting, and I would like to retire the Synology NAS and my numerous Raspberry Pi devices (they used to be VMs on my G8, but that's another story) and build an AIO box for NAS, Docker, and VMs.

Through work I have managed to acquire the following hardware:

HP ML310e v2 mATX motherboard with both original fans and PSU
E3-1241 v3 Xeon CPU
32GB ECC RAM
LSI 9207-8i HBA
Quad-port Intel NIC PCIe card
HP H220 HBA

I have purchased the following:

2U short-depth rackmount case for mATX boards
Thermaltake 3-bay 3.5" drive cage
Low-profile CPU heatsink to suit the HP motherboard

I already had the following:

2 x 500GB SSD
2 x 250GB SSD
4 x 750GB WD Red 2.5"
2 x 3TB WD Red 3.5"
2 x 2TB WD Red 3.5"
Icy Dock 4-bay 2.5" drive cage
Icy Dock 6-bay 2.5" drive cage

I have already completed the physical mods to the case to mount everything: one fan blows air in across the motherboard's PCIe slots, and the other extracts air directly above the CPU heatsink. The Icy Dock cages are mounted at one end, where there was a front cutout for drive bays; at the opposite front end I made a cutout and mounted the Thermaltake drive cage. Apart from just in front of the HP intake fan, there is no usable space left in the case.

So, given my use case, should I go bare-metal FreeNAS, or go back to virtualising storage under ESXi?

Which cards and drive configurations should I use (either bare metal or ESXi)?

Do I need a SLOG? (I'm prepared to buy an Optane 900p if needed, seeing as I haven't spent much so far.)

Cheers
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Cheers for that, it was good reading. It seems a SLOG is definitely required, so I ordered the 900p.

In the meantime I fired up my build to do some basic checks, and luckily all my electrical mods to adapt the HP wiring harness to the case worked.

Unfortunately, the small case is causing airflow issues, particularly around the almost fully populated PCIe slots. Temps rise too much, too quickly, and reversing the CPU fan had no effect.

I think I need a bigger case with better airflow; a side benefit would be that I could fit more drive bays in. I have my eye on a Raven RV02. It's a big case, but it has eight front-facing 5.25" bays just begging for bay adapters to be fitted.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
That's the biggest problem with 1U and 2U chassis: to get the airflow, you need fast, loud fans. I use 4U chassis for my systems because I can slow the fans down, but you still need to make sure air circulates over the expansion cards or you can get overheating. I have had a SAS controller fail because it got too hot.

 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I think I need a bigger case with better airflow; a side benefit would be that I could fit more drive bays in. I have my eye on a Raven RV02. It's a big case, but it has eight front-facing 5.25" bays just begging for bay adapters to be fitted.
You might want to consider using a service like this:
https://www.shipito.com/en/

Several members of the forum who live outside the US have purchased items in the US and had them shipped because, even after paying shipping, it was less expensive, or it allowed them to get something they otherwise could not get.

An example might be this chassis, although the prices have been edging up lately for some reason:
https://www.ebay.com/itm/SuperMicro...-846EL1-24X-Trays-2x-PWS-920P-SQ/382382593982

This unit has a SAS2 expander backplane, which would allow you to control all 24 drives in the front from a single SAS controller. It also has redundant power supplies, so you don't need to worry about a power supply failure crashing your server. The supplies are designed to work on a range of input voltages, so they should be compatible in Australia. Also, the power supplies are the SQ (super quiet) models, which are not supposed to sound like a jet on takeoff.
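
If you want to verify what the controller actually sees once it's cabled to the expander backplane, FreeNAS ships the standard FreeBSD tools for it. A minimal sketch (commands only; the output will obviously vary with your hardware):

```
# List each LSI SAS2-family HBA along with its firmware/BIOS versions:
sas2flash -listall

# Enumerate every disk visible through the HBA and expander:
camcontrol devlist
```

All 24 bays should show up as plain individual disks, which is exactly what you want for ZFS.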
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Cheers Chris

I have used freight forwarders in the past for hard-to-get items and it worked out OK; these days I use Amazon a lot instead, as I find them quite good (note: not Amazon AU, who are hopeless). That Supermicro looks good, but the 900p cost me a small fortune, so I will try to rein in my spending, as I only see the mobo/CPU/RAM combo I have being good for two years until the upgrade bug strikes.

I ordered the case and should have it on the weekend. It's a departure from trying to build a small unit, so I may as well make the most of the size and fit as many disks as possible.

I scored an LSI 9211-8i card from work, so that can go in the PCIe 2 slot and provide access for eight 3.5" HDDs. When I decommission my Synology, I will relocate its disks to the FreeNAS box.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I scored an LSI 9211-8i card from work, so that can go in the PCIe 2 slot and provide access for eight 3.5" HDDs.
If you get a chassis with a SAS expander (or two), you can control up to 256 drives with that card, if I recall correctly. I use an LSI/Broadcom 9207-8i controller in both of my systems, and in the main system it is connected to two SAS expanders that allow me to run up to 48 drives as it stands. I only have 36 in there for now...
Be sure to post photos of your build.
One of the other guys from down under posted this one back in 2016:
https://forums.freenas.org/index.ph...o-x10-sri-f-xeon-e5-1650v4.46262/#post-315452
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
I pick up my case and 900p tomorrow, and will get started on stripping the current system down and moving it to the new case.

I don't know if I'll be as meticulous as Stux, but I will at least try to photo-document the build process and post it up.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I don't know if I'll be as meticulous as Stux, but I will at least try to photo-document the build process and post it up.
@Stux has done some really good build reports. Here are a couple more links to his work if you want to look:

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/

Testing the benefits of SLOG
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Well, I have all my parts now and had stripped out the previous 2U rack, ready to move forward, when I found something I had overlooked. In my revised plan to go big and use the 9211 as well, I did not check how many PCIe lanes the motherboard/CPU support: the PCIe 2 x8 slot I was planning to put the card in only has an x1 link width. I was planning on hanging all the 3.5" drives (eight, once I decommission the Syno NAS) off that card, so now I have some choices to make.
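
As a sanity check on the other slots, the negotiated link width is easy to confirm from a Linux live USB. A sketch only; the bus address 02:00.0 is a placeholder and the output lines are illustrative:

```
# LnkCap = what the card/slot is capable of; LnkSta = what was actually negotiated.
lspci -vv -s 02:00.0 | grep -E 'LnkCap:|LnkSta:'
#   LnkCap: Port #0, Speed 5GT/s, Width x8, ...   <- card is capable of x8
#   LnkSta: Speed 5GT/s, Width x1, ...            <- slot only links at x1
```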

The way I see it, I have the following options (mindful of the fact that I have the large tower now, so I may as well use it):

1. Revise storage down, e.g. drop one of the 2.5" cages and use the onboard SATA controller; this gives me a total of eight SATA3 ports on the 9207 card, plus two SATA3 and two SATA2 ports on the motherboard.

2. Find an alternative motherboard for the socket 1150 CPU family with at least two PCIe 3.0 x8 slots, one PCIe 2.0 x8, and one PCIe x1 (actual link widths), so that I can reuse the CPU and RAM.

3. Shop for a new MB/CPU/RAM combination from a later Xeon product range; that would be the expensive option, I reckon.

Recommendations appreciated.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Well, I have all my parts now and had stripped out the previous 2U rack, ready to move forward, when I found something I had overlooked. In my revised plan to go big and use the 9211 as well, I did not check how many PCIe lanes the motherboard/CPU support: the PCIe 2 x8 slot I was planning to put the card in only has an x1 link width.
There are not (in my opinion) many good choices in the socket 1150 arena, because there just are not enough PCIe lanes designed into that CPU architecture. This model of system board from Supermicro would be my choice if I had to go with that socket:
https://www.supermicro.com/products/motherboard/Xeon/C220/X10SLH-F.cfm
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Cheers Chris, your help has been invaluable in guiding my build. I have looked through the links you have posted so far and have come to a couple of hard conclusions:

1. I did not do nearly enough research before going down the AIO ESXi/FreeNAS path.

2. By receiving a bunch of free parts (old stuff) and combining them with some of my spare stuff (also old), I locked myself into building a system that I would most likely end up upgrading/rebuilding within 12 months.

3. I needed to think bigger and stop trying to cram lots of small drives into cases, when I probably only need 4 x SSD (I already have enough capacity for a 750GB mirror) and 6 x 3.5" drives (minimum 4TB each).

I have purchased some neat stuff along the way, though:

1. The new case is perfect; the unique arrangement where the rear panel is presented at the top of the case suits my installation.

2. The 900p Optane card is going to be very beneficial.

3. The SSDs will not go to waste and will become my main ESXi VM pool.

4. The Thermaltake 3.5" cages are quite good; I was surprised given the AU price (unusually low for here).

So, new plan: I will use the WD 3.5" Red drives and 16GB of my RAM to get the G8 MicroServer running ESXi, and put an Xpenology VM on it to replace the Syno NAS. I'll migrate the Syno files/services to the G8 and sell the Syno NAS to fund the new FreeNAS/ESXi purchases. Ultimately the G8 will be sold once the FreeNAS/ESXi system is up and running.

I will then use the ML310 board/CPU, the remaining RAM, the HP fans and power supply, the 2U case, and the Icy Dock 4-bay 2.5" unit with the WD 750GB Reds to build a small-scale ESXi server, which I will donate to my nephew, who is just starting to get into some serious computing (beyond playing games). Without so much clutter in the case, the cooling should be OK compared to when I had it packed to the gunwales.

This leaves me to purchase a new MB/CPU/RAM combination and 6 x WD Red 3.5" drives (minimum 4TB each). So far I have found:

https://www.amazon.com/dp/B018AX44Q...colid=N3O8BOJ6GROI&psc=0&ref_=lv_ov_lig_dp_it

which is substantially less than the 600 AUD it sells for locally; it would be about 280 AUD delivered.

I can get 2 x 16GB Kingston ECC unbuffered DDR4 RAM for 610 AUD locally, which is only 50 AUD more than Amazon US by the time shipping/taxes/currency conversion come into it. Likewise, I can get an E3-1240 v6 locally for the same price as Amazon US; hard drives I would just purchase locally.
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Bit the bullet and ordered the X11SSM-F and 2 x 16GB DDR4-2400 CL17 Kingston unbuffered ECC RAM from Amazon US. I actually found a local supplier with a great price on an E3-1245 v6 Xeon for 370 AUD; I was quite surprised, and it was in stock too.

The stuff from Amazon should arrive in two weeks, so I get to play with my G8 and 2U server until then.
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Well, Amazon lost my board and refunded my money after 20 days with no further tracking info coming online.

I have spent the last two weeks trying to find a replacement and finally found a supplier who had just got some stock in, so I ordered. It's pretty hard here in Oz to get the good Supermicro boards at a decent price (Amazon US is out now, due to the tax changes on 1 July and their refusal to ship to AU).

So another delay in the system build... oh well.
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Well, after getting a few more parts (power extension cables for the HDDs), and after being ultra busy at work, I finally got to build my system.

All went without a hitch, with one exception that made me very grateful I got the Supermicro board with the additional expansion card slot. It turns out that the CPU cooler tower blocks the first PCIe slot, due to a combination of the slot sitting close to the line of the CPU socket and the cooler being a little wider to hold a larger fan. I had to shuffle everything down a slot, but luckily the PCIe lanes available were sufficient for the cards going into each slot.

The next step will be power-on tests to make sure everything runs OK, and BIOS checks to see that fans/temps are reporting OK. If all that is successful, I will proceed to update the BIOS and install ESXi onto a thumb drive. I plan to use the Optane card as a datastore for FreeNAS and also to create vdisks from it for SLOG/L2ARC for the data pools.
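
Since the X11SSM-F has a BMC, I should also be able to read the fan and temperature sensors over IPMI rather than sitting in the BIOS screen. A sketch, assuming ipmitool is available and with a placeholder BMC address and password:

```
# Remotely, over the network (192.168.1.50 and the password are placeholders):
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P <password> sensor

# Or locally, once an OS is up:
ipmitool sensor | grep -Ei 'fan|temp'
```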

I will have 3 pools:

SSD: 2 x 500GB, 2 x 250GB
HDD, WD Red: 6 x 2TB
HDD, IronWolf: 4 x 4TB

What would be the best pooling arrangement, as regards vdevs and SLOG/L2ARC allocation from the Optane, for my disk collection?
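
To make the question concrete, here is one possible arrangement as a starting point for discussion. This is only a sketch: the device names are placeholders, and whether the SLOG/L2ARC slices come from Optane partitions or ESXi vdisks shouldn't change the commands. It's a strawman, not a recommendation:

```
# One possible layout -- device names are placeholders.
zpool create ssd  mirror ada0 ada1  mirror ada2 ada3   # 2x500GB + 2x250GB as two mirror vdevs
zpool create red  raidz2 da0 da1 da2 da3 da4 da5       # 6x2TB WD Red in one RAIDZ2 vdev
zpool create wolf mirror da6 da7  mirror da8 da9       # 4x4TB IronWolf as striped mirrors
zpool add  red  log   nvd0p1   # small SLOG slice from the 900p
zpool add  wolf cache nvd0p2   # optional L2ARC slice from the same card
```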
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
All went without a hitch, with one exception that made me very grateful I got the Supermicro board with the additional expansion card slot. It turns out that the CPU cooler tower blocks the first PCIe slot, due to a combination of the slot sitting close to the line of the CPU socket and the cooler being a little wider to hold a larger fan. I had to shuffle everything down a slot, but luckily the PCIe lanes available were sufficient for the cards going into each slot.
Photos?
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Here ya go.

https://imgur.com/a/nbSjj0D
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Nice.
 