Help with build

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Still need some guidance regarding this though:

I plan to use the Optane card as a datastore for FreeNAS and also to create vDisks for SLOG/L2ARC for the data pools.

I will have 3 pools:

SSD - 2 x 500GB, 2 x 250GB
HDD WD Red - 6 x 2TB
HDD IronWolf - 4 x 4TB

What would be the best pooling arrangement, in terms of vdevs and SLOG/L2ARC allocation from the Optane, for my disk collection?
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Hi All

I am finally looking to continue with my FreeNAS project now that I have some time over the Xmas break.

I have made a change to the HDD arrangement by getting rid of the 6 x 2TB Reds and replacing them with 4 x 4TB Reds. This gives me one 4 x SSD pool, one 4 x 4TB IronWolf pool and one 4 x 4TB Red pool.

The case is a bit of a mess wiring-wise, as the pics show; I may look to tidy it up if it proves to be an airflow problem.
 

Attachments

  • IMG_0026.jpg (332.2 KB)
  • IMG_0027.jpg (298.3 KB)
  • IMG_0028.jpg (176.7 KB)
  • IMG_0029.jpg (273.4 KB)
  • IMG_0030.jpg (220.8 KB)
  • IMG_0031.jpg (258.1 KB)
  • IMG_0032.jpg (327.3 KB)
  • IMG_0033.jpg (236.7 KB)
  • IMG_0034.jpg (200.7 KB)
  • IMG_0035.jpg (260.5 KB)

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
And continuing on.
 

Attachments

  • IMG_0036.jpg (252.1 KB)
  • IMG_0037.jpg (228.6 KB)
  • IMG_0038.jpg (244.8 KB)
  • IMG_0039.jpg (246.9 KB)
  • IMG_0040.jpg (371.8 KB)
  • IMG_0041.jpg (303.3 KB)

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
This will be an AIO build for ESXi 6.7 and I would appreciate some general guidance on the best configuration/optimization of the disks for use with FreeNAS.

I would be particularly keen to hear from anyone who has used the Optane card. I was thinking of using it as my first ESXi datastore for FreeNAS and either partitioning it so I can use part as a SLOG, or maybe even setting up vDisks in ESXi for this purpose (that would take a performance hit, but it may still leave enough headroom to do a decent job for home use).
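The vDisk route would mean carving a small virtual disk out of the Optane datastore from the ESXi shell and then attaching it to the FreeNAS VM as a log device; a minimal sketch, where the datastore and path names are just examples:

Code:
# Create a 20 GB eager-zeroed vDisk on the Optane datastore for SLOG duty
# (the datastore "optane-ds" and the path are made-up example names):
vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/optane-ds/freenas/slog.vmdk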
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
May I ask why you would want to use two pools for your HDDs? I might have missed something earlier in the thread about your pool requirements.
But why not run all 8 drives in a RAIDZ2? Or do you plan on using mirrors?
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
I am asking for advice regarding the SSD/HDD setup, as I have never run FreeNAS before.

I have listed the drives by quantity and type (for informational purposes) as I am not sure if this makes a difference in FreeNAS (as it does in traditional RAID).
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
This is one of the things that is very nice about ZFS: it is not that picky when it comes to drives. A vdev will match the size of the smallest drive in it.
You can use the Seagate and WD drives together with no problems; I do in my system.

Just remember that all the drives that you want to be controlled by FreeNAS must be passed through directly.

If it were me setting this up, I might have gone with an SSD mirror for the VMs and an 8 x 4TB RAIDZ2 for normal storage. But this requires all 12 drives to be directly controlled by FreeNAS.
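Roughly, that layout comes out like this (FreeNAS builds pools through its GUI, but the equivalent commands, with made-up device names, would be):

Code:
# SSD mirror for the VMs (all device names here are made up):
zpool create vms mirror ada0 ada1
# All eight 4TB drives in a single RAIDZ2 vdev for bulk storage:
zpool create storage raidz2 da0 da1 da2 da3 da4 da5 da6 da7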

Someone else might have a better and more qualified answer on this.
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Hi IQless, thanks for the reply and the info. I have gone much larger with drives than my storage needs require; the intent was to maximize IOPS by using what I understand to be the FreeNAS equivalent of mirrors/RAID 10. Please correct me if I'm wrong?

I planned to use the SSD storage for the base VMs and the HDDs for data. I'm not quite sure how to accomplish this, and I still have to sort out the Optane unit as a SLOG and ESXi store.
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
@Stux has done some really good build reports. Here are a couple more links to his work if you want to look:

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/

Testing the benefits of SLOG
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561

The @Stux build is really well written up, so I would follow that build. He used a different PCIe NVMe drive, but as far as I know, it's the same approach.

I plan to use the Optane card as a datastore for FreeNAS and also to create vDisks for SLOG/L2ARC for the data pools.
I don't think this is recommended. The best way, I would guess, is to store the FreeNAS VM on a disk that ESXi controls, and then pass through everything else to the FreeNAS VM. But that might mean you need another HBA or expander card (not really sure on this part).
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Hi All

I am finally looking to continue with my FreeNAS project now that I have some time over the Xmas break.

I have made a change to the HDD arrangement by getting rid of the 6 x 2TB Reds and replacing them with 4 x 4TB Reds. This gives me one 4 x SSD pool, one 4 x 4TB IronWolf pool and one 4 x 4TB Red pool.

The case is a bit of a mess wiring-wise, as the pics show; I may look to tidy it up if it proves to be an airflow problem.

I’d suggest taming the SATA rat’s nest :)

Check this post for my technique:
https://forums.freenas.org/index.ph...sdv-tln4f-esxi-freenas-aio.57116/#post-401279
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
I don't think this is recommended. The best way, I would guess, is to store the FreeNAS VM on a disk that ESXi controls, and then pass through everything else to the FreeNAS VM. But that might mean you need another HBA or expander card (not really sure on this part).

Yeah, that would be the way to go, but unfortunately I am all full up: with the Optane, HBA and quad NIC I don't have any slots left. I am pretty sure putting FreeNAS on USB is not recommended, so my storage options are non-existent unless I change the design or lose functionality. This is why I thought I could use the Optane for a few things.

I had a read through Stux's build, very nicely done and written up; almost ashamed of my cabling now :(
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
I’d suggest taming the SATA rat’s nest :)

Check this post for my technique:
https://forums.freenas.org/index.ph...sdv-tln4f-esxi-freenas-aio.57116/#post-401279

Cheers Stux, very nice build and post.

Maybe you could give some advice. I have reached a point where, due to lack of resources, I was considering using the Optane to hold the FreeNAS ESXi VM whilst also utilising it for SLOG duties.

I had read a post over on Serve The Home where gea (the guy that provides napp-it) stated the Optane had so much performance for home use that you could potentially put ESXi on it and create vDisks for SLOG/L2ARC. Do you think this would work, or would the performance hit be too big?

The other option would be what you have done: repartition the Optane and use one partition for ESXi and the others for SLOG/L2ARC. Have you been happy with this arrangement on your build?
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
With the Optane, HBA and quad NIC I don't have any slots left.
This is why I mentioned an expander card :) You could power it with a Molex connection and mount it anywhere you want in the case; you only need an SFF-8087 to SFF-8087 cable to connect the HBA to the expander card. This would give you more connections, and the possibility of using the onboard SATA connections for ESXi datastores, with everything connected to the HBA/expander passed through to the FreeNAS VM.

Looking forward to seeing the rest of the build :)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Cheers Stux, very nice build and post.

Maybe you could give some advice. I have reached a point where, due to lack of resources, I was considering using the Optane to hold the FreeNAS ESXi VM whilst also utilising it for SLOG duties.

I had read a post over on Serve The Home where gea (the guy that provides napp-it) stated the Optane had so much performance for home use that you could potentially put ESXi on it and create vDisks for SLOG/L2ARC. Do you think this would work, or would the performance hit be too big?

The other option would be what you have done: repartition the Optane and use one partition for ESXi and the others for SLOG/L2ARC. Have you been happy with this arrangement on your build?

The difference is my PCIe NVMe is passed into FreeNAS and then partitioned.

I could find no information on the safety of using an ESXi disk as a SLOG, but given the critical nature of a SLOG, I don’t think it’s a good idea.

Assuming you’re using an HBA for passing disks into FreeNAS, just use a cheap SATA SSD off the mobo for your ESXi boot and datastore, and pass in the Optane.

Btw, you may only need the SLOG if you’re planning on running ESXi VMs off a FreeNAS NFS/iSCSI datastore.
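If the Optane ends up passed in whole, attaching it as a SLOG is then a one-liner from the FreeNAS shell; a minimal sketch, assuming the pool is named tank and the Optane shows up as nvd0:

Code:
# Attach the passed-through Optane (assumed device nvd0) as a log vdev:
zpool add tank log nvd0
# iSCSI zvols default to sync=standard; forcing sync pushes VM writes through
# the SLOG (the dataset name is an example):
zfs set sync=always tank/vmstore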
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
This is why I mentioned an expander card :) You could power it with a Molex connection and mount it anywhere you want in the case; you only need an SFF-8087 to SFF-8087 cable to connect the HBA to the expander card. This would give you more connections, and the possibility of using the onboard SATA connections for ESXi datastores, with everything connected to the HBA/expander passed through to the FreeNAS VM.

Looking forward to seeing the rest of the build :)

So an expander card does not require a PCIe slot, that's great; I will definitely look into this.

If I can fit it into the case, it opens up the possibility of using the mobo SATA ports for an SSD ESXi store. I assume that SSD pools benefit less from a SLOG than HDDs?
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Btw, you may only need the SLOG if you’re planning on running ESXi VMs off a FreeNAS NFS/iSCSI datastore.

Yes I will be doing this with an Xpenology VM.

I could find no information on the safety of using an ESXi disk as a SLOG, but given the critical nature of a SLOG, I don’t think it’s a good idea.

Over on the Serve The Home forum there are a few threads where they have benchmarked ESXi vDisks carved off the Optane (no partitioning), and there is a performance hit, but the card has so much horsepower it seems negligible in the grand scheme. One guy carved out vDisks for a SLOG and also an L2ARC, and his benchmarks show less than a 10% drop from raw. I guess the question mark would be over the longevity of this kind of setup, though the posts span a year with no reported issues.

Assuming you’re using an HBA for passing disks into FreeNAS, just use a cheap SATA SSD off the mobo for your ESXi boot and datastore, and pass in the Optane.

I will look into IQless's suggestion of an HBA expander, as this would free up the mobo SATA connectors/controller; currently I am a bit maxed out on resources unless I redesign.

The difference is my PCIe NVMe is passed into FreeNAS and then partitioned.

So FreeNAS can do disk partitioning at the ZFS level, as opposed to the raw disk itself?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So FreeNAS can do disk partitioning at the ZFS level, as opposed to the raw disk itself?

No, but you can set it up manually. This isn't a particularly good idea, but if you know what you're doing, it works.
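For what it's worth, the manual route looks something like this from a FreeNAS shell; a sketch only, assuming the passed-through Optane shows up as nvd0 and the pool is named tank:

Code:
# GPT-label the passed-through Optane (assumed to appear as nvd0):
gpart create -s gpt nvd0
# A small partition for the SLOG (16G is plenty for home use), the rest for L2ARC:
gpart add -t freebsd-zfs -s 16G -l slog0 nvd0
gpart add -t freebsd-zfs -l l2arc0 nvd0
# Attach the labelled partitions to the pool:
zpool add tank log gpt/slog0
zpool add tank cache gpt/l2arc0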
 

venno

Dabbler
Joined
Jun 4, 2018
Messages
29
Well, it's been a while, so I thought I would update on my progress.

I finally finished the AIO box, but after testing found that it was way too noisy and drew too much power. I ended up going back to a split solution: I purchased a QNAP NAS for storage and Usenet, a Supermicro 1U server (5018D-FN8T) for main ESXi compute, and a little Shuttle DS67U for a backup ESXi compute server.

This gave me the opportunity to play with 10GbE connections for storage, and it worked well; I ended up adding to my UniFi network stack with a 10GbE switch and also a 10GbE NIC for my PC. This setup served me well whilst I looked for a SFF low-noise/low-power DIY NAS configuration.

I ended up building a replacement for the QNAP; it's about 30% bigger in volume, but it packs in more drive bays and arguably better hardware, using the following components:

U-NAS NSC-810 case - 8 x 3.5" drive bays
Supermicro mini-ITX A2SDi-H-F Denverton C3758 mobo
2 x Kingston 8GB DDR4-2400 ECC UDIMMs
Seasonic 1U Flex 250W Bronze PSU
ATX extender cable for the PSU
Icy Dock flexiDOCK quad SSD cage
Molex-to-SATA power cable for the Icy Dock unit
Supermicro 1U PWM fan
Optane 800P 58GB NVMe cache drive

I modded the case to fit the Icy Dock unit in the middle, above the HDD cage; it was a tight fit and made cable routing interesting, but doable.

I had to use a PSU extender cable, as there was no way the Seasonic one was going to reach the other side of the case and thus the mobo.

The mobo doesn't receive much airflow being side-mounted, so I put in a 1U server fan for high static pressure airflow across the board. The 10GbE stack still gets a little too warm for my liking, so I removed all the mobo backplate cutouts (my board doesn't use them all) to allow for air exhaust, and this at least stabilised the stack's temperature at 50°C.

The supplied case fans are pretty quiet and do a good job; the HDD temps never go above 30°C. The Icy Dock unit's fan and the PSU fan are also pretty quiet.

I was a bit worried about the PSU size, but it has no problem running everything and doesn't get hot; I may even have been able to go smaller (200W). I will take some power measurements after the system beds in.

The case was difficult to cut: the internal steel is hardened, whilst the front plate is aluminium with some sort of rubberised paint coating. I had to be very careful with the front plate, as the coating heated up quite quickly with any cutting attempt and would start peeling away from the edges. I had to do a bit of cutting, then take a break to let it cool down, rinse and repeat; it took me a whole day to cut the slot.

Cramming so much into a small volume made it difficult to work in the case and also to do a neat job of cable routing. I tried to keep the rear fan's air path inside the case clear of all cabling, and largely succeeded. I had to keep cables away from where the cover slots between the front plate and the frame, lest any cables be caught up/crushed/cut etc.

Got FreeNAS (11.2) installed and all running well, booting from a USB 3 key; the mobo only has one USB 3 port, and it is a single port on the board itself. I have one SSD mirror (2 x 240GB) as the main set, active for iocage etc., and another SSD mirror (2 x 1TB) which will be my primary iSCSI LUN for ESXi, where my VMs requiring speed will live.

I have 4 x WD Red and 4 x IronWolf HDDs (all 4TB) in the HDD cage; the drives have been configured as four mirrored pairs making up the pool, with the Optane unit added as a SLOG. I kept my 900P from the previous build, put it in the main ESXi server and run it as flash read cache (vFRC); I may revisit this decision, as there is a spare x4 PCIe slot in the NAS and the 900P would just fit (cooling may be an issue though).
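For reference, that HDD pool layout is equivalent to something like this (pool and device names are illustrative only):

Code:
# Four mirrored pairs striped into one pool, with the Optane as the log vdev:
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  log nvd0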

Anyway onto the pics :)
 