geekmaster64
Hey there,
I have an HP DL180 G6 at home running the latest and greatest FreeNAS with:
Dual E5606's (2.13GHz Quad's - Easily upgradeable)
16GB ECC 10600R (Easily Upgradeable)
6X 2TB WD RE4 Drives (With 4 more on the way)
1X 250GB SSD (Samsung Evo 850)
1X 128GB SSD (Something not so special that's new and I don't care if it dies)
HP P410... each drive in RAID0 (HBA being delivered this weekend :))
Dual Port 1GbE (built-in)
Dual Port Intel 1GbE (ext)
Hyper-V Server: (Soon to be replaced by an R620 with E5-2660's and more goodies or R810 with Quad 10 Cores)
DL 380G6
Dual E5540's
56GB RAM
Server 2016 Datacenter (licensing works for me, and why not when it's free for my lab)
Workload:
Server 2016 VMs, with a few Ubuntu boxes
1 SQL Server (2016)
Mostly Applications from Microsoft and other random crap (System Center, Windows Azure Pack, Microsoft Project, Microsoft Dynamics AX)
1 VM is (surprise!) a Plex server (one VHDX for the OS and another, around 4TB, for media)
I have the setup on FreeNAS as:
Striped mirrors, ZFS's equivalent of RAID10 (3 mirror vdevs with 2 drives each)
Presented the volume to Hyper-V via iSCSI (4K block size on the FreeNAS zvol; MPIO on separate subnets with Least Queue Depth). The volume is formatted at 4K on the host, and the VMs are formatted at 4K, to ensure block alignment.
I am testing out GZIP-9 because my CPUs are bored: while migrating VMs to the new zvol they barely hit 50-60%, then dropped below 6% with all 8 VMs running, and frankly I'm going to put a fair bit of data on this as time goes on.
I have the 128GB SSD set up as a SLOG (dedicated ZIL device) for the system. After a lot of reading, I have been convinced that I should get more memory before enabling the 250GB SSD as an L2ARC.
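For anyone curious, the layout above roughly translates to commands like these (disk names, pool name, and zvol name are placeholders, not my actual devices):

```shell
# 3 two-way mirror vdevs striped together (ZFS's "RAID10"),
# plus the 128GB SSD as a dedicated log (SLOG) device.
# /dev names below are placeholders for illustration only.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  log da6

# The zvol presented to Hyper-V over iSCSI, with the 4K block size
# and GZIP-9 compression I mentioned above (size is illustrative).
zfs create -V 4T -o volblocksize=4K -o compression=gzip-9 tank/hyperv
```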
My questions:
1. Is holding off on the L2ARC until I have more memory a good idea? (I want to get 64GB for it soon.)
2. Block alignment for Hyper-V: what are your thoughts on my using 4K on FreeNAS vs. the default 16K volblocksize? Better suggestions?
3. My MPIO setup (separate subnets, etc.): one NIC session from the host to each NIC on the FreeNAS box, with Least Queue Depth. Good idea or bad idea? (I think it's good, based on the documentation, tons of Google reading, and past experience with my Nimbles.)
4. Anything else I should change, or other suggestions?
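On question 1, here's the back-of-the-envelope math that convinced me to get RAM first. It assumes roughly 70 bytes of ARC header per L2ARC record, which is a commonly cited ballpark; the exact figure varies by ZFS version:

```python
# Rough estimate of how much ARC (RAM) the L2ARC headers would consume.
# Assumes ~70 bytes of header per cached record, a commonly cited
# ballpark; the real number depends on the ZFS version in use.

def l2arc_header_ram(l2arc_bytes, avg_record_bytes, header_bytes=70):
    """RAM consumed by L2ARC headers, in bytes."""
    records = l2arc_bytes // avg_record_bytes
    return records * header_bytes

GiB = 1024 ** 3

# 250GB SSD fully used as L2ARC with 4K records (my zvol block size):
small = l2arc_header_ram(250 * 10**9, 4096)
print(f"4K records:   {small / GiB:.1f} GiB of ARC eaten by headers")

# Same SSD caching 128K records (e.g. large sequential media):
large = l2arc_header_ram(250 * 10**9, 128 * 1024)
print(f"128K records: {large / GiB:.3f} GiB of ARC eaten by headers")
```

With 4K records that's around 4 GiB of headers, a quarter of my current 16GB, which is why "RAM first, L2ARC later" makes sense here.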
P.S. - I know this is a long post and I'm sure no one ever asks for guidance, but I assure you, I've done a crap ton of reading over the last 34 days on this forum and the OpenZFS docs, and I even spoke with 45drives.com about their experiences.
Thank you so much in advance!