dagrichards
Dabbler · Joined: Jun 24, 2015 · Messages: 12
I am still putting together a NAS box from floor scrapings and eBay parts. The intent is to use it as an ESXi datastore for throwaway VMs in a training environment.
Specs:
Supermicro X8DTi-F with a single Xeon E5620
24 GB of ECC RAM
3ware 16-port RAID controller (JBODing the disks; I won't ask for help when it eats my data)
8x 2 TB HGST SATA III 7200 RPM disks
8x 250 GB assorted mutt SATA III 7200 RPM disks
1x Mellanox 10 GbE NIC
1x consumer 120 GB SSD on a PCIe card (log device)
With that hardware, the next question is usually something like "I made a RAIDZ1 and it's slow... why?"
I finally got my crap together and did this instead:
It gives me nearly 8 TB of storage; I will keep it under 4 TB used.
pool: tank

NAME                                            STATE   READ WRITE CKSUM
tank                                            ONLINE     0     0     0
  mirror-0                                      ONLINE     0     0     0
    gptid/496d9b23-4d41-11e6-8363-0025904a45ae  ONLINE     0     0     0
    gptid/4b42b99a-4d41-11e6-8363-0025904a45ae  ONLINE     0     0     0
  mirror-2                                      ONLINE     0     0     0
    gptid/dca11c7a-4d41-11e6-8363-0025904a45ae  ONLINE     0     0     0
    gptid/dd720e26-4d41-11e6-8363-0025904a45ae  ONLINE     0     0     0
  .... etc. for 8 mirror vdevs
logs
  gptid/4b890fb1-4d41-11e6-8363-0025904a45ae    ONLINE     0     0     0
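For anyone wanting to reproduce the layout from the CLI, a minimal sketch of the equivalent pool creation, assuming hypothetical da0 through da16 device names (FreeNAS itself builds this from the GUI and labels members by gptid):

# 8 two-way mirrors striped together into one pool
zpool create tank \
  mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 \
  mirror da8 da9 mirror da10 da11 mirror da12 da13 mirror da14 da15
# the PCIe SSD as a separate intent log (SLOG)
zpool add tank log da16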
I have started some bench testing, running Iometer on 2 guests so far, on just one XenServer host,
set up like this: http://community.atlantiscomputing....e-Iometer-to-Simulate-a-Desktop-Workload.aspx
Starting tomorrow I will be able to connect 5 ESXi servers and run a pair of guests off each.
I'm doing ill-advised things like turning off checksums (I won't ask for help when it corrupts my data)
and seeing how that changes the Iometer score.
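The checksum toggle is a single dataset property; a sketch of what gets flipped between runs, assuming a hypothetical tank/vmstore dataset:

zfs set checksum=off tank/vmstore   # benchmark runs only: corruption now goes undetected
zfs set checksum=on tank/vmstore    # restore the default algorithm afterwards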
I have atime off and vfs.zfs.prefetch_disable=1 set.
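For reference, how those two are set from a stock FreeBSD shell, assuming the pool root dataset is tank (on FreeNAS proper, the loader tunable would go in the System -> Tunables page instead of loader.conf):

zfs set atime=off tank                                   # stop rewriting access times on every read
sysctl vfs.zfs.prefetch_disable=1                        # disable file-level prefetch at runtime
echo 'vfs.zfs.prefetch_disable=1' >> /boot/loader.conf   # make it stick across reboots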
I will compare NFS sync with the SSD SLOG vs. async, and see if the difference is worth the additional risk.
So far, average IO response is 28 ms with checksums off, and 62 ms with checksums on using sync NFS and the SSD; 580 IOPS vs. 275 IOPS respectively.
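The sync vs. async comparison also comes down to one property on the NFS-exported dataset; a sketch, again assuming the hypothetical tank/vmstore:

zfs set sync=standard tank/vmstore   # honor NFS sync writes; the SLOG SSD absorbs the ZIL traffic
zfs set sync=disabled tank/vmstore   # "async": acknowledge before data is stable (loss risk on power failure)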
More numbers as they are gathered.
The intended use case is to attach 20 ESXi hosts, with 10 MS 2008R2 guests hosted.
Students won't do anything more than install the servers and vMotion them back and forth a couple of times.
The ESXi hosts will boot off internal storage.
Right now I am getting about a 93% hit rate on my ARC; how do I know if I need more RAM?
It seems like as long as that number stays above, say, 90%, I have enough...
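One way to put a number on that from the shell, using the FreeBSD arcstats counters (these are cumulative since boot, so sample twice to get a live rate over an interval):

# raw hit/miss counters
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# overall hit ratio as a percentage
sysctl -n kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses | \
    awk 'NR==1{h=$1} NR==2{m=$1} END{printf "%.1f%%\n", 100*h/(h+m)}'

As a rough rule of thumb, if that ratio sags as the working set grows (more hosts, more guests), more RAM would be the likely fix.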