FreeNAS with vCenter and 10GB Ports

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Just hook it to your controller like your other hard drives. Then in FreeNAS you can designate it to be your SLOG.

Aha, so I would be losing one WD Red 2 TB drive, or two if I have two SSD SLOG drives. In that case I would prefer a PCIe SLOG device, since I have an open PCI Express slot. Agree?
 

Zredwire

Explorer
Joined
Nov 7, 2017
Messages
85
Aha, so I would be losing one WD Red 2 TB drive, or two if I have two SSD SLOG drives. In that case I would prefer a PCIe SLOG device, since I have an open PCI Express slot. Agree?

A PCIe SLOG is almost always superior to a SATA SSD SLOG because it has lower latency. Just make sure your SLOG has power-loss protection. The problem with the PCIe SLOG is that it is usually pretty expensive. You could get a PCIe controller for $80 (actually as low as $30) plus the $120 SSD SLOG, and it would still cost less than a PCIe SLOG. It's really your choice; if you have the funds, go with the PCIe SLOG.

EDIT: Or is it your case rather than your controller that is limiting how many hard drives you can currently have?
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
A PCIe SLOG is almost always superior to a SATA SSD SLOG because it has lower latency. Just make sure your SLOG has power-loss protection. The problem with the PCIe SLOG is that it is usually pretty expensive. You could get a PCIe controller for $80 (actually as low as $30) plus the $120 SSD SLOG, and it would still cost less than a PCIe SLOG. It's really your choice; if you have the funds, go with the PCIe SLOG.

EDIT: Or is it your case rather than your controller that is limiting how many hard drives you can currently have?

Yes, it's the case rather than the controller that is limiting the number of drives I can have. I would like to achieve 12 TB of storage with the 6 WD Reds, but the server only has 6 bays, which leaves me only one option: a PCIe controller SLOG, or a PCIe controller with an SSD drive.

It's amazing what you learn from this forum. Thank you for your insight, much appreciated.

Marlon
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Dell R710 with 6 WD Red 2TB NAS 5400 RPM drives
8 Gigabit Ethernet ports
2 x 10Gb SFP+ ports (Intel X520-DA2)
24 GB memory
IBM M1015 HBA card

Really, you probably want much larger drives. And more of them. See any of the zillion times I've talked about fragmentation and VM block storage.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
JGreco,

Thank you for your input. Because of budget constraints, that's all I could afford.
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Yes, it's the case rather than the controller that is limiting the number of drives I can have. I would like to achieve 12 TB of storage with the 6 WD Reds, but the server only has 6 bays, which leaves me only one option: a PCIe controller SLOG, or a PCIe controller with an SSD drive.

It's amazing what you learn from this forum. Thank you for your insight, much appreciated.

Marlon

If you use mirrored vdevs, which is highly recommended for your use case, you're only going to have ~6TB of storage, not 12TB. With your current hardware and build plans, your performance is not going to be very good; 10-12 VMs, even with a very light workload, won't perform well. It sounds like this project is constrained by a low budget. You'd really need to do some tuning and optimization to make it useful if you can't increase your disk count. A decent SLOG is highly recommended, and then you need to make sure you set the zvol's ZFS option sync=always.

To make better use of the storage space, you'd need to make sure you properly configure your ESXi hosts and guest VMs, especially the Exchange server, for what's called "in-guest SCSI UNMAP support." Several things need to be set up properly for this to work: an iSCSI device-based zvol on FreeNAS, the ESXi host enabled for it, guest VM support, thin-provisioned disks for the VMs, and the proper NTFS allocation unit size (this depends on the ESXi build; 6.5 with the latest updates is highly recommended). Most of the savings will likely be seen on your Exchange VM, but you'd want to configure the databases and log files to be stored on an NTFS-formatted VMDK with a 32K or 64K allocation unit size.
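For the sync=always piece, here is a minimal sketch from the FreeNAS shell. The pool name tank and zvol name vm-storage are placeholders for illustration; substitute your own:

  # create a sparse (thin-provisioned) zvol to back the iSCSI extent
  zfs create -s -V 2T tank/vm-storage
  # force every write through the SLOG instead of trusting the initiator's sync flags
  zfs set sync=always tank/vm-storage
  # confirm the setting took
  zfs get sync tank/vm-storage

With sync=always, the power-loss-protected SLOG absorbs the latency penalty; without a fast SLOG this setting would make writes painfully slow.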
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I would like to achieve the 12 TB
Also, if you use mirrored vdevs with 2TB drives, you will not have 12TB of storage; you will only have about 5.5TB, and you should only use half of that when working with iSCSI.
I wish you had mentioned how much storage you wanted earlier; if you did, I missed it. For 12TB usable with iSCSI, where you are not supposed to fill the pool beyond 50%, you need a 24TB pool, and that equates (with mirrors) to about 48TB of raw storage.
That is a huge difference. You will need 6 x 6TB drives, not 6 x 2TB drives.
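Spelling the arithmetic out (rough numbers, ignoring TB-vs-TiB and metadata overhead):

  6 x 2TB as striped mirrors: 3 vdevs x 2TB = 6TB pool; 50% iSCSI ceiling = ~3TB usable
  for 12TB usable: 12TB x 2 (50% rule) = 24TB pool; 24TB x 2 (mirrors) = 48TB raw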
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
If you use mirrored vdevs, which is highly recommended for your use case, you're only going to have ~6TB of storage, not 12TB. [...]

Very interesting comments on the size of the storage, and you are all right. The drives are arriving today from Amazon as we speak. I based my storage needs on my current requirements, which are no more than 5 TB, and the effect of the mirrored drive configuration on total usable space with iSCSI had slipped my mind as well.

I may suck it up and return the drives for 6 x 6TB drives.

Also, can you recommend a good PCIe controller for the Intel S3700 SSD drive?

Thank you.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
If you use mirrored vdevs, which is highly recommended for your use case, you're only going to have ~6TB of storage, not 12TB. [...]

Great comments and feedback. As I just mentioned to Chris, I might invest in 6 x 6TB drives.

Can you recommend a good PCIe controller for an Intel 100GB S3700 SSD drive?

Thank you,
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Great comments and feedback. As I just mentioned to Chris, I might invest in 6 x 6TB drives.

Can you recommend a good PCIe controller for an Intel 100GB S3700 SSD drive?

Thank you,

Why do you need another card? Doesn't the M1015 support 8 drives without an expander?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
For a system in production, what RAID is recommended for the 6 WD drives, RAID 10?
If you use the 'Wizard' to do the initial configuration and select 'Virtualization', the configuration will be RAID-10 equivalent, which is striped mirror vdevs in ZFS.
The screen looks like this:
[Screenshot: Virt-Volume.PNG]
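For reference, the pool the Wizard builds there is equivalent to this from the command line. This is only a sketch to show the layout; the disk names da0 through da5 are placeholders, and on FreeNAS you would normally let the GUI create the pool:

  # three 2-way mirrors striped together (RAID-10 equivalent)
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
  # verify: zpool status should show three mirror vdevs
  zpool status tank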
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Chris,

So I cancelled my Amazon order for 6 x 2 TB WD Reds and will get 6 x 4 TB WD Reds from eBay, which is a better value for the buck. For the amount of storage I really need, 12 TB in a mirrored RAID-10 ZFS configuration will suffice.

The only thing I am looking for now is a good PCIe controller card for the Intel S3700 SSD SLOG device. Any suggestions?

Thank you,
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
So I cancelled my Amazon order for 6 x 2 TB WD Reds and will get 6 x 4 TB WD Reds from eBay, which is a better value for the buck. For the amount of storage I really need, 12 TB in a mirrored RAID-10 ZFS configuration will suffice.
Keep in mind that with iSCSI, you can't fill the pool beyond 50%. That is why I suggested 6TB drives. I did the math and that is what you need, unless you don't actually need 12TB of usable storage.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Keep in mind that with iSCSI, you can't fill the pool beyond 50%. That is why I suggested 6TB drives. I did the math and that is what you need, unless you don't actually need 12TB of usable storage.

Chris, you are indeed correct. I don't need anywhere close to 12TB of usable storage; probably only a third of that is what I really need.

Any recommendations on a PCIe controller with power-loss protection for the Intel S3700 SSD drive?

Thank you,
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Chris, you are indeed correct. I don't need anywhere close to 12TB of usable storage; probably only a third of that is what I really need.

Any recommendations on a PCIe controller with power-loss protection for the Intel S3700 SSD drive?

Thank you,

The S3700 has power-loss protection built into it. You just need an available SATA port to plug it in. If you have no onboard ports that will work, then another PCIe HBA is all you need, like the one you already have, plus a breakout cable for it. With only 6 bays in your server I'm not sure where you're gonna put it, but I'm sure you can figure it out.
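Once the S3700 shows up as a disk, attaching it as the SLOG is a one-liner from the shell. A sketch, assuming the pool is named tank and the SSD enumerates as da6 (both placeholders):

  # add the SSD as a dedicated log (SLOG) device
  zpool add tank log da6
  # it should now appear under the "logs" section of the pool layout
  zpool status tank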
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
The S3700 has power-loss protection built into it. You just need an available SATA port to plug it in. [...]

Awesome, and thanks for your reply. I'm on the hunt for a PCIe HBA on eBay.
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Awesome, and thanks for your reply. I'm on the hunt for a PCIe HBA on eBay.
Be careful of the fake crap from China! Try to source something from the USA, like an HP H220 (LSI 2308 chip).
 