FreeNAS with vCenter and 10GB Ports


bigphil

Patron
Joined
Jan 30, 2014
Messages
486
I have no experience with that type of card, so you'll have to get someone else's opinion on it.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
It's not an HBA card, so I'm not sure it will work. Basically, the design allows the SSDs to be installed right onto the card instead of using breakout cables.
 

Zredwire

Explorer
Joined
Nov 7, 2017
Messages
85
I wonder if an M.2 SSD will work

I don't know of any that are optimal for a SLOG. You need power loss protection, high write endurance, and low latency. If you can find those things, I don't see why it would not work.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
I don't know of any that are optimal for a SLOG. You need power loss protection, high write endurance, and low latency. If you can find those things, I don't see why it would not work.

From what people are saying, PLP is built into the Intel S3700 SSD; I just have to find a home for it. Either suck it up and buy a PCIe Intel S3700, or look for a PCIe controller that can work with the S3700 at low latency.
 

Zredwire

Explorer
Joined
Nov 7, 2017
Messages
85
From what people are saying, PLP is built into the Intel S3700 SSD; I just have to find a home for it. Either suck it up and buy a PCIe Intel S3700, or look for a PCIe controller that can work with the S3700 at low latency.

Yes, the S3700 has PLP. I was talking about an M.2 drive; I am not aware of an M.2 drive with PLP.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If you're so budgetarily constrained that the swap of 6TB for 2TB drives was difficult, I would point out that you might be better off skipping the SLOG. The SLOG causes a performance reduction but buys a guarantee that things written to the pool actually get written to the pool. This is important if you're a business and your VM's are doing transactional stuff you'd hate to screw up. For a home user, FreeNAS crashing or losing power is an unlikely scenario, and when combined with the performance hit of SLOG, and the cost of SLOG, you might opt to take the slight risk of running without SLOG. If you go that route, you will want to make sure you power off all your VM's if your filer crashes or loses power. If that's an acceptable tradeoff, then you end up with a less expensive solution that also performs better.
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
The SLOG causes a performance reduction
Only if sync=disabled, or sync=standard when using iSCSI with ESXi in the OP's scenario. OP stated that this setup is for a small business...I wouldn't recommend running with either mode, only sync=always, hence the recommendation for a good SLOG device. The pool doesn't have close to enough disks that an on-pool ZIL would be faster than a SLOG. I'd still highly recommend a good SLOG...just not sure about your choice of that PCIe SATA card jobby.
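For reference, sync behavior is a per-dataset ZFS property that can be checked and set from the FreeNAS shell. A minimal sketch, assuming a hypothetical pool tank with a hypothetical zvol tank/vm-zvol backing the iSCSI extent:

# Show how sync writes are currently handled (pool/zvol names here are hypothetical)
zfs get sync tank/vm-zvol

# Require every write to reach stable storage (ZIL/SLOG) before it is acknowledged
zfs set sync=always tank/vm-zvol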
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The SLOG causes a performance reduction
Only if sync=disabled, or sync=standard when using iSCSI with ESXi in the OP's scenario.

That makes no sense. The SLOG stops causing a performance reduction with sync=disabled but also becomes pointless.

OP stated that this setup is for a small business...I wouldn't recommend running with either mode, only sync=always.

That ought to be dependent on the risk/reward calculation.

The pool doesn't have close to enough disks that an on-pool ZIL would be faster than a SLOG.

That also makes no sense. There is literally no situation where the in-pool ZIL would be faster than a SLOG.

https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
That makes no sense. The SLOG stops causing a performance reduction with sync=disabled but also becomes pointless.



That ought to be dependent on the risk/reward calculation.



That also makes no sense. There is literally no situation where the in-pool ZIL would be faster than a SLOG.

https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

first point...You're taking what I said out of context by omitting what you said, that it reduces performance. While I suppose it's true to an extent, if you don't use it (meaning having the ZFS options set the way I said) then you're potentially putting company data at risk, with the only benefit being increased write speed. You're reading my text wrong, or I'm no English major, or both.
second point...ultimately up to the business leaders.
third point...incorrect. If you had a very large pool of disks in a mirrored vdev setup, say 44 10k or 15k SAS disks with an S3700 SLOG, or better yet, a large pool of SSDs, then an SSD SLOG would be slower than an on-pool ZIL (with a PCIe NVMe drive, maybe not). The point is, it's definitely possible, just unlikely in most scenarios, and that was my point.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Here is my recent update:

Returned the 6 x 2TB WD Reds for 6 x 4TB WD Red drives. I am on the fence about a SLOG for a number of reasons:

1. The Dell R710 has only 6 bays, so I need to find a way to integrate an Intel S3700 SSD into the system using one of the available PCI Express slots. I haven't found any PCI Express cards to do that yet; some people on here have suggested ideas, but they were untested. An Intel P3700 400GB SLOG device is very pricey on eBay at ~$400, so out of budget.

2. I am playing catch-up and learning about SLOG devices and the technology on here, as I am new to FreeNAS, so given my business case I am not sure whether or not a SLOG device is required.

I started this network redesign because my network was pretty much an ad hoc collection of stuff with a few band-aids here and there, and I wanted something more stable, scalable, and secure, avoiding single points of failure where possible. That said, my data is critical to a certain degree, which is why I have scheduled backups. The criteria for a solid network come at a certain price, and with a limited budget I have to prioritize what I need and don't need. Hence the question of SLOG integration.

Thank you,
 

Zredwire

Explorer
Joined
Nov 7, 2017
Messages
85
2. I am playing catch-up and learning about SLOG devices and the technology on here, as I am new to FreeNAS, so given my business case I am not sure whether or not a SLOG device is required.

So basically, when there is a request to write to disk, the request can be synchronous or asynchronous. I am no expert on what decides which type, but it looks like the running application requests the type. In ESXi, if you use NFS, the ESXi kernel will always request synchronous writes; if you use iSCSI, this is not the case, and I believe the program in the VM determines the request type.

For FreeNAS, a synchronous request means that FreeNAS must wait until the write is committed to permanent media before it returns the OK to the requester. This means the requester (in your case ESXi) must wait for the acknowledgement, and if you are waiting on spinning disk, that wait can be slow to very slow. With an asynchronous request, FreeNAS can reply once it has the request cached in memory and does not have to wait until it is actually written to disk. This greatly speeds things up, as you can imagine. The problem comes when you have a stoppage of some kind (NAS lockup, power loss, etc.): it is possible to lose the requests that were still in memory and not yet written to disk. This may or may not be a big deal, depending on what was in memory.

Since you are planning to run entire VMs from your NAS, I would suggest you tell FreeNAS to always use synchronous writes on your iSCSI drive, and then use a fast SLOG device to speed those writes up. How FreeNAS actually does the writes I will leave to you to research (it writes to a ZIL before data goes to the pool). So the decision to use a SLOG to hold your ZIL comes down to whether you are okay with possible corruption or loss of data if there is a NAS lockup, power failure, etc.

Once again, I am not an expert on exactly all the requests that go on, but you get the big picture of what can happen if you choose not to use synchronous writes.
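Also worth knowing: a SLOG can be attached to or removed from an existing pool at any time, so you don't have to decide up front. A minimal sketch from the shell, assuming a hypothetical pool named tank and a hypothetical NVMe device nvd0:

# Attach a dedicated log (SLOG) device to an existing pool ("tank" and "nvd0" are hypothetical)
zpool add tank log nvd0

# Remove it again later; the pool falls back to the in-pool ZIL
zpool remove tank nvd0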
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
first point...You're taking what I said out of context by omitting what you said, that it reduces performance.

I literally quoted what you quoted of mine. I did not omit it.

So let me be extra-crispy clear here. The SLOG will cause a performance reduction, if it is used, and if it isn't used, then there is no frickin' point in having a SLOG device, so better, then, not to spend the money.

While I suppose it's true to an extent, if you don't use it (meaning having the ZFS options set the way I said) then you're potentially putting company data at risk, with the only benefit being increased write speed. You're reading my text wrong, or I'm no English major, or both.

The benefit of increased write speed is very significant to most environments. If I can tell you "I can give you a massive speed increase, while decreasing your cost, for a modest increase in risk," many environments take that.

second point...ultimately up to the business leaders.

Actually, it's usually up to the IT people or their manager; it's usually not something that reaches the CTO.

third point...incorrect. If you had a very large pool of disks in a mirrored vdev setup, say 44 10k or 15k SAS disks with an S3700 SLOG, or better yet, a large pool of SSDs, then an SSD SLOG would be slower than an on-pool ZIL (with a PCIe NVMe drive, maybe not). The point is, it's definitely possible, just unlikely in most scenarios, and that was my point.

So yes there are edge cases where you're doing something idiotic, but even for the SAS HDD's you would find that not to be true. Basically you need to create a situation where the SLOG device is actually *slower* than the individual pool devices, but who would do that? The write path for the SLOG device is optimized for SLOG, whereas the write path to the in-pool ZIL follows the whole pool write path, and is not optimized for the needs of the ZIL. So if you create a pool of mirrored S3700 SSD's, and try to use in-pool ZIL on that, and then compare it to a separate SLOG device, you'll still find out that the SLOG works better because you're not going through the general pool write path, and instead using the optimized SLOG write path. You actually have to go full-stupid and move on to using a crappy SSD or HDD that's slower than your S3700 pool. You could also put your SLOG on magtape. But all of these cases are idiotic.

You seem to be thinking that using the in-pool ZIL would offer you the benefits of the parallelization potential of lots of devices, but in reality the ZIL serializes commits and you don't get the massive parallelization that, if it actually happened, might give your argument a chance at success. Even then, it would only work in cases where you had a large bunch of writers attempting to write in parallel. A single writer is always going to be dependent on the lock-step nature of sync writes, so the best hope for ZIL performance for a single writer or small number of concurrent writers is to optimize the hell out of the SLOG write path, and to use an extremely low latency SLOG device, so that the traversal from write() call down to the ZIL and back up happens as quickly as possible.

https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

Please read the section on "Laaaaaaaatency." This is all about latency, not potential throughput.
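Since latency is what matters, a quick way to compare candidate SLOG devices is the synchronous-write latency test that newer FreeBSD builds include in diskinfo. A minimal sketch, assuming a hypothetical spare device nvd0; the -w flag allows destructive writes, so never point it at a disk holding data:

# Synchronous write latency test (-w enables destructive writes; device name is hypothetical)
diskinfo -wS /dev/nvd0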
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
I literally quoted what you quoted of mine. I did not omit it.

So let me be extra-crispy clear here. The SLOG will cause a performance reduction, if it is used, and if it isn't used, then there is no frickin' point in having a SLOG device, so better, then, not to spend the money.
Yes, and I've agreed with this the whole time...I'm sure I could have worded my sentence better. End of story.

The benefit of increased write speed is very significant to most environments. If I can tell you "I can give you a massive speed increase, while decreasing your cost, for a modest increase in risk," many environments take that.
Agreed. It may be beneficial in this user's case to think about creating multiple zvols: one with sync=always for business-critical apps like his/her Exchange server, etc., and one with sync=disabled for increased speed where data can tolerate slightly higher risk, although the former would require a SLOG or at least the in-pool ZIL.
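A minimal sketch of that split from the shell, with hypothetical pool and zvol names and sizes (sync is an ordinary dataset property, so it can be set at creation time):

# Zvol for business-critical apps: every write must reach stable storage before it is acknowledged
# ("tank/exchange", "tank/scratch", and the sizes are hypothetical)
zfs create -V 500G -o sync=always tank/exchange

# Zvol for lower-value data: async writes allowed, faster but at risk across a crash or power loss
zfs create -V 500G -o sync=disabled tank/scratch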

Actually, it's usually up to the IT people or their manager; it's usually not something that reaches the CTO.
Again, I agree, and this is how my company works, but you're generalizing. I was speaking to the fact that this is a 20-person shop, and it may be wise to talk this over with somebody so the risks/rewards are understood by more than just the "IT guy." No offense, OP ;-)

So yes there are edge cases where you're doing something idiotic, but even for the SAS HDD's you would find that not to be true. Basically you need to create a situation where the SLOG device is actually *slower* than the individual pool devices, but who would do that? The write path for the SLOG device is optimized for SLOG, whereas the write path to the in-pool ZIL follows the whole pool write path, and is not optimized for the needs of the ZIL. So if you create a pool of mirrored S3700 SSD's, and try to use in-pool ZIL on that, and then compare it to a separate SLOG device, you'll still find out that the SLOG works better because you're not going through the general pool write path, and instead using the optimized SLOG write path. You actually have to go full-stupid and move on to using a crappy SSD or HDD that's slower than your S3700 pool. You could also put your SLOG on magtape. But all of these cases are idiotic.
Yes, I also agree with this. I absolutely should have left that comment out, because no correct setup would ever configure a system like that. I was merely alluding to the fact that it's possible to have an in-pool ZIL that would perform decently without a SLOG if you had a crazy setup. You said, "There is literally no situation where the in-pool ZIL would be faster than a SLOG." While there is no reasonable setup where you'd likely ever want this, it is possible, and that was the point. People do dumb things with their systems all of the time; you of all people can attest to that. Again...should have left this out, as it doesn't apply here.

This is already getting ridiculous though. I don't want to get into an internet quoting match. We're both here to help and give insight into the options for this setup. You know your stuff and I've never said otherwise. Sometimes these conversations and ideas are hard to convey over the interwebs.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
I am learning a lot just by reading the feedback on here. I think what I will do is deploy my FreeNAS without a SLOG and see how it performs. I understand the consequences of going without the SLOG device. If I feel I need to add it, I can always add it later.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
So basically, when there is a request to write to disk, the request can be synchronous or asynchronous. I am no expert on what decides which type, but it looks like the running application requests the type. In ESXi, if you use NFS, the ESXi kernel will always request synchronous writes; if you use iSCSI, this is not the case, and I believe the program in the VM determines the request type.

For FreeNAS, a synchronous request means that FreeNAS must wait until the write is committed to permanent media before it returns the OK to the requester. This means the requester (in your case ESXi) must wait for the acknowledgement, and if you are waiting on spinning disk, that wait can be slow to very slow. With an asynchronous request, FreeNAS can reply once it has the request cached in memory and does not have to wait until it is actually written to disk. This greatly speeds things up, as you can imagine. The problem comes when you have a stoppage of some kind (NAS lockup, power loss, etc.): it is possible to lose the requests that were still in memory and not yet written to disk. This may or may not be a big deal, depending on what was in memory.

Since you are planning to run entire VMs from your NAS, I would suggest you tell FreeNAS to always use synchronous writes on your iSCSI drive, and then use a fast SLOG device to speed those writes up. How FreeNAS actually does the writes I will leave to you to research (it writes to a ZIL before data goes to the pool). So the decision to use a SLOG to hold your ZIL comes down to whether you are okay with possible corruption or loss of data if there is a NAS lockup, power failure, etc.

Once again, I am not an expert on exactly all the requests that go on, but you get the big picture of what can happen if you choose not to use synchronous writes.

Zredwire,

Perfect, that was a very good explanation and thank you for that.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Team,

Update: Got my 6 x 4TB WD Reds and an LSI 9211-8i HBA, and flashed it to IT mode, no problem. I also installed a dual-port X520-DA2 10Gb SFP+ card in my FreeNAS box for iSCSI traffic. FreeNAS also has a 4-port 1Gb Ethernet card that will be used for management traffic. My switch has 4 x 10Gb SFP+ ports, into which my other two ESXi hosts, each with a dual-port X520-DA2 10Gb SFP+ card and a 4-port 1Gb Ethernet card, will get plugged for iSCSI traffic and management traffic/vMotion.

I have been doing some research, and from what I have learned so far, you use Volume Manager to create a pool. Your pool can contain a number of vdevs, and each vdev can have a single disk or multiple disks for striping, mirroring, or RAID.

My current storage, as we speak, is no more than 4TB, and that's generous.

Example 1: Usable Storage = 12 TB after losing 50% due to disk mirroring

vdev 1 = 2 x 4TB Mirrored
vdev 2 = 2 x 4TB Mirrored
vdev 3 = 2 x 4TB Mirrored

Example 2: Usable Storage = 8 TB after losing 2/3 due to the 3-way mirrors; usable storage is 1/3 of raw capacity in a 3-way mirror.

vdev 1 = 3 x 4TB Mirrored
vdev 2 = 3 x 4TB Mirrored

I will probably go with Example 1 as my use case reflects the benefits of mirrored disks.

Question: This was somewhat confusing. I read in another article that all disks should be in the same pool, but if you have multiple separate vdevs/pools, then that's not possible, as the same article also mentioned you can stripe between the vdevs?

So, can you (or would you) do 3 vdevs/pools, each with 2 mirrored disks, and stripe between vdevs 1, 2, and 3, or do you do one pool with all 6 disks mirrored and forget about the stripe?

Lastly, do you gain anything by introducing striping with mirrored vdevs?

Thank you,

Marlon
 

Zredwire

Explorer
Joined
Nov 7, 2017
Messages
85
Don't get Pools and VDEVs mixed up. Pools are made up of VDEVs; VDEVs are made up of hard drives. As for mirrored VDEVs: when you add multiple mirrored VDEVs to a Pool, FreeNAS will "stripe" across the mirrored VDEVs. It's not really striping per se, but it balances the writes out over the VDEVs, thus giving you increased performance like a stripe would.
Pools are separate. You cannot stripe across multiple Pools or use disks (VDEVs) in more than one Pool.
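In zpool terms, Example 1 above is one pool built from three mirror vdevs, created in a single step. A minimal sketch, with hypothetical device names:

# One pool, three 2-way mirror vdevs; ZFS balances writes across all three (da0..da5 are hypothetical)
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

# Confirm the resulting layout
zpool status tank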
 