FreeNAS datastore(s) becoming inaccessible during/after heavy load

SubnetMask

Contributor
Something else to consider: while I almost never do huge numbers of storage vMotions, that was the one way I found to reliably duplicate the 'crash' I was getting when trying to suspend and shut down a bunch of machines in a hurry during a power failure, where I want to get everything down gracefully while preserving the running state of a handful of VMs. That said, the storage vMotions are NOT (or at least SHOULD not be) related to iSCSI. Since FreeNAS/TrueNAS (supposedly) supports VAAI, that traffic should not be passing over iSCSI - VMware should essentially be sending a command that says 'move this machine from this datastore to that datastore and report back when done', and the storage does the copy itself (an oversimplified description, I know).
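(As a side note, a quick way to sanity-check whether ESXi actually sees the offload primitives on a given LUN is to query the device from the host - something along these lines, where the naa.* ID is just a placeholder for whatever device is backing the FreeNAS/TrueNAS datastore:)

```
# List block devices and note the naa.* ID of the FreeNAS/TrueNAS LUN
esxcli storage core device list

# Show per-primitive VAAI status (ATS, Clone/XCOPY, Zero, Delete) for that
# device - replace the naa.* ID below with your actual device ID
esxcli storage core device vaai status get -d naa.6589cfc000000abcdef
```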

Now, that all being said, to comment on what WI_Hedgehog posted: these drives are all SAS (not SATA) drives, and the ones in my main FreeNAS that are part of the striped mirror my VMs boot off of originally came out of IBM/Lenovo servers, if memory serves (I'm not pulling one to check, lol). The larger disks that make up my RAIDz3 pool are HGST Helium SAS disks. All of that sits on Dell hardware through and through, so it's not garden-variety cheap consumer nonsense - the least 'enterprise' components in the entire chain (and even calling them that is debatable) would be the Supermicro enclosures.

One of my main complaints about FreeNAS/TrueNAS, or maybe about ZFS in general, is the absolutely ridiculous waste of disk space - saying that one should only be using 10-20% of the disk space is utterly absurd. While I get that the nature of the filesystem can leave holes and such, making it harder and harder to write data properly as it fills up, I'm rather surprised there isn't some sort of background defrag patrol built in that runs during periods of lower usage to streamline disk usage and preserve or open up large swaths of free sectors.
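(To be fair, ZFS does at least expose how fragmented a pool's free space has become, so you can watch it creep up over time - something like this, with 'tank' standing in for the real pool name:)

```
# Pool capacity plus the FRAG column (fragmentation of free space, not of
# file data) - 'tank' is a placeholder pool name
zpool list tank
zpool get capacity,fragmentation tank
```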

It's also crazy to suggest that a volume becoming inaccessible because it's being hit hard is 'normal'. Individual operations slowing down, or the overall system plateauing - sure, absolutely understandable and, honestly, expected. But no matter how hard you hit it, it should never go offline or become inaccessible like this. I've dealt with quite a few storage systems - IBM/Lenovo, HP and Dell RAID controllers with internal arrays, EqualLogic, Compellent, PowerVault, Promise, and probably one or two I can't recall - and I've NEVER had one push a volume offline just because it was being hammered. If iXsystems is selling TrueNAS as an 'Enterprise Product' (which they are), this CAN'T be right. Like I said, I can understand performance tanking or plateauing when you start really thrashing the storage, but it should NEVER go offline like this. To be fair, I haven't tried TrueNAS yet in my testing to see whether the problem still exists there, but that goes back to the block size and VMFS issues from my other thread that I got no input on.

I should also note that while I haven't gone crazy testing on my main FreeNAS - because, well, I really don't want to crash it or push any of its volumes offline - I have moved multiple VMs simultaneously to my RAIDz3 volume and it hasn't blown up (only powered-off test VMs live on my z3 volume; its purpose in life is to hold two large VMDKs that contain my media library and general data volumes). It SEEMS to only affect striped mirrors. Maybe I'll have to try that on the test machine.

One other funny side note: to say that RAID10/striped mirrors and so on are the king of all configurations and will always outperform RAID5/6/RAIDz is not universally true. Years ago I was using a Promise vTrak (which I'd actually still be using now if it didn't top out at 4TB disks - and only VERY SPECIFIC SAS disks at that), and I was going to use a RAID10 setup for my VMs because 'conventional wisdom' always said RAID10 is the fastest. Well, it wasn't. Not even close. My benchmarks on the RAID10 setup SUCKED. So I reached out to Promise, and they told me no, the unit is optimized for RAID5/6, so I created RAID5 volumes to test with, and those absolutely blew the doors off the RAID10 setup. With the MASSIVE amount of processing power and memory available to FreeNAS/TrueNAS compared to that Promise array, I'm rather surprised that RAIDz supposedly provides 'no performance benefit beyond one disk's worth of performance'.

I guess the TL;DR is: if the 'issue' were "I'm trying to copy/move a lot of data and it's going really slow", and the evidence showed that the disks were being hammered and performance was suffering (say one vMotion runs at 150MB/s, but when three run at the same time they each only get 50MB/s), then saying 'you need more disks to increase performance' would be totally acceptable. But hammering the disks shouldn't cause the volumes to become inaccessible.
 
WI_Hedgehog
You're right, hammering the system shouldn't cause it to break. Post a full hardware list (kind of one of the requirements of the forum rules anyway) and we'll try to poke holes in it (and maybe we can't).

ZFS kind of has anti-fragmentation built in; from the high-level documents I've read, it's really involved and how it works depends on many factors, but "they try."

The other thing you could try is some benchmarks that will hammer the server just as hard yet eliminate vMotion.
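Something along these lines, run directly on the NAS, takes vMotion, iSCSI and the network completely out of the picture (paths and pool name are placeholders, and fio would have to be installed from packages or run from a jail):

```
# Random 16k writes from several parallel jobs - roughly what a pile of
# simultaneous storage vMotions looks like to the pool. Point --directory
# at a dataset on the pool under test.
fio --name=hammer --directory=/mnt/tank/fiotest --ioengine=posixaio \
    --rw=randwrite --bs=16k --size=4g --numjobs=4 --iodepth=16 \
    --runtime=300 --time_based --group_reporting
```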
 

SubnetMask

Contributor
You're right, hammering the system shouldn't cause it to break. Post a full hardware list (kind of one of the requirements of the forum rules anyway) and we'll try to poke holes in it (and maybe we can't).

ZFS kind of has anti-fragmentation built in, from the high-level documents I've read it's really involved and how it works depends on many factors, but "they try."

The other thing you could try is some benchmarks that will hammer the server just as hard yet eliminate vMotion.
Well, I'm not sure how much more detailed I can get than what I have in my sig, but here goes...

Both my main and test FreeNAS setups are running 11.1-U7, and both are Dell PowerEdge FC630s - the main with a single Xeon E5-2699v3 and 256GB RAM, the test with two E5-2680v3s and 128GB RAM. The main has two LSI-based Dell 12Gb/s SAS HBAs; the test has two SAS9207-4i4e HBAs.

In my main setup (and I'll save a few keystrokes, or not: EVERYTHING is SAS, NO SATA), I have ten 2TB disks that were originally in IBM/Lenovo systems, configured as striped mirrors (basically RAID10) for all of my VMs to boot and work off, plus a two-disk mirror for my Blue Iris DVR data (these are in one enclosure). In the second enclosure, I have eleven HGST Helium SAS disks (mostly 8TB, plus a few 10TB that replaced 8TB disks that hadn't 'failed' per se, but were having issues) in a RAIDz3. One thing I've learned is DO NOT extend pools between chassis - I've always felt this way, but it really came to a head one time when I put an empty caddy into a slot and somehow it shorted something and the whole enclosure shut down. I literally s*** myself - fortunately, I shut everything down PDQ, got the enclosure back up and booted the FreeNAS controller, and all was well - but had even one disk not been in that enclosure, things could have been REALLY bad...

The enclosures are Supermicro 2U 12-bay units - I'm not sure of the exact model number, but they have the EL2 (dual expander) backplanes connecting back to the HBAs with Dell external SAS cables.
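(For the curious, the VM pool is the usual stack of two-way mirrors; a rough command-line sketch of that layout - device and pool names are placeholders, and in practice the FreeNAS GUI builds the equivalent when you add disks two at a time as mirrors - would be:)

```
# Striped mirror ("RAID10") of ten disks: five two-way mirror vdevs
zpool create vmpool \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9
```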

And again, the reason I'm still using 11.1-U7 is that, aside from this, it's been ROCK SOLID. I had serious issues with 11.2 or 11.3 (don't recall which), and while I believe that was on an R710, not an xx30, one common component is the Broadcom 10Gb NICs, and there weren't any ideas as to the cause of those issues... And then, with regard to moving to TrueNAS, there are the unanswered issues around TrueNAS block size and VMware VMFS creation... Well...

Not exactly the 'I have a Pentium D and 2GB of non-ECC RAM and I can't figure out why this isn't working' situation, lol.

At my job prior to where I am now, we always half-joked (with EMPHASIS on half - it was actually nearly 100% serious...) that I had a better network than most of our customers, lol. It's basically a miniature replica, aside from the storage, of what I work with all day every day, and I use it as my sandbox for testing upgrades and such before actually deploying them, which has saved my butt more than once. And if it adds anything, my core switch is a ProCurve 5406ZL2 with 10Gbps SFP+ ports that link the FN410S 'IO Aggregators' into the main network.

I'm a total nerd, and I don't like dealing with janky **** like I had to deal with at my last job... What I have may not be the 'latest and greatest', but it's still decent and absolutely enterprise grade.

Any more detail I can provide?
 

SubnetMask

Contributor
So I had another instance of a volume going offline on my main FreeNAS when I was trying to shut things down. This time I wasn't in a hurry and was trying to be gentle, only suspending one machine at a time, but I think my mistake was shutting down a few at once - which, theoretically, should be less intense than suspending a few at the same time... Rather crazy that it breaks like that.

That being said, I decided to give TrueNAS another shot and installed TrueNAS 13.0-U4 on my other, nearly identical FC630 that I had been using for testing FreeNAS 11. The first thing I noticed this time was that after I created the pools and zvols, then configured all of the iSCSI sharing and whatnot, when I tried to create a VMFS6 datastore in VMware... IT WORKED! Previously, as stated in my other thread, when I just created the pools and zvols on TrueNAS and shared them out without any changes, VMFS6 datastore creation would fail unless I changed the block size to something smaller than the default/recommended value (I believe the largest I could set and still have a VMFS6 datastore created successfully was 64K), and that came with a warning from TrueNAS about it being 'not recommended'. That 'appears' to be resolved.

So I've been beating it up for about a week now. I started by simply vMotioning several machines from one datastore to the other and suspending all running machines at once, then moved up to vMotioning several running machines from Datastore 1 to Datastore 2 while simultaneously vMotioning several other machines from Datastore 2 to Datastore 1, and, knock on wood, so far I have NOT been able to break it, unlike FreeNAS 11. Moving that many machines in both directions at the same time is pretty slow, but let's be honest: that's to be expected. I'm completely fine with it being slow when it's being beaten up. What I'm NOT OK with is it breaking the way it does in FreeNAS 11 - like WI_Hedgehog said, that shouldn't break it. Slow things down? ABSOLUTELY. Break it? No. The most important part is that so far, I haven't been able to break it. So I'm CAUTIOUSLY OPTIMISTIC that upgrading to TrueNAS 13.0-U4 might actually be the ticket to resolving this issue. At this point I'm not sure what else I can do to try to break it, since this kind of activity (well, a little less, actually) has pretty reliably broken the test FreeNAS 11 instance, and technically less abuse has broken my main FreeNAS instance.
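(For context on the block size piece: the zvols behind the iSCSI extents were created with the GUI defaults. A rough command-line equivalent - pool/zvol names and sizes are placeholders, and the volblocksize shown is only an example rather than whatever the GUI actually picks - would be:)

```
# Sparse zvol to back an iSCSI extent - names/sizes are placeholders and the
# volblocksize is just an example; TrueNAS normally chooses one for you
zfs create -s -V 4T -o volblocksize=16K vmpool/ds1

# Confirm what actually got set
zfs get volsize,volblocksize,refreservation vmpool/ds1
```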
 

jgreco

Resident Grinch
common component is the Broadcom 10Gb NICs, and since there weren't any ideas as to the cause of the issues

I specifically disrecommend Broadcom chipsets as hit-or-miss in the 10 Gig Networking Primer; they are often a bit flaky, and lots of people have had issues with them. Anything that can cause network traffic to pause or delay is a killer on an iSCSI setup, so it is strongly recommended to use the chipsets known to work well, especially if you are planning stuff like vMotion.
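If you want to confirm exactly which chipset and driver each box is binding, something like this should show it (interface and vmnic names will obviously differ per system):

```
# FreeBSD/FreeNAS: PCI network devices plus the attached driver
# (bxe*/bnxt* = Broadcom, ix*/ixl* = Intel, cxgbe* = Chelsio)
pciconf -lv | grep -B4 -i network

# ESXi side: which driver each vmnic is using
esxcli network nic list
```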
 

SubnetMask

Contributor
I specifically disrecommend Broadcom chipsets as hit-or-miss in the 10 Gig Networking Primer, they are often a bit flaky. Lots of people have had issues with them. Anything that can cause network traffic to pause or delay is a killer on an iSCSI setup, so it is strongly recommended to use the chipsets known to work well especially if you are planning stuff like vMotion.
Well, for what little it may be worth, ALL of the VMware hosts at work have the same Broadcom network adapters as my machines, and I've never had the slightest hiccup on any of them in roughly a decade. :shrug:

The other thing to consider with regard to your comment is that if FreeNAS and TrueNAS DO support VAAI, as they supposedly do (and the evidence suggests they do), a Storage vMotion between datastores on the same array isn't really iSCSI traffic - it's handled by the storage device, with the only traffic essentially being status updates back to VMware on the progress of the move.
 

jgreco

Resident Grinch
Well, for what little it may be worth, ALL of the VMWare hosts at work have the same Broadcom network adapters as my machines, and I've never had the slightest hiccup on any of them in roughly a decade. :shrug:

Yeah, that's great, but that's VMware using Broadcom's drivers for VMware. Hypervisors are an important target platform for ethernet chipset manufacturers, because if they suck, they will get yanked and replaced, and never bought from that brand again in the future. Windows is usually an important target platform too. Linux, FreeBSD, not as much. Intel and Chelsio have both put immense efforts into their driver performance and quality for both FreeBSD and Linux, with dedicated teams of driver authors. Broadcom apparently hasn't. Without teams maintaining not only the drivers for the latest silicon but also drivers for older silicon, awesomeness fades and user bases become disenchanted. At one time, Adaptec was the undisputed champion of SCSI adapters for Linux and FreeBSD with the Adaptec 154x, but the following quarter of a century saw that shine fail as their commitment to the open source community faded.
 

SubnetMask

Contributor
Yeah, that's great, but that's VMware using Broadcom's drivers for VMware. Hypervisors are an important target platform for ethernet chipset manufacturers, because if they suck, they will get yanked and replaced, and never bought from that brand again in the future. Windows is usually an important target platform too. Linux, FreeBSD, not as much. Intel and Chelsio have both put immense efforts into their driver performance and quality for both FreeBSD and Linux, with dedicated teams of driver authors. Broadcom apparently hasn't. Without teams maintaining not only the drivers for the latest silicon but also drivers for older silicon, awesomeness fades and user bases become disenchanted. At one time, Adaptec was the undisputed champion of SCSI adapters for Linux and FreeBSD with the Adaptec 154x, but the following quarter of a century saw that shine fail as their commitment to the open source community faded.
Fair points, but as I mentioned (and to be fair, I probably edited that in while you were drafting your reply), where the storage array supports VAAI, functions like Storage vMotion are not iSCSI functions - they're handled internally by the storage array. That also doesn't address why, knock on wood, the problem 'seems' to be gone in my testing on TrueNAS 13.0-U4. It may very well be why the 11.2 or 11.3 variant was randomly rebooting, though. I'm not necessarily against acquiring an Intel 10G NDC for the FreeNAS/TrueNAS machine; however, the question I always pose to people troubleshooting issues (I deal with a lot of people who want to throw everything at the wall and see what sticks) is: what if we make this change and the problem remains? (And in my job, where people want to throw in everything including the kitchen sink, I usually end up being right, lol.) So far, knock on wood, the issue hasn't re-emerged with TrueNAS 13.0-U4.

Oh, and actually, I should walk that back a small bit - I HAVE had issues with the Broadcom 10Gb NICs on VMware, but ONLY where VMware's qfle3 driver was involved; where the bnx2x driver was involved, it was always rock solid. Of course, the bnx2x driver is no longer an option on ESXi 7 and above.
 

jgreco

Resident Grinch
where the storage array supports VAAI, functions like Storage vMotion are not iSCSI functions - that's handled internally by the storage array.

You may have misunderstood VAAI to be a monolithic set of functions. Many times VAAI is only partially supported, and in such cases the VMware host reverts to traditional methods to complete an operation. See, for example, https://kb.vmware.com/s/article/1021976

What happens if I have VAAI enabled on the host but some of my disk arrays do not support it?

When storage devices do not support or provide only partial support for the host operations, the host reverts to its native methods to perform the unsupported operations.

VAAI is a set of opportunistic operations that can enhance the performance of a vSphere cluster. There are numerous cases where VAAI offload operations cannot be used at all. From the KB:

For any primitive that the array does not implement, the array returns an error, which triggers the ESX data mover to attempt the operation using software data movement. In the case of ATS, it reverts to using SCSI Reservations.

VAAI hardware offload cannot be used when:
  • The source and destination VMFS volumes have different block sizes
  • The source file type is RDM and the destination file type is non-RDM (regular file)
  • The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
  • The source or destination VMDK is any kind of sparse or hosted format
  • Cloning a virtual machine that has snapshots because this process involves consolidating the snapshots into the virtual disks of the target virtual machine.
  • The logical address and/or transfer length in the requested operation is not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
  • The VMFS datastore has multiple LUNs/extents spread across different arrays
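It is also worth confirming that the offload primitives have not simply been switched off on the host side; from memory, the relevant advanced options look something like this (1 = enabled, which is the default):

```
# If any of these got set to 0, ESXi does software copies even when the
# array supports the primitives
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
```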
 

SubnetMask

Contributor
You may have misunderstood VAAI to be a monolithic set of functions. Many times VAAI is only partially supported, and in such cases, the VMware host reverts to traditional methods to complete an operation. See for ex. https://kb.vmware.com/s/article/1021976



VAAI is a set of opportunistic operations that can enhance the performance of a vSphere cluster. There are numerous cases where VAAI offload operations cannot be used at all, including
I get it (and you're also suggesting that FreeNAS/TrueNAS does NOT support VAAI, or at least not entirely - VMware will try to use VAAI, and if the array doesn't support a given function, VMware falls back), but going down the 'VAAI hardware offload cannot be used when:' list:

Nope - both datastores are provisioned exactly the same
Nope - no RDMs at all
Nope - thin to thin
Nope
Nope - no snapshots, and even if there were, it's not cloning a VM, it's moving it from one container to another
Should be a nope - both datastores were created by FreeNAS/TrueNAS and formatted as VMFS by VMware
Nope - everything is on the same array. If you're vMotioning from one array to a different one, well, DUH, it doesn't apply, because the two arrays can't orchestrate the move between containers and the data has to be transferred over iSCSI (or NFS, or whatever links are involved).


Based on my network traffic observations, FreeNAS AND TrueNAS support at least the Storage vMotion portion of VAAI: in my vMotion tests with both FreeNAS 11 and TrueNAS, while the storage vMotions are in progress, the network traffic is essentially nil.
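(For anyone wanting to reproduce the observation, I'm essentially just watching interface and disk activity on the NAS while a Storage vMotion runs - roughly like this, with the interface being whatever your 10Gb link actually is:)

```
# Per-interface traffic on the FreeNAS box; with copy offload working, the
# 10Gb interfaces stay nearly idle during a Storage vMotion while the disks
# stay busy
systat -ifstat 1

# In another session: per-disk activity
gstat -p
```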
 

jgreco

Resident Grinch
where the storage array supports VAAI, functions like Storage vMotion are not iSCSI functions

Well, the primitives actually could likely be considered as such; VMware categorizes VAAI for BLOCK ("iSCSI") as a different set of things than VAAI for NAS ("NFS"). TrueNAS is VMware certified for VAAI with iSCSI, but last I heard not for NFS.


and you're also suggesting that FreeNAS/TrueNAS does NOT support VAAI, or at least not entirely

No, I'm suggesting I've been watching this since the days of ESXi 4.1 and have quietly encouraged the development of VAAI on FreeNAS. It is not a single thing and it is not a simple thing, so it requires more than generalizations to do it justice.
 

SubnetMask

Contributor
Well, the primitives actually could likely be considered as such; VMware categorizes VAAI for BLOCK ("iSCSI") as a different set of things than VAAI for NAS ("NFS"). TrueNAS is VMware certified for VAAI with iSCSI, but last I heard not for NFS.




No, I'm suggesting I've been watching this since the days of ESXi 4.1 and have quietly encouraged the development of VAAI on FreeNAS. It is not a single thing and it is not a simple thing, so it requires more than generalizations to do it justice.
Again, fair point, but I strayed a bit - I commented on NFS, which isn't relevant to my situation, and for that I apologize. But 'TrueNAS is VMware certified for VAAI with iSCSI, but last I heard not for NFS' means TrueNAS DOES support iSCSI VAAI (and iSCSI is all that applies to me), and based on what I've read, along with the network utilization during Storage vMotions, so does FreeNAS 11 (I believe it may have been introduced somewhere around FreeNAS 9?). So the point is that the way I've been beating it up is not beating up iSCSI or the NICs - it's been (mostly) beating up FreeNAS. TrueNAS hadn't been a factor in my testing because, as mentioned, every version of TrueNAS I'd played with until 13.0-U4 had problems with datastores being formatted VMFS6 unless the TrueNAS volume was provisioned with a 'non-standard' block size, which TrueNAS complained about - that was why going that route was a hard stop, especially since no one could give any feedback on the issue. With TrueNAS 13.0-U4 appearing to work as expected without any 'tweaking', that changes things for TrueNAS - hence why I was just testing that.

It's entirely possible that the issue I've been seeing is purely some sort of FreeNAS issue that was banished to oblivion with TrueNAS, but since the block size/VMFS6 issue kept coming up and no one could provide any feedback, I abandoned TrueNAS at that time. Since that issue seems to be resolved in TrueNAS 13.0-U4, I started testing with that version, and the results are promising - it may be the ticket to putting this issue to bed.

Again, the longstanding issue hasn't been performance - if you have 150MB/s of transfer and you move one machine, it gets 150MB/s; you can't expect ten machines moving at once to get 150MB/s each - they'll get roughly 15MB/s each. Individual performance will suck while aggregate performance stays the same. The problem has been FreeNAS basically crashing under load (or not all that much load) and making one or more datastores unavailable. That shouldn't happen, and it's unacceptable. Based on current info, it seems the issues I've had are a FreeNAS problem of some sort, which no one really cares about now because, in reality, FreeNAS is yesterday's product. And you can't blame volume fragmentation or usage percentage, because I've seen the same issues on my main FreeNAS pools that you'd call 'over-utilized' as on my test zvols that have maybe 150GB out of 4TB actually used - PLENTY of space for writes.
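(Those usage numbers come straight from ZFS, by the way - something along these lines, with the pool/zvol names as placeholders:)

```
# Space actually consumed by the test zvol vs. its nominal size, plus overall
# pool capacity - 'vmpool/ds1' and 'vmpool' are placeholders
zfs get volsize,used,referenced,available vmpool/ds1
zpool list vmpool
```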
 

jgreco

Resident Grinch
The problem has been FreeNAS basically crashing under load (or not all that much load) and making one or more datastores unavailable. That shouldn't happen and is unacceptable. Based on current info, it seems that the issues I've had are a FreeNAS issue of some sort, which no one really cares about because, in reality, it's yesterday.

The time to complain about this would have been some years ago, so I still don't really understand what the point is here. The TrueNAS CORE iSCSI stuff was rewritten by an iXsystems staffer as a high-performance in-kernel service, so it's not like the issue would have gone unaddressed. As it is, this really boils down to "NASware gets better over time, whodathunk".
 

SubnetMask

Contributor
The time to complain about this would have been some years ago, so I still don't really understand what the point is here. The TrueNAS CORE iSCSI stuff was rewritten by an iXsystems staffer as a high performance in-kernel service so it's not like the issue would have gone unaddressed. As it is, this really boils down to "NASware gets better over time, whodathunk".
Very true, but for whatever reason it never cropped up until well after FreeNAS was EOL and replaced by TrueNAS. The point of the thread was to figure out what was causing the issue and see if there was a way to resolve it. Fortunately, I discovered that the block size/VMFS6 issue present in earlier versions of TrueNAS has apparently been fixed (that issue, along with the lack of responses/comments about it, is why I originally abandoned the idea of upgrading to TrueNAS), which led me down the path of trying TrueNAS again - and that does, at this point, seem to be the ticket.
 