High CPU creating zpool

rimbalza

Cadet
Joined
Sep 21, 2020
Messages
5
Hi, I have a problem creating a new pool on my FreeNAS-11.3-U4.1. When I use the GUI the wait icon spins forever (I mean days) and a python process eats up two full cores. I read threads suggesting SMART or the GUI itself could be the culprit, so I created the pool from the command line and even disabled SMART. Now it is the zpool process taking the same CPU, and it seems never-ending (it's been 4 hours now, so I suppose it is the same as with the GUI).
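For the record, the CLI attempt looked roughly like this (pool name and disk device below are placeholders, not my real ones); the sketch uses zpool's -n dry-run flag so it only validates the layout, and it just prints the command on a box without zpool installed:

```shell
#!/bin/sh
# Rough sketch of the CLI pool creation. POOL and DISK are placeholders;
# check `camcontrol devlist` for the real device name. The -n flag makes
# zpool do a dry run: it validates the vdev layout without writing anything.
POOL=replica
DISK=/dev/da1
CMD="zpool create -n $POOL $DISK"
if command -v zpool >/dev/null 2>&1; then
    $CMD
else
    # zpool not installed here; just show what would run
    echo "would run: $CMD"
fi
```

Dropping -n performs the real creation, and in my case that is the step that spins at full CPU.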
And here are the details... This is a Hyper-V VM (so no SMART problems here), 4 Xeon cores and 12 GB of RAM. System storage is a local disk on the Hyper-V host; the data storage is an iSCSI disk on the Hyper-V host configured as pass-through to the FreeNAS box. The real iSCSI backend is a Netgear device.
Why the hell am I doing this?
I have a properly sized FreeNAS working on HP hardware. I needed a replica of all of its data in another plex inside the campus. I cannot (customer limits) put another physical box in the other plex, so I resorted to using this configuration, since the Netgear NAS is in the other plex.
This is suboptimal of course, but that machine will (hopefully) never need to serve any live data.
The question is: why does it take forever, with two spinning cores, to create that 8 TB volume? Looking at the network traffic, it generates barely any load on the remote iSCSI server. The FreeNAS dashboard itself says it is not using any memory, only a lot of CPU.
Is there some reason it should behave this way? I can leave it running even for days if needed, but I would like to know if it will ever finish.
Thank you
 

rimbalza

[screenshot: process list with zpool and kernel threads pegging the CPU]

This is the status of the server now. No zfs/zpool command is working; it seems something is locked.
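For anyone curious where such a command is stuck, a sketch of the check I would run (procstat is FreeBSD-only, and the PID below is a placeholder; on other systems the script just echoes the command):

```shell
#!/bin/sh
# Sketch: inspect where a hung zpool process is blocked in the kernel.
# procstat -kk prints the kernel stack trace per thread (FreeBSD only);
# a process stuck in disk I/O typically cannot even be killed with -9.
PID=1234   # placeholder; take the real PID from `top` or `ps ax`
if command -v procstat >/dev/null 2>&1; then
    procstat -kk "$PID"
else
    echo "would run: procstat -kk $PID"
fi
```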
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Is there some reason it should behave this way?
I can't say that I fully understand why the system is behaving the way it is, but I would say that this is not the way FreeNAS expects to be used and that may have caused it to run into some kind of software black hole that it will eventually crash over.
You can run FreeNAS virtualized, but it is best to present FreeNAS with direct access to physical disks. Let FreeNAS take those physical disks and turn them into a ZFS Pool.
Having said that, I recognize you are constrained by what the organization will allow.
I would try presenting the storage to the virtual FreeNAS in a different way. Here are a couple of links that might provide useful information:


"Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data.
https://www.ixsystems.com/community...ide-to-not-completely-losing-your-data.12714/

Virtually FreeNAS ... an alternative for those seeking virtualization
https://www.ixsystems.com/community...ative-for-those-seeking-virtualization.26095/

FreeNAS 9.10 on VMware ESXi 6.0 Guide
https://b3n.org/freenas-9-3-on-vmware-esxi-6-0-guide/
 

rimbalza

Thank you Chris for replying; there are two different things here, methinks.
I think the black hole is actually there because, as you can see in the screenshot above, both zpool and the kernel were pretty busy doing essentially nothing, since the iSCSI backend did not receive any commands. Also, this being a test, and before your reply, I had thought about presenting the disk to FreeNAS in a different way. So I tried to shut down the VM. From inside it took a very long time to perform the shutdown, and the result was as follows:
[screenshot: hung zpool status command, plus the highlighted Hyper-V CPU counter still busy after shutdown]

You can see:
- the zpool status command was hung (no surprise)
- FreeBSD was shut down, but something inside the kernel was still revving at high speed, as you can see from the highlighted Hyper-V counter. So the system was not really dead, even with the shutdown completed.

About the storage, I took the long and inefficient way I had tried to avoid: I mapped the iSCSI volume to the Hyper-V host, formatted it and assigned it a letter in Windows (really a path/mount point, but it is the same). This confirmed the iSCSI volume on the backend was healthy. Then I added a VHDX disk to FreeNAS on that volume. FreeNAS saw the disk and created the pool in one minute. This of course is not efficient, since there are two file system levels: ZFS inside FreeNAS and then NTFS inside the Windows host. Also, it is not easy to match the size of the VHDX file to the available iSCSI volume, so I lose some space. In the end, this being a replica host, performance is not a great concern.
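Before building the pool on the VHDX route, the sanity check on the FreeNAS side was along these lines (a sketch; camcontrol and geom are FreeBSD tools, so elsewhere the commands are only echoed):

```shell
#!/bin/sh
# Sketch: confirm FreeBSD actually sees the newly attached disk before
# creating the pool on it. On non-FreeBSD systems the commands are only
# printed, so the sketch stays harmless to run anywhere.
for c in "camcontrol devlist" "geom disk list"; do
    if command -v "${c%% *}" >/dev/null 2>&1; then
        $c
    else
        echo "would run: $c"
    fi
done
```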

So it definitely works, but something bad is happening at the FreeBSD layer when using pass-through disks on Hyper-V. I think it would be interesting to investigate, not for this un(usual|supported) scenario but for the OS itself.
Thanks
 