Hello,
I have set up a FreeNAS system in ESXi with an HBA passed through. At times I see memory usage spike to consume all 35 GB of RAM (according to the reporting tool), and during these spikes the system is completely unresponsive: SSH, HTTPS, network sharing, and shell access all stop working. I can't determine the root cause, or whether it is an issue with Samba or with FreeNAS itself, and I am not very familiar with FreeBSD. This is preventing some services (namely system backups and larger file transfers) from working. I have debugging enabled on both SMB and syslog in an attempt to capture more information.
Clients accessing this are three Ubuntu 18.04 machines, two Windows 10 machines, one Windows Server machine, and two macOS devices.
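To try to catch the spike in the act, I've started logging memory and ARC statistics from a detached shell session on the FreeNAS VM. This is just a rough sketch of what I'm running (the log path and interval are arbitrary choices of mine):

```shell
#!/bin/sh
# Periodically log ZFS ARC size, free/wired memory, and the top memory
# consumers, so the state just before a hang is preserved on disk.
# /var/log/memwatch.log is an arbitrary location I picked.
LOG=/var/log/memwatch.log
while true; do
    date >> "$LOG"
    # Current ARC size and its configured maximum (FreeBSD ZFS kstat sysctls)
    sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max >> "$LOG"
    # Free and wired page counts
    sysctl vm.stats.vm.v_free_count vm.stats.vm.v_wire_count >> "$LOG"
    # Ten largest processes by resident size, batch (non-interactive) mode
    top -b -o res 10 >> "$LOG"
    sleep 10
done
```

The idea is that even if SSH and the shell die during the spike, the last few log entries should show whether it's the ARC or a userland process (e.g. smbd) that ate the RAM.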
Configuration-wise:
Bare metal hardware:
Code:
CPU: AMD Ryzen 7 1700X
Motherboard: ASRock X470 Master SLI/ac
RAM: 64 GB Corsair DDR4 ECC
SSD1: Intel S3520
SSD2: OCZ Vertex 4
GPU: EVGA GT 710
HBA: LSI 9211-8i
PSU: Seasonic 400W Platinum
OS: ESXi 6.7
FreeNAS 11.2 "hardware":
Code:
4 CPU cores
35 GB RAM
LSI 9211-8i (PCI passthrough)
6 x 6TB WD Red
1 x 120GB Intel SSD (ZIL)
zpool status:
Code:
  pool: zfs_pool
 state: ONLINE
  scan: none requested
config:

	NAME                                                STATE     READ WRITE CKSUM
	zfs_pool                                            ONLINE       0     0     0
	  raidz2-0                                          ONLINE       0     0     0
	    gptid/993c2ebc-a1d5-11e8-97ef-000c294af712.eli  ONLINE       0     0     0
	    gptid/9b428677-a1d5-11e8-97ef-000c294af712.eli  ONLINE       0     0     0
	    gptid/9d7ee4ea-a1d5-11e8-97ef-000c294af712.eli  ONLINE       0     0     0
	    gptid/9f88b60b-a1d5-11e8-97ef-000c294af712.eli  ONLINE       0     0     0
	    gptid/a1c84e0e-a1d5-11e8-97ef-000c294af712.eli  ONLINE       0     0     0
	    gptid/a3e93bef-a1d5-11e8-97ef-000c294af712.eli  ONLINE       0     0     0
	logs
	  gptid/a56f4e93-a1d5-11e8-97ef-000c294af712.eli    ONLINE       0     0     0

errors: No known data errors
zfs list
Code:
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                                869M  10.3G    64K  none
freenas-boot/ROOT                                           869M  10.3G    29K  none
freenas-boot/ROOT/Initial-Install                             1K  10.3G   866M  legacy
freenas-boot/ROOT/default                                   869M  10.3G   867M  legacy
zfs_pool                                                   1.68T  19.9T   176K  /mnt/zfs_pool
zfs_pool/.system                                           73.2M  19.9T   192K  legacy
zfs_pool/.system/configs-c6df7f068c2c4171917a68f70a853917   631K  19.9T   631K  legacy
zfs_pool/.system/cores                                     4.10M  19.9T  4.10M  legacy
zfs_pool/.system/rrd-c6df7f068c2c4171917a68f70a853917      19.4M  19.9T  19.4M  legacy
zfs_pool/.system/samba4                                     647K  19.9T   647K  legacy
zfs_pool/.system/syslog-c6df7f068c2c4171917a68f70a853917   48.1M  19.9T  48.1M  legacy
zfs_pool/.system/webui                                      176K  19.9T   176K  legacy
zfs_pool/home                                               983K  19.9T   983K  /mnt/zfs_pool/home
zfs_pool/storage                                            441G  19.9T   411G  /mnt/zfs_pool/storage
zfs_pool/storage/games                                     29.7G  1.97T  29.7G  /mnt/zfs_pool/storage/games
zfs_pool/media                                             1.25T  2.78T  1.25T  /mnt/zfs_pool/media
zpool get all zfs_pool
Code:
NAME      PROPERTY                       VALUE                  SOURCE
zfs_pool  size                           32.5T                  -
zfs_pool  capacity                       4%                     -
zfs_pool  altroot                        /mnt                   local
zfs_pool  health                         ONLINE                 -
zfs_pool  guid                           10932289593781872771   default
zfs_pool  version                        -                      default
zfs_pool  bootfs                         -                      default
zfs_pool  delegation                     on                     default
zfs_pool  autoreplace                    off                    default
zfs_pool  cachefile                      /data/zfs/zpool.cache  local
zfs_pool  failmode                       continue               local
zfs_pool  listsnapshots                  off                    default
zfs_pool  autoexpand                     on                     local
zfs_pool  dedupditto                     0                      default
zfs_pool  dedupratio                     1.60x                  -
zfs_pool  free                           30.9T                  -
zfs_pool  allocated                      1.58T                  -
zfs_pool  readonly                       off                    -
zfs_pool  comment                        -                      default
zfs_pool  expandsize                     -                      -
zfs_pool  freeing                        0                      default
zfs_pool  fragmentation                  1%                     -
zfs_pool  leaked                         0                      default
zfs_pool  bootsize                       -                      default
zfs_pool  checkpoint                     -                      -
zfs_pool  feature@async_destroy          enabled                local
zfs_pool  feature@empty_bpobj            active                 local
zfs_pool  feature@lz4_compress           active                 local
zfs_pool  feature@multi_vdev_crash_dump  enabled                local
zfs_pool  feature@spacemap_histogram     active                 local
zfs_pool  feature@enabled_txg            active                 local
zfs_pool  feature@hole_birth             active                 local
zfs_pool  feature@extensible_dataset     active                 local
zfs_pool  feature@embedded_data          active                 local
zfs_pool  feature@bookmarks              enabled                local
zfs_pool  feature@filesystem_limits      enabled                local
zfs_pool  feature@large_blocks           active                 local
zfs_pool  feature@sha512                 enabled                local
zfs_pool  feature@skein                  enabled                local
zfs_pool  feature@device_removal         enabled                local
zfs_pool  feature@obsolete_counts        enabled                local
zfs_pool  feature@zpool_checkpoint       enabled                local
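One thing that stands out to me in the output above is dedupratio 1.60x, which means deduplication is (or was) enabled on at least one dataset. From what I've read, the dedup table (DDT) needs to stay resident in RAM to perform well, and a DDT that spills out of the ARC can cause exactly this kind of stall. A rule of thumb I've seen quoted is roughly 320 bytes of DDT per unique block; with ~1.58T allocated and 128K records that would be on the order of (1.58T / 128K) x 320 B, about 4 GB, and smaller record sizes would inflate that considerably. These are the commands I'm running to confirm (dataset names are from my pool):

```shell
# Show which datasets in the pool have dedup enabled
zfs get -r dedup zfs_pool

# Print dedup table (DDT) statistics, including on-disk and in-core sizes
zpool status -D zfs_pool
```

If the in-core DDT size turns out to be a large fraction of the 35 GB, that would at least explain where the memory is going.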