One of our FreeNAS servers (11.0-U2) has crashed due to swap space on a failed drive, taking out Windows file sharing and the web GUI. This issue is apparently resolved in 11.1: https://redmine.ixsystems.com/issues/23523
Same issue as https://forums.freenas.org/index.ph...crash-with-unexplained-log.53865/#post-372899
Glad to see it fixed, as despite having 32GB of RAM the system was apparently using swap on that hard drive for critical services.
Question is: does the update automatically mirror swap on updated systems, or would I be better off rebuilding all our servers to prevent this from happening again?
Note: we have had hard drives fail before, but this is the first time it's taken the GUI and other services out.
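In the meantime, here is a quick sketch of how to check whether a system's swap is already mirrored, assuming shell/SSH access to the box (on 11.1+, FreeNAS creates encrypted swap on top of a GEOM mirror, which shows up as `mirror/swapN.eli`):

```shell
# Sketch only: inspect swap layout on a FreeBSD/FreeNAS system.
# Output varies per system; run from an SSH session or the console shell.

# List active swap devices. Mirrored, encrypted swap appears as
# /dev/mirror/swap0.eli (and so on); unmirrored swap appears as raw
# per-disk partitions such as /dev/ada0p1.eli or /dev/gptid/<uuid>.eli.
swapinfo -h

# Show the state of any GEOM mirror devices backing swap.
gmirror status
```

If `swapinfo` shows per-disk `.eli` partitions rather than `mirror/swapN.eli` devices, the swap on that system is not mirrored and a single failed drive can still take out paged-in services.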
Lots of this on screen:
Code:
swap_pager: I/O error - pagein failed; blkno 3146169, size 16384, error 6
vm_fault: pager read error, pid 1652 (devd)
swap_pager: I/O error - pagein failed; blkno 3149379, size 4096, error 6
vm_fault: pager read error, pid 6001 (zfsd)
swap_pager: I/O error - pagein failed; blkno 3146144, size 8192, error 6
Email from that morning
Code:
Checking status of zfs pools:
NAME          SIZE   ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH    ALTROOT
SQL           888G   246G    642G         -   23%  27%  1.00x  ONLINE    /mnt
WKGS           65T  11.0T   54.0T         -   13%  16%  1.26x  DEGRADED  /mnt
freenas-boot  7.44G  2.11G   5.33G        -     -  28%  1.00x  ONLINE    -
  pool: WKGS
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 0 in 21h57m with 0 errors on Sun Apr 22 21:57:38 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        WKGS                                            DEGRADED     0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/e476f71a-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0
            gptid/e590f00f-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0
            gptid/e6a63440-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0
            gptid/e7b826c0-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0
            gptid/e8cd6b93-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0
            gptid/e9e3537a-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0
          raidz2-1                                      DEGRADED     0     0     0
            8473900225476131747                         REMOVED      0     0     0  was /dev/gptid/eaf67c8b-c4d2-11e5-bfa8-0cc47a694230
            gptid/ec095868-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0
            gptid/ed08557f-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0
            gptid/ee006978-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0
            gptid/ef122970-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0
            gptid/f02872d3-c4d2-11e5-bfa8-0cc47a694230  ONLINE       0     0     0

errors: No known data errors
Local system status:
3:01AM up 62 days, 19:25, 0 users, load averages: 0.37, 0.19, 0.12