As I've had some crashes / reboots during a scrub with (repaired) CKSUM errors, I wanted to know whether the scrub errors occurred around the same time as the reboots. For this, I've created a very basic script that runs a 'zpool status' every x seconds and logs the output with timestamps to a logfile...
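Roughly, it's a sketch like the following (simplified; the pool name "tank" and the log path are placeholders, not my actual setup):

```shell
#!/bin/sh
# Sketch of the logger: append one timestamped 'zpool status' snapshot
# per call. "tank" and the log path below are placeholder names.

log_pool_status() {
    pool="$1"
    logfile="$2"
    {
        date '+%Y-%m-%d %H:%M:%S'
        zpool status "$pool"
        echo '----'
    } >> "$logfile"
}

# Poll every 60 seconds (Ctrl-C to stop):
# while :; do log_pool_status tank /var/log/zpool-status.log; sleep 60; done
```

Comparing the timestamps in that log against the reboot times from `uptime` / `last reboot` is then straightforward.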
I have experienced this twice in a month.
Based on the alarm's date-time tag, the boot pool was DEGRADED while scrubbing the boot pool itself.
I have already reinstalled the OS on a new USB stick.
But I'm afraid it will happen again.
My zfs check output is below. vvvvv
This is the first time it has happened:
I replaced one drive in a 2-drive mirror and resilvered the mirror. The system is now reporting permanent errors
in one specific file. The file isn't too important, but I restored it from another copy anyway. Yet zpool status is still reporting
permanent errors on that file. The help...
(sorry for my bad English)
I have an old FreeNAS-9.10.2-U6 (561f0d7a1) running on an old HP ProLiant MicroServer (first gen), using two desktop Seagate 4TB disks in a mirror setup. This is the current status:
status: One or more devices has experienced an...
I posted this to freebsd-fs but I imagine it's of interest here as well.
On one of my systems I noticed a severe performance hit for scrubs since the sequential scrub/resilver was introduced. Digging into the issue, I found that the system was stressing the L2ARC severely during the...
Does the following seem reasonable for a home server? I followed this guide.
Day 01: scrub
Day 08: SMART(long)
Day 15: scrub
Day 22: SMART(long)
[4AM for all]
[Scrubs: threshold: 10 days.]
Day 05: SMART(short)
Day 12: SMART(short)
Day 19: SMART(short)
Day 26: SMART(short)
[3AM for all]
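In crontab form, that schedule would look roughly like this (a sketch only; "tank" and "/dev/ada0" are placeholders, and on FreeNAS these jobs are normally configured through the GUI rather than in a hand-edited crontab):

```shell
# min hour dom        mon dow  command
0     4    1,15       *   *    /sbin/zpool scrub tank                       # scrubs, 4 AM
0     4    8,22       *   *    /usr/local/sbin/smartctl -t long /dev/ada0   # long SMART tests, 4 AM
0     3    5,12,19,26 *   *    /usr/local/sbin/smartctl -t short /dev/ada0  # short SMART tests, 3 AM
```

The 10-day scrub threshold would be handled by FreeNAS's scrub task setting rather than by cron itself.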
Hello, I have a FreeNAS 8 setup that I plan to migrate soon. A scheduled scrub ran yesterday, and during the scrub it found a lot of 'MEDIUM ERRORs' on two disks of my zpool. My first instinct was to replace both of these disks, but when I checked zpool status, the scrub was still...
My FreeNAS box had been running FreeNAS-11.1-U2 perfectly fine since it came out, and had been running continuously for about three months. Then, a couple hours ago, BAM! -- it rebooted for no reason I can figure. (I confirmed that the reboot was real via the uptime, which was in the minutes...
So I have a recently installed, pretty vanilla build of FreeNAS 11.1-U4 that I'm trying to use as a file/media server in a Windows environment. It's a 3-drive RAID-Z (encrypted) that I set up with a couple of user accounts/groups and SMB shares, and that's it. I set up the Volume/Datasets and permissions...
I was doing some copies between SMB shares on a Windows client machine... and a Windows installer was running in a Bhyve VM, when my FreeNAS became completely unresponsive. I couldn't SSH into it, access any shares, or interact with the web GUI at all. So I did a hard reset.
When I booted back...
I've googled a lot but cannot find any solution to my problem:
The ZFS scrub is stuck at 56.66% and does not continue:
root@freenas:~ # zpool status -v Prim
status: Some supported features are not enabled on the pool. The pool can
still be used, but some...
I have about 10 scheduled scrubs (default setting, 35 days) that were working fine on 11.0-U2 but started emailing me errors after updating to 11.0-U3; the errors also persist with 11.0-U4.
What is strange is that the weekly scrub for the boot drive is working fine. The other scrubs send me...
Yesterday, I had my drive pool configured as a set of mirrored drives.
While copying some large files, I looked at the console and saw errors. I did a zpool status and found 175 write errors on my da1 drive.
Today I reformatted the disks as a RAID-Z3 pool. After doing a zpool status, I don't see...
I'm doing a read-through of Replacing a Failed Drive for my FreeNAS 9.10 system before actually going through the process.
There is a note after the first step (which is to OFFLINE the disk) that says:
> If the process of changing the disk’s status to OFFLINE fails with a “disk offline failed -...
I got this via email from my FreeNAS box and have no clue what it means. The scrub got started and is running, though.
Cron <root@freenas> PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/root/bin" /usr/local/libexec/nas/scrub -t 35 tank
Exception in thread...