Hi,
I have a problem with the tank. The tank has reached 97% full. How can I deal with this problem?
TQVM
How do you deal with any other full volume? Delete files.
If you give some description of your hardware, it is possible that a better answer could be forthcoming. Please review the forum rules.
Also, you posted back on Mar 28, 2014:
"Hi All,
I'm new here, and still fresh on FreeNAS. Our storage is reaching 98% and writes have started failing. How can I free up the storage without expanding it?
Thanks"
The suggestion for clearing your problem back then was:
"Search for a thread titled - Disk full can't delete any files. Please help.
The answer is in message #3."
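Before deleting anything, it helps to see what is actually consuming the space. A minimal sketch, assuming the pool really is named "tank" as the first post suggests:
Code:
# Overall pool size, allocated space, and free space:
zpool list tank

# Per-dataset breakdown: live data vs. snapshots vs. child datasets:
zfs list -o space -r tank
The USEDSNAP column from the second command shows how much space is pinned by snapshots rather than by live files, which becomes relevant later in this thread.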
"Actually the size of the hard drive is 20TB."
No, it isn't; there are no 20TB hard drives. Perhaps you meant to say that was the size of your pool.
And, as @Chris Moore pointed out, you asked this exact same question in your only other post, almost four years ago. You were given the answer then. What makes you think the answer is different now?
Maybe this: I bet it is snapshots taking up all the space, so you need to delete any old snapshots you don't need. I had that on one of my servers at one point.
Also, please post your zpool status output (in code tags), because you definitely have something wrong with your pool.
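If snapshots do turn out to be the culprit, something along these lines will show and reclaim the space; a rough sketch only, with the pool name ("tank") and the snapshot name as placeholders:
Code:
# List snapshots, sorted so the biggest space consumers appear last:
zfs list -t snapshot -o name,used,referenced -s used -r tank

# Destroy a snapshot you no longer need (the name here is made up):
zfs destroy tank/dataset@auto-20140101.0000-2w

# And the status output this post asks for, ready to paste in code tags:
zpool status -v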
I actually had something more like this in mind:
Code:
  pool: Storage
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 83h8m with 0 errors on Wed Jul 5 11:08:21 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        Storage                                         DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/e30271e3-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            gptid/e3bf3548-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            gptid/e499b128-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            gptid/e57934b8-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            gptid/e654767d-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            13314010698547058984                        UNAVAIL      0    94     0
            gptid/e8043869-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            gptid/e8de61a9-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0

errors: No known data errors
Does the illustration you attached show all the drives that are supposed to be in the system?
I'd guess not; since it goes up to da11, I'd bet there are 12 drives in there. This has fail written all over it.
It appears, from what you have shared, that you have 9 x 3TB drives in a RAID-z1 pool that is completely filled.
If you still need this data, you need to replace that failed drive as a patch to keep the pool alive a little longer; but if you need the functionality going forward, it is time to get a new server or significantly upgrade the one you have.
I would suggest purchasing a replacement system with 10 x 6 TB drives in RAID-z2, which should get you about 31 TB of practical usable storage capacity (a rough version of that math is sketched after the links below). The data can then be copied over from the old system. If you buy a 12-bay server, you could even have a couple of hot spare drives or expand the size of the pool. This might be a good model for you:
https://www.supermicro.com/products/system/2U/5028/SSG-5028R-E1CR12L.cfm
You might want to go directly to iXsystems and get them to give you a quote:
https://www.ixsystems.com/ix-server-family/rackmount-servers/?ix-server=2212-2
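For what it's worth, here is roughly where a number like "about 31 TB" can come from; the overhead factors below are assumptions for a back-of-the-envelope estimate, not exact ZFS accounting:
Code:
# 10 x 6 TB in RAID-Z2: two drives' worth of space goes to parity, then allow
# roughly 10% for metadata/padding and keep the pool under ~80% full.
awk 'BEGIN {
    data_drives = 10 - 2
    raw_tib = data_drives * 6e12 / 2^40
    after_overhead = raw_tib * 0.9
    practical = after_overhead * 0.8
    printf "raw %.1f TiB, after overhead %.1f TiB, practical ~%.1f TiB\n", raw_tib, after_overhead, practical
}'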
"How do I determine which disk has failed? The LED status lights all look normal."
Not in FreeNAS, it doesn't--FreeNAS just doesn't include that capability. Determining which disk has failed is going to be a process of elimination. Go to Storage -> View Disks, and note the serial numbers. If there's a disk identifier (e.g., da4) listed there with a serial number that isn't listed in the volume status page, that's probably the bad one. If all the disks listed there are also listed in the volume status page, then you'll need to power down the system and look for the one disk whose serial number isn't listed there.
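For anyone who prefers the command line, the same elimination process can be done from a shell; a sketch, with example device names:
Code:
# The UNAVAIL member usually shows up as a bare numeric GUID instead of a gptid:
zpool status -v

# Map the gptid/... labels that ARE in the pool back to device names (da0p2, da1p2, ...):
glabel status

# List every disk the controller can still see, then read a disk's serial number:
camcontrol devlist
smartctl -i /dev/da4

# If a disk appears in camcontrol but has no matching gptid in the pool, it is the
# suspect. If the failed disk has dropped off the bus entirely, note the serials that
# are present; the physical drive whose serial is not in that list is the failed one.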
"Or is it related to the SCSI card? Because when I look in the log file, the SCSI card has an error."
If you'd share the error message, we might have some other information. Otherwise, the best we can say is "maybe".
It would also help to post the actual output of zpool status (in code tags). The more information you share, the better our chances of being able to help you.
But what you showed gives enough of the story to tell you that you have one disk that is totally failed and needs to be replaced as soon as possible, because you are only running RAID-z1 and with a failed disk you don't have any redundancy. That means that if another disk fails, you lose all your data. In addition to that, one of the disks that is not failed has 18 errors, which (to me) means it might be on the way to failure as well.
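For the replacement itself, on FreeNAS the usual route is the GUI (Volume Status -> select the UNAVAIL member -> Replace), since it handles the partitioning for you; underneath, that corresponds roughly to the generic ZFS step sketched below. The GUID and device name are placeholders taken from the example output earlier in the thread:
Code:
# Confirm which member is UNAVAIL and note its GUID:
zpool status Storage

# Swap in the new drive, then tell ZFS to rebuild onto it (placeholder names):
zpool replace Storage 13314010698547058984 /dev/da11

# Watch the resilver progress:
zpool status Storage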
It would appear that nobody has been maintaining this system for quite a while. It is many versions behind on the software also, but I wouldn't worry about that right now.
It is probably time to consider a new system here.