Tank reaching 97%

Status
Not open for further replies.

dnet

Dabbler
Joined
Mar 27, 2014
Messages
23
Hi,

I have a problem with my tank volume; it has reached 97%. How can I deal with this problem?

TQVM
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Hi,

I have a problem with my tank volume; it has reached 97%. How can I deal with this problem?

TQVM
If you give some description of your hardware, it is possible that a better answer could be forthcoming. Please review the forum rules.

Also, you posted back on Mar 28, 2014
Hi All,

I'm new here, and still fresh on FreeNAS. Our storage is reaching 98% and write failures have appeared. How can I free up the storage without expanding it?

Thanks
the suggestion for clearing your problem back then was:
Search for a thread titled - Disk full can't delete any files. Please help.

The answer is in message #3.
 

dnet

Dabbler
Joined
Mar 27, 2014
Messages
23
How do you deal with any other full volume? Delete files.

Yes, delete files. Actually, the size of the hard drive is 20 TB. A few days ago usage was up to 18 TB, so I removed some unnecessary files, and now it is 16 TB. But the tank still shows 97% even though the data has been discarded.

TQVM
 

dnet

Dabbler
Joined
Mar 27, 2014
Messages
23
If you give some description of your hardware, it is possible that a better answer could be forthcoming. Please review the forum rules.


Hardware:
CPU - Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Memory - 32742 MB

Any other info you need?

TQVM
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Actually, the size of the hard drive is 20 TB.
No, it isn't; there are no 20 TB hard drives. Perhaps you meant to say that was the size of your pool.

And, as @Chris Moore pointed out, you asked this exact same question in your only other post, almost four years ago. You were given the answer then. What makes you think the answer is different now?
 

dnet

Dabbler
Joined
Mar 27, 2014
Messages
23
No, it isn't; there are no 20 TB hard drives. Perhaps you meant to say that was the size of your pool.

And, as @Chris Moore pointed out, you asked this exact same question in your only other post, almost four years ago. You were given the answer then. What makes you think the answer is different now?

Yes, the size of the pool is 20 TB. OK, I'll give you a screenshot of my real problem for extra info.
[Screenshot attached: upload_2017-11-8_11-0-55.png]


TQVM
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
What version of FreeNAS are you using? And, for the third time, why do you think the answer you were given nearly four years ago has changed?
 

dnet

Dabbler
Joined
Mar 27, 2014
Messages
23
FreeNAS version 9.2.1.2. Four years ago I was transferred to another department, so I could not solve the problem myself; it was resolved by others. About a year ago I was transferred back to this department.

TQVM
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
So, you posted your other thread nearly four years ago. You got one response, but vanished until today. If your pool is continuing to fill up, you should probably just expand it (after you fix it, see below). Otherwise...

If your pool is full, delete some data. If that doesn't free space, it's probably in snapshots, so you can delete them, or wait for them to expire (assuming it's an automatic snapshot task).
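Roughly, from the shell, that looks like this (assuming the pool is named tank, as your thread title suggests; the snapshot name in the last command is only an example):
Code:
# Show, per dataset, how much space is held by snapshots (USEDSNAP column)
zfs list -r -o space tank

# List snapshots, biggest space consumers first
zfs list -t snapshot -o name,used -S used

# Destroy a snapshot you no longer need (the name here is made up)
zfs destroy tank/data@auto-20171001.0000-2w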

And your pool is degraded, so there's likely at least one failed disk.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
I bet that it is snapshots taking up all the space. So, you need to delete any old snapshots you have that you don't need. I had that on one of my servers at one point.
Also, please post the code snip of your zpool status because you definitely have something wrong with your pool.
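If it helps, something like these two commands from the shell would show us what I mean (nothing FreeNAS-specific here; tank is just the pool name from your first post):
Code:
# Pool health, including any files affected by errors - paste this in code tags
zpool status -v tank

# All snapshots with their size and age, oldest first
zfs list -t snapshot -o name,used,creation -s creation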

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

dnet

Dabbler
Joined
Mar 27, 2014
Messages
23
I bet that it is snapshots taking up all the space. So, you need to delete any old snapshots you have that you don't need. I had that on one of my servers at one point.
Also, please post the code snip of your zpool status because you definitely have something wrong with your pool.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
Maybe this..

[Screenshot attached: upload_2017-11-9_0-15-27.png]
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Maybe this..
I actually had something more like this in mind.
Code:
  pool: Storage
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 83h8m with 0 errors on Wed Jul  5 11:08:21 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        Storage                                         DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/e30271e3-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            gptid/e3bf3548-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            gptid/e499b128-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            gptid/e57934b8-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            gptid/e654767d-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            13314010698547058984                        UNAVAIL      0    94     0
            gptid/e8043869-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0
            gptid/e8de61a9-3cb0-11e6-84f5-0cc47a7ca68d  ONLINE       0     0     0

errors: No known data errors

But what you showed gives enough of the story to tell you that one disk has totally failed and needs to be replaced as soon as possible. Because you are only running RAID-z1, with a failed disk you don't have any redundancy, which means that if another disk fails, you lose all your data. In addition, one of the disks that has not failed shows 18 errors, which (to me) means it might be on its way to failure as well.
It would appear that nobody has been maintaining this system for quite a while. It is many versions behind on the software also, but I wouldn't worry about that right now.
It is probably time to consider a new system here.
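For reference, on plain ZFS the replacement itself boils down to something like the sketch below, where the long number is the missing-disk ID from my example output above and <new-device> is only a placeholder. On FreeNAS you would normally do this from the GUI instead (Storage -> Volume Status -> Replace), which also takes care of partitioning the new disk:
Code:
# Rebuild onto the freshly installed disk (placeholder device name)
zpool replace Storage 13314010698547058984 <new-device>

# Watch the resilver until the pool comes back to a healthy state
zpool status Storage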
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Maybe this..
Does the illustration you attached show all the drives that are supposed to be in the system?
It appears, from what you have shared, that you have 9 x 3TB drives in a RAID-z1 pool that is completely filled.
If you still need this data, you need to replace that failed drive as a patch to keep the pool alive a little longer; but if you need this functionality long term, it is time to get a new server or significantly upgrade the one you have.
I would suggest purchasing a replacement system with 10 x 6 TB drives in RAID-z2, which should get you about 31 TB of practical, usable storage capacity. The data can then be copied over from the old system. If you buy a 12-bay server, you could even have a couple of hot spare drives or expand the size of the pool. This might be a good model for you:
https://www.supermicro.com/products/system/2U/5028/SSG-5028R-E1CR12L.cfm
You might want to go directly to iXsystems and get them to give you a quote:
https://www.ixsystems.com/ix-server-family/rackmount-servers/?ix-server=2212-2
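Rough math behind that number, in case your boss asks: 10 drives minus 2 for RAID-z2 parity leaves 8 data drives, and 8 x 6 TB is 48 TB of raw data space, which is roughly 43 TiB; after ZFS overhead and the usual advice to keep a pool below about 80% full, you land somewhere in the low 30s of TiB of comfortably usable space, which is where the ~31 TB figure comes from.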
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Does the illustration you attached show all the drives that are supposed to be in the system?
I'd guess not; since it goes up to da11, I'd bet there are 12 drives in there. This has fail written all over it.
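A quick way to confirm the drive count, if anyone is curious (standard FreeBSD commands, nothing FreeNAS-specific):
Code:
# List every disk the controllers can see
camcontrol devlist

# Compare against what the pool thinks it has
zpool status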
 

dnet

Dabbler
Joined
Mar 27, 2014
Messages
23
But what you showed gives enough of the story to tell you that one disk has totally failed and needs to be replaced as soon as possible. Because you are only running RAID-z1, with a failed disk you don't have any redundancy.

How do I determine which disk has failed? The LED status lights look normal; none are orange or red. Or is it related to the SCSI card? When I look in the log file, the SCSI card shows an error.

TQVM
 

dnet

Dabbler
Joined
Mar 27, 2014
Messages
23
I would suggest purchasing a replacement system with 10 x 6 TB drives in RAID-z2, which should get you about 31 TB of practical, usable storage capacity.

I will propose it to my boss.

TQVM
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
How to determine which one disk fails because the led status lights normally.
Not in FreeNAS, it doesn't; FreeNAS just doesn't include that capability. Determining which disk has failed is going to be a process of elimination. Go to Storage -> View Disks and note the serial numbers. If there is a disk identifier (e.g., da4) listed there with a serial number that isn't listed in the volume status page, that's probably the bad one. If all the disks listed there are also listed in the volume status page, then you'll need to power down the system and look for the one disk whose serial number isn't listed there.
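If you're comfortable at the shell, something like this can shortcut the elimination (da4 below is just an example device name):
Code:
# Map the gptid labels shown in zpool status to daX device names
glabel status

# Print the serial number of one disk; repeat for each daX in the list
smartctl -i /dev/da4 | grep -i serial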
Or is it related to a scsi card? Because when you look in the log file, the scsi card has an error.
If you'd share the error message, we might have some other information. Otherwise, the best we can say is "maybe".

Honestly, you're making it very hard for us to help you. You still haven't told us anything about what you've tried to do to fix the problem or what the result was. You haven't answered some of the questions we've asked (e.g., what's the complete output of zpool status, in code tags?). The more information you share, the better our chances of being able to help you.
 

dnet

Dabbler
Joined
Mar 27, 2014
Messages
23
Also, please post the code snip of your zpool status because you definitely have something wrong with your pool.

Below is the zpool status..

Code:
[root@nas1 ~]# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 57h54m with 10 errors on Tue Oct 10 09:54:51 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            DEGRADED   126     0     0
          raidz1-0                                      DEGRADED   126     0     0
            gptid/314214ac-c8a9-11e2-8927-002590c1fcf4  ONLINE       0     0     0
            gptid/31d202ce-c8a9-11e2-8927-002590c1fcf4  ONLINE       0     0     0
            gptid/3264371b-c8a9-11e2-8927-002590c1fcf4  ONLINE       0     0     0
            gptid/32f0c656-c8a9-11e2-8927-002590c1fcf4  ONLINE       0     0     0
            9773083294005733761                         UNAVAIL      0     0     0  was /dev/gptid/3380f7fb-c8a9-11e2-8927-002590c1fcf4
            gptid/3418fcca-c8a9-11e2-8927-002590c1fcf4  ONLINE       0     0     0
            gptid/34b1725d-c8a9-11e2-8927-002590c1fcf4  ONLINE       0     0     0
            gptid/35421063-c8a9-11e2-8927-002590c1fcf4  ONLINE       0     0     0
            gptid/35dbbdfb-c8a9-11e2-8927-002590c1fcf4  DEGRADED   126     0     0  too many errors
            gptid/36690f03-c8a9-11e2-8927-002590c1fcf4  ONLINE       0     0     0
            gptid/36fcd7d8-c8a9-11e2-8927-002590c1fcf4  ONLINE       0     0     0
            gptid/378c07c4-c8a9-11e2-8927-002590c1fcf4  ONLINE       0     0     0
        logs
          gptid/37d94c7f-c8a9-11e2-8927-002590c1fcf4    ONLINE       0     0     0

errors: 1 data errors, use '-v' for a list


TQVM
 