The pool cannot be imported after one hard drive failed.

jangteng

Cadet
Joined
Sep 27, 2021
Messages
9
I don't know if I can ask this here.
Please understand if something doesn't make sense; I am asking through a translator.

One disk failed while I was using 4 TB SAS disks.
In that state I exported the pool, and when I try to import it again it fails with an I/O error.
I don't have professional knowledge; I only know how to use the system.
The photos are the only important data, and the problem is that this happened while I was using the pool without having backed them up anywhere else, so I'm very frustrated.
Please tell me the solution in simple terms.
Please...


Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 94, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
    self.logger.error(
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
    zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
  File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1421, in import_pool
    await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1248, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1213, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1219, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1146, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1120, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
OK, it looks like you were a little hasty in exporting/disconnecting the pool before trying to resolve the failed disk.

Can you show the output from zpool import at the shell? (in code tags please)
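To be clear, that's zpool import with no pool name, which only scans the disks and lists the pools available for import without changing anything:

# With no arguments, this just lists importable pools and their status
zpool import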
 

jangteng

Cadet
Joined
Sep 27, 2021
Messages
9
First of all, thank you for your reply.
When I enter the command, this is the output:


   pool: jangteng
     id: 11619446813623180558
  state: FAULTED
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
         The pool may be active on another system, but can be imported using
         the '-f' flag.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

         jangteng                                        FAULTED   corrupted data
           raidz1-0                                      DEGRADED
             gptid/5b38bb31-d592-11eb-aa36-a0b3cce3fd61  ONLINE
             gptid/5ac6f2ac-d592-11eb-aa36-a0b3cce3fd61  ONLINE
             gptid/5b52ad21-d592-11eb-aa36-a0b3cce3fd61  ONLINE
             gptid/5b64d0ed-d592-11eb-aa36-a0b3cce3fd61  UNAVAIL   cannot open
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It seems that you may have another problem with the pool in addition to the failed disk:
jangteng FAULTED corrupted data


It may still be possible to get the pool to import using:
zpool import -f jangteng
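If you want to be extra careful, a read-only import is sometimes tried first so nothing at all gets written while you look at the data (just a sketch, I can't guarantee it on your pool):

# Force the import, but keep everything read-only so no new writes happen
zpool import -f -o readonly=on jangteng
# If it comes up, copy your photos off, then export it again
zpool export jangteng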
 

jangteng

Cadet
Joined
Sep 27, 2021
Messages
9
Thank you for your answer.
This is the result:

root@truenas[~]# zpool import -f jangteng
cannot import 'jangteng': I/O error
Recovery is possible, but will result in some data loss.
Returning the pool to its state as of Mon Jun 28 00:49:26 2021
should correct the problem. Approximately 3 minutes of data
must be discarded, irreversibly. Recovery can be attempted
by executing 'zpool import -F jangteng'. A scrub of the pool
is strongly recommended after recovery.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Shall I install the broken hard drive again and run the command?
Depending on whether those 3 minutes of data are important to you, and on how badly "broken" the other drive is, it may make more sense to just run the suggested command and lose the last 3 minutes of data:
zpool import -F jangteng
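If you want to see first whether that recovery would even work, -F can be combined with -n as a dry run (a sketch):

# Dry run: reports whether discarding the last transactions would make the
# pool importable again, but does not actually change anything on disk
zpool import -F -n jangteng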
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What does "3 minutes of data" mean?
In order to recover the pool, ZFS needs to ignore the transactions which happened in the 3 last minutes that the pool was working.

Any data that was written to the pool in that 3 minutes would be discarded.

You probably can't write much data in 3 minutes unless you were in the middle of a large file copy, which you could probably just re-do after you recover to normal again.
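So the whole recovery would look roughly like this (a sketch based on the message your pool printed):

# Rewind the pool to its last consistent state, discarding the final ~3 minutes of writes
zpool import -F jangteng
# Then scrub, as the import message recommends, and check the result
zpool scrub jangteng
zpool status -v jangteng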
 

jangteng

Cadet
Joined
Sep 27, 2021
Messages
9
Is there a way to import the pool unconditionally?
No data was uploaded or downloaded for about a week.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You're not in much of a position to choose, so I would say either count the pool contents as lost or attempt the import with the "small" loss.

If you didn't modify anything recently, there's a good chance you won't see anything missing.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
But then back it up. You do have a backup don't you?
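For example, a recursive snapshot replicated to a second pool is one common approach (a sketch; "backuppool" here is just a placeholder for wherever your copy will live):

# Snapshot the whole pool recursively
zfs snapshot -r jangteng@rescue
# Replicate the snapshot to another pool (or pipe it over ssh to another machine)
zfs send -R jangteng@rescue | zfs receive -F backuppool/jangteng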
 

jangteng

Cadet
Joined
Sep 27, 2021
Messages
9

In the shell it looks like the pool was imported after going through several steps, but the pool does not actually import and does not appear on the screen.
It also says there is an error.
The failed hard drive is still not recognized.
I need to remove the faulty hard drive and replace it with a new one.


config:

        NAME                                            STATE     READ WRITE CKSUM
        jangteng                                        DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            gptid/5b38bb31-d592-11eb-aa36-a0b3cce3fd61  ONLINE       0     0     0
            gptid/5ac6f2ac-d592-11eb-aa36-a0b3cce3fd61  ONLINE       0     0     0
            gptid/5b52ad21-d592-11eb-aa36-a0b3cce3fd61  ONLINE       0     0     0
            9772081274955777253                         UNAVAIL      0     0     0  was /dev/gptid/5b64d0ed-d592-11eb-aa36-a0b3cce3fd61

errors: 141 data errors, use '-v' for a list
root@freenas[~]#

Thank you always for your answer.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So, you can get a list of the corrupted/missing files by using zpool status -v

You should immediately take a copy of all important files from the pool to another location/pool.

I recommend creating a new pool using the remaining disks plus a new one, rather than replacing the drive in the pool as it is now.
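Something like this, assuming the pool is mounted under /mnt/jangteng and "newpool" is the pool you build from the other disks (a sketch; adjust the paths to your datasets):

# See exactly which files are corrupted, so you know what not to trust
zpool status -v jangteng
# Copy the important data off; rsync will report any files it cannot read
rsync -av /mnt/jangteng/ /mnt/newpool/rescue/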
 

jangteng

Cadet
Joined
Sep 27, 2021
Messages
9
Thank you for your answer.

I solved it in a crude way.
My English is poor and I had very little information, so I used an ignorant method.
The controller board on the failed hard drive seemed to still be alive, so to pull the pool back I transplanted the controller onto a spare hard drive.
Of course, it did not go entirely smoothly.

To explain: when I transplanted it, a pool came up without errors.
The problem was that it was the pool of the spare drive I had transplanted onto, not the pool I wanted to import. I was surprised. When I deleted that pool, removed the problem hard disk, and rebooted, I was a little relieved to see the original pool in the list, but the pool would not import and the 141-error occurred.

My question above was from that stage.

This time I completely deleted the pool on the spare hard drive, installed all the hard drives again, and imported the pool in the same way as the first time, and the pool I wanted came up. I copied the important data off in order of importance; after about 30% had been copied, the speed dropped to about 5%. The important data has been recovered, so I am now rebuilding the pool for study.

This may be an unnecessary description of the process, but the recovery was so unusual that I am writing it down in case it helps.

I am a total beginner, but thanks to your interest and answers I have grown a little more.

Thank you very much.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I see you're from Korea and the circles in the characters were telling me that's Korean also... unfortunately, I can't read Korean so I don't know if there's a question in there for me.

If it all worked out fine, that's great news.
 

jangteng

Cadet
Joined
Sep 27, 2021
Messages
9

I ran my post through a translator into English, but it got posted in Korean.

Thank you for your answer.

I solved it in a crude way.
I chose this ignorant method because my English is poor and I had little information.
The controller board on the failed hard disk seemed to still be alive, so I transplanted it onto a spare hard disk to get the pool back.

It did not go entirely smoothly, though.

To explain the process: when I transplanted the controller, a pool came up without errors.
The problem was that the pool that came up was the spare drive's pool, not the pool I wanted to bring back. I was surprised. When I deleted that pool, removed the hard disk in question, and rebooted, I was a little relieved to see the existing pool in the list, but the pool would not import and the 141-error occurred.

My question above was from that stage.

This time I deleted the pool on the spare hard drive completely, installed all the hard drives again, and imported the pool in the same way as the first time, and the pool I wanted came up. I copied the important data off step by step according to its importance, but after about 30% was copied, the speed dropped to about 5%. The important data has been restored, so I am now rebuilding the pool for study.

This may be an unnecessary explanation of the process, but the recovery was so unusual that I am writing it down in case it helps someone.

I am a total beginner, but I have grown a little more thanks to your interest and answers.

Thank you so much.
 