Hi guys,
I recently built the same TrueNAS Core system that Linus Tech Tips built in this video:
I put in 5x Seagate IronWolf 10TB NAS internal hard drives: https://www.amazon.com.au/gp/product/B085ZB51HW/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1
I configured my pool using raidz1 for one drive's worth of redundancy.
It was working great until last night, when my pool went offline. I ran a manual SMART test on each of the drives, and they all passed.
So I tried exporting and re-importing my pool. The export worked, but now when I try to import the pool, I get an I/O error.
Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
    self.logger.error(
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
    zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
  File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1421, in import_pool
    await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)
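In case it helps with diagnosis, these are the kinds of commands I've been running from the TrueNAS shell (just a sketch: "tank" stands in for my actual pool name, and ada0-ada4 stand in for my five drives):

```shell
# List pools visible for import; this also shows per-vdev status
zpool import

# Quick SMART health summary for each drive
# (ada0-ada4 are placeholders for my five IronWolf disks)
for disk in ada0 ada1 ada2 ada3 ada4; do
  smartctl -H /dev/$disk
done

# Forced read-only import attempt, so nothing gets written to the pool
# ("tank" is a placeholder for my real pool name)
zpool import -o readonly=on -f -R /mnt tank
```

Happy to post the output of any of these if it would help.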
My complete system specs are:
Motherboard: Asus B550-I M
CPU: Ryzen 3 3100 (using stock cooler)
Memory: G.Skill 16GB 3600MHz
Power Supply: Corsair SF600
Hard Drives: 5x Seagate IronWolf 10TB NAS Internal Hard Drive HDD
SSD: Kingston 250GB
Case: Jonsbo N1
OS: TrueNAS-12.0-U8
Additional: M.2 to 2x SATA adaptor - the model at this link is a 5-port SATA adaptor, but I have the 2-port version: https://www.amazon.com.au/gp/product/B07T3RMFFT/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&th=1
I am using this NAS for Plex, file storage, and a DDNS server.
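Since two of the drives hang off that M.2-to-SATA adaptor, one thing I can check (and post output for) is whether FreeBSD still detects all five disks. Roughly:

```shell
# List every disk the kernel currently sees (FreeBSD/TrueNAS Core)
camcontrol devlist

# Show model/serial per disk, to match devices against the five IronWolfs
geom disk list

# Show GPT labels, which is how TrueNAS references pool member disks
glabel status
```

If one of the adaptor-attached drives is missing from these listings, that would point at the adaptor rather than the pool itself.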
Can anyone please help me? I've committed the cardinal sin of not having offsite backups, as it wasn't in my budget yet.
Thanks,
James.