Pool degraded/write errors after adding a new drive.

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Hi!
Let me start by saying I know my system is not ideal; it's a temporary system until I get my new NAS in a few months.
NAS:
topton n1 NAS
AMD Athlon Silver 3050e
16GB of RAM, but only 13.6GB is available to the NAS.

I started with my 8TB drive only and had no issues at all.
After adding my 18TB drive to the pool, I started getting write errors:

Code:
de364a5e-87b8-4b39-8d76-cecc219c7bb8  DEGRADED  0  23  0  too many errors

sd 0:0:0:0: [sda] tag#5 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=26s
[89884.312206] sd 0:0:0:0: [sda] tag#5 Sense Key : Illegal Request [current]
[89884.312219] sd 0:0:0:0: [sda] tag#5 Add. Sense: Unaligned write command
[89884.312232] sd 0:0:0:0: [sda] tag#5 CDB: Write(16) 8a 00 00 00 00 00 4a 40 ff 30 00 00 00 08 00 00
[89884.312246] blk_update_request: I/O error, dev sda, sector 1245773616 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0

The disk passed a short SMART test and is busy running a long one.
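
For reference, this is roughly what I ran for the tests (using the device name from the dmesg output above):

Code:
# start a long SMART self-test on the suspect drive
smartctl -t long /dev/sda

# once it finishes, review the results and error counters
smartctl -a /dev/sda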

My question is: can this be a RAM issue?
I can make a plan to get more RAM, but it won't be easy (I think my server has 2x32GB, and I'm not sure about my work laptop, but not ideal... heheh).
I'd prefer not to buy any since my new NAS is arriving soon; however, if we can confirm it's a RAM issue, I'll just buy some, I suppose. RAM isn't too pricey these days.

Hard drive seems to be fine so far...
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
BUMP....
Can this be a RAM issue?
I can get 16GB at an OK price, but I'd prefer not to spend the money since my new NAS is arriving in a few months.
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
For now, let me delete the files listed under "errors: Permanent errors have been detected in the following files:"
and run a scrub.
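
Roughly like this, with the commands I know of (Bar1-8TB is my pool name):

Code:
# list the files flagged with permanent errors
zpool status -v Bar1-8TB

# after deleting or restoring the affected files, clear the error counters
zpool clear Bar1-8TB

# start a scrub and check on it
zpool scrub Bar1-8TB
zpool status Bar1-8TB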
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Thanks everyone, useful stuff!
I think I'll remove the drive from the pool and wait for my new NAS to arrive.


Code:
root@truenas[/]# zpool list -v xxx-xxx
NAME                                      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
xxx-xxx                                  23.6T  5.85T  17.8T        -         -     0%    24%  1.00x  DEGRADED  /mnt
  b43e1073-1684-4339-bce3-302c3d9e2e8c  7.28T  5.78T  1.49T        -         -     2%  79.5%      -    ONLINE
  de364a5e-87b8-4b39-8d76-cecc219c7bb8  16.4T  71.7G  16.3T        -         -     0%  0.42%      -  DEGRADED


Is there a way to determine what is in that 71GB?
I will make a backup and move back to a single-drive setup for now.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Considering that it started spitting out errors exactly at the moment you added a new drive, I'd say you either bumped some cables and one got somewhat unseated, or your new drive is bad. It's possible, but it would be too much of a coincidence for your RAM to suddenly go bad and start spitting out errors at the same moment you installed a new HDD.
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Ah, the forum is alive again.
I was thinking the extra 18TB required more RAM (I'm sure the recommendation used to be 1GB of RAM per 1TB of data, but I can't seem to find that anymore).
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Let me open the unit up and double-check everything... there are no cables, but the hard drive caddies are very awkward, so I think I'll connect everything with the unit open.
 
Joined
Oct 22, 2019
Messages
3,641
I started with my 8TB drive only and had no issues at all.
After adding my 18TB drive to the pool

Are you creating a striped pool of 8TB HDD + 18TB HDD? :oops: You're going to lose all your data.

Did I read this wrong?
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Ah yeah, please post the output of zpool list and gpart list in CODE tags (so it's readable).
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Are you creating a striped pool of 8TB HDD + 18TB HDD? :oops: You're going to lose all your data.

Did I read this wrong?
You read that right; I started this thread by saying "this is not an ideal setup".
I do have backups though.

The plan was to keep two copies of important datasets, one on each drive.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
You read that right; I started this thread by saying "this is not an ideal setup".
I do have backups though.

The plan was to keep two copies of important datasets, one on each drive.
If you are striping them, that's not how it works. It would present itself as a single large pool of 26 TB instead of two separate drives like you're thinking, unless you make two pools out of them with one drive each.
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
If you are striping them, that's not how it works. It would present itself as a single large pool of 26 TB instead of two separate drives like you're thinking, unless you make two pools out of them with one drive each.
I am starting to realise that... I should have made two pools.
aaaaaaaaaah
OK, what's next, please?

By the way, the long SMART test came back healthy on that 18TB.
Can I maybe replace the 18TB with a 2TB (keeping in mind that the data on the 18TB is small, about 70GB)? That's all I have, unfortunately.

Code:
zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Bar1-8TB   23.6T  5.85T  17.8T        -         -     0%    24%  1.00x  DEGRADED  /mnt
boot-pool   111G  4.09G   107G        -         -     1%     3%  1.00x    ONLINE  -


gpart list doesn't work?
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Code:
root@truenas[~]# zpool list -v Bar1-8TB
NAME                                     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Bar1-8TB                                23.6T  5.85T  17.8T        -         -     0%    24%  1.00x  DEGRADED  /mnt
  b43e1073-1684-4339-bce3-302c3d9e2e8c  7.28T  5.78T  1.49T        -         -     2%  79.5%      -    ONLINE
  de364a5e-87b8-4b39-8d76-cecc219c7bb8  16.4T  71.7G  16.3T        -         -     0%  0.42%      -  DEGRADED
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I am starting to realise that... I should have made two pools.
aaaaaaaaaah
OK, what's next, please?

By the way, the long SMART test came back healthy on that 18TB.
Can I maybe replace the 18TB with a 2TB (keeping in mind that the data on the 18TB is small, about 70GB)? That's all I have, unfortunately.

Code:
zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Bar1-8TB   23.6T  5.85T  17.8T        -         -     0%    24%  1.00x  DEGRADED  /mnt
boot-pool   111G  4.09G   107G        -         -     1%     3%  1.00x    ONLINE  -
Yeah, it looks like you've made only one pool. Contrary to what you may believe, Bar1-8TB is NOT 8 TB in size but 23.6 TiB, as you can see from the output, which is the combination of your 18TB and 8TB drives.
Since you have backups, you can probably just destroy the pool, recreate two pools with one disk each as a member, and then restore the files from your backups.
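
Very roughly, from the CLI it would be something like the sketch below, though on TrueNAS you would normally do all of this through the web UI. The pool names and device paths here are just placeholders; double-check your own device names (e.g. with fdisk -l) and make sure your backups are verified before running anything destructive.

Code:
# WARNING: this permanently destroys everything in the existing pool
zpool destroy Bar1-8TB

# recreate two single-disk pools (placeholder names and device paths)
zpool create pool-8tb  /dev/sdX
zpool create pool-18tb /dev/sdY

# then restore your data from backup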

gpart list doesn't work?
Ah sorry, I didn't realize this was the SCALE forum, not CORE. In SCALE, I think it's fdisk -l. It's probably not necessary though, because I can already tell your pool topology from the zpool output.
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Is there any way of just removing the drive, even if it means losing that 73GB?

It's easier to restore the 73GB...
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Is there any way of just removing the drive, even if it means losing that 73GB?

It's easier to restore the 73GB...
I'm not sure what 73 GB you are referring to; your pool is using up 5.85 TiB from what I can see, not 73 GB.

EDIT: Oh, I see, the second drive has 73GB written to it. I'm afraid you can't just remove it like that. Unfortunately, striping is an irreversible operation. I think there's some voodoo magic you can do, but it's a risky, not-recommended operation and can potentially be catastrophic.
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
I THINK it should be safe, because I did start with just the 8TB and it was working fine; the issues only started once I added the 18TB.
Anyway, I opened the unit, reseated the drives, deleted the one faulty file, and I'm running a scrub.
So let's see.
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
So what will happen if I just take the drive out?
Will it break everything, or is it not a definite? (Not that I'm going to try, but I'm definitely considering it.)

Another option is to switch this NAS off until my new unit arrives, lol.
 
Joined
Oct 22, 2019
Messages
3,641
I think it's technically possible with ZFS to remove a root level vdev from a pool (in this case the "vdev" is actually a single-stripe drive), as long as there is ample space for the records that currently exist on the to-be-removed drive to be relocated to the remaining drive.

(I used the terms "drive" and "vdev" almost interchangeably for the sake of this specific thread.)


I've never tried this.

I don't know the success rate.

It might even destroy all your data, for all I know.
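
If you did want to try it (only after verifying your backups), I believe it would look roughly like the sketch below. The pool and vdev names are taken from your earlier zpool list -v output, but I have not tested any of this myself:

Code:
# make sure the remaining drive has enough free space for the evacuation
zpool list -v Bar1-8TB

# attempt to remove the 18TB single-drive vdev (untested by me)
zpool remove Bar1-8TB de364a5e-87b8-4b39-8d76-cecc219c7bb8

# monitor the evacuation/removal progress
zpool status Bar1-8TB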
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
I think it's technically possible with ZFS to remove a root level vdev from a pool (in this case, the "vdev" is actually a single-stripe drive), as long as there is ample space for the records that currently exist on the to-be-removed drive to be relocated to the remaining drive.

I've never tried this.

I don't know the success rate.

It might even destroy all your data, for all I know.

OK, thanks.
I'll confirm the backup is good; I have some not-so-important stuff that isn't backed up, but I will do a cloud backup over the next few days.
Also, let me try to fix the issues... I really don't think it's the drive.
 