[FreeNAS-9.1.0-RELEASE-x64 (dff7d13)] Replace ZFS pool disks with larger disks

Status
Not open for further replies.
Joined
Aug 6, 2013
Messages
3
Hi
I have FreeNAS 8.3.1 with a RAIDZ3 pool of 5x 500GB disks; the total usable space of the tank is ~900GB.
I wanted to grow the pool, so I bought 5 new disks of 2TB each and started replacing them one by one: Detach => Replace => Resilver => next disk.
With this method I upgraded 2 disks, then I upgraded FreeNAS to the newer version 9.1.0.
Now I'm trying to continue the hardware upgrade, but the standard operation fails. When I try to replace a disk, the console says:

Aug 6 19:59:17 nas manage.py: [middleware.exceptions:38] [MiddlewareError: Disk replacement failed: "cannot replace 4930337263668200990 with gptid/25b986a4-feb1-11e2-b820-00270e0368a9: devices have different sector alignment, "]

Please help
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That's gonna require a developer response...

Developers Developers Developers!!!

My guess of what's going on:

Your old disks were 512 bytes per sector while the new disks are 4 KiB per sector. Your old zpool has ashift=9 (512-byte sectors), not ashift=12 (4-KiB sectors). Ideally you want the ashift to match the disks' bytes per sector. In your case, FreeNAS 9 is recognizing that you are trying to do something that doesn't make much sense: use a 4K-sector drive in a zpool with an ashift of 9.
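As a quick sanity check on those numbers: ashift is just the base-2 logarithm of the sector size ZFS assumes for a vdev. A tiny illustrative snippet (not FreeNAS code, just the arithmetic):

```python
# ashift is log2 of the sector size ZFS uses for a vdev.
def sector_size(ashift: int) -> int:
    return 2 ** ashift

print(sector_size(9))   # 512-byte sectors (the old pool)
print(sector_size(12))  # 4096-byte "Advanced Format" sectors (the new disks)
```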

I think the easiest solution right now is to go back to 8.3.1 to complete the zpool "upgrade". But at some point you should probably wipe your pool and recreate it with ashift=12 so you are compatible with future hard drives; eventually you'll want to upgrade again and you'll be stuck, because by then all drives made will be 4K.
 

Simon00

Dabbler
Joined
Jan 22, 2012
Messages
17
Why so complicated... there are many ways to do it. Simplest: checksum the files, copy them to external storage, label the order of the disks and remove the old ones (safeguard them for now), install the new disks and FreeNAS 9.1.0, copy the files over to the new RAID setup, then run the checksum on the files in the new setup. If it verifies, recycle the old disks for other purposes. I always use the KISS rule... :smile:
 

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
Hi
When trying to replace, the console says:

Aug 6 19:59:17 nas manage.py: [middleware.exceptions:38] [MiddlewareError: Disk replacement failed: "cannot replace 4930337263668200990 with gptid/25b986a4-feb1-11e2-b820-00270e0368a9: devices have different sector alignment, "]

Please help


Hi,

We are aware of this issue and the alert system will be extended to help assist in this case.

For now, please try setting this in your sysctl variable, as described at http://doc.freenas.org/index.php/Sysctls :
Code:
vfs.zfs.vdev.larger_ashift_minimal = 0

Fine print: setting this to 0 makes FreeNAS use ashift=9 if the disk is not "AF" ("Advanced Format", i.e. 4K-sector), which allows using non-AF disks to replace existing disks in non-AF ZFS pools, but will make it harder to move to AF disks in the future.
See if this solves your problem. If not, you will additionally have to set:
Code:
vfs.zfs.vdev.larger_ashift_disable = 1

Fine print: setting this to 1 makes FreeNAS ignore the fact that you are using an AF disk. This will negatively affect performance, but makes it possible to use an AF disk in a non-AF pool.
Note that we would recommend backing up all of your data and recreating the pool without these tunables, as that will give you a more future-proof pool.
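If you want to try the first tunable immediately, before making it permanent on the Sysctls page in the GUI, it can be set from a root shell; a sketch, assuming console or SSH access to the box:

```shell
# Set the tunable for the running system (lost on reboot unless
# also added under System -> Sysctls in the GUI):
sysctl vfs.zfs.vdev.larger_ashift_minimal=0

# Verify the current value:
sysctl vfs.zfs.vdev.larger_ashift_minimal
```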
 
Joined
Aug 6, 2013
Messages
3
Hi,
Thank you, that helped a lot; it works!
 
Joined
Aug 6, 2013
Messages
3
By the way, is there any plan in future releases to upgrade the ashift from 9 to 12 without recreating the ZFS pool?
 

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
By the way, is there any plan in future releases to upgrade the ashift from 9 to 12 without recreating the ZFS pool?


No, at least not in the near future, as this is not trivial. The operation would require significant online rewrites of data (arguably, in the real world the situation would be less bad, as one does not have to rewrite a block that is already properly aligned), which would be slow and risky (e.g. a power outage in the middle must not lose data).
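The alignment caveat can be illustrated with a toy check (hypothetical, not actual ZFS code): a block sitting at a multiple of 4096 bytes is already valid under ashift=12, while one at an odd 512-byte multiple would have to be relocated.

```python
OLD_SECTOR = 512    # ashift=9
NEW_SECTOR = 4096   # ashift=12

def needs_rewrite(offset_bytes: int) -> bool:
    """A block laid out on 512-byte boundaries only needs moving
    if it does not also fall on a 4096-byte boundary."""
    return offset_bytes % NEW_SECTOR != 0

print(needs_rewrite(8192))      # False: already 4K-aligned
print(needs_rewrite(512 * 3))   # True: 1536 is not a multiple of 4096
```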
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I agree with Simon: back up your data; it's not terribly much since you have ~900GB usable space. Install 9.1.0, blow away your current pool, and recreate it with 4K sectors. Lastly, restore your data. This restores your ability to use 4K-sector replacement drives without any unusual software changes, which you might otherwise forget about in the future.
 

djseto

Dabbler
Joined
Aug 19, 2013
Messages
32
I got the same error message, except I was simply trying to replace a drive that went bad. I replaced a 3+ year old 500GB HD with one I just bought. When I tried to "replace", it threw the exact same error. I had to use both sysctl parameters to get it to replace the disk; it's now doing a 10-hour resilver. I only stumbled upon this thread (and solution) by Googling the error message.

Am I also going to have to migrate my data and copy it back to a new pool? I have one pool with several datasets (two for Apple Time Machine). I'm OK running with those sysctl params for a while, provided it doesn't hurt the integrity of my data.
 

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
Am I also going to have to migrate my data and copy it back to a new pool? I have one pool with several datasets (2 for Apple Time Machine). I'm OK running with those sysctl params for a while provided it doesn't hurt the integrity of my data.


Running with the sysctl does not hurt the integrity of your data; the only downside is that you may get poorer performance (this can be addressed later, when you migrate your data to a new pool).
 

djseto

Dabbler
Joined
Aug 19, 2013
Messages
32
Thanks. When I delete my pool, I'm assuming all the shares/services associated with it get deleted too, which means I'll need to rebuild my shares to match the two datasets I have for my Apple Time Machine backups?
 

Quadgnim

Dabbler
Joined
Aug 14, 2013
Messages
10
I hit the same problem. I just bought 4 new 2TB drives and built 2 mirrored pools. After a few days, one of the drives started giving errors, so I went back to Newegg and ordered a replacement. I also upgraded from 8 to 9 during this time. Now that I'm getting this error, does it mean Newegg sold me old drives? Do I need to go back and complain to them? If I use these 3 drives plus the new one giving the error (4 drives total), will it resolve the problem, or are the other 3 drives not capable of 4K alignment?

One more question: instead of running 2 mirrored pairs, will I get better performance if I move everything to one pair, break the mirror so I have one drive in the pool, then add the 3 remaining drives back in as a striped mirror? Can I do that through the GUI, or will that require something at the command line? I run two ESX hosts in a lab, with redundant 1-gig links to a dedicated switch supporting jumbo frames, and the FreeNAS box is a quad-core AMD with 8GB RAM. ESX keeps giving latency errors, and running dd from a VM shows results all over the map. I'm wondering if rebuilding the pools on v9 and striping across all 4 spindles would fix my issues?

Thanks
 

djseto

Dabbler
Joined
Aug 19, 2013
Messages
32
BTW, when I created my new pool, there was no option I could see to change the ashift. I did delete the sysctl parameters before creating the pool. Does FreeNAS 9 just use 4K sectors by default, or will I have to delete and rebuild the pool again with some custom option to force this? I'd rather not deal with this ever again...
 

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
BTW, when I created my new pool, there was no option I could see to change the ashift. I did delete the sysctl parameters before creating the pool. Does FreeNAS 9 just use 4K sectors by default, or will I have to delete and rebuild the pool again with some custom option to force this? I'd rather not deal with this ever again...


FreeNAS 9 uses 4K sectors by default unless you instruct it not to (with the sysctl). You can check the ashift after creation with 'zdb -C | grep ashift'; 12 means 4K and 9 means 512 bytes.
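A sketch of that verification from the FreeNAS shell (output varies per pool):

```shell
# Print the ashift of every vdev in the imported pools;
# "ashift: 12" means 4K sectors, "ashift: 9" means 512-byte sectors.
zdb -C | grep ashift
```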
 

djseto

Dabbler
Joined
Aug 19, 2013
Messages
32
UGHHHHHH...

I just ran this command and it shows ashift=9. I'm definitely using FreeNAS 9, and I did a new pool creation. WTF? I just spent 3 days moving close to 1TB of data to another drive so I could create a new pool, only to have it end up with the wrong sector size again.

Do you need certain drives to support 4k sectors? I certainly have drives that are 2-3 years old in my setup (5x500GB in RAIDZ2).
 

Hugo

Dabbler
Joined
Sep 8, 2013
Messages
11
Hi all,

I would like to comment on this thread, as I experienced a similar issue this morning.
Last night one disk failed in a RAIDZ1 pool of 5 drives (2TB each). Although it had a hot spare configured, the spare was not used; at that point I was puzzled why that didn't happen.
When I issued the "zpool replace" command on the command line, I got the message "zpool cannot replace ... with ... devices have different sector alignment".
The pool was created on FreeNAS 8.x and recently upgraded to 9.1.1.

By setting the system variable "vfs.zfs.vdev.larger_ashift_minimal = 0", I could replace the faulty disk with the hotspare.

I understand FreeNAS now uses a default 4K sector size. For large disks (2TB+) I understand that is recommended/necessary. Is it also recommended on smaller (max 2TB) disks? What's the gain? Reading this thread, it's advised to upgrade pools to a 4K sector size, but for pools loaded with data on production systems that is not an easy task.

It's also quite a bummer when a hot spare you configured on a pool does not kick in when a member fails (which is just what a hot spare is for...). So I would like to warn others with a similar configuration to check their sector size and system variables. I think you should modify the system variable mentioned above in advance on systems with a hot spare (and pools running ashift=9), to make sure your hot spare is used in case of a drive failure.

Just my 2 cents...
 

Hugo

Dabbler
Joined
Sep 8, 2013
Messages
11
I checked the drive specs (Seagate ST32000644NS) and the sector size is 512 bytes. If all drives in a pool have a sector size of 512, I think it's recommended to also use 512 as the sector size on the pool. Or am I wrong here?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Hotspares have never worked in FreeNAS. It's one of those things that will work when zfsd is finished. It's been a "work in progress" for FreeBSD for a couple of years. It's well documented in the forums and has been for quite some time.

And, if you read more of the forums, you'll find that RAIDZ1 is not safe and that most people that lose their data had a RAIDZ1. Read the link in my sig for the technical details.
 

Hugo

Dabbler
Joined
Sep 8, 2013
Messages
11
Hi Cyberjock,

Thanks for your reply. I'm fairly new to FreeNAS but have used OpenSolaris-based OSes with ZFS before (on which hot spares do work). I totally missed that hot spares do not work within FreeBSD/FreeNAS. You can even configure a hot spare using the GUI, so this can be a bit misleading (at least it was for me). Also thank you for pointing out that RAIDZ1 can cause issues; I will read up on that.
It seems I'm perhaps better off using RAIDZ2 without hot spares.

And what are your thoughts on the use of 4K sectors, even on 512-byte drives? Is this "best practice"? And is it recommended to upgrade your pool to a 4K sector size?

Thanks again.

Hugo
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Personally, I just force 4K for everything, whether the drives have 512-byte or 4K sectors. There is a slight penalty for using 4K on 512-byte-sector drives, but I'd rather be forward-compatible. As I'm pretty sure you are aware (or will find out shortly), it's far from trivial to switch to 4K if you already have a pool and its ashift is wrong.
 