SOLVED Lots of g_dev_taste: make_dev_p() failed (gp->name=zvol/pool/.../...@auto-20131006-010000s1, error=63)

Status
Not open for further replies.

BuddyButterfly

Dabbler
Joined
Jun 18, 2014
Messages
28
Hi,

I am new to FreeNAS and trying to migrate from NAS4Free. I am getting tons of these errors when importing a pool:

g_dev_taste: make_dev_p() failed (gp->name=zvol/pool/.../...@auto-20131006-010000s1, error=63)

What could it be? It only happens with ZFS volumes (all under zvol/...).
Also, the GUI is very unresponsive and takes minutes to import the pool and minutes to show snapshots.
I have about 40 datasets, 20 zvols and 9,000 snapshots. I am guessing it has to do with the error above. Please see this NAS4Free thread for detailed information:

http://forums.nas4free.org/viewtopic.php?f=58&t=6712&p=38815#p38815

I would very much appreciate any help.
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
Did you already install FreeNAS? Or are you first trying to fix your problem with your existing NAS4Free installation before proceeding to install FreeNAS?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yes, you shouldn't be importing pools from other OSes into FreeNAS. It's well documented that this doesn't work well, and even when it does, there's still a significant chance it will work fine until one day it doesn't. And by the time you find out things went bad, you've lost the pool.

My advice: if you want to migrate from NAS4Free to FreeNAS, create a pool in FreeNAS and move your data from your NAS4Free machine to your FreeNAS machine.
 

BuddyButterfly

Dabbler
Joined
Jun 18, 2014
Messages
28
@eraser
I already migrated to FreeNAS. The errors and slow behaviour I mentioned above also occur in FreeNAS.

@cyberjock
I already migrated to FreeNAS. And yes, I had a bit of hassle because of the encryption, but it went OK. The pool is online and its status is fine (though it complains about pool features, as n4f is obviously using a different ZFS version; I will not upgrade ZFS). So I am interchanging the systems at the moment: setting up / redoing the configuration on FreeNAS in spare time and switching back to n4f for "production" use.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@cyberjock
I already migrated to FreeNAS. And yes, I had a bit of hassle because of the encryption, but it went OK. The pool is online and its status is fine (though it complains about pool features, as n4f is obviously using a different ZFS version; I will not upgrade ZFS). So I am interchanging the systems at the moment: setting up / redoing the configuration in spare time and switching back to production for the "production" times.

Sorry but my recommendation above still applies. You continue down this path and you may fall into that category where everything works and one day your system crashed and on reboot your pool is damaged and your data lost.

Good luck.
 

BuddyButterfly

Dabbler
Joined
Jun 18, 2014
Messages
28
@cyberjock
I thought so and have ordered two more disks. If I do so, I have a few questions regarding encryption:

1. Will FreeNAS create a partition to be encrypted, or will it encrypt the full disk (like n4f)?
2. Will it be correctly aligned on a 4k boundary in either case?
3. Will it automatically use 4k sectors with GELI?

I mean, will this be done automatically when using the GUI, or does it have to be done manually?

Thanks a lot.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@cyberjock
I thought so and have ordered two more disks. If I do so, I have a few questions regarding encryption:

1. Will FreeNAS create a partition to be encrypted, or will it encrypt the full disk (like n4f)?
2. Will it be correctly aligned on a 4k boundary in either case?
3. Will it automatically use 4k sectors with GELI?

I mean, will this be done automatically when using the GUI, or does it have to be done manually?

Thanks a lot.

1. The ZFS partition is encrypted
2. Yes
3. Yes

Do it from the GUI.. that's the only supported and recommended way to create pools, encrypted or not.
 

BuddyButterfly

Dabbler
Joined
Jun 18, 2014
Messages
28
1. The ZFS partition is encrypted
2. Yes
3. Yes

Do it from the GUI.. that's the only supported and recommended way to create pools, encrypted or not.

ZFS partition? That is new to me. I thought GELI would be the layer below ZFS. Or is my understanding wrong? I thought it would be gpart, then geli, and then ZFS on top. Or did you just mean a partition of type ZFS?
 

BuddyButterfly

Dabbler
Joined
Jun 18, 2014
Messages
28
The migration plan is:

1. Create the NAS with FreeNAS on the new disks.
2. Also import the n4f volume within FreeNAS.
3. Do a ZFS replication to the new disks.

Would this be OK? Or are there any smoother ways for it?
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
Hi,

I am new to FreeNAS and trying to migrate from NAS4Free. I am getting tons of these errors when importing a pool:

g_dev_taste: make_dev_p() failed (gp->name=zvol/pool/.../...@auto-20131006-010000s1, error=63)

Not sure if I am looking at the correct list of error numbers, but according to FreeBSD's sys/errno.h, error 63 is ENAMETOOLONG - "File name too long".

Not sure if that helps at all. Just curious - how many characters long is the full path to the snapshot listed in your error message?
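For reference, the number-to-name mapping lives in sys/errno.h and the numeric values are OS-specific (Linux, for example, uses 36 for ENAMETOOLONG). The snippet below hardcodes a few lines excerpted from FreeBSD's header purely for illustration; on the NAS itself you could grep the real /usr/include/errno.h instead:

```shell
# A few entries copied from FreeBSD's errno.h (values are OS-specific):
errno_h='#define ENOTSOCK 38
#define ENAMETOOLONG 63
#define ENOTEMPTY 66'

# Print the symbolic name for error 63:
echo "$errno_h" | awk '$3 == 63 { print $2 }'
```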
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The migration plan is:

1. Create the NAS with FreeNAS on the new disks.
2. Also import the n4f volume within FreeNAS.
3. Do a ZFS replication to the new disks.

Would this be OK? Or are there any smoother ways for it?

Not sure how good/bad ZFS replication is. It may or may not work with that pool. Personally I'd just use a cp command to move the data, but that's me. ZFS replication keeps a lot of metadata on files, and that metadata might not bode well for FreeNAS' pool.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
ZFS partition? That is new to me. I thought GELI would be the layer below ZFS. Or is my understanding wrong? I thought it would be gpart, then geli, and then ZFS on top. Or did you just mean a partition of type ZFS?

So you end up with pseudo-devices. It will look like this:

ada0p1 - swap partition (2GB by default; should be left alone)
ada0p2 - encrypted partition
ada0p2.eli - decrypted partition, the device presented by geli after you provide the password and key

ZFS goes on the .eli device, which is backed by the ada0p2 device.
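To make the naming concrete, here is a small sketch of that convention (no real devices are touched; `ada0` is just an example device name):

```shell
# Sketch of FreeNAS's encrypted-disk device naming (example names only).
disk="ada0"
swap_part="${disk}p1"       # swap partition (2GB by default)
enc_part="${disk}p2"        # the GELI-encrypted partition
zfs_dev="${enc_part}.eli"   # device geli presents after unlock; ZFS sits here
echo "ZFS vdev device: $zfs_dev"
```

On a live system, `gpart show ada0` lists the partitions and `geli status` shows the attached .eli devices.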
 

BuddyButterfly

Dabbler
Joined
Jun 18, 2014
Messages
28
Not sure if I am looking at the correct list of error numbers, but according to FreeBSD's sys/errno.h, error 63 is ENAMETOOLONG - "File name too long".

Not sure if that helps at all. Just curious - how many characters long is the full path to the snapshot listed in your error message?

Hi eraser,

Very cool and precise answer. Why don't they just put the name of the constant (ENAMETOOLONG) in the log? OK, I looked at it. The longest name, including the snapshot, is 68 characters. It also throws the error at 64 characters. I did not know of any such limitation. There shouldn't be one these days, or are we back to DOS? (Just joking.)

What is the limit, then? Do you know? The name length could be the cause, as I used part of the dataset path as the name for the zvols, which gives me a nice naming structure.
 

BuddyButterfly

Dabbler
Joined
Jun 18, 2014
Messages
28
Not sure how good/bad ZFS replication is. It may or may not work with that pool. Personally I'd just use a cp command to move the data, but that's me. ZFS replication keeps a lot of metadata on files, and that metadata might not bode well for FreeNAS' pool.

And how do I replicate zvols? With dd? They are sparse volumes, some of them over-provisioned. What is the best method for that?
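For zvols, `zfs send`/`zfs receive` is generally the tool rather than dd: the stream carries only allocated blocks, so holes in a sparse volume are not transmitted, and the receiving side recreates the zvol with its volsize (add `-p` to `zfs send` to carry over properties such as refreservation). A sketch with hypothetical pool/zvol names; the commands are printed rather than executed here, since they need real pools:

```shell
# Hypothetical names: "n4fpool/vms/disk1" is a zvol on the imported pool,
# "tank/vms/disk1" its destination on the new FreeNAS pool.
src="n4fpool/vms/disk1"
dst="tank/vms/disk1"
snap="${src}@migrate"

# Printed instead of run, as they require the real pools to exist:
echo "zfs snapshot $snap"
echo "zfs send $snap | zfs receive -v $dst"
```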
 

BuddyButterfly

Dabbler
Joined
Jun 18, 2014
Messages
28
So you end up with pseudo-devices. It will look like this:

ada0p1 - swap partition (2GB by default; should be left alone)
ada0p2 - encrypted partition
ada0p2.eli - decrypted partition, the device presented by geli after you provide the password and key

ZFS goes on the .eli device, which is backed by the ada0p2 device.

Isn't this the layout for the FreeNAS install itself? I keep FreeNAS on a USB stick (16GB). The data disks will then be set up with the ZFS volume manager. Same structure?
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
2. Will it be correctly aligned at 4k boundary in either case?
In the case of disks that are not Advanced Format there was a bug in FreeNAS: the partition end would not be 4k-aligned for non-AF disks until 9.2.1.6 (the fix is already present in 9.2.1.6-BETA).

Likely all your disks are AF, but I just wanted to give you a heads-up.
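You can verify alignment yourself from the offsets in `gpart show` output: with 512-byte sectors, a start (or end) divisible by 8 is 4k-aligned. A sketch against a hardcoded sample; the numbers below mimic the shape of typical FreeNAS output for a 3TB disk and are not from a real system:

```shell
# Hardcoded sample shaped like `gpart show ada0` output (example values only):
sample='=>        34  5860533101  ada0  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)'

# Start sector and size (in 512-byte sectors) of partition 2, the ZFS partition:
start=$(echo "$sample" | awk '$3 == 2 { print $1 }')
size=$(echo "$sample" | awk '$3 == 2 { print $2 }')

# 8 sectors * 512 bytes = 4096 bytes, so divisibility by 8 means 4k alignment.
for sector in "$start" "$((start + size))"; do
    if [ $((sector % 8)) -eq 0 ]; then
        echo "sector $sector is 4k-aligned"
    else
        echo "sector $sector is NOT 4k-aligned"
    fi
done
```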

I understand that http://forums.nas4free.org/viewtopic.php?f=58&t=6712&p=38815 describes your n4f hardware. If you are using the same hardware for FreeNAS, it is inadequate: not enough RAM, and no ECC RAM (it was inadequate for n4f too...).
 

BuddyButterfly

Dabbler
Joined
Jun 18, 2014
Messages
28
In the case of disks that are not Advanced Format there was a bug in FreeNAS: the partition end would not be 4k-aligned for non-AF disks until 9.2.1.6 (the fix is already present in 9.2.1.6-BETA).

Likely all your disks are AF, but I just wanted to give you a heads-up.

I understand that http://forums.nas4free.org/viewtopic.php?f=58&t=6712&p=38815 describes your n4f hardware. If you are using the same hardware for FreeNAS, it is inadequate: not enough RAM, and no ECC RAM (it was inadequate for n4f too...).

Thanks for the heads-up! You are right, all the disks are AF. Regarding ECC: do you really think it is the cause of the problem? I need more information on why ECC is so important for ZFS. I run lots of other servers that have more RAM and no ECC, and they have had no problems for years. I agree that it helps in rare cases, but those would be sporadic errors, right? I cannot imagine an ECC problem here, as the issue is reproducible and the memory has been intensively checked.

Then, regarding the amount of RAM: shouldn't 8GB be enough for <3TB of data? I only have a 2x3TB mirror. I thought that, roughly, 1GB per 1TB would be enough, and 8GB is stated as the minimum.

What I do not understand is that the amount of RAM is always given as the explanation for unpredictable behaviour. An enterprise system should always have well-defined, predictable behaviour; there should be no doubt at all, as NAS systems always grow. So memory should be a cause of performance degradation, but not of critical errors and data loss. If that were the case, no company would ever buy such a system! So I guess it is more like "it could be", "who knows" (because we do not know the real answer)...

If ZFS really were so sensitive to RAM size that it caused data corruption, I would turn away from it. But that contradicts my experience. Because of that experience I switched to ZFS a long time ago and also use it heavily on Linux with ZoL. I even use it for the disks I swap between workstations and my laptop. It has worked like a charm for years now, and I configured a RAM limit on all of those computers. I came to ZFS after spending a lot of time finding the cause of bluescreens etc. on a hybrid disk with broken firmware; ZFS was the only file system that immediately surfaced the errors, again and again. Since then it has been my number-one file system.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Can we say that predictability (stability) starts at 16GB of ECC RAM?

With non-ECC RAM you might not see problems and so have a false sense of security.

ECC RAM does not give you any better performance. It significantly lowers the chances of losing your entire ZFS pool without any chance of recovery. That is ZFS-specific and applies to all ZFS implementations. By the way, did you notice that big iron always comes with ECC RAM? ECC RAM is not a fashion statement, and many desktop users wish they had ECC RAM so they would not have unexplainable crashes...
 

BuddyButterfly

Dabbler
Joined
Jun 18, 2014
Messages
28
Can we say that predictability (stability) starts at 16GB of ECC RAM?

With non-ECC RAM you might not see problems and so have a false sense of security.

ECC RAM does not give you any better performance. It significantly lowers the chances of losing your entire ZFS pool without any chance of recovery. That is ZFS-specific and applies to all ZFS implementations. By the way, did you notice that big iron always comes with ECC RAM? ECC RAM is not a fashion statement, and many desktop users wish they had ECC RAM so they would not have unexplainable crashes...

Thanks for keeping the conversation going even though I started a fundamental discussion. You are right, all big iron comes with ECC. Is this special to ZFS because it uses so much memory as a cache? I am not into the details, but it should have some smart cache handling and eviction policies, read-mostly data, etc. I guess it does not keep all its transactions completely in memory, does it? That would negate the D in ACID. If there were any problem there, databases would gain a lot of ground with their heavily hyped in-memory databases, which are still transaction-safe. Maybe they will become faster than file systems xD.

OK, so you basically confirmed the sporadic nature of such problems. And this is why I just will not buy a new motherboard for this prototype.

You say 16GB is safe? Is that true for all combinations of disks? Also for 24x3TB? If not, then this is not tolerable for an enterprise system, as one could add disks and risk data corruption by not adding more memory.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
predictability (stability) starts at 16GB of ECC RAM
I am only guessing that high memory usage is a side effect of ZFS's benefits.

Let me stress that stability (predictable behaviour, as you called it) is a function of RAM quantity.

The chances of permanently losing the pool increase significantly if the RAM has no quality (i.e. non-ECC RAM is used).

For a server that is just a replica, non-ECC RAM might of course be acceptable.

On the other hand, for testing or for a proof of concept, non-ECC RAM means a different motherboard with a different chipset, different RAM (of course ;) ), and likely even a different CPU, so... the results obtained with non-ECC RAM might bear no resemblance whatsoever to the ultimate production system...
 