Darrell Patenaude
Dabbler · Joined Mar 29, 2015 · Messages: 40
Hello. I'm on TrueNAS SCALE 23.10.1.3.
Serverpool had 8 x 10 TB drives in a RAIDZ2.
I pulled 5 x 16 TB drives from my backup server and bought 3 used Seagate Enterprise drives.
This is on a Supermicro X9DRH-7TF/7F/iTF/iF board:
CPU: 2 x Intel(R) Xeon(R) E5-2697 v2 @ 2.70 GHz
Memory: 252 GiB
in a Supermicro 4U chassis with 36 drive bays.
I replaced each drive one by one using the Replace function (so I never pulled any drives out of the pool).
Everything worked fine. I've done this before, going from 4 TB drives to 10 TB.
But TrueNAS did not expand the storage space this time. When I went from 4 TB to 10 TB drives it did it automatically.
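From what I can find, the setting behind the automatic growth is the pool's autoexpand property. I haven't confirmed how TrueNAS manages it, but I assume checking and enabling it from the shell would look roughly like this (using Serverpool as the pool name):

# Check whether ZFS is allowed to grow the pool automatically
zpool get autoexpand Serverpool

# If it reports off, I assume this would turn it on
zpool set autoexpand=on Serverpool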
So now I click on Expand and I get an error saying too many files open. Then, after I click OK:
[EFAULT] Command partprobe /dev/sdc failed (code 1): Failed to add inotify watch for /run/udev: too many open files
Error: Partition(s) 1 on /dev/sdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
Failed to add inotify watch for /run/udev: too many open files
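The "too many open files" part looks like an inotify limit being hit rather than anything wrong with the disk itself. I'm guessing the limits could be checked, and temporarily raised, with something like this (1024 is just a number I picked for the example, not a recommended value):

# Show the current inotify limits
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches

# Temporarily raise the instance limit (does not survive a reboot)
sysctl -w fs.inotify.max_user_instances=1024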
Then, after this, it tells me one of my WD 16 TB drives is bad. But I just resilvered data onto it when it replaced a 10 TB drive and it didn't complain. I also run scrubs on the old backup server and have no issues there, so this drive was scrubbed recently.
So then I reboot. I replace the "bad" drive with itself and wait 20 hours for the resilver to finish.
No errors, no degraded pool.
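(That is going by the GUI; I assume checking from the shell would be just this, with Serverpool as the pool name:)

# Confirm the resilver completed with no read/write/checksum errors
zpool status -v Serverpool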
Reboot and try again. As soon as it boots I try to expand the pool; it says success, then shows a drive as unavailable again.
SMART short and extended offline tests both come back good.
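For reference, this is roughly how I understand those SMART checks map to the shell (replace /dev/sdc with whichever drive is being flagged; sdc is just taken from the error above):

smartctl -t short /dev/sdc    # run the short offline self-test
smartctl -t long /dev/sdc     # run the extended offline self-test
smartctl -a /dev/sdc          # read back attributes and self-test results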
I am not at the server currently, so I am going to pull the drive and try a different slot, but I am still at a loss as to how to expand the pool.
Can someone tell me a way to expand before the "too many open files" error comes up?
Do I run another 20-hour resilver?
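If there is a way to kick the expansion off manually instead of the Expand button, I'm guessing it would be per member device, something like this (the device name here is only a placeholder, I'd have to look up the real partitions/gptids):

# Show per-device EXPANDSZ, i.e. space ZFS has not claimed yet
zpool list -v Serverpool

# Ask ZFS to grow onto the larger partition of one member (placeholder device)
zpool online -e Serverpool sdc2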