badblocks testing: Inappropriate ioctl for device

Status
Not open for further replies.

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I just installed some new disks in my FreeNAS 9.3-RELEASE server, and I'm trying to test them before putting data on them. In the past I've run badblocks on a spare Linux machine, but I don't have one of those handy at the moment, so I've installed the drives in my FreeNAS box and will just test them there. The disks are connected to an LSI 9211-8i, flashed to IT mode with the P16 firmware.

Following https://forums.freenas.org/index.php?threads/how-to-hard-drive-burn-in-testing.21451/, I issued "sysctl kern.geom.debugflags=0x10" and then "badblocks -wsv /dev/da0". It responded with "Testing with pattern 0xaa: set_o_direct: Inappropriate ioctl for device", but it then appears to run without issues. Is this something I should be concerned about?
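For reference, the exact sequence I ran was the one from the burn-in guide (substitute the correct daN device for each drive, and remember -w is destructive):

    # allow writes to a raw disk device that GEOM considers in use
    sysctl kern.geom.debugflags=0x10

    # destructive write-mode test with progress (-s) and verbose (-v) output
    badblocks -wsv /dev/da0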
 

dlavigne

Guest
I asked our GEOM guru who says:

FreeBSD does not do any caching for block device I/O, which makes setting the O_DIRECT flag pointless there. I am not sure whether it should cause an error, but I don't think that error should cause any problems as long as the tool is still working.

The 0x10 value for kern.geom.debugflags is still correct if the user really wants raw access to a device that the system has mounted, but it should be used with care, since it makes it easy to shoot yourself in the foot.
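If you want to check the current value, or set it back without rebooting once the burn-in is done, the standard sysctl invocations should do it (shown as a sketch):

    # show the current value
    sysctl kern.geom.debugflags

    # restore the default once testing is finished
    sysctl kern.geom.debugflags=0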
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Thanks for the info. Badblocks seems to (still) be running without problems, though it'll likely be a couple more days before it finishes with these drives. Once that's done I'll reboot to clear the debugflags (and install an update).
 

paylesspizzaman

Explorer
Joined
Sep 1, 2015
Messages
92
I just installed some new disks in my FreeNAS 9.3-RELEASE server, and I'm trying to test them before putting data on them. In the past I've run badblocks on a spare Linux machine, but I don't have one of those handy at the moment, so I've installed the drives in my FreeNAS box and will just test them there. The disks are connected to an LSI 9211-8i, flashed to IT mode with the P16 firmware.

Following https://forums.freenas.org/index.php?threads/how-to-hard-drive-burn-in-testing.21451/, I issued "sysctl kern.geom.debugflags=0x10" and then "badblocks -wsv /dev/da0". It responded with "Testing with pattern 0xaa: set_o_direct: Inappropriate ioctl for device", but it then appears to run without issues. Is this something I should be concerned about?
I see you ran "badblocks -wsv /dev/da0"; I ran "badblocks -ws /dev/da0". Any idea what difference the "-v" makes? Also, let me say thank you for asking your question. I am burning in a new FreeNAS system and just noticed the same error. Now I know everything should be fine.
 

paylesspizzaman

Explorer
Joined
Sep 1, 2015
Messages
92
Thank you, I kind of wish I had added the "-v" to my test now. Like an idiot, I typed "-ws" into Google for a second before I realized there is no good way to google that.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Thank you, I kind of wish I had added the "-v" to my test now. Like an idiot, I typed "-ws" into Google for a second before I realized there is no good way to google that.
When I run badblocks, I do not use -v, as it is too verbose for me. On the other hand, since I have AF (4K-sector) disks, I use -b 4096, and that additionally helps with 6TB disks, which would otherwise be too big for badblocks at its default 1024-byte block size.
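For example, on my drives the invocation looks roughly like this (the device name is just a placeholder):

    # -b 4096 matches the 4K physical sectors of AF drives and keeps the
    # block count small enough for badblocks to handle large disks
    badblocks -b 4096 -ws /dev/da1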
 

paylesspizzaman

Explorer
Joined
Sep 1, 2015
Messages
92
I'm not sure if I should just start a new thread for this question, but why is badblocks taking forever? Badblocks is 3% done with its second testing pattern after 211 hours. I started it Dec 18th, and all the drives are at about the same percentage. I am running badblocks -ws on all 16 drives at the same time. The 16 4TB SATA drives I am testing are connected to a 3ware 9650SE RAID card (JBOD mode). My motherboard is a Supermicro X10SDV-4C-TLN2F with a single 32GB stick of RAM. The board only has a single PCIe slot and no drivers for the onboard LAN, so I am running a bifurcation adapter for the 3ware card and an Intel LAN card. I knew badblocks takes a lot of time, but this is insane! Is there something wrong with my setup that I'm missing? I'm planning to order another stick of RAM on the 8th.

I also had trouble with the Ctrl+B prefix for tmux; both in the FreeNAS shell and in PuTTY it wouldn't do anything, so I ended up opening 16 PuTTY windows, one for each drive. Was that a mistake? I have also left my PC, with all 16 PuTTY windows, running for the past week, because the tmux attach command only reattaches to the last session.

Please help me. I'm about to scrap the whole FreeNAS idea and sell the parts. This whole experience has been terrible so far.
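For reference, what I think I should have done (I haven't verified this on my FreeNAS box yet, and the session names are made up) is start one detached tmux session per drive, so I could close PuTTY and reattach to any of them later:

    # start a detached session per drive
    tmux new-session -d -s bb-da0 'badblocks -ws /dev/da0'
    tmux new-session -d -s bb-da1 'badblocks -ws /dev/da1'
    # ...and so on for the remaining drives

    # later: list the sessions and reattach to a specific one
    tmux ls
    tmux attach -t bb-da0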

Thanks for your time
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
I would not simultaneously run more badblocks processes than I have CPU cores.

OK, maybe one more...
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

paylesspizzaman

Explorer
Joined
Sep 1, 2015
Messages
92
The guide I have been following is https://forums.freenas.org/index.php?threads/how-to-hard-drive-burn-in-testing.21451/, and it says "In my experience, the tests run just as fast with all drives testing as with a single drive", which is why I ran them all at once. If badblocks instances really should be limited to the number of CPU threads, could a moderator please update the guide so no one else has to wait a month for their tests to finish?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The guide I have been following is https://forums.freenas.org/index.php?threads/how-to-hard-drive-burn-in-testing.21451/, and it says "In my experience, the tests run just as fast with all drives testing as with a single drive", which is why I ran them all at once. If badblocks instances really should be limited to the number of CPU threads, could a moderator please update the guide so no one else has to wait a month for their tests to finish?
The guide is correct. You want to run it on all drives at once.

My advice was a bit more general; in the specific case of badblocks it doesn't even make sense to run multiple instances per drive, so that advice was expressed in a rather inaccurate way.

Running badblocks on all drives at once is definitely the way to go. Even if each individual test took 90% longer, the total time would still be far shorter than testing the drives one after another, and in practice the per-drive penalty is much smaller than that; negligible, even.

The CPU is constantly incurring thread-switching penalties anyway; that's inevitable on a real OS. Native Command Queuing also reduces the CPU overhead, and it probably takes a drive longer to flush its queue than the CPU needs to issue four of them.
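To put rough, made-up-but-plausible numbers on it: if one full badblocks -w run over a 4TB drive takes on the order of 60-70 hours, testing 16 drives one after another would take roughly 1,000 hours, while testing them all in parallel finishes in not much more than those same 60-70 hours of wall-clock time, because the drives, not the CPU, are the bottleneck.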
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I'm not sure if I should just start a new thread for this question, but why is badblocks taking forever? Badblocks is 3% done with its second testing pattern after 211 hours. I started it Dec 18th, and all the drives are at about the same percentage. I am running badblocks -ws on all 16 drives at the same time. The 16 4TB SATA drives I am testing are connected to a 3ware 9650SE RAID card (JBOD mode). My motherboard is a Supermicro X10SDV-4C-TLN2F with a single 32GB stick of RAM. The board only has a single PCIe slot and no drivers for the onboard LAN, so I am running a bifurcation adapter for the 3ware card and an Intel LAN card. I knew badblocks takes a lot of time, but this is insane! Is there something wrong with my setup that I'm missing? I'm planning to order another stick of RAM on the 8th.

I also had trouble with the Ctrl+B prefix for tmux; both in the FreeNAS shell and in PuTTY it wouldn't do anything, so I ended up opening 16 PuTTY windows, one for each drive. Was that a mistake? I have also left my PC, with all 16 PuTTY windows, running for the past week, because the tmux attach command only reattaches to the last session.

Please help me. I'm about to scrap the whole FreeNAS idea and sell the parts. This whole experience has been terrible so far.

Thanks for your time
It's probably your 3ware 9650SE card that is causing problems. I don't think it is a good hardware choice, and it will make things really slow. You need a card that isn't a RAID card and is just an HBA.
 

paylesspizzaman

Explorer
Joined
Sep 1, 2015
Messages
92
I have considered using the recommended M1015 HBA, but I would need two, and I only have one PCIe slot due to the current lack of network adapter drivers. I did read the FreeNAS worst practices guide, which claims at least a 5% penalty for RAID cards. I guess what I could test is running only two badblocks tests, one on a mobo port and the other on the 3ware; that would tell me if the 3ware is what's slow, and also if the problem was just too many tests at once. BTW, I am already regretting my decision to use the RAID card.
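Something like this is what I have in mind, assuming da0 sits on a motherboard port and da8 on the 3ware (the device names are guesses):

    # in one shell: a drive on the motherboard controller
    badblocks -ws /dev/da0

    # in a second shell: a drive on the 3ware
    badblocks -ws /dev/da8

    # in a third shell: watch per-disk throughput while both run
    gstat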
 
Last edited:

paylesspizzaman

Explorer
Joined
Sep 1, 2015
Messages
92
You were right about the 3ware RAID card being the problem. Motherboard ports run badblocks about 20 times faster than the 3ware. I even tried the 3ware card directly in the mobo PCIe slot, without the bifurcation adapter. I also swapped one of the slow HDDs from the 3ware to a mobo port, and it runs fast there. I dug up a USB network adapter to get around the lack of drivers for the built-in NICs; too bad the adapter is only 100 Mbps. Does USB 3.0 work in FreeNAS, and are there any FreeNAS-compatible USB 3.0 network adapters available? I just need something to get by until there is driver support. The other options are to wait for FreeNAS 10 for the mobo NIC drivers or to buy a different mobo. I still don't get why the 3ware performance is so terrible. I have a second 3ware card; I may swap it in and do some more tests.
 
Last edited:

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
USB 3 isn't supported very well, but it can work. Here's how:

To see if USB 3.0 support works with your hardware, follow the instructions in Tunables to create a "Tunable" named xhci_load, set its value to YES, and reboot the system.
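If you prefer the console, I believe the equivalent loader tunable is the line below, though FreeNAS wants you to add it through the Tunables screen so it survives upgrades (double-check before relying on it):

    # /boot/loader.conf -- load the xhci (USB 3.0) driver at boot
    xhci_load="YES"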
Your RAID card is slow because it tries to cache the data and change how it is written to disk. Its cache size and the way it flushes to disk are quite different from what FreeNAS expects, which makes it inefficient. With some heavy tuning I think it could be made faster, but by just using an HBA you get full speed with zero tuning.

What NIC drivers are you waiting for? If it's not supported now, it might not get any better in the future, unless it's new Intel hardware that is simply too new to have a good driver yet.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Was the 3ware card even running the test? Blinky lights flashing on the drives? I got a pile of 450GB SAS drives with an odd sector format (528-byte, from a NetApp) and got the same "inappropriate ioctl" message... badblocks would run, albeit terribly slowly, but wouldn't actually *do* anything.

You don't need two HBAs, you could do the same thing with a SAS expander. Do some reading on the forums on this topic. My single 9211-8i is quite capable of handling the 36 drive bays in my big Supermicro chassis, thanks to the use of expander backplanes.

You need to lose the 3ware card. You're just setting yourself up for failure.
 

paylesspizzaman

Explorer
Joined
Sep 1, 2015
Messages
92
My board is a Xeon D, and it only has the 10-gig SoC network adapters; I believe they are the new Intel X552/X557-AT2. I'm really thinking about getting an HBA. I wish I hadn't been so cheap and had followed the good advice on the forums.
 