Issue formatting drives, adding to pool

T_Haggerty

Cadet
Joined
Nov 3, 2020
Messages
8
System Info:

TrueNAS-12.0-RC1
Dell R620
32 GB RAM
Intel(R) Xeon(R) CPU E5520 @ 2.27GHz

I'm using a NetApp DS2246 shelf connected via iSCSI. Seven drives are working, but I'm getting an error with the other 14 when trying to format them or add them to the pool. I've tried a few things I found in other posts but have not had any luck; any tips or advice would be appreciated.


[EFAULT] Failed to wipe disk da9: [EFAULT] Command dd if=/dev/zero of=/dev/da9 bs=1M count=32 failed (code 1): dd: /dev/da9: Invalid argument 1+0 records in 0+0 records out 0 bytes transferred in 0.000830 secs (0 bytes/sec)
Error: Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 361, in run
await self.future
File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 397, in __run_body
rv = await self.method(*([self] + args))
File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 816, in do_update
enc_disks = await self.middleware.call('pool.format_disks', job, disks, {'enc_keypath': enc_keypath})
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1233, in call
return await self._call(
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1191, in _call
return await methodobj(*prepared_call.args)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool_/format_disks.py", line 56, in format_disks
await asyncio_map(format_disk, disks.items(), limit=16)
File "/usr/local/lib/python3.8/site-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
return await asyncio.gather(*futures)
File "/usr/local/lib/python3.8/site-packages/middlewared/utils/asyncio_.py", line 13, in func
return await real_func(arg)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool_/format_disks.py", line 29, in format_disk
await self.middleware.call(
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1233, in call
return await self._call(
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1202, in _call
return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1106, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
File "/usr/local/lib/python3.8/site-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/disk_/format.py", line 25, in format
raise CallError(f'Failed to wipe disk {disk}: {job.error}')
middlewared.service_exception.CallError: [EFAULT] Failed to wipe disk da9: [EFAULT] Command dd if=/dev/zero of=/dev/da9 bs=1M count=32 failed (code 1):
dd: /dev/da9: Invalid argument
1+0 records in
0+0 records out
0 bytes transferred in 0.000830 secs (0 bytes/sec)
 

T_Haggerty

Cadet
Joined
Nov 3, 2020
Messages
8
Hi Swissroot,

I have tried those steps before, but had no luck.

dd if=/dev/zero of=/dev/da3 bs=512 count=1 && dd if=/dev/zero of=/dev/da3 bs=512 count=1
dd: /dev/da3: Invalid argument

dd if=/dev/zero of=/dev/ada3 bs=512 count=1 && dd if=/dev/zero of=/dev/ada3 bs=512 count=1
dd: /dev/ada3: Operation not supported
 

Tigersharke

BOfH in User's clothing
Administrator
Moderator
Joined
May 18, 2016
Messages
893
I believe such drives cannot be mounted while you format, erase, or perform any similar operation on them; if a drive is in use, you should get a 'busy' or 'in use' error of some sort (I'm unsure of the exact wording). I discovered for myself that a drive attached to a SATA port on my motherboard shows up as adaX, while one attached to my HBA card presents as daX. There are other commands for preparing a drive, including gpart destroy, but of course be very certain you indicate the correct drive.
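Roughly the checks I have in mind, as a sketch only (da3 is just an example here, so be very sure it is the right disk before the last step):

mount | grep da3            # nothing on the disk should be mounted
zpool status | grep da3     # and it should not be part of an existing pool (FreeNAS may list members by gptid)
gpart destroy -F da3        # destructive: -F forces removal of the partition table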
I wish you success.
 

T_Haggerty

Cadet
Joined
Nov 3, 2020
Messages
8
Tigersharke -

Looking at the dev directory, here is my output:

root@freenas[/dev]# ls -l
crw-r----- 1 root operator 0xcd Nov 2 13:52 da0
crw-r----- 1 root operator 0xf8 Nov 2 13:52 da0p1
crw-r----- 1 root operator 0xfa Nov 2 13:52 da0p2
crw-r----- 1 root operator 0xcf Nov 2 13:52 da1
crw-r----- 1 root operator 0xe1 Nov 2 13:52 da10
crw-r----- 1 root operator 0xe3 Nov 2 13:52 da11
crw-r----- 1 root operator 0xe5 Nov 2 13:52 da12
crw-r----- 1 root operator 0xe7 Nov 2 13:52 da13
crw-r----- 1 root operator 0xe9 Nov 2 13:57 da14
crw-r----- 1 root operator 0x11b Nov 2 13:57 da14p1
crw-r----- 1 root operator 0x12e Nov 2 13:57 da14p2
crw-r----- 1 root operator 0xeb Nov 2 13:57 da15
crw-r----- 1 root operator 0x15f Nov 2 13:57 da15p1
crw-r----- 1 root operator 0x163 Nov 2 13:57 da15p2
crw-r----- 1 root operator 0xed Nov 2 13:52 da16
crw-r----- 1 root operator 0xef Nov 2 13:52 da17
crw-r----- 1 root operator 0xf1 Nov 2 13:52 da18
crw-r----- 1 root operator 0xf3 Nov 2 13:52 da19
crw-r----- 1 root operator 0xfc Nov 2 13:52 da1p1
crw-r----- 1 root operator 0xfe Nov 2 13:52 da1p2
crw-r----- 1 root operator 0xd1 Nov 2 13:52 da2
crw-r----- 1 root operator 0xf5 Nov 2 13:52 da20
crw-r----- 1 root operator 0xf7 Nov 2 13:52 da21
crw-r----- 1 root operator 0x120 Nov 2 13:52 da21p1
crw-r----- 1 root operator 0x121 Nov 2 13:52 da21p2
crw-r----- 1 root operator 0x100 Nov 2 13:52 da2p1
crw-r----- 1 root operator 0x102 Nov 2 13:52 da2p2
crw-r----- 1 root operator 0xd3 Nov 2 13:52 da3
crw-r----- 1 root operator 0xd5 Nov 2 13:52 da4
crw-r----- 1 root operator 0x104 Nov 2 13:52 da4p1
crw-r----- 1 root operator 0x106 Nov 2 13:52 da4p2
crw-r----- 1 root operator 0xd7 Nov 2 13:52 da5
crw-r----- 1 root operator 0x108 Nov 2 13:52 da5p1
crw-r----- 1 root operator 0x10a Nov 2 13:52 da5p2
crw-r----- 1 root operator 0xd9 Nov 2 13:52 da6
crw-r----- 1 root operator 0x10c Nov 2 13:52 da6p1
crw-r----- 1 root operator 0x10e Nov 2 13:52 da6p2
crw-r----- 1 root operator 0xdb Nov 2 13:52 da7
crw-r----- 1 root operator 0x110 Nov 2 13:52 da7p1
crw-r----- 1 root operator 0x112 Nov 2 13:52 da7p2
crw-r----- 1 root operator 0xdd Nov 2 13:57 da8
crw-r----- 1 root operator 0x80 Nov 2 13:57 da8p1
crw-r----- 1 root operator 0x117 Nov 2 13:57 da8p2
crw-r----- 1 root operator 0xdf Nov 2 13:52 da9

I see da3 listed, but when I try to run this command it says invalid argument:

root@freenas[/]# dd if=/dev/zero of=/dev/da3 bs=512 count=1 && dd if=/dev/zero of=/dev/da3 bs=512 count=1
dd: /dev/da3: Invalid argument
1+0 records in
0+0 records out
0 bytes transferred in 0.000140 secs (0 bytes/sec)
root@freenas[/]#

Any thoughts on that? I'm also getting an error when running gpart destroy:

root@freenas[/dev]# gpart destroy da3
gpart: arg0 'da3': Invalid argument
 

Tigersharke

BOfH in User's clothing
Administrator
Moderator
Joined
May 18, 2016
Messages
893
Try gpart show; you can also check the output of mount to see if the disk is in use, and dmesg may list newly noticed media. Other than that, I really do not know. I have only dealt with formatting hard drives or attaching them to a pool a couple of times myself. The listing in /dev should reflect what is accessible, but I think much of it is created from /etc/fstab, so it may not actually indicate recent additions. I am certainly not an expert; I expect others here on the forum will chime in and offer further assistance.
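Something along these lines, as a rough sketch (substitute the disk you are working on for da3):

gpart show da3        # does the disk have a recognizable partition table?
mount | grep da3      # is any partition on it currently mounted?
dmesg | tail -n 30    # any recent kernel messages about newly attached media?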
 

T_Haggerty

Cadet
Joined
Nov 3, 2020
Messages
8
Thanks Tigersharke, I appreciate it. Output of gpart show and mount below:

root@freenas[/dev]# gpart show
=> 40 467664816 mfid0 GPT (223G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 463470424 2 freebsd-zfs (221G)

=> 40 779091888 mfid1 GPT (372G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 774897496 2 freebsd-zfs (369G)

=> 40 1172123488 da0 GPT (559G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 1167929096 2 freebsd-zfs (557G)

=> 40 1172123488 da1 GPT (559G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 1167929096 2 freebsd-zfs (557G)

=> 40 1172123488 da2 GPT (559G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 1167929096 2 freebsd-zfs (557G)

=> 40 1171874920 da4 GPT (559G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 1167680528 2 freebsd-zfs (557G)

=> 40 1171874920 da5 GPT (559G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 1167680528 2 freebsd-zfs (557G)

=> 40 1171874920 da6 GPT (559G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 1167680528 2 freebsd-zfs (557G)

=> 40 1171874920 da7 GPT (559G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 1167680528 2 freebsd-zfs (557G)

=> 40 125031600 da21 GPT (60G)
40 532480 1 efi (260M)
532520 124485632 2 freebsd-zfs (59G)
125018152 13488 - free - (6.6M)

=> 40 1171874920 da8 GPT (559G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 1167680528 2 freebsd-zfs (557G)

=> 40 1171874920 da14 GPT (559G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 1167680528 2 freebsd-zfs (557G)

=> 40 1171874920 da15 GPT (559G)
40 88 - free - (44K)
128 4194304 1 freebsd-swap (2.0G)
4194432 1167680528 2 freebsd-zfs (557G)

root@freenas[/dev]# mount
freenas-boot/ROOT/FreeNAS-12.0-RC1 on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
tmpfs on /etc (tmpfs, local)
tmpfs on /mnt (tmpfs, local)
tmpfs on /var (tmpfs, local)
fdescfs on /dev/fd (fdescfs)
SSD_RAID on /mnt/SSD_RAID (zfs, local, nfsv4acls)
SSD_RAID/VM_Storage on /mnt/SSD_RAID/VM_Storage (zfs, NFS exported, local, nfsv4acls)
HDD_RAID on /mnt/HDD_RAID (zfs, local, nfsv4acls)
HDD_RAID/Haggerty-Share on /mnt/HDD_RAID/Haggerty-Share (zfs, local, nfsv4acls)
NETAPP on /mnt/NETAPP (zfs, local, nfsv4acls)
NETAPP/PLEX_Storage on /mnt/NETAPP/PLEX_Storage (zfs, local, nfsv4acls)
SSD_RAID/.system on /var/db/system (zfs, local, nfsv4acls)
SSD_RAID/.system/cores on /var/db/system/cores (zfs, local, nfsv4acls)
SSD_RAID/.system/samba4 on /var/db/system/samba4 (zfs, local, nfsv4acls)
SSD_RAID/.system/syslog-6c934beb66de482e8faef2d3b30acc82 on /var/db/system/syslog-6c934beb66de482e8faef2d3b30acc82 (zfs, local, nfsv4acls)
SSD_RAID/.system/rrd-6c934beb66de482e8faef2d3b30acc82 on /var/db/system/rrd-6c934beb66de482e8faef2d3b30acc82 (zfs, local, nfsv4acls)
SSD_RAID/.system/configs-6c934beb66de482e8faef2d3b30acc82 on /var/db/system/configs-6c934beb66de482e8faef2d3b30acc82 (zfs, local, nfsv4acls)
SSD_RAID/.system/webui on /var/db/system/webui (zfs, local, nfsv4acls)
SSD_RAID/.system/services on /var/db/system/services (zfs, local, nfsv4acls)
 

swissroot

Dabbler
Joined
Oct 19, 2020
Messages
10
Sorry, I wrote the last part on my iPad and missed a piece of it. I was not able to "reformat" the disk with dd; I had to use sg_format to get it working properly. In some threads I've read that putting the drive in a Windows box and deleting all partitions there, so the disk shows as blank unallocated space, helped people. Unfortunately I had no Windows box with SAS available...

I had to set the kernel flag first:
- sysctl kern.geom.debugflags=0x10

and then format with sg_format:
- sg_format --format --size=512 --six -v /dev/da9

After this I was able to create a pool on those disks without any issues. They came from an EMC array, so they had EMC's own formatting on them; when I checked on the HBA before the format, the reported sector size was 0 KB.

But keep in mind that sg_format takes a while! For a 10 TB disk it took me around 12+ hours until it was finished.

I'm not entirely sure about the kernel flag, as I'm not really deep into FreeBSD, but before I set it I was not able to do the format... maybe some FreeBSD guru here knows more about it!
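For reference, a rough sketch of the whole sequence, with a before/after block-size check added (sg_readcap ships alongside sg_format in sg3_utils; da9 is just my example device, so double-check yours):

sysctl kern.geom.debugflags=0x10                    # allow raw writes to an attached disk
sg_readcap --long /dev/da9                          # note the currently reported logical block length
sg_format --format --size=512 --six -v /dev/da9     # low-level format to 512-byte sectors (runs for hours)
sg_readcap --long /dev/da9                          # confirm it now reports 512-byte blocks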
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
This is the "520-bytes per sector" format that is common with NetApp/EMC drives purchased secondhand. I don't know that the debug flags are necessary, I don't recall needing it (and I'll have to dig around to see if I still have a 520b disk that I haven't converted already in order to test it)

Edit - Found one, I needed --fmtpinfo=0 as an additional param, did not need --six but that's drive-specific. No debugflags needed. (Now we play the waiting game.)

The sg_format command posted by @swissroot is correct; as suggested by @Mlovelace in another thread, kick off a tmux session beforehand as this command will take a very long time.
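A minimal sketch of kicking it off inside tmux so a dropped SSH session doesn't kill the format (da9 is an example device; whether you need --fmtpinfo=0 or --six is drive-specific, as noted above):

tmux new -s sgfmt                                        # start a named session
sg_format --format --size=512 --fmtpinfo=0 -v /dev/da9
# detach with Ctrl-b then d; reattach later with: tmux attach -t sgfmt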
 
Last edited:

T_Haggerty

Cadet
Joined
Nov 3, 2020
Messages
8
Thanks Swissroot and HoneyBadger.

Looks like sg_format is working; these are 600 GB drives, so not too long of a wait.
 

T_Haggerty

Cadet
Joined
Nov 3, 2020
Messages
8
Yes - they are 15k SAS units. I'm still getting an error when trying to wipe, so I may need the --fmtpinfo=0; I will try that on the next one.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Can you dump the output of smartctl -a for a device? If it shows "Formatted with type 1 protection" or similar just below the LU size, then you'll need to force the format with --fmtpinfo=0.

You may also need to hard power cycle the devices between attempts if they've failed, since a failed attempt may leave the disk in a locked state. Hot-swap is your friend here.
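A quick way to check several drives at once, as a sketch (the device list here is made up, and the exact wording of the protection line varies by drive):

for d in da3 da9 da10; do
  echo "== $d =="
  smartctl -a /dev/$d | grep -iE 'protection|block size'
done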
 

T_Haggerty

Cadet
Joined
Nov 3, 2020
Messages
8
I added --fmtpinfo=0 to a drive and was able to wipe it.

Here is smartctl -a for one that I did without --fmtpinfo=0:
root@freenas[~]# smartctl -a /dev/da3
smartctl 7.1 2019-12-30 r5022 [FreeBSD 12.2-PRERELEASE amd64] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor: NETAPP
Product: X422_HCOBD600A10
Revision: NA02
Compliance: SPC-4
User Capacity: 600,127,266,816 bytes [600 GB]
Logical block size: 512 bytes
Rotation Rate: 10020 rpm
Form Factor: 2.5 inches
Logical Unit id: 0x5000cca025c20838
Serial number: PVKER13B
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Wed Nov 4 09:53:57 2020 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Enabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Current Drive Temperature: 25 C
Drive Trip Temperature: 85 C

Manufactured in week 25 of year 2012
Specified cycle count over device lifetime: 50000
Accumulated start-stop cycles: 29
Specified load-unload count over device lifetime: 300000
Accumulated load-unload cycles: 2149
Elements in grown defect list: 0

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/      errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:   1495751736   691967        23         0    16307264      76931.494           0
write:           2    97582         8         0      107379       7294.609           0
verify: 17694212205  6491846      171         0     3448870     176852.683           0

Non-medium error count: 6736

No Self-tests have been logged
 

TripitakaBC

Cadet
Joined
Mar 10, 2015
Messages
4
I just want to add my thanks to the OP for creating this thread, and particularly to swissroot for the link that made it all come together.

I bought six 4 TB Seagate drives from a recycler thinking I could just plug them into my Dell T620 bays and load them up as mirrored vdevs. It's a new build I'm playing with, and while I waited for my H220 (flashed to LSI HBA firmware) from The Art of Server, I noticed that the H710P RAID card could see all the devices, but only one was open and usable and five were marked as 'blocked'. That is a well-documented issue that should be fixed by updating the card's firmware, but mine wasn't playing ball. I figured the H220/LSI would see the drives and make them available once I installed it.

No bueno! The H220 could see the drives, but at 0.00 GB; at least I could format them through the H220's BIOS interface, though only one at a time and at 15 hours each. I thought that was the end of the issue, but I suspected it might not be.

Fast forward: everything got formatted through the H220, I fired up TrueNAS CORE, and it could see all the drives but threw an error when I tried to create the vdevs. Searching that error brought me to this thread, and the link provided by swissroot gave me the info I needed to sg_format the drives in parallel to 512-byte sectors rather than 520-byte.

For those reading this with the same issue, what I've learned so far is that used HDDs are often good value, but if they have been used in NetApp/EMC hardware, they need to be attached to an HBA (not a RAID card) and reformatted to 512-byte sectors in order to work as usable drives in FreeNAS/TrueNAS.

The reason this is fairly unique and hard to track down is that the issue seems confined to situations where drives that have been used in storage arrays that utilise 520-byte sectors are reused in equipment expecting 512-byte sectors.
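If you're checking a secondhand drive before trying to use it, this is a rough way to see what sector size it currently reports (sg_readcap is part of the same sg3_utils package as sg_format; da3 is just an example device):

sg_readcap --long /dev/da3                     # prints "Logical block length=..." for the drive
smartctl -a /dev/da3 | grep -i 'block size'    # smartctl reports the same thing as "Logical block size"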
 

vLabStu

Cadet
Joined
Mar 4, 2021
Messages
3
Just wanted to say thanks to everyone on this post.

Managed to get an IBM 5887 shelf from work, as it was no longer needed and I could use it in my lab.

It has 24 x 300 GB disks in it; I could not get it working at all until I found this thread and realized the disks are formatted with 528-byte sectors.

I managed to use the above command, "sg_format --format --size=512 --six -v /dev/daX" (with X being the disk number), and after that I could successfully create a pool on the disk. Now I need to re-run this for 23 more disks :(

Maybe I'll see if there is a way to do more than one at a time; TripitakaBC managed this, so there must be a way with the software. Will keep digging.
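One rough way to run several at once, assuming an sh/zsh shell and that the device list has been double-checked against the correct disks (run it inside tmux, since each format takes hours; the devices below are just placeholders):

for d in da2 da3 da4 da5; do
  sg_format --format --size=512 --six -v /dev/$d > /tmp/sg_format_$d.log 2>&1 &
done
wait    # returns only once every background format has finished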

Thanks again. This is the start of my TrueNAS journey; my existing SAN is a Dell R510 with 12 disks running Windows as an iSCSI target, and it's just soooo slow, even with 10 GbE NICs. I think this is mostly because it's SATA disks and a mix of a lot of older drives. This shelf will just be a mirrored RAID-Z2 pool for hopefully fast-ish access to VMs; I will have more disks in the HP DL380p for other LUNs.
 

vLabStu

Cadet
Joined
Mar 4, 2021
Messages
3
Hi jgreco.

Apologies, I'm still working out the RAID levels in TrueNAS/ZFS. I am more familiar with RAID 10, which is what I thought that layout was; obviously I have more reading to do to figure out how best to configure this. I need another HBA before it's fully set up, as I'm only running on one controller for now, but I will get there.

Thanks for the link; I will take a look over it.
 

zookeeper21

Dabbler
Joined
Jan 25, 2021
Messages
33
I am having the same issue now with two drives. I tried everything suggested here, but I get the same problem no matter what. Is it some kind of bug?
 