Old disks from Intel RAID: unable to GPT format "ada0" (gpart: geom 'ada0': Operation not permitted)

Status: Not open for further replies.

yacenty

Explorer
Joined
Apr 30, 2018
Messages
54
Hi,
I'm trying to build my first FreeNAS box to check whether it can run ResourceSpace. I tried it in VirtualBox first but ran into some issues, so I decided to do a trial on my personal PC.
It's an Intel i5-7500 on an ASUS Prime Z270-P with 16 GB RAM. The main disk is an NVMe Samsung 960 Pro.

For FreeNAS I'm using a 16 GB SanDisk USB flash drive, and I've found two old 1 TB WD RE3 drives, which were running in RAID in my very old Intel personal PC.
Now, in the BIOS those disks are set to AHCI, but FreeNAS sees them as a GEOM_RAID array: INTEL-823caed8.
And I cannot format them.

Code:
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <WDC WD1002FBYS-02A6B0 03.00C06> ATA8-ACS SATA 2.x device
ada0: Serial Number WD-WMATV6906363
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
da0 at umass-sim0 bus 0 scbus5 target 0 lun 0
ada0: Command Queueing enabled
ada0: 953869MB (1953525168 512 byte sectors)
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
da0: <SanDisk Ultra USB 3.0 1.00> Removable Direct Access SPC-4 SCSI device
da0: Serial Number 4C530001210616101524
da0: 400.000MB/s transfers
da0: 14663MB (30031250 512 byte sectors)
da0: quirks=0x2<NO_6_BYTE>
ada1: <WDC WD1002FBYS-02A6B0 03.00C06> ATA8-ACS SATA 2.x device
ada1: Serial Number WD-WMATV7046118
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 953869MB (1953525168 512 byte sectors)
random: unblocking device.
Trying to mount root from zfs:freenas-boot/ROOT/default []...
GEOM_RAID: Intel-823caed8: Array Intel-823caed8 created.
Root mount waiting for: GRAID-Intel
[last message repeated ~28 more times]
GEOM_RAID: Intel-823caed8: Force array start due to timeout.
GEOM_RAID: Intel-823caed8: Disk ada0 state changed from NONE to STALE.
GEOM_RAID: Intel-823caed8: Disk ada1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-823caed8: Subdisk Volume0:0-ada1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-823caed8: Array started.
GEOM_RAID: Intel-823caed8: Disk ada0 state changed from STALE to ACTIVE.
GEOM_RAID: Intel-823caed8: Subdisk Volume0:1-ada0 state changed from NONE to NEW.
GEOM_RAID: Intel-823caed8: Subdisk Volume0:1-ada0 state changed from NEW to ACTIVE.
GEOM_RAID: Intel-823caed8: Volume Volume0 state changed from STARTING to OPTIMAL.
GEOM_RAID: Intel-823caed8: Provider raid/r0 for volume Volume0 created.



Is there any way to destroy this old array?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Try the following commands to destroy all partition information on the disks. This assumes the old WD RE3 drives have no data you want to save and are still located at ada0 and ada1.
gpart destroy -F ada0
gpart destroy -F ada1

It looks like it's trying to assemble a GEOM-based array.
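Since the boot log shows the kernel has already assembled the disks into a GEOM_RAID array (the Intel softraid metadata lives on the disks themselves), it may also be necessary to remove that metadata explicitly with FreeBSD's graid(8) before `gpart` will cooperate. A sketch, using the array name from the boot log above; these commands are destructive, so double-check the device names first:

```shell
# Show the assembled softraid array and its member disks
graid status

# Delete the Intel array and wipe its on-disk metadata from the members
# (array name taken from the GEOM_RAID boot messages)
graid delete Intel-823caed8

# Then clear any leftover partition tables on each former member
gpart destroy -F ada0
gpart destroy -F ada1
```

After this the disks should appear as plain ada0/ada1 and be usable for a new pool.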
 

yacenty

Explorer
Joined
Apr 30, 2018
Messages
54
Thanks, now it's working fine.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I'm happy to help!
 

yacenty

Explorer
Joined
Apr 30, 2018
Messages
54
My plan is to buy an LSI 9211 and five disks (4 TB WD Red).
Would it be easy to extend what I already have on these old 1 TB WD RE3 drives?
Or is it better to do nothing special and start over once the LSI and the new hard drives are in the PC?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
That kind of depends on how you plan to use your pool. Personally, I have eight 3 TB drives set up as striped mirrors, so I lose half of my space but get great performance for my virtual environment.

Generally there are few to no issues with expanding a pool. ZFS balances new writes across vdevs but does not redistribute existing data. Also, you cannot extend a vdev, but you can add vdevs.
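To illustrate the "add vdevs, don't extend them" point: expanding a pool means attaching a whole new vdev alongside the existing ones. A sketch, with an illustrative pool name ("tank") and device names:

```shell
# Existing pool layout
zpool status tank

# Add a second vdev (here, a new RAIDZ2 of five disks) to the pool.
# Capacity grows immediately; new writes are striped across both vdevs,
# but existing data stays where it was written.
zpool add tank raidz2 da1 da2 da3 da4 da5
```

Note that `zpool add` is one-way: once a vdev is part of the pool, it cannot simply be removed later, so it's worth planning the layout before running it.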
 

yacenty

Explorer
Joined
Apr 30, 2018
Messages
54
I thought I should use all five new drives in one volume, in RAID6, so I would still get over 10 TB of space, and I hope the performance would not be too bad.
In the long term I would get rid of those old 1 TB WD RE3 drives; they are about 10 years old.
The main purpose of my FreeNAS is backing up pictures and some documents.
It would be great to set up a DAM (digital asset management) application, but I gave up today: there is no tutorial for FreeBSD, and this operating system is completely new to me. In general I'm a Debian guy, with average knowledge.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
RAID6, huh? *cough* RAIDZ2 *cough*
But yeah, two pools would be nice too. That gives you the flexibility to retire the old disks without having to destroy and rebuild your array.
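For the ZFS equivalent of RAID6, the pool would be created as a single RAIDZ2 vdev. A sketch, with illustrative pool and device names; with five 4 TB drives, two disks' worth goes to parity, leaving roughly 12 TB raw before ZFS overhead, which matches the "over 10 TB" estimate:

```shell
# Create a 5-disk RAIDZ2 pool (double parity, survives any two disk failures)
zpool create tank raidz2 da1 da2 da3 da4 da5

# Verify the layout and usable space
zpool status tank
zfs list tank
```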
 

yacenty

Explorer
Joined
Apr 30, 2018
Messages
54
What about the boot drive for FreeNAS? USB or a small SSD?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I use Kingston DataTraveler SE9 drives. They're SLOW on system updates but have otherwise been rock solid. Most people will say you should use an SSD, but whatever; more bays for storage.
 