As noobsauce80 said, delete your zpool first. The OS is protecting the GPT partition info, I believe.

Code:
[root@freenas] ~# dd if=/dev/zero of=/dev/ada0 bs=2048k count=50k
dd: /dev/ada0: Operation not permitted
Looks like I can't write directly to the dev - not sure if this is normal or not.
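It is normal on FreeBSD-based FreeNAS: GEOM write-protects a provider that still carries metadata it recognizes (the GPT and/or ZFS labels), which is why destroying the pool first is the clean route. A rough sketch of the cleanup - the pool/device names are placeholders, the debugflags sysctl is a last-resort assumption on my part, and every step here is destructive:

```shell
# Sketch only -- device and pool names are assumed, and this WIPES the disk.
# Run `zpool destroy <pool>` first so ZFS releases the disk, then:
wipe_disk() {
  disk=$1
  gpart destroy -F "$disk"            # drop the GPT that GEOM is protecting
  # last resort if GEOM still refuses raw writes ("footshooting" mode):
  sysctl kern.geom.debugflags=0x10
  dd if=/dev/zero of="/dev/$disk" bs=2048k count=50k
  sysctl kern.geom.debugflags=0       # restore write protection afterwards
}
# usage (after zpool destroy): wipe_disk ada0
```

The function only wraps the steps in order; nothing runs until you call it explicitly.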
The whole point of ZFS is undermined if there is an underlying file system on the disk - which there is in ESXi.
Code:
dd if=/dev/zero of=/dev/ada0 bs=2048k count=25k
25600+0 records in
25600+0 records out
53687091200 bytes transferred in 469.029439 secs (114464225 bytes/sec)

dd if=/dev/ada0 of=/dev/null bs=2048k count=25k
25600+0 records in
25600+0 records out
53687091200 bytes transferred in 400.069497 secs (134194413 bytes/sec)
Code:
dd if=/dev/zero of=/dev/ada2 bs=2048k count=25k
25600+0 records in
25600+0 records out
53687091200 bytes transferred in 430.287495 secs (124770280 bytes/sec)

dd if=/dev/ada2 of=/dev/null bs=2048k count=25k
25600+0 records in
25600+0 records out
53687091200 bytes transferred in 410.287855 secs (130852255 bytes/sec)
They look fine to me. Wait, never mind - someone changed the test. It's 50k, not 25k.

From what I can tell, these results are within the expected range for 7200 RPM SATA II drives. Let me know if I'm out of line here.
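As a sanity check, dd's bytes/sec figure converts to the more familiar MB/s (decimal megabytes) with a one-liner; the 114464225 figure is the write result from above:

```shell
# Convert dd's bytes/sec figure to decimal MB/s (truncated to an integer).
awk 'BEGIN { printf "%d MB/s\n", 114464225 / 1000000 }'
# -> 114 MB/s
```

Sequential rates of roughly 110-135 MB/s are plausible for 7200 RPM SATA II drives, which matches the numbers posted above.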
Raw before I created the mirror:
This will do terrible things to your array. Do not try it on a disk that is part of any kind of array.
Code:
# dd if=/dev/zero of=/dev/ada0 bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 543.106874 secs (197703597 bytes/sec)

# dd if=/dev/ada0 of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 537.182353 secs (199884046 bytes/sec)
"You need to test both ways if there were any issues. In a zpool all the drives will be accessed concurrently."

I did run both at the same time in separate terminal windows. I wondered if that might cause issues, but I thought it was the recommended approach.
OK, got it. The drives were connected directly to the motherboard ports via a SATA cable - no drive bays.
"If a single drive comes back with the same rate, then you don't need to test the 2nd one."

I'll retest again at 50k counts, one drive at a time, and repost the results.
Code:
dd if=/dev/zero of=/dev/ada0 bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 937.129852 secs (114577699 bytes/sec)

dd if=/dev/ada0 of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 815.340635 secs (131692421 bytes/sec)
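The write-then-read sequence used in these tests can be wrapped in a small helper so each drive is exercised one at a time. This is only a sketch, not FreeNAS tooling; pointing it at a raw device destroys that device's contents, and the `ada0`/`ada2` names in the usage comment are this thread's devices, not a general default:

```shell
# bench TARGET COUNT: sequential write then read of TARGET with dd,
# printing dd's transfer-summary line for each direction.
# TARGET may be a scratch file, or (DESTRUCTIVELY) a raw device.
bench() {
  target=$1
  count=$2                                     # number of 2 MiB blocks
  dd if=/dev/zero of="$target" bs=2048k count="$count" 2>&1 | tail -n 1
  dd if="$target" of=/dev/null bs=2048k count="$count" 2>&1 | tail -n 1
}
# one drive at a time, 100 GiB each, as in the tests above:
# for disk in ada0 ada2; do bench "/dev/$disk" 50k; done
```

Running the drives sequentially like this avoids the concurrent-access question raised earlier in the thread.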
w/o backplane

Code:
[root@localhost ~]# dd if=/dev/zero of=/dev/sdb1 bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 999.778 seconds, 107 MB/s
[root@localhost ~]# dd if=/dev/sdb1 of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 998.793 seconds, 108 MB/s

w/ backplane

Code:
[root@localhost ~]# dd if=/dev/zero of=/dev/sdb1 bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 999.573 seconds, 107 MB/s
[root@localhost ~]# dd if=/dev/sdb1 of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 999.061 seconds, 107 MB/s
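Putting a number on the comparison: both write runs moved the same 107374182400 bytes, so the elapsed times alone give the backplane's cost. Using the two write times reported above:

```shell
# Percent change in write throughput, backplane vs direct, from the
# elapsed times of the two identical 107374182400-byte transfers above.
awk 'BEGIN { printf "%.2f%%\n", (999.573 / 999.778 - 1) * 100 }'
# -> -0.02%
```

So in this test the backplane made essentially no difference - well within run-to-run noise.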
Motherboard: Supermicro H8QGi-F (SP5100 south bridge)
HDD: two WD RE4 500GB, both connected from on-board SATA ports to a CSE-M35T-1
I can't for the life of me figure out why I have three of these backplanes doing the exact same thing, but when bypassed, the drives perform well. There are about a hundred reviews on newegg, and none that I saw complained about speed problems. I have run across one person on smallnetbuilder that used one and had a similar problem.
They are thinking that the mobo SATA signals may not be strong enough for the added circuitry of the backplane.
Time to throw it in the trash.

The most recent SMART failure I've had is Reallocated_Sector_Ct. One drive tripped that attribute numerous times, and the count was climbing steadily - it was at about 1224 when I gave up on it. I have that drive disconnected now.
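For anyone tracking an attribute like that, the raw value can be filtered out of `smartctl -A` output (smartmontools). The sample line below is illustrative, not captured from the poster's drive - its 1224 raw value just mirrors the count mentioned above, and `/dev/ada0` in the comment is a placeholder:

```shell
# Print the raw value (last field) of the Reallocated_Sector_Ct row.
# Normally you would feed this from:  smartctl -A /dev/ada0
parse_realloc() {
  awk '/Reallocated_Sector_Ct/ { print $NF }'
}

# Illustrative attribute line in smartctl's table layout:
echo "  5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 1224" \
  | parse_realloc
# -> 1224
```

A raw count that keeps climbing between checks, as described above, is the classic sign the drive is consuming its spare sectors and should be retired.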