SAS Drives - Extremely Poor IOPS

iamrt

Cadet
Joined
Jun 14, 2020
Messages
4
Good afternoon,

I am experiencing an issue with some SAS HGST DKR5D-J900SS 900GB 10k 2.5" drives. They are showing extremely low throughput, to the tune of ~30 KB/s mean during a badblocks -ws run.

My hardware is as follows:
Dell R730xd 24x2.5"
2 x E5-2680 V3
Dell H730 - HBA Mode (confirmed)

Attached is a gstat pulled while running badblocks on every drive.


I have verified that write-cache is enabled on each drive. I stopped the run on one of the drives and restarted it afterward to see if that would make a difference, but to no avail.

Are these drives trash, or are there settings that may need to be tweaked in the BIOS or FreeNAS?

Any assistance would be greatly appreciated!
 

Attachments

  • 1592161136382.png

iamrt

Cadet
Joined
Jun 14, 2020
Messages
4
Forgot to include SMART data for one of the offending drives.

1592162116439.png


They've been around a while, but I would hope they'd be a bit more performant than KB/s-level speeds for SAS drives.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
badblocks is a strange way to look at drive performance... I'd suggest using a more normal testing process with known block sizes.
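For instance, a sequential pass with a fixed block size gives you a baseline MB/s figure you can compare across drives. A sketch (the first dd just builds a scratch file so the snippet runs anywhere; on real hardware you'd read the raw device instead, and /dev/da5 below is only a placeholder name):

```shell
# Build a 64 MiB scratch file, then do a fixed-block-size read of it.
# On a real drive you would read the raw device instead, e.g.
#   dd if=/dev/da5 of=/dev/null bs=1M count=1024   # da5 is a placeholder
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 2>/dev/null
dd if=/tmp/ddtest of=/dev/null bs=1M
rm -f /tmp/ddtest
```

The second dd prints bytes transferred and throughput on completion, so you get a directly comparable number per drive instead of whatever badblocks happens to do internally.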
 

iamrt

Cadet
Joined
Jun 14, 2020
Messages
4
I was running through the burn-in testing guide from qwertymodo. This is just the most substantial issue; aside from it, I was unable to create an encrypted pool with these same drives and kept receiving geli attach errors for a random gptid. Also, zero-wiping a single 900GB drive was on track to take nearly a week.

With that being said, I'd be happy to run through some dd tests to provide a little more context. I'll post some results as soon as I can.
 

iamrt

Cadet
Joined
Jun 14, 2020
Messages
4
Ran a couple of dd if=/dev/zero tests on two pools: MirrorTest with the questionable drives, and MirrorTest2 with known-good drives.

Here are the results of dd if=/dev/zero of=/mnt/* bs=4M count=10000.

Ctrl-t during the run with gstat running:

dd-2mirrors.png


Completed test:

dd-mirror-comparison.png


I let the test complete and came back to a Degraded state on the first pool. I guess that's my answer: just trash drives.

Side note: these tests were run with write-cache disabled. I planned on running another test with write-cache enabled but the degraded state message made it moot (cow's opinion).

One question I would like to ask, though: how do I run a quick wipe on multiple drives from the command line? I saw mention of using dd for this, but what parameters would be equivalent to the quick wipe in the GUI?
 