Poor write performance

Status
Not open for further replies.
Joined
Dec 29, 2014
Messages
1,135
I am running FreeNAS 11.1-U4 on a Cisco UCS C240-M3, and I am experiencing what seems to be poor write performance. The system has dual E5-2660 V2 CPUs, 128GB of RAM, and an LSI 9271 controller with all drives in JBOD mode. FreeNAS is acting as an NFS datastore for ESXi 6.5 and 6.0 hosts. The NIC where FreeNAS talks to the ESXi hosts is a Chelsio T520-CR. The drives are Seagate ST91000640NS (1TB SATA, 7200 rpm). I have two vdevs of 8 drives each, and a spare drive. These are in a single pool configured as RAID-Z2. I will attach the hardware config.

When I move VMs from FreeNAS to local storage on the ESXi hosts, I get between 6.5 and 8.0Gb/sec of throughput, and I am thrilled with that. When I move VMs from local storage on the ESXi hosts back to FreeNAS, I get 400-500Mb/sec. I expected something of a dropoff, but not one quite that severe.

I have messed with the hardware config over the past year, so I turned off autotune and deleted all tunables, rebooted, enabled autotune, and rebooted again. I read through the hardware and ZFS guides to try and do everything the right way, but no joy so far. I don't know if it is a problem with my hardware (drives in particular) or not. Looking at the stats, the drives are near 100% utilization when reading, but seem to cap around 30% when I am writing. I have 2 hosts with local storage of comparable vintage, so I don't think the hosts initiating the writes are what is capping things. I am running out of ideas.
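For anyone who wants to see the same thing, the per-disk busy percentages and per-vdev throughput can be watched live from the console; a quick sketch (the 5-second interval is arbitrary):

Code:
# per-disk %busy (physical providers only)
gstat -p

# per-vdev and per-disk bandwidth for the pool, sampled every 5 seconds
zpool iostat -v RAIDZ2-I 5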
 

Attachments

  • freenas2-dm.txt
    35 KB

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
LSI 9271 controller
Is that in the hardware guide?
I have a feeling it isn't and it is probably one of the sources of the trouble you are having.

FreeNAS® Quick Hardware Guide
https://forums.freenas.org/index.php?resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide (Rev 1e) 2017-05-06
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/

I have two vdevs of 8 drives each, and a spare drive. These are in a single pool configured as RAID-Z2.
This is also not a recommended configuration for virtualization. Are you using iSCSI / block storage? Even so, the workload is likely a lot of small, random writes. Mirror vdevs are called for to get the IOPS. VMs (ESXi for certain) use sync writes in an effort to prevent corruption of the VM.
Looking at the stats, the drives are near 100% utilization when reading, but seem to cap around 30% when I am writing.
The problem with your write performance is likely caused by the ZFS Intent Log. The function of that portion of the file system is to store sync writes to stable storage before acknowledging the write to the VM. The way this works without a SLOG (Separate LOG device) is that the write is committed to what I think of as a temporary working space on the pool, then the ack goes to the VM, and then the write is done again to the permanent storage space on the pool when the regular transaction group is committed. This makes everything much slower. If you add a SLOG to the system, it will be faster. Here is some info on that:

Testing the benefits of SLOG
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561
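For what it's worth, attaching a log vdev to an existing pool is a one-line operation; a rough sketch, where the device name (nvd0) is only a placeholder for whatever SLOG device you end up with (the GUI does the same thing when you extend the volume and add a ZIL/log device):

Code:
# attach a single SLOG device to the existing pool (device name is a placeholder)
zpool add RAIDZ2-I log nvd0

# or, to protect in-flight sync writes, use a mirrored pair
zpool add RAIDZ2-I log mirror nvd0 nvd1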
I have messed with the hardware config over the past year
I am running out of ideas.
I sure wish you had asked sooner.
Here are some suggested SLOG devices:
https://www.servethehome.com/buyers...as-servers/top-picks-freenas-zil-slog-drives/
 
Joined
Dec 29, 2014
Messages
1,135
Is that in the hardware guide?
I have a feeling it isn't and it is probably one of the sources of the trouble you are having.

Yes, it is. It does say that LSI controllers tend to be stable. It also mentions that if it is a hardware-RAID-capable controller, you shouldn't use the RAID function; you should make all the drives JBOD and let FreeNAS control them, which is what I have done. I tried to take into account what the ZFS primer says. FYI, here is the output of zpool status.

Code:
  pool: RAIDZ2-I
 state: ONLINE
  scan: scrub repaired 0 in 0 days 03:10:01 with 0 errors on Mon Apr 9 18:42:12 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        RAIDZ2-I                                        ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/bd041ac6-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/bdef2899-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/bed51d90-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/bfb76075-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/c09c704a-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/c1922b7c-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/c276eb75-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/c3724eeb-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/a1b7ef4b-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a2eb419f-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a41758d7-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a5444dfb-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a6dcd16f-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a80cd73c-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a94711a5-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/aaa6631d-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
        spares
          gptid/4abff125-23a2-11e8-a466-e4c722848f30    AVAIL

errors: No known data errors

  pool: SYS-MiRROR
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:07 with 0 errors on Sun Apr 8 00:00:07 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        SYS-MiRROR                                      ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/3c0e5fc1-a7f1-11e7-8a5c-e4c722848f30  ONLINE       0     0     0
            gptid/3dd26070-a7f1-11e7-8a5c-e4c722848f30  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:04:06 with 0 errors on Sat May 12 03:49:06 2018
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          da19p2        ONLINE       0     0     0

errors: No known data errors


This is also not a recommended configuration for virtualization. Are you using iSCSI / block storage? Even so, the workload is likely a lot of small, random writes. Mirror vdevs are called for to get the IOPS. VMs (ESXi for certain) use sync writes in an effort to prevent corruption of the VM.

The problem with your write performance is likely caused by the ZFS Intent Log. The function of that portion of the file system is to store sync writes to stable storage before acknowledging the write to the VM. The way this works without a SLOG (Separate LOG device) is that the write is committed to what I think of as a temporary working space on the pool, then the ack goes to the VM, and then the write is done again to the permanent storage space on the pool when the regular transaction group is committed. This makes everything much slower. If you add a SLOG to the system, it will be faster. Here is some info on that:

It is defined as an NFS datastore in ESXi. I am starting to wonder if perhaps some of this is an issue with ESXi and NFS. I do use FreeNAS for other things (a few CIFS shares and such). I have 17 drives allocated to user-facing storage functions. With 2 vdevs of 8 disks each in RAID-Z2, loosely that should give me 75% of the physical space to use. If I understand what you are saying, you are suggesting that I should do 8 vdevs of RAID1. Is that correct? Loosely speaking, that would give me 50% of the physical space to use which is a non-trivial loss. It is a home lab, so I don't need to wring every possible IOP out of it. If my write performance were 33-50% of the read performance, as compared to the current 14%, I think I would be happy with that. I am going to investigate the ESXi NFS write performance as well as the SLOG. I have enough space available in other places that I can shuffle things around and rebuild if that is what is required. I also have a fair amount of RAM to work with, as well as solid UPS protection (easily 20-30 minutes of runtime, and FreeNAS is monitoring the UPS).
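Spelling out the capacity math behind those percentages (ignoring the spare drive and ZFS overhead):

Code:
# 2 x 8-wide RAID-Z2 : 2 x (8 - 2) = 12 data disks out of 16  -> ~75% usable
# 8 x 2-way mirrors  : 8 x 1       =  8 data disks out of 16  -> ~50% usable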
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Another option is to put your NFS VM data on a dedicated SSD pool (perhaps mirrored).

Anyway, ESXi NFS writes are sync writes. Sync writes to an HDD-based pool without a SLOG are *very* slow.

The IOPS thing is another element to this: even with fast sync writes, you'll still be very random-I/O bound, as each vdev contributes the IOPS of a single disk to the pool, which is why having more vdevs (with 2 disks per vdev in mirrors) in your pool increases the IOPS. Also, a raidz2 vdev has a larger minimum block write than a mirror vdev, which is another reason to use mirrors for block and VM storage.

But depending on your VM requirements, it may be best to have an HDD RAID-Z2 pool for bulk storage and a fast SSD pool for the VM storage.
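To make the vdev/IOPS point concrete, this is roughly what a pool of mirrors looks like at creation time; the device and pool names are placeholders, and an existing RAID-Z2 pool can't be reshaped in place, so it would mean destroying and recreating the pool:

Code:
# eight 2-way mirror vdevs -> roughly 8x the random write IOPS of a single disk
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11 \
  mirror da12 da13 \
  mirror da14 da15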
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
of RAID1. Is that correct?
No, ZFS does not use that terminology. A mirror vdev is a mirror vdev; there is no RAID1.
For example, my pool that I use for iSCSI:
Code:
  pool: iSCSI
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:08:25 with 0 errors on Mon May 14 00:08:27 2018
config:

		NAME											STATE	 READ WRITE CKSUM
		iSCSI										   ONLINE	   0	 0	 0
		  mirror-0									  ONLINE	   0	 0	 0
			gptid/a1ae863b-2a0a-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
			gptid/a41c596b-2a0a-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
		  mirror-2									  ONLINE	   0	 0	 0
			gptid/0243b38d-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
			gptid/05e0a0d6-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
		  mirror-3									  ONLINE	   0	 0	 0
			gptid/0ca57011-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
			gptid/0dce1e70-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
		  mirror-4									  ONLINE	   0	 0	 0
			gptid/11269a55-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
			gptid/1237afc0-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
		  mirror-5									  ONLINE	   0	 0	 0
			gptid/70a040a4-5163-11e8-a76f-0cc47a9cd5a4  ONLINE	   0	 0	 0
			gptid/194a8867-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
		  mirror-6									  ONLINE	   0	 0	 0
			gptid/1ee68b6a-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
			gptid/229c431a-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
		  mirror-7									  ONLINE	   0	 0	 0
			gptid/25ec5bdf-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
			gptid/275c216f-2a0b-11e8-bbf6-002590aecc79  ONLINE	   0	 0	 0
		  mirror-8									  ONLINE	   0	 0	 0
			gptid/a9cd1b9f-2af2-11e8-8661-002590aecc79  ONLINE	   0	 0	 0
			gptid/ab1dbee6-2af2-11e8-8661-002590aecc79  ONLINE	   0	 0	 0

errors: No known data errors
Loosely speaking, that would give me 50% of the physical space to use which is a non-trivial loss.
That is true, which is the reason I have another pool that I use for mass storage:
Code:
  pool: Emily
 state: ONLINE
  scan: scrub repaired 0 in 0 days 02:28:24 with 0 errors on Thu May 10 10:28:25 2018
config:

		NAME											STATE	 READ WRITE CKSUM
		Emily										   ONLINE	   0	 0	 0
		  raidz2-0									  ONLINE	   0	 0	 0
			gptid/1b9f316d-da26-11e7-b781-002590aecc79  ONLINE	   0	 0	 0
			gptid/bbf7a1c8-73ee-11e7-81aa-002590aecc79  ONLINE	   0	 0	 0
			gptid/55f074a3-cdb5-11e7-bdb7-002590aecc79  ONLINE	   0	 0	 0
			gptid/78e8e147-d97d-11e7-b781-002590aecc79  ONLINE	   0	 0	 0
			gptid/78b8bf90-d9b1-11e7-b781-002590aecc79  ONLINE	   0	 0	 0
			gptid/90d74abf-d18c-11e7-bdb7-002590aecc79  ONLINE	   0	 0	 0
		  raidz2-1									  ONLINE	   0	 0	 0
			gptid/87a407df-d20b-11e7-bdb7-002590aecc79  ONLINE	   0	 0	 0
			gptid/feeeb5ef-d307-11e7-bdb7-002590aecc79  ONLINE	   0	 0	 0
			gptid/ee8fb387-d2ba-11e7-bdb7-002590aecc79  ONLINE	   0	 0	 0
			gptid/5c9ac389-d262-11e7-bdb7-002590aecc79  ONLINE	   0	 0	 0
			gptid/9615c0df-d2e3-11e7-bdb7-002590aecc79  ONLINE	   0	 0	 0
			gptid/2698b33e-d23a-11e7-bdb7-002590aecc79  ONLINE	   0	 0	 0

errors: No known data errors
Virtual machines will not run well from a RAIDz2 pool because the IOPS are limited. My iSCSI pool isn't very good right now either because the SLOG is not connected. I was doing some testing and removed it and have not put it back yet.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What's the output of camcontrol devlist?

DaveY

Contributor
Joined
Dec 1, 2014
Messages
141
A quick way to see if it's caused by the sync write penalty is just to disable sync writes and test again. If speed improves, then add a SLOG and you're good to go. It could also be the block size that ESXi is using for NFS. Try stepping it up to 64K max and see if speed improves. You can go higher if using NFSv4.
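A minimal sketch of that test, assuming the VMs live on a dataset like RAIDZ2-I/vmware (the dataset name is just a placeholder):

Code:
# testing only -- with sync disabled, a crash or power loss can corrupt the VMs
zfs set sync=disabled RAIDZ2-I/vmware

# ...re-run the storage vMotion / write test here...

# put it back to the default when done
zfs set sync=standard RAIDZ2-I/vmware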
 
Joined
Dec 29, 2014
Messages
1,135
Virtual machines will not run well from a RAIDz2 pool because the IOPS are limited. My iSCSI pool isn't very good right now either because the SLOG is not connected. I was doing some testing and removed it and have not put it back yet.

I ordered an Intel Optane SSD 900P to use as an SLOG device, and it arrives tomorrow. I will re-run the tests and see how things perform after that is up and going.
 
Joined
Dec 29, 2014
Messages
1,135
What's the output of camcontrol devlist?

Code:
[root@freenas2 ~]# camcontrol devlist
<CISCO UCS 240 0809>        at scbus1 target 26 lun 0 (ses0,pass0)
<ATA ST91000640NS CC03>     at scbus1 target 27 lun 0 (pass1,da0)
<ATA ST91000640NS CC03>     at scbus1 target 28 lun 0 (pass2,da1)
<ATA ST91000640NS CC03>     at scbus1 target 29 lun 0 (pass3,da2)
<ATA ST91000640NS CC03>     at scbus1 target 30 lun 0 (pass4,da3)
<ATA ST91000640NS CC03>     at scbus1 target 31 lun 0 (pass5,da4)
<ATA ST91000640NS CC03>     at scbus1 target 32 lun 0 (pass6,da5)
<ATA ST91000640NS CC03>     at scbus1 target 33 lun 0 (pass7,da6)
<ATA ST91000640NS CC03>     at scbus1 target 34 lun 0 (pass8,da7)
<SEAGATE ST9300653SS 0005>  at scbus1 target 35 lun 0 (pass9,da8)
<SEAGATE ST9300653SS 0005>  at scbus1 target 36 lun 0 (pass10,da9)
<ATA ST91000640NS CC02>     at scbus1 target 46 lun 0 (pass11,da10)
<ATA ST91000640NS CC02>     at scbus1 target 47 lun 0 (pass12,da11)
<ATA ST91000640NS CC03>     at scbus1 target 48 lun 0 (pass13,da12)
<ATA ST91000640NS BK03>     at scbus1 target 49 lun 0 (pass14,da13)
<ATA ST91000640NS BK03>     at scbus1 target 50 lun 0 (pass15,da14)
<ATA ST91000640NS BK03>     at scbus1 target 51 lun 0 (pass16,da15)
<ATA ST91000640NS BK03>     at scbus1 target 52 lun 0 (pass17,da16)
<ATA ST91000640NS BK03>     at scbus1 target 53 lun 0 (pass18,da17)
<ATA ST91000640NS BK03>     at scbus1 target 54 lun 0 (pass19,da18)
<HV Hypervisor_0 1.01>      at scbus3 target 0 lun 0 (pass20,da19)

All of the ST91000640NS drives are part of the user-facing pool. The mirrored ST9300653SS drives are just for system info, and the HV Hypervisor_0 device is the mirrored SD cards that hold the freenas-boot file system.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
OK, so the HBA is presenting the disks properly (it's a PCIe 3.0 card, so it uses the mrsas driver).
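As a side note, if you want to confirm which driver the controller is attached through, camcontrol can show it (a quick sketch, nothing beyond the stock tools):

Code:
# -v also lists the SIM each bus hangs off, e.g. "scbus1 on mrsas0 bus 0"
camcontrol devlist -v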
 
Joined
Dec 29, 2014
Messages
1,135
I ordered an Intel Optane SSD 900P to use as an SLOG device, and it arrives tomorrow. I will re-run the tests and see how things perform after that is up and going.

All I can say is WOW! Just adding this card as an SLOG (ZIL in the GUI when you extend the volume) increases the write performance on the pool by 4-5X. I mean, WOW! Thanks for the suggestions. I definitely won't forget about this. It was about $370 on Amazon, but worth every penny.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Damned fast delivery, too, it seems.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
I am running FreeNAS 11.1-U4 on a Cisco UCS C240-M3, and I am experiencing what seems to be poor write performance. The system has dual E5-2660 V2 CPUs, 128GB of RAM, and an LSI 9271 controller with all drives in JBOD mode. FreeNAS is acting as an NFS datastore for ESXi 6.5 and 6.0 hosts. The NIC where FreeNAS talks to the ESXi hosts is a Chelsio T520-CR. The drives are Seagate ST91000640NS (1TB SATA, 7200 rpm). I have two vdevs of 8 drives each, and a spare drive. These are in a single pool configured as RAID-Z2. I will attach the hardware config. When I move VMs from FreeNAS to local storage on the ESXi hosts, I get between 6.5 and 8.0Gb/sec of throughput, and I am thrilled with that. When I move VMs from local storage on the ESXi hosts back to FreeNAS, I get 400-500Mb/sec. I expected something of a dropoff, but not one quite that severe. I have messed with the hardware config over the past year, so I turned off autotune and deleted all tunables, rebooted, enabled autotune, and rebooted again. I read through the hardware and ZFS guides to try and do everything the right way, but no joy so far. I don't know if it is a problem with my hardware (drives in particular) or not. Looking at the stats, the drives are near 100% utilization when reading, but seem to cap around 30% when I am writing. I have 2 hosts with local storage of comparable vintage, so I don't think the hosts initiating the writes are what is capping things. I am running out of ideas.
From my experience doing very similar things:
Your hardware choice and resources are good. 128GB, nice work. 10G-E, excellent.

Bottom Line Up Front (BLUF): where I suggest you start testing is your pool config. Quick recommendation: try using mirrors, even though you might hate the loss of capacity, if you want to achieve higher performance.

Longer explanation:
In my hardware list you'll see I have three disk enclosures (SuperMicro SAS1, HP SAS1 (MSA70), HP SAS2 (D2700)). I loaded them with old-school SAS disks and did a bunch of performance testing using VMware's IO Analyzer. I tried different pool configs like groups of Z1, Z2, etc. All of them were terrible for write performance except mirrors. There are no write-caching options that can make up for the slow write times and parity calculation of Z1/Z2. I didn't try getting crazy and tweaking the transaction group sizes or anything like that. The thought of that is fun, but it likely wouldn't help much and supporting it would be difficult, plus I can't imagine how many reboots and tests I would have needed, yuck :/

The pool config that gave me the best overall results between read and write was mirrors hands down. Maybe give that a try if you wish. The reason I chose to use a VMware performance test instead of doing it on the FreeNAS console was to ensure I knew what my VMs were going to get, not just what the disk subsystem could do. I wanted to take everything into account (Fibre Channel, ESX, FreeNAS, etc.).

Usually, more vdevs = higher IOPS for writing.

Here's a slice of my test data. It won't be the same as your system, but you can at least see the relative numbers between two configs. I use mirrors, sync=always, a striped SLOG, and L2ARC. I get better performance with the newer SATA disks than the old SAS disks, so that's all I use now.
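For context, sync=always is just a per-dataset ZFS property; a sketch, with a placeholder dataset name:

Code:
# force every write through the ZIL/SLOG path, not just the ones the client requests as sync
zfs set sync=always tank/vmware

# check the current value
zfs get sync tank/vmware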
 

Attachments

  • Pool Config.pdf
    139.9 KB

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
All I can say is WOW! Just adding this card as an SLOG (ZIL in the GUI when you extend the volume) increases the write performance on the pool by 4-5X. I mean, WOW! Thanks for the suggestions. I definitely won't forget about this. It was about $370 on Amazon, but worth every penny.
With the addition of that 900P, I'd love to see performance data comparing the Z2 pool and a pool of mirrors. I'd still wager a beer that mirrors would beat the Z2, but I have seen systems where they got just as good or better from Z2, so I don't think it's a hard-and-fast rule all the time.
 
Joined
Dec 29, 2014
Messages
1,135
With the addition of that 900P, I'd love to see performance data comparing the Z2 pool and a pool of mirrors. I'd still wager a beer that mirrors would beat the Z2, but I have seen systems where they got just as good or better from Z2, so I don't think it's a hard-and-fast rule all the time.

I am sure the mirrors would perform better as well. If this were supporting a bunch of production VMs, I would definitely configure it that way. That isn't the case here, since most of the VMs are part of a lab I don't run all that often. Besides, I am thrilled with where things are now with the addition of the 900P. My rough throughput number for moving a VM from FreeNAS to local storage is about 8Gb/sec, which is pretty darn good. That is also where it was before. The biggest difference is on the writes. I was getting about 500Mb/sec moving VMs from local storage to FreeNAS before, but I am now getting 4Gb/sec with the 900P. I can transition the 3 VMs I run all the time in less than 10 minutes, which is just fine for what I need.

I am not prepared to drop another $400 on my backup NAS, but I think I may try doing something with HDDs that either jgreco or cyberjock suggested. That is to do a small RAID1 under the control of the RAID controller as an SLOG. That way the RAID controller can do some write caching, but the actual storage would be JBOD under the control of FreeNAS/ZFS. I don't have enough extra drives lying about at the moment to give that a try, but I may try it just for fun when I have the spares.

Funny you mentioned the HP stuff. The FreeNAS that I retired with the current version was a DL380 G6 with a P822 controller and a D2700 external enclosure with 25 x 300GB drives. I never figured out how to do JBOD on the P822, and I didn't have enough extra space to move the files around to re-configure it anyway. I may give that a try for old times sake. The D2700 replaced an MSA70, and my first version of FreeNAS was on a DL380 G3 with an MSA20. There is a way-back entry for you!
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Ha! Cool. Yeah, I started on G5, and did the same thing with a RAID controller for SLOG. Funny.

People lost their minds when they read anything about using RAID, but it was only for an SLOG. It couldn't sustain long writes, but it could absorb small bursty writes for a few. Poor man's NVE I guess.

I wish I had some 10G; I'm using dual-fabric 4Gb FC. I'm going to re-run some testing with the new S3700s and see how performance looks, now that we know what the latency numbers for the devices look like.

I wish I had more PCIe slots; I'd think about getting something P-series since it clearly dominates!
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
I may give that a try for old times sake
If you pull out those drives, is there any way I could talk you into running a test on a mirrored config?

If you have time or interest, I attached my test results. I would be super curious if your system does better with the drives than mine. I always felt like the SAS drives should have performed a little better. Maybe I had something not optimally configured?

This was HP MSA70 with 24 146GB SAS1 10K and D2700 with 12 300GB SAS2 10K.
 

Attachments

  • HP SAS.xlsx
    17.2 KB
Joined
Dec 29, 2014
Messages
1,135
If you pull out those drives, is there any way I could talk you into running a test on a mirrored config?

I may give that a try at some point, but not right now. I am currently a little challenged for rack space, and the D2700 is buried under a bunch of other heavy stuff on a shelf. My curiosity isn't sufficiently piqued at the moment to rip apart my office to drag that out. :smile: I am sure I will at some point, but that is likely a ways off.
 
Joined
Dec 29, 2014
Messages
1,135
This was HP MSA70 with 24 146GB SAS1 10K and D2700 with 12 300GB SAS2 10K.

Just curious, but what were you using as an HBA? Curiosity got the best of me, and I dragged out the D2700. It has 25 x 300GB 10k drives. I was trying to get the P822 RAID controller to do JBOD, but no joy at all. I bought a used LSI 9207-8E to try and drive the D2700, and now I am anxiously awaiting its arrival.
 