FlyingYeti
Cadet
- Joined
- Feb 20, 2018
- Messages
- 5
Hi,
I'm having a strange drive throughput problem. I have 12 HGST 4 TB 7K4000 drives. I'm running FreeNAS 11.1-U2 with 48 GB RAM and an M1015 flashed to IT mode, hooked up to an IBM EXP3000.
No matter the pool configuration, with 6 drives I get about 160 MB/s per drive, as reported by:
zpool iostat -v 1
I've tried both:
sudo dd if=/dev/zero of=./test2.dat bs=2M count=100K
and
sudo iozone -a -s 50G -r 2048
and get the same results.
However, as I add more drives to a vdev/pool, I get less throughput per drive, eventually dropping as low as 80 MB/s.
I thought it might be a controller/enclosure throughput limit, but if I stripe all 12 drives I still get 80 MB/s per drive, for a total throughput of about 1 GB/s.
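For what it's worth, here's the back-of-the-envelope math I did on those numbers. This is a sketch under an assumption I haven't verified: that the EXP3000 hangs off a single x4 wide SAS port negotiated at 3 Gb/s per lane (the EXP3000 is a 3 Gb/s-generation enclosure), with 8b/10b encoding eating 20% of the raw rate.

```python
# Assumed link budget for a x4 wide SAS port at 3 Gb/s per lane.
# 8b/10b encoding means 10 line bits per payload byte, so each lane
# carries roughly 3000 / 10 = 300 MB/s of payload.
lanes = 4
lane_gbps = 3
payload_per_lane = lane_gbps * 1000 / 10   # ~300 MB/s per lane
link_ceiling = lanes * payload_per_lane    # ~1200 MB/s for the wide port

# Observed figures from the 12-wide stripe above.
drives = 12
per_drive = 80                             # MB/s per drive, observed
aggregate = drives * per_drive             # 960 MB/s total

print(f"link ceiling ~{link_ceiling:.0f} MB/s, observed aggregate {aggregate} MB/s")
# The observed ~1 GB/s total sits close to the assumed link ceiling,
# which is why a shared-link bottleneck seemed plausible to me.
```

If that assumption is right, the per-drive drop as drives are added would just be the drives splitting a fixed ~1.2 GB/s pipe, but I'd welcome corrections.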
Does anyone know what might be going on?