benderunit1
Cadet
Joined: Jan 30, 2016
Messages: 3
I have set up a basic NAS server with a CIFS share, starting out with 3x 4TB hard drives. I put them in RAIDZ1 to get some extra usable space. After some network troubleshooting, I got my transfer speed up to the gigabit maximum of ~125MB/sec. That's better than I need, and things were running great. I quickly filled it all up and moved forward with my NAS project.
So I went into the GUI and expanded the pool with 3x 4TB more drives (also in RAIDZ1). The expansion was successful, and all my old data was fine.
I was running two of the new drives through a PCI SATA card that was known to be good, and the last one off the motherboard.
The problem is that my transfer speeds to the server are now 350KB/sec or worse!
The pool is less than 50% full.
I figured the PCI card had to go, so I got a 5Gbps PCIe RAID card and put all three drives on it, but it really didn't help. Best case, I now see bursts of 500KB/sec.
I did a scrub and a reboot (the scrub took just under 13 hours) with no change in speed or behavior.
I can still watch movies (files upwards of 14GB) just fine. A large file transfer from the server is a paltry 30MB/sec, but that's still leaps and bounds ahead of my transfers to the server.
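To separate the pool from the network, I figure a quick local write test on the server itself would help. A sketch (note: /mnt/tank is a placeholder for the actual pool mount point, and if lz4 compression is on, zeros compress to almost nothing, so the write number is only an optimistic upper bound):

```shell
# Local write test, bypassing Samba and the network entirely.
# NOTE: /mnt/tank is a hypothetical mount point -- substitute your pool's path.
# With compression (e.g. lz4) enabled, /dev/zero data compresses away,
# so treat the result as a rough upper bound, not a real disk speed.
dd if=/dev/zero of=/mnt/tank/speedtest.bin bs=1M count=1024

# Read it back; the file may still be cached in ARC, so a fresh reboot
# (or a test file much larger than RAM) gives a more honest read number.
dd if=/mnt/tank/speedtest.bin of=/dev/null bs=1M

rm /mnt/tank/speedtest.bin
```

If the local write is also stuck in the KB/sec range, the network and Samba are off the hook and the pool itself is the bottleneck.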
Common sense tells me a hard drive is bad. But two of those drives had already been used to load data onto the first array, and the third at least managed the USB 2.0 maximum of ~20MB/sec before I ripped it out of its external case (my likely culprit). Maybe I'm missing something? Can I prove it from the shell fairly easily? Is expanding a known issue, and am I an idiot for trying to avoid two network shares?
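For the shell part, here is what I was thinking of running to pin down a single bad disk (the ada0..ada5 device names are my guesses; `zpool status` or `glabel status` will show the real ones):

```shell
# Pool health: look for READ/WRITE/CKSUM errors against any one disk.
zpool status -v

# Per-vdev I/O while a slow transfer is running; a single disk with
# tiny throughput compared to its siblings is the usual smoking gun.
zpool iostat -v 5

# SMART counters for each member disk.
# NOTE: ada0..ada5 are example device names -- check yours first.
for d in ada0 ada1 ada2 ada3 ada4 ada5; do
  echo "=== $d ==="
  smartctl -a /dev/$d | egrep 'Serial|Reallocated|Pending|Offline_Uncorr|UDMA_CRC'
done
```

A nonzero Reallocated or Current_Pending sector count, or a climbing UDMA_CRC error count, would point at the drive (or its cable) dragging the whole RAIDZ vdev down.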
Here are some system specs:
Supermicro X7DA3 motherboard (with an Intel 82563EB dual-port gigabit NIC)
16GB DDR2 ECC RAM
2x Intel Xeon X5355 @ 2.66GHz
Boot device: 16GB SanDisk Cruzer
6x 4TB drives (share size 14TB)
I'd rather not spend a fortune on this one, and preferably not lose the data I've already transferred; I spent years collecting some of it. I understand if I shot myself in the foot, but I'm an engineer and like to debug before I just scrap things.
All suggestions are welcome, I will help as much as I can.