Robert Thompson
Patron
- Joined: Jun 24, 2017
- Messages: 338
OK... so, I'm still tackling this god-awful machine that runs like molasses in the winter...
Original Build was:
2x Xeon X5667 (3.07GHz/ea)
84GB RAM (9x8GB + 4x4GB), all ECC Registered... (I ran out of 8GB sticks...)
1x 6TB Seagate Barracuda and 5x 5TB Seagate Barracudas (the five 5TB drives were running on my T3500 just fine)
1x 2-port Intel SATA controller
1x 4-port Marvell SATA controller
... ran god awful.
Changed out one of the 5TB drives for another 6TB drive, as it was showing a lot of what appeared to be queued writes under gstat.
Pulled the mismatched RAM to match Dell's recommended configuration: 8GB in banks 1, 2, 3, and in the riser for the 2nd CPU, banks 1, 2, 3...
Switched out the SATA controller cards for an Intel RAID card flashed to IT mode (model RS2WC040), with the remaining 2 drives connected to onboard SATA. (The card runs at SATA 3 and the onboard ports at SATA 2, I believe, which effectively puts everything at SATA 2 speed.) Anyway, there are enough drives that the reduced speed should make little to no difference in performance...
gstat still kicks back a lot of %busy on the drives (and I am copying data over to them), but no single drive sticks out the way the one 5TB drive did before being replaced (it would normally sit at 80-140% busy; now all drives show roughly the same wait time, sometimes spiking individually above 100%, but not constantly).
BUT... it still runs like total crap (though a bit better crap than the original build: I used to top out around 10MB/s across the network; I'm now around 35MB/s on data transfers across an entirely gigabit LAN...)
But worst of all is the performance of the VM. (I have an Ubuntu VM running all my gadgets like SABNZB, Sonarr, etc., except Transmission, which runs in a jail behind a VPN.) The best example I can offer for comparison is installing the boot-repair tools from the command line (it's a server build, so no GUI), because I know how long it SHOULD take: my T3500 installs this in about 5 minutes, while this install took about an hour. Even simple things like logging in via SSH seem to lag massively. (The VM is built with 2 cores and 8GB of RAM, probably a little overkill for what's needed, but I wanted to rule out the idea that maybe the VM is running out of resources.)
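To figure out whether the VM itself is being starved, one thing worth trying is sampling vmstat inside the guest while it feels slow. This is just a sketch using the standard procps vmstat that ships with Ubuntu, nothing hypervisor-specific:

```shell
# Inside the Ubuntu VM: one sample per second, 5 samples.
# High 'wa'  = CPU stuck in I/O wait (slow virtual disk),
# high 'st'  = CPU time the hypervisor stole from the guest,
# non-zero 'si'/'so' = the guest is swapping (memory pressure).
vmstat 1 5
```

If 'st' is high, the host is the bottleneck, not the VM's own allocation; if 'wa' dominates, it points back at the storage path rather than CPU or RAM.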
I'm tempted to pull the second CPU (riser) to see if maybe there's an issue with that CPU? I'm honestly at a loss on what else could be the issue with this thing's speed/responsiveness...
dd if=testfile of=/dev/null bs=1024 count=50000
kicks back read speeds of between 58-66 MB/sec
dd if=/dev/zero of=testfile bs=1024 count=50000
kicks back write speeds of between 170-190 MB/sec
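For what it's worth, a 1KB-block dd over only ~50MB mostly measures caching, so here's the larger-block version of the same test that should give more honest numbers. This sketch uses GNU dd flags; FreeBSD's dd wants lowercase size suffixes (bs=1m), and conv=fdatasync may or may not be available depending on the FreeBSD version:

```shell
# Sequential write: 1 MiB blocks, 512 MiB total. conv=fdatasync makes dd
# flush to disk before reporting a rate, so the write cache can't flatter it.
dd if=/dev/zero of=testfile bs=1M count=512 conv=fdatasync

# Sequential read: dump to /dev/null. Use a file bigger than RAM (or a
# freshly booted box), or ARC / page cache will serve most of the reads.
dd if=testfile of=/dev/null bs=1M
rm testfile
```

One more caveat: on a ZFS pool with compression enabled, /dev/zero input compresses away to almost nothing, so write numbers come back unrealistically high; reading back a real, incompressible file avoids that.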
Can anyone point me in a direction to test this machine and get legit numbers? Or tell me where to look for why it feels SO bogged down?
Sorry for the novel.