Read and Write at 1MB/Sec: Not sure what changed

Status
Not open for further replies.

sasonic

Cadet
Joined
Dec 26, 2012
Messages
6
So I have been up and running with a FreeNAS storage system hosted on a VM server (drive passthrough, so the storage is connected directly to FreeNAS) with no issues for 7+ months. A month ago I noticed movie playback was acting up, and since then I have gone to town changing settings (hopefully changing them back) and have currently left autotune turned on. The one constant through all of this is the horrible 1.17 MB/sec read and 1.18 MB/sec write (CIFS, measured with NAS Performance Tester 1.4). FTP transfers appear to be just as slow, though I don't have official measurement numbers. As an FYI: I have upgraded the pool to ZFS version 28 and have run a ZFS scrub.
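
(One thing I can still try, to rule out the network and CIFS entirely, is a local throughput test from the FreeNAS shell itself. A rough sketch, assuming the pool is mounted at /mnt/nas1 as shown below and compression is off so the /dev/zero writes aren't inflated:)

dd if=/dev/zero of=/mnt/nas1/ddtest bs=1m count=20000   # write ~20 GB sequentially, larger than RAM so caching doesn't hide slow disks
dd if=/mnt/nas1/ddtest of=/dev/null bs=1m               # read the same file back
rm /mnt/nas1/ddtest                                     # clean up the test file

If the local numbers look healthy, the problem is somewhere in the network/CIFS path; if they are also around 1 MB/sec, it's the pool or the disks.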

I really am at a loss as to why performance is so horrible on something that has worked decently for so long, and I'm not sure what steps to take except maybe creating another storage pool (UFS), copying everything over, and ditching ZFS.

The ESXi server shows no signs of bottlenecks (neither the host nor the specific VM), and turning off all other VMs (FreeNAS only hosts data, no VMs) doesn't change anything.

Thanks for reading this and any thoughts you could share!!!

Here is some information on my freenas system (not sure what will be helpful)
Build :FreeNAS-8.3.0-RELEASE-x64 (r12701M)
Platform: Intel(R) Core(TM) i5 CPU 760 @ 2.80GHz
Memory: 10224MB
System Time: Wed Dec 26 13:01:19 PST 2012
Uptime: 1:01PM up 26 mins, 0 users
Load Average: 0.25, 0.11, 0.04

Name | Serial | Description | Transfer Mode | HDD Standby | Advanced Power Mgmt | Acoustic Level | Enable S.M.A.R.T. | S.M.A.R.T. Extra Options
da1 | ML4220F3153AKK | Member of nas1 raidz | Auto | Always On | Disabled | Disabled | TRUE |
da2 | ML4220F3153J6K | Member of nas1 raidz | Auto | Always On | Disabled | Disabled | TRUE |
da3 | ML4220F3153J8K | Member of nas1 raidz | Auto | Always On | Disabled | Disabled | TRUE |
da4 | ML4220F31569VK | Member of nas1 raidz | Auto | Always On | Disabled | Disabled | TRUE |

nas1 (mounted at /mnt/nas1): 4.0 TiB used (76%), 1.2 TiB available, 5.2 TiB total, HEALTHY

zdb -U /data/zfs/zpool.cache | grep ashift
ashift: 12

Reporting
System Load:
1 min: 0.07 avg
5 min: 0.07 avg
15 min: 0.04 avg
CPU: never gets to 60%
Swap Utilization: 8GB free
Physical mem utilization: Active doesn't cross 1GB

Fragmentation: 0.7% and 0.1% (I'm not sure of the commands to check this, so I'm not sure which refers to the system drive vs. the storage), but either way, very low numbers.
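
(A quick way to double-check how full the pool is and what the scrub found, using the pool name nas1 from above:)

zpool list nas1        # SIZE / ALLOC / FREE / CAP; ZFS write speed tends to drop as CAP climbs toward the high 70s and beyond
zpool status -v nas1   # last scrub result plus per-disk read/write/checksum error counters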

CIFS Settings
Authentication Model: Anonymous
DOS charset: CP437
UNIX charset: UTF-8
Log level: Minimum
Local Master: UNchecked
Time Server for Domain: UNchecked
Guest account: guest
File mask: empty
Directory mask: empty
Large RW support: checked
Send files with sendfile(2): checked
EA Support: UNchecked
Support DOS File Attributes: checked
Allow Empty Password: checked
Auxiliary parameters: empty
Enable home directories: UNchecked
Enable home directories browsing: UNchecked
Home directories: empty
Homes auxiliary parameters: empty
Unix Extensions: checked
Enable AIO: UNchecked
Minimum AIO read size: 4096
Minimum AIO write size: 4096
Zeroconf share discovery: checked
Hostnames lookups: checked
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So based on previous experience with other people who see a sudden drop in performance, I'd say you likely have a drive that is going bad. I'd start with a long SMART test of your disks. A failing drive may appear fine on a scrub and on a long SMART test too (go figure). Add to that the fact that you are using ESXi and things can get even more complicated.
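
(From the FreeNAS shell the long test can also be started by hand. Just a sketch, assuming the disks really show up as da1-da4 the way they do in the listing above, and that SMART commands survive whatever ESXi is doing to the disks:)

smartctl -t long /dev/da1    # kick off the long self-test (repeat for da2, da3, da4)
smartctl -a /dev/da1         # once it finishes, check the self-test log and the attribute table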

Maybe one of the ESXi wizards can provide some insight. Be warned that they may not answer, because they see people do really stupid configurations and they get sick of helping people who do dumb things knowing they're dumb. I dabbled with ESXi a few weeks ago and it was a mess: ESXi was very unstable with my hardware and PCI passthrough was an epic fail. You appear to have done a disk passthrough (which I think is different), and that's not recommended. The ESXi wizards pretty much have the opinion of "PCI passthrough or epic fail."

There are plenty of threads on why doing anything except PCI passthrough can have terrible consequences. In one thread, you could physically remove a hard drive and FreeNAS wouldn't even acknowledge that a disk had been removed, because ESXi was doing black magic in the background and kept fooling FreeNAS into thinking it was still attached.

Edit: For sh*ts and giggles, have you tried shutting down the whole server and booting it back up to see if that helps?
 

sasonic

Cadet
Joined
Dec 26, 2012
Messages
6
Thank you for the info (and the speedy reply): that is scary. The VM setup is a mapped raw LUN (I was unclear above), so I assumed VMware was basically uninvolved in communicating with the drives, which is why I tend to think it's a FreeNAS/drive issue rather than a VM issue. I did the short S.M.A.R.T. test; I will try the long one and hope something comes up... though it's scary now to think about replacing a drive if VMware hides it.
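
(When the tests finish, the attributes that most often give away a dying drive can be pulled out directly. Again just a sketch, assuming the da1-da4 device names and that SMART data actually makes it through the raw LUN mapping:)

smartctl -A /dev/da1 | egrep "Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable"   # repeat for da2-da4

Non-zero raw values on any of those are a bad sign even if the self-test reports PASSED.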

Side note: I have also restarted the ESXi server several times.
 

sasonic

Cadet
Joined
Dec 26, 2012
Messages
6
Okay, so this thread can be closed, I think... I'm still looking into the issue, but I have confirmed it's not related to FreeNAS. The machine I was doing the tests on (and playing videos on) gets 1 MB/sec read and write (through two switches). I logged onto a VM on the server (so no network involved) and it reads/writes to the FreeNAS at 100/109 MB/sec, and when I do the same on another physical machine going through one switch, read/write is 10/11 MB/sec (still not great). So it looks like some odd network issue or a bad switch. Embarrassed that I only just saw this after looking at it for a few weeks.
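
(For anyone else who lands here: raw network throughput between a client and the FreeNAS box can be measured with iperf, which I believe is included in FreeNAS 8.x. A sketch; the client address below is just an example:)

iperf -s                    # on the FreeNAS box, start a listener
iperf -c 192.168.1.100      # on a client with iperf installed, pointing at the FreeNAS box's IP (example address)

Those numbers line up suspiciously well with link speeds: ~110 MB/sec is a healthy gigabit link, ~11 MB/sec is a link that negotiated down to 100 Mbit, and ~1.1 MB/sec is 10 Mbit, so a bad cable or switch port is a likely culprit.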
 

phoenix

Explorer
Joined
Dec 16, 2011
Messages
52
There really should be no problem with PCI passthrough, but I would recommend against RDM for your HD subsystem; it's more trouble than it's worth to configure. I've recently configured my first FreeNAS server running as a VM in ESXi 5.x, but I decided to install an IBM M1015 SAS/SATA RAID card, flash it to an HBA, and use that in passthrough mode. That has given me no trouble in the (admittedly) few weeks it's been running, and I get acceptable performance from it via CIFS shares. I used this guide to spec my server and (mostly) followed it, with a slightly different case and HDs. I should add that the description of that build is easy to follow, and it's a relatively recent article, unlike many you'll find on the internet.
 