questions: dedicated network link and slow zfs



Nov 12, 2011
i want to move to a freenas setup to secure my data, and zfs seems to be the best solution. while i wait for hard drives to come back down in price, i have been playing with a test rig that is spec'ed out as follows:

ibm thinkcentre with p4(3.0ghz)
4gb ram (1gb is used by the system and 3gb is available to freenas) (pc2-4500)
i believe it's a gigabyte board, and it does have pci-e slots; this is the same board:
onboard gigabit port, sata-300 controller
200gb seagate sata drive
freenas 8.0.2

ultimately i will be using the same computer, just a new pair of hard drives (2tb each) in a mirrored configuration.

my two questions are:

1. most of the transfers would be with one computer, and my switch, although gigabit, does not support lacp. can i have a dedicated network run between the freenas computer and my windows machine and use that to mount the share (with a crossover cable), then have all the other computers and the web interface on a separate network interface through my switch? the objective would be to avoid bottlenecks caused by high bandwidth use on the general network by other users. to clarify, i would use two nics: one would be a dedicated share to one computer, and the other would serve shares to all other users and the web interface.
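for reference, a setup like that is really just two interfaces on separate subnets, with no routing between them. a rough sketch of what i mean from the freebsd shell (interface names and addresses here are made up for illustration; yours will differ, and in freenas you would normally set this through the web gui instead):

```
# em0: dedicated crossover link to the windows box, on its own subnet
ifconfig em0 inet 10.0.0.1 netmask 255.255.255.0 up

# em1: general lan through the switch (web gui + all other users)
ifconfig em1 inet 192.168.1.10 netmask 255.255.255.0 up
```

the windows machine's second nic would then get a static 10.0.0.x address, and the share would be mounted by that ip. (most modern gigabit nics do auto-mdix, so a plain patch cable will usually work in place of a crossover.)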

2. my zfs is slow (and i have searched!). i understand that i'm using a less than optimal system, and i have tried the standard suggestions for tweaking in loader.conf, and have tried freenas .7 as well with no improvement. my read speeds over the network are 4x faster with ufs, and write speeds are 2x faster. using dd locally, the speeds are respectable. iperf shows full gigabit speeds. i suspect that my pci-e bus is probably slowing me down when i combine zfs and the network, as either on its own is fine; it's only when i use them together that i have serious speed issues. any suggestions (other than a new computer, or more ram) to fix this? would moving either the sata or the network to a pci-e card help the problem? from what i can find, the pci-e on the board is only 1.x spec.
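by "standard suggestions" i mean the usual 32-bit tunables in /boot/loader.conf, along these lines (the exact values vary between guides and are illustrative here, not a tested recommendation for this board):

```
# enlarge the kernel address space available to the ARC (the i386 default is small)
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
# cap the ARC so it doesn't starve the rest of the system
vfs.zfs.arc_max="512M"
# prefetch often hurts on low-memory systems
vfs.zfs.prefetch_disable="1"
```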



Oct 19, 2011
At least the PCI-E 1.x should be fine even if it's just a 1X slot (250MB/s), and with PCI-E you don't have the problem you have with PCI, where the bandwidth is shared between devices.

There are others on here with a lot more experience than me running on 32-bit hardware, so I won't go into whether that's the cause or not.


Oct 19, 2011
i would advise not running zfs on ia32 systems. Yes, it works, sort of, but it takes a lot of tuning
and you might still experience hangs.
Get a dedicated box with a proper 64-bit cpu. A proliant microserver (n36 or n40) is a cheap device
that works fine with freenas.


Resident Grinch
May 29, 2011
Then I probably shouldn't talk about how I'm running i386 with ZFS and two 240GB SSDs to provide a VMware iSCSI datastore ... on 512M of memory, eh.

ZFS is not that zippy if you're stressing it. Not enough CPU is a big stress. Not enough memory can be a big stress (and almost certainly is, unless you have a situation where it isn't). In my case it seems like I can get away with exporting a zvol for iSCSI without actually *needing* a lot of RAM. That's quite possibly an edge case due to the reduced filesystem requirements, so it may not be all that useful generally. But it's still interesting. ;-)
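For anyone curious, exporting a zvol for iSCSI boils down to something like this (pool and volume names are made up for the example; on FreeNAS 8 you'd do it through the GUI, which hands the zvol to the built-in istgt target):

```
# create a 200G block device backed by the pool
zfs create -V 200G tank/vmstore
# the zvol appears as /dev/zvol/tank/vmstore and is exported
# to the iSCSI target as a raw extent -- no filesystem layer on top
```

The point being that since there's no POSIX filesystem layer involved, the usual ARC memory pressure matters less than it would for CIFS/NFS serving.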