nfsd question

Jeremy Guo

Dabbler
Joined
Jul 28, 2023
Messages
37
Hi, Everyone

My server configuration is 2x AMD EPYC 7713, 1TB of memory, 24x 7.68TB NVMe (2 vdevs of 11-wide RAIDZ2 plus 1 hot spare), and a 100G network adapter; the box only serves NFS.

I am now using dd to test the read/write speed. From another server with a 100G Ethernet adapter, I opened four sessions, each running dd if=/dev/zero of=/ssd2/testfile3 bs=1G count=200.
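
(Roughly how the four sessions were launched in parallel; the loop and the separate file names below are just an illustration, not the exact commands:)

Code:
# launch four concurrent dd writers from the client and wait for them to finish
for i in 1 2 3 4; do
    dd if=/dev/zero of=/ssd2/testfile$i bs=1G count=200 &
done
wait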

But only one nfsd is responding to the four sessions' requests, and the maximum write speed is less than 1.5GiB/s. Is this correct? I would have thought my all-NVMe vdevs could deliver more read/write I/O.

I set the nfsd "Number of servers" to 256 and then 128, but it seems to make no difference.
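
(If I have the sysctl names right, the thread limits can also be checked from the shell on CORE to confirm the setting took effect; just a sketch:)

Code:
# FreeBSD sysctls behind the "Number of servers" setting (names from memory)
sysctl vfs.nfsd.minthreads vfs.nfsd.maxthreads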

When I use TrueNAS SCALE, I can see several nfsd processes responding and all CPUs are busy, rather than just a few cores peaking. So I am not sure if there is any configuration I missed?

Could anyone kindly help?

thanks
Jeremy
 

Attachments

  • nfsd.PNG
    nfsd.PNG
    35.2 KB · Views: 68
  • cpu111.PNG
    cpu111.PNG
    26.8 KB · Views: 60

Jeremy Guo

Dabbler
Joined
Jul 28, 2023
Messages
37
I tested from 6 client servers today, but still only one nfsd is responding to the data access, and the maximum write speed was again limited to about 1.5GiB/s.

Can anyone give me some guidance on configuration?

The fio test result is attached.
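
(For anyone who wants to reproduce it, the job was along these lines; the directory, size and job count here are illustrative rather than the exact parameters I used:)

Code:
# sequential-write fio job run against the NFS mount from a Linux client
fio --name=seqwrite --directory=/ssd2 --rw=write --bs=1M --size=10G \
    --numjobs=8 --ioengine=psync --direct=1 --group_reporting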

thanks
Jeremy
 

Attachments

  • fio.PNG
    fio.PNG
    379 KB · Views: 59
Last edited:

Jeremy Guo

Dabbler
Joined
Jul 28, 2023
Messages
37
@Patrick M. Hausen

Hi, Patrick

Sorry to bother you, but I really need some help. No matter how many client servers I test from or how many jobs I set in the fio test, there is always only one nfsd responding to data access.

I keep searching the internet, but I have had no luck finding a similar issue.

Is there any direction you can point me in for troubleshooting?

By the way, the Intel E810-C doesn't show up unless I set the tunable hw.nvme.num_io_queues=64 for my other NVMe SSDs besides the first 16. Is that normal?

Your help would be highly appreciated.

Thanks
Jeremy
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I am not an expert on NFS, but if I am not mistaken, multiple servers load-balance across multiple client systems; a single NFS client will only ever use one server. I might be mistaken. The details are probably in the source code or in the Daemon Book.
 

Jeremy Guo

Dabbler
Joined
Jul 28, 2023
Messages
37
I am not an expert on NFS, but if I am not mistaken, multiple servers load-balance across multiple client systems; a single NFS client will only ever use one server. I might be mistaken. The details are probably in the source code or in the Daemon Book.
Thanks, Patrick.

My understanding is the same as yours.

But what confuses me is that I only saw one nfsd with very high CPU usage in the system processes when I used 6 servers to access my NAS in parallel. Should I see 6 nfsd entries in the process list? The 6 Linux clients got a total of 2.4GB/s write speed, and it never went higher.

When I use dd for write testing, the speed seems better: I got around 8GiB/s aggregate write speed.

@firesyde424 how do you test your 24-NVMe NAS? Is my test result normal?
 

Attachments

  • ddtest.PNG
    ddtest.PNG
    136.7 KB · Views: 50
  • ddtest-64k.PNG
    ddtest-64k.PNG
    62.2 KB · Views: 41
  • fio-test-128k.PNG
    fio-test-128k.PNG
    354.7 KB · Views: 45
  • nfsd.PNG
    nfsd.PNG
    46.2 KB · Views: 55

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Each nfsd is in itself multi-threaded. According to the documentation, the total number should be scaled up to match the parallelism needed; unfortunately, the documentation does not give any concrete numbers.

I'll check what the Daemon Book might have to say on the topic ...
 

Jeremy Guo

Dabbler
Joined
Jul 28, 2023
Messages
37
Each nfsd is in itself multi-threaded. According to the documentation, the total number should be scaled up to match the parallelism needed; unfortunately, the documentation does not give any concrete numbers.

I'll check what the Daemon Book might have to say on the topic ...
Attached is a screenshot from another TrueNAS SCALE build of mine. There are lots of nfsd processes running in parallel, but I don't see more than one nfsd in my current CORE build, so I am confused about whether they should look the same on both CORE and SCALE. There is a "Number of servers" setting in the NFS service on both CORE and SCALE, and its default is 16.

I also observed some messages on the NAS console (attached).
 

Attachments

  • nfsdprocess.PNG
    nfsdprocess.PNG
    151.8 KB · Views: 55
  • nfsdconfig.PNG
    nfsdconfig.PNG
    41.1 KB · Views: 46
  • error.PNG
    error.PNG
    41.7 KB · Views: 49
Last edited:

Jeremy Guo

Dabbler
Joined
Jul 28, 2023
Messages
37
@Patrick M. Hausen

I figured out that on TrueNAS CORE, one nfsd process runs many threads. It looks different from TrueNAS SCALE.

But when I compare CORE and SCALE:
1. On TrueNAS CORE, the setting is called "Number of servers".
2. On TrueNAS SCALE, the setting is called "Number of threads".

So I am not sure whether these two names should be swapped: on FreeBSD, the nfsd setting really controls threads, not separate servers, while on Linux the nfsd workers are separate "servers" (kernel threads), which is why you see lots of nfsd entries running in parallel (roughly as sketched below).
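
(Just to illustrate the naming difference, with example values; on TrueNAS you would set this through the UI rather than run these by hand:)

Code:
# FreeBSD / CORE: one nfsd process, worker count set with -n ("servers")
nfsd -u -t -n 32

# Linux / SCALE: nfsd runs as kernel threads, one ps entry each
rpc.nfsd 32
cat /proc/fs/nfsd/threads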

I then found an interesting thing on TrueNAS CORE.

When I set the threads to 64, the actual thread count was 60;
when I set them to 128, the actual count was 120;
when I set them to 200, the actual count was 192;
when I set them to 250, the actual count was 240;
when I set them to 256, the count was exactly 256.

I am not sure why this happens.
 

Attachments

  • truenascale.PNG
    truenascale.PNG
    1.7 KB · Views: 39
  • truenascore.PNG
    truenascore.PNG
    3.9 KB · Views: 49
  • 64threads.PNG
    64threads.PNG
    42.8 KB · Views: 51
  • 128threads.PNG
    128threads.PNG
    41.4 KB · Views: 49
  • 200threads.PNG
    200threads.PNG
    43 KB · Views: 45
  • 250.PNG
    250.PNG
    42.5 KB · Views: 36
  • 256nfsd.PNG
    256nfsd.PNG
    42.4 KB · Views: 53

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
Thanks, Patrick.

My understanding is the same as yours.

But what confuses me is that I only saw one nfsd with very high CPU usage in the system processes when I used 6 servers to access my NAS in parallel. Should I see 6 nfsd entries in the process list? The 6 Linux clients got a total of 2.4GB/s write speed, and it never went higher.

When I use dd for write testing, the speed seems better: I got around 8GiB/s aggregate write speed.

@firesyde424 how do you test your 24-NVMe NAS? Is my test result normal?
It's been long enough that I don't remember the particulars; however, I do remember being very disappointed with our initial NFS and iSCSI results.

Our test used a copy of a database, worked on from a different system. We literally just copied data from one table to another and benchmarked the speed. We did see some improvement via NFS tuning on both sides, but performance remained decidedly underwhelming.

The servers also had quad-port 25GbE NICs, so we connected those as well. If configured properly, Oracle will use multipathing for both NFS and iSCSI. We were surprised to see a significant bump in throughput and IO when using 4 x 25GbE NICs with their own IP addresses versus a single 100GbE link. After some additional testing, we settled on 4 x 100GbE links between the Oracle and TrueNAS servers. Our testing showed slightly better performance with iSCSI than with NFS, so we went with iSCSI.

Currently, the DB server will regularly use 150-200Gb/sec of bandwidth spread across all four 100GbE links. It's still not nearly as fast as the pool is capable of, but it is well within the performance goals requested by the client.

For reference, here are the specs of the TrueNAS server:
  • Dell PowerEdge R7525 (chosen because, at the time, Dell would only certify the R7625 for 16 NVMe drives instead of the 24 we needed)
    • 2 x AMD EPYC 7H12 CPUs, 128 cores total @ 2.6GHz
    • 1TB 3200MT/s Registered ECC DDR4 RAM
    • 2 x Chelsio dual-port 100GbE NICs (can't remember the model, and they have zero integration with Dell's iDRAC, so I can't check currently)
    • 24 x 30.72TB Micron 9400 Pro U.3 NVMe drives
      • 12 x mirrored vdevs - 267TB usable @ 80%
    • TrueNAS Core 13.5
 

Jeremy Guo

Dabbler
Joined
Jul 28, 2023
Messages
37
Thanks for sharing, firesyde424.

150-200Gb/sec is quite fast, I believe.

I got 8GiB/sec with my current build when I used 6 nodes running dd for testing; each host got around 1.5GiB/sec. Maybe I could get more with more test hosts, but no single host exceeded 1.5GiB/sec, and I am not sure why.

I also noticed that my Intel E810-C was recognized by the system but not shown in the web GUI unless I set the tunable hw.nvme.num_io_queues=64; then it appears along with the other 8 NVMe disks.

I also have a question: do you see the message shown in the attachment?

thanks
Jeremy
 

Attachments

  • 121212.PNG
    121212.PNG
    79 KB · Views: 56

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
Thanks for sharing, firesyde424.

150-200Gb/sec is quite fast, I believe.

I got 8GiB/sec with my current build when I used 6 nodes running dd for testing; each host got around 1.5GiB/sec. Maybe I could get more with more test hosts, but no single host exceeded 1.5GiB/sec, and I am not sure why.

I also noticed that my Intel E810-C was recognized by the system but not shown in the web GUI unless I set the tunable hw.nvme.num_io_queues=64; then it appears along with the other 8 NVMe disks.

I also have a question: do you see the message shown in the attachment?

thanks
Jeremy
We do not, although when I checked I did see that one of the 100GbE interfaces had some link flapping issues, so I'll have to look into that.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
The attached is the screenshot from my another truenas scale build. there are lots of nfsd running in parellel, but I didn't see another nfsd in my current core build. So I am confused if they should looks the same on both core and scale. As there is a number of server configuration in the nfsd service on core and scale both, it's default is 16.

also I observed some information on nas console.
BSD ps does not show threads by default.
If you want to see all the threads associated with BSD processes, you need to add -H to the ps command line, for example:

Code:
root@nas1[~]# ps -waugx | grep nfs
root        3343    1.6  0.0   12760   2784  -  S    12Oct23    422:14.56 nfsd: server (nfsd)
root        3342    0.0  0.0  100824   2076  -  Is   12Oct23      0:00.15 nfsd: master (nfsd)
root       76724    0.0  0.0    4652   2288  1  R+   23:20        0:00.00 grep nfs

vs
Code:
root@nas1[~]# ps -Hwaugx | grep nfs
root        3343   1.1  0.0   12760   2784  -  S    12Oct23     0:30.07 nfsd: server (nfsd)
root        3342   0.0  0.0  100824   2076  -  Is   12Oct23     0:00.15 nfsd: master (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23     6:38.63 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.72 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.54 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     1:49.93 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:05.10 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.65 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:05.09 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.63 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:12.65 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.86 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     1:31.74 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.90 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    18:17.53 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    12:54.66 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    10:35.28 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    13:06.45 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    12:14.18 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    10:36.20 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    13:24.18 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    10:39.58 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    10:40.61 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    13:13.40 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    13:45.21 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    10:36.60 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    16:54.84 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    17:56.05 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    19:24.95 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    18:11.45 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    16:53.18 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    17:56.60 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    16:57.40 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    18:00.49 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    19:08.29 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    18:42.69 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    21:23.96 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    17:12.69 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:15.89 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     3:46.71 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     8:42.35 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:25.93 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     5:04.62 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     1:42.81 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:31.84 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23     1:54.21 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     4:15.71 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     7:49.70 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     7:37.03 nfsd: server (nfsd)


NFS tuning is a bit of an art. Some of BSD's defaults are somewhat constrained, but despite this, the TrueNAS defaults (aside from total threads/servers) should be pretty sane for most applications. If you plan to really push NFS with BSD, you will want to look not only at the number of threads/worker processes, but also at the buffer sizes on the NFS mount, how the underlying filesystem handles access times and sync writes, and a number of other system and ZFS tunables.
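
(A minimal sketch of the kind of knobs I mean; the mount options are Linux-client side, the dataset name is a placeholder, and these are starting points to measure against rather than recommended values:)

Code:
# client side: larger NFS transfer sizes (example values)
mount -t nfs -o rw,vers=4.1,rsize=1048576,wsize=1048576,hard server:/mnt/tank/ssd2 /ssd2

# server side: the ZFS properties that govern access times and sync writes
zfs set atime=off tank/ssd2
zfs get sync tank/ssd2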

Those tunables may have different defaults on Linux, so comparing raw performance between default-tuned OSes is not always apples to apples.

I had to tune some buffer sizes and ZFS timers to get decent performance for my applications. We can now fill a 25Gb/s link, but there is an upper limit to what one thread can do on a single CPU core.

My advice is to pick the platform stack you're going to use, study your actual workload (workloads rarely resemble dd), learn the tunables, and then start tinkering with the relevant NFS and ZFS settings until you get the performance you need.

And remember, don't drag race the bus. :) https://sc-wifi.com/2014/07/08/drag-racing-the-bus/
 
Last edited:

Jeremy Guo

Dabbler
Joined
Jul 28, 2023
Messages
37
We do not, although when I checked I did see that one of the 100GbE interfaces had some link flapping issues, so I'll have to look into that.
Thanks, no worries. I just wanted to see if anyone had run into it before, or whether it is critical to the vdev.
 

Jeremy Guo

Dabbler
Joined
Jul 28, 2023
Messages
37
BSD ps does not show threads by default.
If you want to see all the threads associated with BSD processes, you need to add -H to the ps command line, for example:

Code:
root@nas1[~]# ps -waugx | grep nfs
root        3343    1.6  0.0   12760   2784  -  S    12Oct23    422:14.56 nfsd: server (nfsd)
root        3342    0.0  0.0  100824   2076  -  Is   12Oct23      0:00.15 nfsd: master (nfsd)
root       76724    0.0  0.0    4652   2288  1  R+   23:20        0:00.00 grep nfs

vs
Code:
root@nas1[~]# ps -Hwaugx | grep nfs
root        3343   1.1  0.0   12760   2784  -  S    12Oct23     0:30.07 nfsd: server (nfsd)
root        3342   0.0  0.0  100824   2076  -  Is   12Oct23     0:00.15 nfsd: master (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23     6:38.63 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.72 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.54 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     1:49.93 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:05.10 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.65 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:05.09 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.63 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:12.65 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.86 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     1:31.74 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:04.90 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    18:17.53 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    12:54.66 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    10:35.28 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    13:06.45 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    12:14.18 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    10:36.20 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    13:24.18 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    10:39.58 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    10:40.61 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    13:13.40 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    13:45.21 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23    10:36.60 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    16:54.84 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    17:56.05 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    19:24.95 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    18:11.45 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    16:53.18 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    17:56.60 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    16:57.40 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    18:00.49 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    19:08.29 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    18:42.69 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    21:23.96 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23    17:12.69 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:15.89 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     3:46.71 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     8:42.35 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:25.93 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     5:04.62 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     1:42.81 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     0:31.84 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  S    12Oct23     1:54.21 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     4:15.71 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     7:49.70 nfsd: server (nfsd)
root        3343   0.0  0.0   12760   2784  -  I    12Oct23     7:37.03 nfsd: server (nfsd)


NFS tuning is a bit of an art. Some of BSD's defaults are somewhat constrained, but despite this, the TrueNAS defaults (aside from total threads/servers) should be pretty sane for most applications. If you plan to really push NFS with BSD, you will want to look not only at the number of threads/worker processes, but also at the buffer sizes on the NFS mount, how the underlying filesystem handles access times and sync writes, and a number of other system and ZFS tunables.

Those tunables may have different defaults on Linux, so comparing raw performance between default-tuned OSes is not always apples to apples.

I had to tune some buffer sizes and ZFS timers to get decent performance for my applications. We can now fill a 25Gb/s link, but there is an upper limit to what one thread can do on a single CPU core.

My advice is to pick the platform stack you're going to use, study your actual workload (workloads rarely resemble dd), learn the tunables, and then start tinkering with the relevant NFS and ZFS settings until you get the performance you need.

And remember, don't drag race the bus. :) https://sc-wifi.com/2014/07/08/drag-racing-the-bus/
Thank you very much for your advice.

That's true; the real environment is a better place to tune the nfsd service.

I am quite new to FreeBSD and Linux, so I actually have lots of silly questions. ;)


Jeremy
 