Network connection drops after non-use

Status
Not open for further replies.

JohnFLi

Contributor
Joined
Sep 26, 2016
Messages
139
I am still working on setup, but one thing I have noticed.....
I have a CIFS share, and I have a PC that has the share open. Not transferring files or anything, just Windows Explorer open to the share.
After x amount of time (not sure of the exact amount), when I go to click on that share, there is a delay before anything happens. Almost like the share had disconnected from non-use. I have had this with multiple PCs.

Any thoughts?
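If the share really is being dropped after idle, one server-side knob to check is Samba's `deadtime` parameter, which disconnects clients after that many minutes of inactivity (0 disables it). A minimal sketch of what the relevant `smb.conf` lines might look like; the values here are illustrative, not a recommendation:

```ini
[global]
    ; 0 = never auto-disconnect idle clients; a positive value is
    ; minutes of inactivity before Samba drops the connection.
    deadtime = 0
    ; TCP keepalives can also stop idle sessions from being torn
    ; down by intermediate firewalls or NAT.
    socket options = SO_KEEPALIVE
```

On FreeNAS these would normally go in the SMB service's auxiliary parameters rather than being edited into smb.conf directly.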

FreeNAS-9.10.1 (d989edd)
Platform: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
Memory: 65393MB
System Time: Wed Oct 19 09:00:48 PDT 2016
Uptime: 9:00AM up 18 hrs, 0 users
Load Average: 0.54, 0.16, 0.05
 

JohnFLi

Contributor
Joined
Sep 26, 2016
Messages
139
ummmmmm, not that I know of.
[attachment: disks.jpg]
 

JohnFLi

Contributor
Joined
Sep 26, 2016
Messages
139
Yup, currently only using 18 drives (3 vdevs of 6).
 

Noctris

Contributor
Joined
Jul 3, 2013
Messages
163
Have you done a full disk check (SMART) of each disk? I had a server with a couple of different pools: two of them were brand-new disks, and one was an old pool from a different machine, used to transfer the data. Two of the disks in the old pool were a bit wonky (a bunch of reallocated sectors). Even though FreeNAS reported them as healthy, I experienced similar "delays" to yours. After transferring the data and removing those disks from the system, performance increased dramatically.

So I chalked it up to two possible things:

A) My server was a little light on RAM for the three pools (although 32GB of RAM for a 2x3TB mirror, a 7x3TB RAIDZ2, and the old 7x1TB RAIDZ2 looks like enough to me)

or

B) The less-than-great disks caused the delay because of their problems.

No certainty, just an experience.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

The solnet array test script is good for finding these performance issues. It's part of the disk burnin procedure
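For the SMART side of a burn-in, the usual approach is to kick off a long self-test on every disk (`smartctl -t long /dev/da0` and so on; the `daN` device names are an assumption here) and then, once the tests finish, scan each disk's attribute table for reallocated or pending sectors. A rough sketch of that checking step — `check_attrs` is a made-up helper name, and the smartctl-style sample lines are fabricated for illustration:

```shell
#!/bin/sh
# Hedged sketch: flag any non-zero Reallocated_Sector_Ct or
# Current_Pending_Sector raw values from `smartctl -A` output.
check_attrs() {
  # The raw value is the last field of each attribute line.
  awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ { if ($NF + 0 > 0) print $2, $NF }'
}

# In practice you would pipe real output, e.g.:
#   smartctl -A /dev/da0 | check_attrs
# Here, two canned attribute lines stand in for a real disk:
printf '%s\n' \
  '  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0' \
  '197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8' \
  | check_attrs
# → Current_Pending_Sector 8
```

Anything with a non-zero raw value for those attributes is a disk worth watching, even if the overall SMART status still says PASSED.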
 

Noctris

Contributor
Joined
Jul 3, 2013
Messages
163
I know, but since it was an old pool that just needed to be copied once onto a new machine, I didn't really bother. I knew there were disks nearing the end of their lifetime and decided to build a new machine when the first disk started to reallocate sectors. When setting up the old machine I tested it with the solnet script (and I did with the new one too).

But for the OP's new build, I would definitely do a burn-in test. I've seen many, many, many cases where, despite being new, the disks were flawed due to manufacturing defects or just rough delivery.

I've had several bad experiences, like RAID-5 sets never completing their initial build with brand-new disks or, worse, failing right after being put into production.
 

JohnFLi

Contributor
Joined
Sep 26, 2016
Messages
139
OK, so how do I go about testing the disks?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

JohnFLi

Contributor
Joined
Sep 26, 2016
Messages
139
Ok..... the unit is not in production yet, so no worries.
 