Plans to build a 12 drive SSD array

Status
Not open for further replies.

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Well look at that. I didn't even have to wait until tomorrow to know it failed.

Replication Storage -> 127.0.0.1:Storage-Backup failed: Failed: No ECDSA host key is known for [127.0.0.1]:8086 and you have requested strict checking. Host key verification failed.

Any ideas?

You need to follow the manual's instructions to set up the host keys, even though it's localhost.
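For reference, one generic way to seed the known_hosts file from the CLI is with ssh-keyscan. This is a sketch, not the GUI-driven procedure from the manual; it assumes the replication SSH service is listening on 127.0.0.1:8086 (as in the error message above) and that replication runs as root:

```shell
# Fetch the ECDSA host key for the replication target and append it to
# root's known_hosts so strict host-key checking can succeed.
ssh-keyscan -p 8086 -t ecdsa 127.0.0.1 >> /root/.ssh/known_hosts

# Verify the key is now trusted (should connect without a host-key prompt):
ssh -p 8086 -o StrictHostKeyChecking=yes root@127.0.0.1 true
```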
 

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
@Stux The manual provided the data needed to get the process going, but I still had to do a little trial and error. It's working (whether or not it's configured optimally I cannot say), and like you said, since I created the snapshot to cover all the data in the pool, the replication task made a copy of all the data in the pool. It took overnight to finish, and it seems to be checking and/or updating every hour, which is essentially what I wanted.

I recall coming across a FreeNAS Worst Practices guide, and it mentioned "Too Many Snapshots". How many snapshots is too many? The way I configured the task, the most I should have at any given time is 196. The paragraph talks in magnitudes of 100,000, which makes me think 196 is nothing, but I thought I should ask anyway.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
How many snapshots is too many?
When it takes longer to list them than you're willing to wait (including during management operations like pruning old ones by hand). During usage, the performance impact is zero.
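For anyone curious, listing, counting, and pruning snapshots from the CLI is straightforward; a sketch, assuming a pool named "tank" (the snapshot name shown is hypothetical):

```shell
# List all snapshots in the pool recursively, then count them:
zfs list -t snapshot -o name,used,creation -r tank
zfs list -H -t snapshot -r tank | wc -l

# Destroy a single old snapshot by its full name (example name shown):
zfs destroy tank/dataset@auto-20180101.0000-2w
```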
 

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
When it takes longer to list them than you're willing to wait (including during management operations like pruning old ones by hand). During usage, the performance impact is zero.
If I delete or edit something I didn't mean to, I think the default two weeks is more than enough time, and I set the task to run only during the hours I'm typically awake. I would expect any system to be able to query a list of 196 items fairly quickly, so waiting for the list to load shouldn't be an issue. I don't expect this feature to see much use from me, though; I just wanted something to back up my primary array, and people seemed to point to replication.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
It’ll be fine. 196 is nothing.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Even thousands of snapshots are not too bad, though the old GUI did get rather slow. The new GUI is a bit better in that regard.
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
So what is best practice for scrub and SMART schedules on SSDs?
I could've sworn I discussed it here recently but can't find the post :(

Pretty sure the scrub is almost irrelevant?

Someone told me they have their server email them periodic SMART reports, and said reports contain a few more values applicable to SSD life (I forget the terms, but the extra SMART values for flash).

Finally, I've asked this for two months now.

HOW do I partition my SSDs 1 or 2% smaller, so that if one goes bang and I replace it, a different 500GB model might be usable? I tried swap space; it didn't seem to change anything.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Pretty sure the scrub is almost irrelevant?
The scrub is how ZFS checks for and corrects errors; you should do it no matter what the storage media is. I run a scrub once a month on my system at home. I have a system at work that I scrub every weekend; I may change that in the future, but I was really worried about it when I initially set it up.
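(Scheduled scrubs are normally set up as a Scrub Task in the GUI, but for a one-off check the CLI works too; a sketch, assuming a pool named "tank":)

```shell
# Kick off a scrub manually and then check its progress / results:
zpool scrub tank
zpool status tank
```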
Someone told me that they have the server email them periodic reports for SMART and said reports contain a few more values which are applicable to SSD life (I forget the terms but the extra SMART values for flash)
Different SSDs support SMART differently. You'd need to specify a model, and it might be that someone here has experience with it. I have some Samsung SSDs that appear to give useful SMART data, but I have some from Intel where the SMART data is totally useless.
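The quickest way to see what a given model actually reports is to dump its attributes with smartctl; a sketch (the device name is an example, and the attribute names vary by vendor):

```shell
# Dump all SMART info for a drive. On SSDs, look for wear-related
# vendor attributes such as Wear_Leveling_Count (seen on Samsung)
# or Media_Wearout_Indicator (seen on Intel).
smartctl -a /dev/ada0
```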
Finally, I've asked this for 2 months now,.
Where did you ask? Because I didn't see it. I am guessing that you are asking about partition size in an effort to 'over-provision' the drive. My suggestion would be to use the manufacturer's tool to set the over-provisioning first, then create your pool; ZFS will recognize the drive as being the size you selected after over-provisioning. I have a 400GB Intel NVMe card that I configured to behave like there is only 128GB of storage, to reserve the rest for over-provisioning. The Samsung drives I use also have an over-provisioning setting.
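If a manufacturer tool isn't available, an alternative (at your own risk, and not the GUI-supported path) is to partition each disk slightly smaller by hand before building the pool, so a marginally smaller replacement still fits; a FreeBSD sketch with illustrative device names and a roughly 2% holdback on a 500GB SSD:

```shell
# Create a GPT scheme and a ZFS partition a bit smaller than the disk,
# leaving ~10GB unallocated. Device name and sizes are examples only.
gpart create -s gpt da0
gpart add -t freebsd-zfs -s 490G -l ssd0 da0

# Build the pool from the labeled partition instead of the whole disk:
zpool create tank /dev/gpt/ssd0
```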
It does matter what you are planning to use this for. Would you share some details?
 