Looking for dedicated, slightly masochistic BETA testers!

Status
Not open for further replies.

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
@jkh - I like the VirtualBox jail! Any plans to add a new "VMs" menu next to "Jails" and integrate that GUI with bhyve?

Another question... is a report email currently sent when usage exceeds 80% of the volume size?
 

jkh

Guest
@jkh - I like the VirtualBox jail! Any plans to add a new "VMs" menu next to "Jails" and integrate that GUI with bhyve?
bhyve isn't supported in FreeBSD 9.x, which is what FreeNAS 9.x is based on. The VMs / bhyve-oriented GUI will come along in FreeNAS 10.
Another question... is a report email currently sent when usage exceeds 80% of the volume size?
From the documentation: 'FreeNAS® provides an Alert icon in the upper right corner to provide a visual indication of events that warrant administrative attention. The alert system automatically emails the root user account whenever an alert is issued. ' So yes.
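A quick way to sanity-check that those alert emails can actually reach you is to send a test message to root from the FreeNAS shell. This is only a sketch; it assumes the email settings in the GUI and root's email address are already configured:

# send a one-line test mail to the root account; alerts go to the same address
echo "FreeNAS alert mail test" | mail -s "FreeNAS test" root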
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
bhyve isn't supported in FreeBSD 9.x, which is what FreeNAS 9.x is based on. The VMs / bhyve-oriented GUI will come along in FreeNAS 10.

From the documentation: 'FreeNAS® provides an Alert icon in the upper right corner to provide a visual indication of events that warrant administrative attention. The alert system automatically emails the root user account whenever an alert is issued. ' So yes.

Hmm... I'm going to check into this. I have a pool at home that's over 80% full, and I've gotten no email for it... what version was that implemented in?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The warning email is at 95%, which coincides with the change in ZFS behavior with regard to pool writes. It used to be 80%, and anyone still saying 80% hasn't kept up with the changes to ZFS over the past 2 years. :P
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
The warning email is at 95%, which coincides with the change in ZFS behavior with regard to pool writes. It used to be 80%, and anyone still saying 80% hasn't kept up with the changes to ZFS over the past 2 years. :p

Wait, what's this... news to me! So... does that mean the pool doesn't get automatically corrupted when it reaches 100% anymore?
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
Oh look, a great feature request: a GUI screen to adjust the "warning" and "critical" percentages for the pool-full email notifications!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Wait, what's this... news to me! So... does that mean the pool doesn't get automatically corrupted when it reaches 100% anymore?

The pool can still be corrupted at 100%. But the write behavior of ZFS changes at 95%, from optimizing for write performance to maximizing disk space utilization. One gives great write performance, while the other maximizes ZFS's use of space.

If you get to 100%, you can still kill your pool because of an improper transaction. I don't think there's much chance of that changing anytime soon, since you shouldn't be letting a pool get to 95% full anyway. Most ZFS guys I've chatted with consider corruption at 100% to be an admin failure that's "deserved" because you failed to do your job.
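If you want to keep an eye on where a pool sits relative to those thresholds, a quick check from the shell looks something like this (standard ZFS tooling, nothing FreeNAS-specific; 'tank' is just an example pool name):

# pool-level view: SIZE, ALLOC, FREE and CAP (percent used) for every pool
zpool list
# dataset-level breakdown of used/available space, including snapshots
zfs list -o space tank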
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
The pool can still be corrupted at 100%. But the write behavior of ZFS changes at 95%, from optimizing for write performance to maximizing disk space utilization. One gives great write performance, while the other maximizes ZFS's use of space.

If you get to 100%, you can still kill your pool because of an improper transaction. I don't think there's much chance of that changing anytime soon, since you shouldn't be letting a pool get to 95% full anyway. Most ZFS guys I've chatted with consider corruption at 100% to be an admin failure that's "deserved" because you failed to do your job.


So if that's the case... do you just need to set a quota of 95% on the root dataset of your pool? Or do you just have to monitor it like a hawk? Will that carry over to the datasets and their snapshots?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, if you set a quota of 95%, you create new problems, because you make the pool unwriteable and it requires work to restore it. You're just trading one problem for another.

You don't have to monitor it like a hawk; that's why it emails you when you get to 95% full. ;) You get the email, you order more disks, and you expand your pool (or delete stuff you don't need).
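For what it's worth, the "expand your pool" step on FreeNAS is normally done through the GUI's Volume Manager so the system database stays in sync, but the underlying ZFS operation is just adding another vdev. A rough sketch with made-up names:

# add a new mirrored vdev to an existing pool named 'tank'
# (da4/da5 are placeholder devices; on FreeNAS, prefer the Volume Manager over doing this by hand)
zpool add tank mirror da4 da5
# confirm the new capacity
zpool list tank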
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
Well, if you set a quota of 95%, you create new problems, because you make the pool unwriteable and it requires work to restore it. You're just trading one problem for another.

You don't have to monitor it like a hawk; that's why it emails you when you get to 95% full. ;) You get the email, you order more disks, and you expand your pool (or delete stuff you don't need).

Okay, you make your point, sir... except I will go in and modify the variable that says "95%" in the code (if I can find it) and set it to 90%, because in this particular case getting those drives can take time...
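One hedged way to hunt for that threshold is to grep the web UI code for the wording of the capacity alert; the path below is where the FreeNAS 9.x GUI normally lives, but treat it as an assumption (and note that hand edits will be lost on upgrade):

# search the FreeNAS web UI code for the capacity-alert message text
grep -rn "recommended value" /usr/local/www/freenasUI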
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
You know what would be nice? Having a "ZFS" quota, plus an option for some kind of "CIFS" quota that is, say, 3 percent smaller than the ZFS one. CIFS would then reject new writes, but the underlying filesystem would still have room to delete files. Not sure if that makes sense or not... I mean, say I make a new dataset for one user, and he goes and fills that dataset right up. Can I really control him? No. Will I get an email alert? No, because it's not the whole pool that is getting full, just that dataset with the quota.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You know what would be nice? Having a "ZFS" quota, plus an option for some kind of "CIFS" quota that is, say, 3 percent smaller than the ZFS one. CIFS would then reject new writes, but the underlying filesystem would still have room to delete files. Not sure if that makes sense or not... I mean, say I make a new dataset for one user, and he goes and fills that dataset right up. Can I really control him? No. Will I get an email alert? No, because it's not the whole pool that is getting full, just that dataset with the quota.

I understand what you are asking for, and I understand the desire to have this feature. Unfortunately, this would mean that Samba, when starting up, would have to calculate all of these quota values as well as maintain them. Do you think that would be quick and easy when you have tens of thousands of users and potentially hundreds of TB of data spread across hundreds of millions of files? Sure, for small servers it might be useful. But scaling up, it quickly becomes something that I don't see how you could realistically implement.

The only thing you can really do is set a quota on a dataset and warn people that if they lock that dataset up they'll have to contact the admin to fix it. They do that once or twice and hopefully they'll never do it again. ;) At every place I've worked, if you do something like max out your account and need an admin to fix it, you end up waiting 2+ days for the admin to get around to it. Meanwhile you're spending time doing without (which isn't fun), and you learn your lesson: manage your data more closely.
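For reference, setting that per-user dataset quota is a one-liner with standard ZFS tooling (FreeNAS also exposes it in the dataset options in the GUI); the pool and dataset names here are only examples:

# cap a single user's dataset at 500 GB, snapshots included
zfs set quota=500G tank/users/bob
# or cap only the live data, excluding snapshots and descendants
zfs set refquota=500G tank/users/bob
# check what's set
zfs get quota,refquota tank/users/bob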
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
OK, I don't want to hijack this thread, but why not use the reserved-space option on an empty volume? In my test setup this worked (I could free up space, and the other test volume was writeable again).
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
The warning email is at 95%, which coincides with the change in ZFS behavior with regard to pool writes. It used to be 80%, and anyone still saying 80% hasn't kept up with the changes to ZFS over the past 2 years. :P
On 9.2.1.3 they go out starting at 90%, and then multiple times after that until the pool is full.

I noticed this when I used an encrypted pool to write over some drives that came with an eBay server. I wanted to wipe them before reselling them, so I put them in a striped pool with no swap and ran dd with random data over the pool several times.

The capacity for the volume 'scratch' is currently at 90%, while the recommended value is below 80%.
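For anyone curious, that kind of wipe is roughly the following; this is just a sketch, the pool and file names are made up, and on FreeNAS the striped pool itself would be created through the GUI:

# fill the scratch pool with random data until it runs out of space,
# then remove the file and repeat for as many passes as you like
dd if=/dev/random of=/mnt/scratch/wipe.bin bs=1m
rm /mnt/scratch/wipe.bin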
 

Dennis.kulmosen

Explorer
Joined
Aug 13, 2013
Messages
96
OK, I don't want to hijack this thread, but why not use the reserved-space option on an empty volume? In my test setup this worked (I could free up space, and the other test volume was writeable again).
That is the best way to reserve space for the filesystem on the pool. And then you won't mess with the datasets containing all your data. :smile:
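In ZFS terms, that trick is just an empty dataset carrying a reservation that you can shrink or drop if the pool ever fills up. A rough sketch, with example names:

# create an empty dataset that does nothing but hold back some free space
zfs create tank/reserved
zfs set reservation=50G tank/reserved
# if the pool ever fills up, release the cushion so deletes and writes work again
zfs set reservation=none tank/reserved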


 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
So, going back to the beta... nobody commented on my post about new versions of FreeNAS (I don't know when this started) not being able to enable HTTPS; every time, I get a warning that there is a problem with the certificate and it reverts back to HTTP. Maybe this should be addressed in the next release? Or am I doing something wrong? (I basically do a new install, change the hostname, set up the IPs, then enable HTTP + HTTPS and restart FreeNAS; after boot, you get the warning that it reverted back to HTTP.)
 

jkh

Guest
So, going back to the beta... nobody commented on my post about new versions of FreeNAS (I don't know when this started) not being able to enable HTTPS; every time, I get a warning that there is a problem with the certificate and it reverts back to HTTP.
I'm guessing nobody commented because no one else can reproduce it. If you select http+https in settings, then go to https://..., you will get the usual certificate warning from your browser (since the cert is self-signed, given no other action on your part to use a proper one), and it will happily use https or http. I just verified it with 9.2.1.6-BETA - works just fine.

I might suggest that the problem is with your browser, not FreeNAS. You probably have your security settings set such that any attempt to visit a site with a self-signed cert simply kicks you out and tries the non-SSL URL.
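If you want to see exactly what certificate the box is presenting (and whether the browser is the one balking), openssl can show it from any client machine; the hostname below is a placeholder:

# fetch the certificate chain the FreeNAS web UI is serving on port 443
openssl s_client -connect freenas.local:443 -showcerts </dev/null
# or just print the subject, issuer and validity dates of the presented cert
openssl s_client -connect freenas.local:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates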
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
I'm guessing nobody commented because no one else can reproduce it. If you select http+https in settings, then go to https://..., you will get the usual certificate warning from your browser (since the cert is self-signed, given no other action on your part to use a proper one), and it will happily use https or http. I just verified it with 9.2.1.6-BETA - works just fine.

I might suggest that the problem is with your browser, not FreeNAS. You probably have your security settings set such that any attempt to visit a site with a self-signed cert simply kicks you out and tries the non-SSL URL.



No, I'm not getting the warning in the browser; I'm getting the warning in the alerts area inside FreeNAS (where you usually see the ZFS volume status = HEALTHY), saying that there is an issue with the certificate and it's reverting back to HTTP. It's interesting that nobody can replicate this, because I have tried to enable this feature on the last 3 installations I did, and it has been present for at least the last 3 versions. Looks like I'll try again and see...
 

jkh

Guest
No, I'm not getting the warning in the browser; I'm getting the warning in the alerts area inside FreeNAS (where you usually see the ZFS volume status = HEALTHY)
Nope, can't reproduce that at all. Did you try to upload your own certificate, perhaps? If the cert you tried to use was malformed or in some way invalid, that would account for this too. If you did this at some point in the distant past, it would also be getting pulled forward with your upgrades. I would check!
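If a previously uploaded certificate is the suspect, it's easy to sanity-check the PEM files with openssl before pasting them into the GUI again; the file names here are placeholders:

# dump the certificate and confirm it parses cleanly
openssl x509 -in mycert.crt -noout -text
# verify the private key is intact
openssl rsa -in mykey.key -check -noout
# confirm the key actually matches the certificate (the two hashes should be identical)
openssl x509 -in mycert.crt -noout -modulus | openssl md5
openssl rsa -in mykey.key -noout -modulus | openssl md5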
 