jgreco
Resident Grinch
Joined: May 29, 2011
Messages: 18,680
You do realize that there's people here who store more than just their downloaded 0day war3z torrent rips on their FreeNAS machines, right?
I use mine to store my recipes!
32MB SDRAM but only 16MB flash. Sounds really weird, these days.

Oh man, that takes me back to the days of hacking Compaq iPAQ IA-1 devices to run a stripped-down Linux off a CF card.
I would not use SSDs as any part of a NAS.
My advice is to only use your NAS as storage; forget messing about with cache drives, jails and all that rubbish.
Have a good main PC and just treat your NAS as a useful and stable storage device.
job with EMC, NetApp, Dell/Compellent, etc.
Damn, that is a hefty amount of hardware for a terminal. I recall taking a 720k 5.25" floppy, cutting a tab in it and making it a double-sided floppy, or how about the old 3.5" floppy and cutting a hole in the plastic case to make it a 1.44MB floppy disk. Then I'd insert it into a computer with a whole 512KB of RAM (had to populate the motherboard with lots of small DIP chips), lots of processing power! Back in those days I actually had a copy of Windows 1.0 and it worked okay for what it was. I'd never heard of FreeBSD or Linux but I did hear of CP/M and UNIX. I led a sheltered life.

Booted off a 1.44MB floppy onto a 486 box with 16MB RAM, and it was used to make old PCs into X-terminals running FreeBSD and XFree86.
########## SMART status report for ada1 drive (SandForce Driven SSDs: 10316511580009990080) ##########

Error SMART Status command failed
Please get assistance from http://smartmontools.sourceforge.net/
Register values returned from SMART Status command are:
CMD=0xb0  FR=0xda  NS=0xffff  SC=0xff  CL=0xff  CH=0xff  RETURN=0x0000

SMART overall-health self-assessment test result: FAILED!
No failed Attributes found.

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   117   099   050    Pre-fail Always   -           0/167471433
  5 Retired_Block_Count     0x0033   095   095   003    Pre-fail Always   -           416
  9 Power_On_Hours_and_Msec 0x0032   100   100   000    Old_age  Always   -           25443h+15m+50.850s
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age  Always   -           363
171 Program_Fail_Count      0x0000   000   000   000    Old_age  Offline  -           0
172 Erase_Fail_Count        0x0000   000   000   000    Old_age  Offline  -           0
174 Unexpect_Power_Loss_Ct  0x0030   000   000   000    Old_age  Offline  -           329
177 Wear_Range_Delta        0x0000   000   000   ---    Old_age  Offline  -           1
181 Program_Fail_Count      0x0000   000   000   000    Old_age  Offline  -           0
182 Erase_Fail_Count        0x0000   000   000   000    Old_age  Offline  -           0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age  Always   -           0
194 Temperature_Celsius     0x0022   028   044   000    Old_age  Always   -           28 (Min/Max 0/44)
195 ECC_Uncorr_Error_Count  0x001c   117   099   000    Old_age  Offline  -           0/167471433
196 Reallocated_Event_Count 0x0033   100   100   003    Pre-fail Always   -           0
231 SSD_Life_Left           0x0013   093   093   010    Pre-fail Always   -           1
233 SandForce_Internal      0x0000   000   000   000    Old_age  Offline  -           10944
234 SandForce_Internal      0x0000   000   000   000    Old_age  Offline  -           9152
241 Lifetime_Writes_GiB     0x0032   000   000   000    Old_age  Always   -           9152
242 Lifetime_Reads_GiB      0x0032   000   000   000    Old_age  Always   -           15360
Well, ok then :)

Well, almost three years of life, 9TB written. That works out to about 9GB written per day, which is pretty heavy use for a consumer SSD. Intel was rating its consumer drives at 20GB/day three years ago, IIRC.
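That back-of-envelope figure is easy to check against the raw SMART values in the report above (a quick sketch using the Power_On_Hours and Lifetime_Writes_GiB attributes):

```python
# Raw SMART values from the report above.
power_on_hours = 25443        # attribute 9  (Power_On_Hours_and_Msec)
lifetime_writes_gib = 9152    # attribute 241 (Lifetime_Writes_GiB)

days_powered_on = power_on_hours / 24            # ~1060 days, close to three years
gib_per_day = lifetime_writes_gib / days_powered_on

print(f"~{days_powered_on:.0f} days powered on, ~{gib_per_day:.1f} GiB written per day")
```

That comes out to roughly 8.6 GiB/day, consistent with the "about 9GB per day" estimate above.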
Have you made any major changes to your system? Added more RAM (still at 16GB?)? Switched over to enterprise-grade SSDs that are better suited to this purpose?
Perhaps at some point we will be able to tell ZFS to only use L2ARC for datasets xyz...
You put it better than I ;)

I would draw the conclusion that, if you don't have sufficient RAM causing cache thrashing, and you continue to use drives that aren't recommended due to their low (relative) lifetimes... and you push said drives beyond their manufacturer-specified lifetime... then yes, they'll fail.
Cool, thanks.

SSH in, zfs set secondarycache=none|metadata|all datasetname, done.
The default is "all" so you'll want to set it to "none" or "metadata" for the datasets to exclude from L2ARC. I wouldn't expect it to be a magic bullet but it should mitigate the damage.
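For reference, the per-dataset commands look like this (a sketch; `tank/media` is a hypothetical dataset name, and note the accepted property values are `all`, `none`, and `metadata`):

```shell
# Keep this dataset's data blocks out of L2ARC entirely:
zfs set secondarycache=none tank/media

# Or cache only its metadata in L2ARC:
zfs set secondarycache=metadata tank/media

# Confirm the current setting:
zfs get secondarycache tank/media
```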
I have 16GB of RAM but never seem to see it get used. At the moment it's using 1GB (1019.92MB). It never really climbs.
I would draw the conclusion, though, that home users who use the server for storage / downloading via jails / backups / other non-repetitive work should not use an SSD.
I would draw the conclusion that, if you don't have sufficient RAM causing cache thrashing, and you continue to use drives that aren't recommended due to their low (relative) lifetimes... and you push said drives beyond their manufacturer-specified lifetime... then yes, they'll fail.
The Samsung 950 Pro 512GB has a specced endurance of 400TBW so I'm aiming to minimize thrashing by
yup, it does:

So the ZFS tab on the Reporting page shows only 1GB of ARC used? That's definitely abnormal. Does arcstat.py from a command line agree with that?
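For anyone wanting to cross-check from the shell, a couple of ways to read ARC usage on a FreeNAS/FreeBSD box (a sketch; tool availability can vary by version):

```shell
# Sample ARC size and hit statistics once a second, five samples:
arcstat.py 1 5

# Or read the raw ARC size counter directly (in bytes):
sysctl kstat.zfs.misc.arcstats.size
```

If both agree the ARC is stuck around 1GB on a 16GB machine, something is limiting it rather than the workload simply not filling it.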
By what, man? Don't leave us hanging!