Advice on this 8 drive build?

Status
Not open for further replies.

tycoonbob

Dabbler
Joined
Nov 14, 2014
Messages
23
So I have a post from the other day where I was looking to put together my first real FreeNAS build with a Dell C2100. Long story short: in my current house the noise isn't a problem, but in the house I'm moving to (temporarily, while I build a house) the C2100 is simply too loud, because my office is adjacent to the living room. C2100 = no go.

So I'm looking at a new build, in a desktop form factor.

Chassis: U-NAS NSC-800
Motherboard/CPU: Supermicro A1SAI-2750F
RAM: Intelligent Memory 16GB DDR3L ECC SO-DIMM (1x16GB) - x4
HBA: LSI 9211-8i
PSU: SeaSonic SS-350M1U
PCIe Extension: 16X PCIe Extension Kit for NSC-800 Server Chassis
SATA DOM: Innodisk SATADOM-SV 3ME
Storage Drives: Toshiba 5TB PH3500U-1I72 - x8
SLOG Drive: Intel DC S3710 200GB
L2ARC Drive: Intel DC S3610 400GB

So my plan is to do this in two phases.
Phase 1:
Initial build, with two 16GB DIMMs and all 8 storage drives (which I already own)
Phase 2:
Add two more 16GB DIMMs, plus the SLOG and L2ARC drives.

The purpose of this NAS is primarily file storage. I plan to build a new hypervisor box when the Xeon-D stuff releases, and VMs will run from local SSD storage. Nightly backups of my VMs will be made and stored on this FreeNAS box. All my services will run in VMs on that other box (Plex, SB/CP/HP, UniFi Controller, Domain Controllers, Guacamole, Splunk, Observium, etc.), so I do not need major processing power on the FreeNAS box, outside of what ZFS needs. Essentially, I want to make sure I can saturate a GbE link (and I will be using LACP across the 4 links on this C2750 motherboard).

I chose the 400GB Intel S3610 for L2ARC because of performance. The 200GB model has half the write speed, so I figured I'd get the bigger drive and use HPA (Host Protected Area) to restrict it to ~200GB, which should effectively double the wear-out time (a sketch of how that could be done is below).
I chose the 200GB Intel S3710 for SLOG as it appears to be the most performant drive I could find while still offering good endurance.
I chose the U-NAS NSC-800 chassis because it's a desktop unit, and I believe it will have better airflow than the Silverstone DS380.
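For reference, the HPA over-provisioning mentioned above is basically a one-liner with hdparm on a Linux box (FreeBSD/FreeNAS would need a different tool, so treat this purely as a sketch; the device name and exact sector count are placeholders):

Code:
# Show the current visible sector count vs. the drive's native max.
hdparm -N /dev/sdX

# Set a persistent HPA so only ~200GB is visible to the OS.
# 390,625,000 x 512-byte sectors = 200,000,000,000 bytes; the 'p' prefix makes it survive power cycles.
hdparm -N p390625000 /dev/sdX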

Cost is definitely a concern, but I am building for performance here. I wish a mini-ITX board existed with 8 DIMM slots, but we all know that's not likely. These 16GB DIMMs are definitely expensive (~$350/ea), but I'll buy them if it will increase performance.

My ultimate goal is optimal performance and data reliability. I plan to do 4 vdevs of 2 mirrored drives each, effectively a RAID 10.
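On the command line that pool layout would look roughly like this (FreeNAS normally builds it through the GUI; the pool name and device names are hypothetical, assuming the drives show up as da0-da7 behind the 9211-8i):

Code:
# Four 2-way mirror vdevs striped together (ZFS's equivalent of RAID 10).
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7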

Questions:
1) Any alternative recommendations for a SATA DOM? Is there something better I can choose? I was looking at the 32GB model mainly because of the r/w speeds.
2) Any chassis recommendations over this NSC-800? Looking for something with hot-swap bays. Will be sitting on my desk so I don't want it to be too loud, but it doesn't have to be silent.
3) Any idea what kind of performance I can expect? Do you think I can saturate a GbE link at phase 1 (i.e., build with 32GB RAM and no SLOG/L2ARC)?
4) Any other recommendations/concerns/things to consider?

Thanks!
 
Last edited:

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I think you have a typo (or maybe I'm confused); it sounds like you do NOT need major processing power on the FreeNAS box (the VM box is something else, right)? What do you plan to use the FreeNAS box for? Just backups, and maybe media?

1. I think that one will be fine; speed is only a factor at boot and during system updates.
3. Even with the phase 1 build you should easily be saturating a 1GbE link, especially with large sequential writes. You might even be able to use RAID-Z2 (depending on your planned usage).
 

tycoonbob

Dabbler
Joined
Nov 14, 2014
Messages
23
Oops, I did have a typo. Fixed now.

Yeah, so all of my services (VMs) will be running elsewhere. FreeNAS is purely to be used as a NAS for documents/photos/media/backups. I will use the Crashplan plugin to back up my FreeNAS box.

Even though my data is mostly WORM (Write Once, Read Many), I feel safer with mirrors over Z2. That comes from my experience with hardware RAID, where I trust RAID 10 more than RAID 6. I may do some performance testing, but I'm pretty sure I want to do mirrors. I currently have about 11TB of data, so with mirrors I should have what, ~18.5TB usable storage? Plenty of room to grow, which should last at least 2-3 years (at which point I will start replacing drives with larger ones, or consider a new build).

If the consensus is that I can get by with 32GB RAM, I may just opt for 4x8GB SO-DIMMs and save a bit of money. While I wouldn't mind having a SLOG/L2ARC, if I can saturate a GbE link without them I don't see the benefit for my use case (now, if I were doing NFS/iSCSI to a hypervisor, sure...I know I could increase IOPS with a SLOG, but I would much rather invest in 64GB RAM first). Hmm...
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Started writing, got a "new posts" notification, so some of what I've written has already been said.

1) The DOM is fine, the biggest thing to ensure is that you're buying a genuine one and not a knock-off.
2) That case requires a flexible PCIe riser to add in a card. Having had experience with those in scrypt-miner boxes: run away now. Get the DS380; airflow shouldn't be an issue with proper fans and good cable management.
3) My original baby-NAS based on an old HP ML110 could just about saturate GbE with "home use sequential I/O" (e.g. media and documents), and it was a C2D-based Xeon with 8GB of RAM.
4) Thoughts follow:

If you're not actually going to run VMs from the FreeNAS box, and only use it as CIFS/NFS document sharing, you very likely don't need L2ARC or SLOG.

You also very likely don't need 64GB of RAM, so you can start with 2x8GB and see how it performs.

Mirrors will give better performance under concurrent I/O, such as playing media from the unit during backups, but the difference probably wouldn't be enough to matter here. Do some performance testing, but with a RAID-Z2 setup you'll gain "two drives" worth of space, for a total of 30TB before overhead vs. the 20TB you'd get from mirrors.
 

tycoonbob

Dabbler
Joined
Nov 14, 2014
Messages
23
Buy a legit DOM, got it.

I've never had any experience with a flexible riser, so that is something I'm concerned about. I see many people using this NSC-800 chassis with the riser that U-NAS sells, but never any comments on how well (or poorly) it works. The DS380 is ~$50 cheaper, plus I wouldn't have to buy that $10 riser. I will have to think on it, but that's definitely a good argument for the DS380. Everything I read says the DS380 has no cable management. Are there any other cases like the DS380 to consider? I was thinking Lian Li had one, but I can't seem to find it now.

I like the idea of not having to deal with L2ARC/SLOG, which will keep things less complex. I also like the idea of running with 32GB RAM, as it will save me money, assuming performance is solid.

When it comes to hardware RAID, I won't even touch RAID 6. I'm on the fence about RAID-Z2, although an additional ~10TB of usable storage space would be nice. I know Z2 has more of a performance overhead on the CPU because of the double-parity calculations, whereas mirrors are just straight-up copies; I'm talking overall system performance, not just disk r/w performance. I will play around with Z2 vs mirrors before putting this build into "production", once I get to that point.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

Yeah, that RAM is insanely expensive.

If you really want crazy amounts of RAM in a mini-ITX format, consider Xeon-D. It takes RDIMMs, so you can go up to 128GB, and 64GB will be cheaper. At 350 bucks each for the UDIMMs, Xeon-D might even be slightly cheaper in the end. You also get a crapton of compute and, depending on the board, integrated 10GbE connectivity for a small premium. Oh, and more PCIe connectivity than the relatively puny Avoton.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Flexible risers are susceptible to interference; they just go completely against the idea of ZFS being a reliable system. Not sure what else to use for a case, but if Lian-Li has one I imagine it's going to be pricier than the DS380.

Unless physical size is at a premium, how about looking at a non-SFF desktop chassis? Although you may have a hard time finding a hot-swap capable one.

Performance-wise, I doubt that any modern CPU, even that Atom, will see a bottleneck from using RAID-Z2 vs mirrors. I'm sure we'll have an Avoton Atom owner chime in at some point to report RAID-Z2 performance.
 

tycoonbob

Dabbler
Joined
Nov 14, 2014
Messages
23
Yeah, that RAM is insanely expensive.

If you really want crazy amounts of RAM in a mini-ITX format, consider Xeon-D. It takes RDIMMs, so you can go up to 128GB, and 64GB will be cheaper. At 350 bucks each for the UDIMMs, Xeon-D might even be slightly cheaper in the end. You also get a crapton of compute and, depending on the board, integrated 10GbE connectivity for a small premium. Oh, and more PCIe connectivity than the relatively puny Avoton.

Yeah, I was looking into the Xeon-D, and I believe that's what I will use for my next hypervisor box. I may end up just doing 32GB in the C2750 mini-ITX build and be done with it. While I would absolutely love the Xeon-D for FreeNAS, I know it's more than I need. I would end up wanting to use that in a 16-24 drive build, with SLOG/L2ARC...just because I can. But trying to be sensible, I don't need that.

I think I want to move away from rack-mounted stuff at home, into a box (or two) in the corner of my office.

Hmm...how about a Xeon-D build (8c/16t) as an all-in-one box? CentOS in FreeNAS jails, yes? Could I use a single SSD to store jails on, and have them back up to my zpool?


Flexible risers are susceptible to interference; they just go completely against the idea of ZFS being a reliable system. Not sure what else to use for a case, but if Lian-Li has one I imagine it's going to be pricier than the DS380.

Unless physical size is at a premium, how about looking at a non-SFF desktop chassis? Although you may have a hard time finding a hot-swap capable one.

Performance-wise, I doubt that any modern CPU, even that Atom, will see a bottleneck from using RAID-Z2 vs mirrors. I'm sure we'll have an Avoton Atom owner chime in at some point to report RAID-Z2 performance.

Physical space is not at a premium, and never really has been for me, but noise is more of a concern as I move into a temporary housing situation (for ~2 years). I could fit a full-size tower wherever, but I just don't know how I want to build everything out.


So to provide more info: I currently have a 4U NAS based on the Norco RPC-4224 chassis with hardware RAID (12 2TB drives in RAID 10), 32GB RAM, and quad GbE NICs. I also have a Dell R610 with dual quad-core CPUs and 48GB RAM that I run Proxmox on. My plan was to replace the Norco box with the C2100 and move my Proxmox storage to a local SSD, but after powering everything up in the new house the other day, it's just too loud (not hair-dryer loud, but the low drone is too much). There's not much I can do to the C2100 or R610 to make them quieter, so I'm looking to move back to whitebox(es). I have a very large desk (7'x6', L-shaped, solid wood); I could easily put a DS380 on the far end and it would be out of the way. Likewise, I could put a full-size tower underneath the desk in the corner and it would be out of my way.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
If physical space is not an issue, take a look at the combination I went with. It's quiet enough for an office, even with the hot swap cages' fans at full speed (the 120mm Noctua NF-F12 PPC3000 PWMs that are responsible for general airflow are really loud at full speed, but they're quiet under regular use).

From my sig, for those on mobile devices:

Sharkoon T9 Value with 2 * Icy Dock FatCage MB153SP-B 3-in-2 drive cages

The 5-bay ones require the chassis to be modified, though, so 3-in-2 and 4-in-3 are the better options.
 

tycoonbob

Dabbler
Joined
Nov 14, 2014
Messages
23
Hi Tycoonbob, did you build this? I'm considering getting 8 of these Toshiba 5TB drives just like you, and I was wondering how your drives are doing a few months later? Thanks!

Not quite. I ended up building a Supermicro-based 4U box, but I did use the same Toshiba 5TB drives. I also ended up not using FreeNAS, since I wanted to do virtualization and ZFS together on the same box...so I used Proxmox VE 3.4, which now ships with ZFS on Linux and works great.

Specs:
Chassis: Supermicro SC846E16-R1200B (replaced both PSUs with two Supermicro PWS-920P-SQs, which are nearly silent)
Motherboard: Supermicro X8DTE-F
CPUs: x2 Intel Xeon L5640 (hexa-core, 2.26GHz, HT)
RAM: 96GB DDR3 (12 x 8GB PC3-10600, Registered ECC)
RAID/HBA: Adaptec RAID 6805 512MB
SAS Expander/Backplane: Supermicro BPN-SAS-846EL1 (the SAS expander is the backplane of the chassis)
OS Drives: x2 Crucial MX100 128GB SSD (ZFS mirror, onboard SATA)
VM Drives: x2 Mushkin Enhanced Striker 480GB 2.5" SSD (hardware RAID 1)
Storage Drives: x10 Toshiba PH3500U-1I72 5TB 7200RPM (zpool with 5 2-way mirror vdevs)

The server, without drives, cost me exactly $1000 shipped from eBay. The Adaptec 6805 was an unexpected surprise (along with a quad GbE Intel card and a dual GbE Intel card), so I decided to use the 6805 for my ZFS needs. All drives (except for the Mushkin 480GB SSDs) are set to JBOD mode and pass through to ZFS just fine. I still have access to all SMART data, and I actually pull SMART stats every 30 minutes, sort the output, and send it to Splunk for monitoring of health status, temperature, reallocated sectors, power-on hours, and power cycle count. Pretty cool.
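The collection piece is nothing fancy; conceptually it's just a cron'd shell script along these lines (the paths, device glob, and hand-off to Splunk here are simplified placeholders rather than my exact setup, e.g. a Universal Forwarder monitoring the log file):

Code:
#!/bin/bash
# Append SMART attributes for each disk to a log that Splunk monitors.
# Run from cron, e.g.: */30 * * * * /usr/local/bin/smart-to-splunk.sh
LOG=/var/log/smart/smartctl.log
for dev in /dev/sd[a-j]; do
    echo "$(date -Is) device=${dev}"
    smartctl -d sat -A "${dev}" | sort
done >> "${LOG}"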

Anyway, the 10 Toshiba drives are in a single zpool with 2-way mirror vdevs, and performance is great. Originally I planned to add a pair of Intel S3710s (one for SLOG and one for L2ARC), but I may end up not doing that, at least not yet. All of my data is shared out via NFS, but no VMs live on this pool. Since I have a lot of NFS traffic, I may end up adding at least a SLOG to help with those (sync) writes. Then again, my current write performance is pretty good.
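If I do end up adding the SLOG, it's a single command on ZFS on Linux; something like the following, with the by-id path as a placeholder for whatever the S3710 enumerates as:

Code:
# Attach a dedicated log (SLOG) device to the pool; the device path is an example.
zpool add tank_data_01 log /dev/disk/by-id/ata-INTEL_SSDSC2BA200G4_EXAMPLE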

I did set my upper ARC limit at 48GB, to leave plenty of room for VMs and to give my Crashplan client a 16GB Java max heap size (just for the initial backup; once that is complete, I will drop Crashplan's Java max heap to 8GB and will likely increase my ARC limit to 64GB if free RAM allows for it).
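For anyone curious how that cap is applied on ZFS on Linux (it's a module parameter rather than a FreeNAS-style tunable), a minimal sketch; the value is just 48GiB expressed in bytes:

Code:
# /etc/modprobe.d/zfs.conf -- persist the ARC ceiling across reboots
options zfs zfs_arc_max=51539607552

# Apply immediately without rebooting
echo 51539607552 > /sys/module/zfs/parameters/zfs_arc_max

The arcstat output below shows the cache sitting right at that 48G ceiling.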


Code:
root@mjolnir:~# arcstat.py -f time,read,hits,hit%,miss,miss%,arcsz,c 5 25
    time  read  hits  hit%  miss  miss%  arcsz     c
09:02:15     0     0     0     0      0    47G   48G
09:02:20    26    26    99     0      0    47G   48G
09:02:25    25    25    97     0      2    47G   48G
09:02:30    32    32    99     0      0    47G   48G
09:02:35    37    36    97     0      2    47G   48G
09:02:40    30    30    99     0      0    48G   48G
09:02:45    26    25    96     0      3    47G   48G
09:02:50    25    25   100     0      0    47G   48G
09:02:55    28    28    98     0      1    47G   48G
09:03:00    38    38    98     0      1    47G   48G
09:03:05    33    33    97     0      2    48G   48G
09:03:10    26    26    98     0      1    48G   48G
09:03:15    29    28    97     0      2    48G   48G
09:03:20    28    28   100     0      0    47G   48G
09:03:25    26    25    98     0      1    47G   48G
09:03:30   126    70    55    55     44    48G   48G
09:03:35    54    49    92     4      7    47G   48G
09:03:40    24    22    92     1      7    48G   48G
09:03:45    30    30    98     0      1    47G   48G
09:03:50    25    25    99     0      0    47G   48G
09:03:55    28    27    97     0      2    47G   48G
09:04:00    38    37    97     0      2    47G   48G
09:04:05    37    36    97     0      2    47G   48G
09:04:10    31    30    99     0      0    47G   48G
09:04:15    37    36    97     0      2    47G   48G


Sequential Writes (8 x 5TB drives):
Code:
root@mjolnir:~# time sh -c "dd if=/dev/zero of=/tanks/tank_data_01/dd_test_8g_128k.tmp bs=128k count=62500"
62500+0 records in
62500+0 records out
8192000000 bytes (8.2 GB) copied, 6.23245 s, 1.3 GB/s

real    0m6.236s
user    0m0.024s
sys    0m3.748s

root@mjolnir:~# time sh -c "dd if=/dev/zero of=/tanks/tank_data_01/dd_test_256g_128k.tmp bs=128k count=2000000 conv=fdatasync"
2000000+0 records in
2000000+0 records out
262144000000 bytes (262 GB) copied, 448.095 s, 585 MB/s

real    7m28.098s
user    0m1.417s
sys    2m38.794s


Sequential Reads (8 x 5TB drives):
Code:
root@mjolnir:~# time sh -c "dd if=/tanks/tank_data_01/dd_test_8g_128k.tmp of=/dev/null bs=128k count=62500"
62500+0 records in
62500+0 records out
8192000000 bytes (8.2 GB) copied, 12.0287 s, 681 MB/s

real    0m12.031s
user    0m0.010s
sys    0m2.722s

root@mjolnir:~# time sh -c "dd if=/tanks/tank_data_01/dd_test_256g_128k.tmp of=/dev/null bs=128k count=2000000"

2000000+0 records in
2000000+0 records out
262144000000 bytes (262 GB) copied, 413.405 s, 634 MB/s

real    6m53.411s
user    0m0.435s
sys    1m22.216s


Here's an overview of my ZFS usage:
Code:
root@mjolnir:~# zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool          119G  59.4G  59.6G         -    33%    49%  1.00x  ONLINE  -
tank_data_01  22.7T  9.60T  13.1T         -    11%    42%  1.00x  ONLINE  -

root@mjolnir:~# zpool status
  pool: rpool
state: ONLINE
  scan: scrub repaired 0 in 0h5m with 0 errors on Thu Jul 16 01:05:41 2015
config:

    NAME                                               STATE     READ WRITE CKSUM
    rpool                                              ONLINE       0     0     0
     mirror-0                                         ONLINE       0     0     0
       ata-Crucial_CT128MX100SSD1_15030E7060BF-part2  ONLINE       0     0     0
       ata-Crucial_CT128MX100SSD1_15030E7059A1-part2  ONLINE       0     0     0

errors: No known data errors

  pool: tank_data_01
state: ONLINE
  scan: scrub repaired 0 in 5h10m with 0 errors on Sun Jul 12 07:10:02 2015
config:

    NAME                        STATE     READ WRITE CKSUM
    tank_data_01                ONLINE       0     0     0
     mirror-0                  ONLINE       0     0     0
       scsi-350000395bb783f80  ONLINE       0     0     0
       scsi-350000395ebe01bc8  ONLINE       0     0     0
     mirror-1                  ONLINE       0     0     0
       scsi-350000395ab78540c  ONLINE       0     0     0
       scsi-350000395ebf0327e  ONLINE       0     0     0
     mirror-2                  ONLINE       0     0     0
       scsi-350000395bb800db6  ONLINE       0     0     0
       scsi-350000395eb704d64  ONLINE       0     0     0
     mirror-3                  ONLINE       0     0     0
       scsi-350000395bb800d48  ONLINE       0     0     0
       scsi-350000395eb88298d  ONLINE       0     0     0
     mirror-4                  ONLINE       0     0     0
       scsi-350000395bb7030a9  ONLINE       0     0     0
       scsi-350000395bbf01af4  ONLINE       0     0     0

errors: No known data errors

root@mjolnir:~# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  75.2G  42.0G    96K  /rpool
rpool/ROOT             59.3G  42.0G    96K  /rpool/ROOT
rpool/ROOT/pve-1       59.3G  42.0G  59.3G  /
rpool/swap             15.8G  57.8G  29.6M  -
tank_data_01           9.60T  12.7T    21K  /tanks/tank_data_01
tank_data_01/audio      534G  12.7T   534G  /tanks/tank_data_01/audio
tank_data_01/owncloud  2.63G  47.4G  2.63G  /tanks/tank_data_01/nfs/owncloud
tank_data_01/proxmox   27.0G   173G  27.0G  /tanks/tank_data_01/proxmox
tank_data_01/software  4.65G  12.7T  4.65G  /tanks/tank_data_01/software
tank_data_01/torrent    123G  77.3G   123G  /tanks/tank_data_01/nfs/torrent
tank_data_01/usenet    18.4M   200G  18.4M  /tanks/tank_data_01/nfs/usenet
tank_data_01/users      191G  12.7T   191G  /tanks/tank_data_01/users
tank_data_01/video     8.74T  12.7T  8.74T  /tanks/tank_data_01/video
root@mjolnir:~# zfs get sharenfs
NAME                   PROPERTY  VALUE                             SOURCE
rpool                  sharenfs  off                               default
rpool/ROOT             sharenfs  off                               default
rpool/ROOT/pve-1       sharenfs  off                               default
rpool/swap             sharenfs  -                                 -
tank_data_01           sharenfs  off                               default
tank_data_01/audio     sharenfs  rw=@172.16.1.0/24,no_root_squash  local
tank_data_01/owncloud  sharenfs  rw=@172.16.1.115,no_root_squash   local
tank_data_01/proxmox   sharenfs  off                               default
tank_data_01/software  sharenfs  rw=@172.16.1.0/24,no_root_squash  local
tank_data_01/torrent   sharenfs  rw=@172.16.1.111,no_root_squash   local
tank_data_01/usenet    sharenfs  rw=@172.16.1.110,no_root_squash   local
tank_data_01/users     sharenfs  rw=@172.16.1.0/24,no_root_squash  local
tank_data_01/video     sharenfs  rw=@172.16.1.0/24,no_root_squash  local

root@mjolnir:~# zfs get dedup
NAME                   PROPERTY  VALUE          SOURCE
rpool                  dedup     off            default
rpool/ROOT             dedup     off            default
rpool/ROOT/pve-1       dedup     off            default
rpool/swap             dedup     off            default
tank_data_01           dedup     off            default
tank_data_01/audio     dedup     off            default
tank_data_01/owncloud  dedup     off            default
tank_data_01/proxmox   dedup     off            default
tank_data_01/software  dedup     off            default
tank_data_01/torrent   dedup     off            default
tank_data_01/usenet    dedup     off            default
tank_data_01/users     dedup     off            default
tank_data_01/video     dedup     off            default

root@mjolnir:~# zfs get compression
NAME                   PROPERTY     VALUE     SOURCE
rpool                  compression  lz4       local
rpool/ROOT             compression  lz4       inherited from rpool
rpool/ROOT/pve-1       compression  lz4       inherited from rpool
rpool/swap             compression  lz4       inherited from rpool
tank_data_01           compression  off       default
tank_data_01/audio     compression  off       default
tank_data_01/owncloud  compression  off       default
tank_data_01/proxmox   compression  off       default
tank_data_01/software  compression  off       default
tank_data_01/torrent   compression  off       default
tank_data_01/usenet    compression  off       default
tank_data_01/users     compression  off       default
tank_data_01/video     compression  off       default


As you can see, I have pretty much every dataset shared out via NFS, and you can likely tell what each one is for. I have OpenLDAP set up, with user/group permissions configured on all shares. It took a little while to get going, but it's been working great ever since. Also, I am not doing any dedup or compression on my primary storage pool.

I eventually want to get 6 more of these 5TB drives, for a total of 16, which should be somewhere in the realm of 35TB usable; that's more than I will need in the next year or two (for sure). Some will say I'm wasting a lot of storage space by going with mirrors, but I feel more confident in RAID 10 (or striped ZFS mirrors) than I do in parity RAID (or ZFS RAID-Z). I'm happy with the redundancy, the storage space, and the performance I'm seeing. I have 6 NICs total in this box: 2 (in LACP) are for VM traffic only, and the other 4 (also in LACP) are for management and NFS traffic.
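For anyone replicating the networking side: on Proxmox (Debian) an LACP bond is a few stanzas in /etc/network/interfaces, with a matching LAG configured on the switch. A minimal sketch for the 4-port management/NFS bond; the interface names and address are hypothetical:

Code:
# /etc/network/interfaces (excerpt) -- 4-port 802.3ad (LACP) bond feeding a bridge
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1 eth2 eth3
    bond-mode 802.3ad
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 172.16.1.10
    netmask 255.255.255.0
    gateway 172.16.1.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0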

If there is anything else you'd like to know or see about my build, just let me know.
 

tycoonbob

Dabbler
Joined
Nov 14, 2014
Messages
23
Thanks so much for the detailed response. Have you noticed any noises from the Toshiba drives like what is described here?

I have not noticed that, but you now have me paranoid. I work from home, so I'm in my office at least 8-10 hours a day, and my head is no more than 12-15 feet away from the front of my server (on the other side of my desk). Now I'm probably going to spend a good portion of my day lying on the floor next to my server. :(

BONUS:
Here's a screenshot of my Splunk for Smartctl app. I use Splunk a lot at home (I'm a Linux Systems Engineer for a hosting provider, but Splunk is one of the primary systems I manage), so I built this app. I still need to get the Python smartctl collection script built into the app, which will make things much more streamlined. At that point, it may end up on GitHub.

[Screenshot: Splunk for Smartctl dashboard (WLulHL5.png)]
 

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
Sorry, I didn't mean to make you paranoid, and for what it's worth, it sounded from that other thread as if the noise was normal. That Splunk app looks pretty cool. One more question: do you know if the Toshiba drives have TLER/ERC support? The output of "smartctl -l scterc" on one of them would be appreciated.
 

tycoonbob

Dabbler
Joined
Nov 14, 2014
Messages
23
Sorry, I didn't mean to make you paranoid, and for what it's worth, it sounded from that other thread as if the noise was normal. That Splunk app looks pretty cool. One more question: do you know if the Toshiba drives have TLER/ERC support? The output of "smartctl -l scterc" on one of them would be appreciated.

The Toshibas do support ERC, but it's disabled by default.

Code:
root@mjolnir:~# smartctl -d sat -l scterc /dev/sdi
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-2.6.32-39-pve] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

SCT Error Recovery Control:
           Read: Disabled
          Write: Disabled

root@mjolnir:~# smartctl -d sat -l scterc,80,80 /dev/sdi
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-2.6.32-39-pve] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

SCT Error Recovery Control set to:
           Read:     80 (8.0 seconds)
          Write:     80 (8.0 seconds)

root@mjolnir:~# smartctl -d sat -l scterc /dev/sdi
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-2.6.32-39-pve] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

SCT Error Recovery Control:
           Read:     80 (8.0 seconds)
          Write:     80 (8.0 seconds)


Good catch, though. I need to go through and turn them all on, and ensure they stick after a reboot.
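In case they don't stick, a tiny startup script (rc.local, cron @reboot, etc.) would cover it; a sketch, with the device glob as a placeholder for the actual pool members:

Code:
#!/bin/bash
# Re-apply SCT ERC (8.0-second read/write timeouts, i.e. scterc,80,80) to each disk at boot.
for dev in /dev/sd[a-j]; do
    smartctl -d sat -l scterc,80,80 "${dev}"
done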
 

JaVa

Cadet
Joined
Aug 14, 2015
Messages
5
Hi Tycoonbob,
Thanks for the information so far. Can you give us one more update on that last bit: does the ERC setting you enabled above survive a reboot?
If not, I'm assuming it would be possible to put something into a startup script.
 

tycoonbob

Dabbler
Joined
Nov 14, 2014
Messages
23
Hi Tycoonbob,
Thanks for the information so far. Can you give us one more update on that last bit: does the ERC setting you enabled above survive a reboot?
If not, I'm assuming it would be possible to put something into a startup script.

Settings did stick post-reboot. This box has been rebooted at least 3 times since my previous post, and I just confirmed that those settings were still there.
 

JaVa

Cadet
Joined
Aug 14, 2015
Messages
5
Thank you.
 