> Here is that output, I am a bit unsure what to do next. Is my drive dead? Or does it need reformatting or something? TIA:
> zpool status -v

Based on the output of your zpool status -v, there are only two drives involved in the storage pool, and they are in a "striped set", not in any kind of redundant pool type. If you had redundancy, you wouldn't have the errors you do, with files being marked as damaged or lost. The third drive that you appear to think was part of the pool was never involved in the storage in any way, and we don't know for sure whether it is good or bad.

> I wondered why my 2x 2TB disks only give a 2.3TB volume (and not 4TB)?

Where are you seeing that? And if it's the "Available" column on the Storage page, you need to add the "Used" amount to it.
PS. Only having room for 3 disks doesn't give you enough room to build a proper array of any kind.
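To see the real totals from the shell, something like this works (a minimal sketch; the pool name data1 is just a placeholder, substitute your own):

    zfs list -o name,used,avail data1
    # USED + AVAIL together approximate the pool's capacity. 2x 2TB
    # striped comes to roughly 3.6 TiB, not 4, because drive labels
    # use decimal TB while ZFS reports binary TiB.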
> What about a 2-disk (or 3-disk) mirror?

I suppose that my statement could be considered a matter of opinion. While a mirror (2-way or 3-way) is perfectly legitimate, I personally feel it is wasteful of resources. If I wanted 12 TB of usable storage, for example, I would never consider using a mirror of two 12 TB drives. To me, that would be an insufficient level of redundancy (I have had two drives in the same vdev fail at the same time), and a 3-way mirror would be wasteful, as you would expend 36 TB of raw storage to get two disks of redundancy and realize less than 12 TB of actual usable storage.
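To make that trade-off concrete, here is a sketch of the two layouts being compared (the pool name tank and device names da0-da3 are hypothetical):

    # 3-way mirror: three disks bought, one disk of usable space,
    # two disks of redundancy
    zpool create tank mirror da0 da1 da2

    # RAIDz2: four disks bought, two disks of usable space, the same
    # two disks of redundancy
    zpool create tank raidz2 da0 da1 da2 da3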
> I would never consider using a mirror of two 12 TB drives. To me, that would be an insufficient level of redundancy (I have had two drives in the same vdev fail at the same time)

> 3x 2TB drives including redundancy

Since that means raidz1: some in our forums consider 2TB per disk the upper bound for raidz1, and some consider 1TB per disk the bound (because the risk of a second disk failing is higher with larger drives, and resilvering raidz1 is slower than rebuilding a mirror).
> old HP Proliant ML115 box with 4 sata ports and 1 IDE

Maybe an additional HBA card can be considered? (What generation server is this? Does it have more than one 5.25" bay, so that a 2x 5.25" -> 3x 3.5" adapter could be bought and mounted?)
> I had multiple USB drive failures so went with a small SSD SATA drive - I wish I had used the IDE port for that now perhaps.

I guess the OP is happy with the SSD as a boot drive, so I hesitate to advise switching to USB boot - USB boot might not be as reliable as SSD... Some in our forums are happy with USB boot drives anyway...
> Maybe an additional HBA card can be considered? (What generation server is this? Does it have more than one 5.25" bay, so that a 2x 5.25" -> 3x 3.5" adapter could be bought and mounted?)

2x 5.25" available. HBA? Is that not a Fibre card?
> 2x 5.25" available. HBA? Is that not a Fibre card?

No, a SAS HBA has nothing to do with fiber.
> I only need 2TB ;) I was thinking I could get something to meet that requirement with 3x 2TB drives including redundancy, and this is what I will be interrogating the manual to work out hopefully.

If you use three 2 TB drives in RAIDz1, it will give you about 2.7 TB of usable space after reserving 20% for the copy-on-write feature of ZFS. If you use RAIDz2 with four 2 TB drives, it will give you almost exactly the same capacity, but with two drives of redundancy. The recommendation has been (for years) that drives larger than 1 TB should not be used in RAIDz1, but if you are only doing this as a backup and you have other backups of the same data, I suppose the risk of data loss is minimal and RAIDz1 would be acceptable even with 2 TB drives.
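The back-of-envelope arithmetic behind those numbers (my figures, ignoring the swap partitions and metadata overhead that pull the GUI's reported capacity a bit lower):

    3x 2TB RAIDz1: (3 - 1 parity) x ~1.82 TiB ~= 3.6 TiB raw
    4x 2TB RAIDz2: (4 - 2 parity) x ~1.82 TiB ~= 3.6 TiB raw
    keep ~20% free for copy-on-write -> roughly 2.9 TiB usable either way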
> I, a noob, would [go with a mirror], since from what I've read a mirror rebuilds quite quickly compared to raidz1 or raidz2.

This is something that is only sometimes true. No drive, regardless of the kind of pool it is in, can rebuild faster than the drive is able to write data. If a drive in a RAIDz pool can write data at 100 MB/s, and a drive in a mirror can write at the same speed, the only way the RAIDz rebuild will run slower is if the system is so under-powered that the computer can't keep up with the calculations needed for processing the parity. If your computer is sufficiently powerful to process the parity calculations, the RAIDz pool will rebuild just as fast as a mirror. I can resilver a drive in my RAIDz2 array in around an hour. The long rebuilds that some people experience are due to trying to build a low-power system that doesn't have enough processing capability.
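As a rough lower bound (my arithmetic, not from the post above): a resilver has to write every allocated block back onto the replacement disk, so a full 2 TB of used data at a sustained 100 MB/s cannot finish faster than

    2,000,000 MB / 100 MB/s = 20,000 s, i.e. about 5.5 hours

and a lightly filled pool resilvers much faster, since ZFS only copies blocks that are actually in use.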
> I can resilver a drive in my RAIDz2 array in around an hour.

Good to hear this. I guess I must have assumed (without being aware of the assumption) that mirror rebuilds relied on large block copies, and raidz resilvers on lots of IOPS. Which appears not to be true...
> Good to hear this. I guess I must have assumed (without being aware of the assumption) that mirror rebuilds relied on large block copies, and raidz resilvers on lots of IOPS. Which appears not to be true...

In ZFS, a mirror vdev is not recovered by simply copying the contents of one disk to the other. In any resilver, ZFS 'walks the tree' and checks all the data against all the checksums, so the process of reading the data is limited (in a mirror) by how fast the source disk can read. The data is then written back out to the target disk with very little processor overhead, but the speed at which the target disk can write is still a limiting factor.
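If you want to watch that tree-walk happen, the commands already used in this thread report progress (pool name data1 taken from the output below; a sketch, not FreeNAS-specific advice):

    zpool status data1   # during a rebuild, the "scan:" line shows
                         # "resilver in progress" with speed and ETA
    zpool scrub data1    # a scrub walks the same tree and verifies
                         # every block against its checksum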
[root@stu-nas1 ~]# zpool status -v | more
  pool: data1
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        data1                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/97c83d74-d1fe-11e8-abcd-001e0bc68414  ONLINE       0     0     0
            gptid/9a598ecb-d1fe-11e8-abcd-001e0bc68414  ONLINE       0     0     0
            gptid/9bb4bea8-d1fe-11e8-abcd-001e0bc68414  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:25 with 0 errors on Wed Oct 17 03:46:27 2018
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada3p2      ONLINE       0     0     0

errors: No known data errors
There are some things that you should read, because I don't see where they were discussed above:
Why not to use RAID-5 or RAIDz1
https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/
Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/
> I do not think I have COW set up

That is a feature of the file system: you can't use ZFS and not use copy-on-write, and you can't use FreeNAS without using ZFS.
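Copy-on-write is also what makes ZFS snapshots nearly free to create; a minimal sketch (the dataset name data1/backups is hypothetical):

    zfs snapshot data1/backups@before-cleanup   # instant; only blocks
                                                # written afterwards use space
    zfs list -t snapshot                        # list existing snapshots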