Are my HDDs damaged? :O

Status
Not open for further replies.

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Regarding expanding HDDs, that's quite sad, I must say. If I have 20 TB worth of data and I wish to add more HDDs to the pool, then I need to copy all my data to a spare 20 TB
You really should read the docs, as they'll go into much more detail than I'm about to. But in short, supposing you have a pool consisting of 6 x 6 TB disks in RAIDZ2 (24 TB* net capacity), and you need to expand it, you have two safe ways to do that:
  • Replace the 6 TB disks, one at a time, with something larger. When the last disk is replaced, the pool capacity will automatically expand to reflect the larger disk size. Replace all six disks with 10 TB disks, and your pool expands to 40 TB*.
  • Add a second redundant vdev (RAID set). Ideally this would be of similar composition (six disks in RAIDZ2, and preferably the same size), though none of that is actually required--as you see in my sig, my pool consists of three vdevs, each six disks in RAIDZ2, but each of a different capacity.
  • Yes, it's possible to add a single disk to your pool. Don't do this. The technical name for that process is "hating your data", and it results in a single disk striped with your RAID set. When that disk fails, all your data goes away.
The first option doesn't take any more physical space in your enclosure, but involves throwing out perfectly good disks. That's why, when my 12-disk pool was filling up, I bought a whole "new" (used) server, moved the disks over, and added more.
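The capacity arithmetic above can be sketched in a couple of lines. This is a rough illustration only; `raidz_net_capacity` is a made-up helper name, and real pools lose a bit more to TB-vs-TiB differences and ZFS overhead, per the footnote:

```python
def raidz_net_capacity(disks, disk_tb, parity):
    """Rough net capacity of a RAIDZ vdev: data disks x disk size.

    Ignores TB vs. TiB, metadata, and ZFS overhead.
    """
    return (disks - parity) * disk_tb

# 6 x 6 TB in RAIDZ2 (two parity disks)
print(raidz_net_capacity(6, 6, 2))   # 24 TB
# After replacing all six disks with 10 TB drives
print(raidz_net_capacity(6, 10, 2))  # 40 TB
```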

Ah no, sorry, I run 2 x 10 Gb SFP+, with plans to add either 40 Gb or another 20 Gb SFP+ to it in a few years.
Yeah, in that case, disk speed will be more of an issue. You might want to be looking more at SSDs than spinning rust.

* ignoring TB vs. TiB, ZFS overhead, etc.
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
You really should read the docs, as they'll go into much more detail than I'm about to. But in short, supposing you have a pool consisting of 6 x 6 TB disks in RAIDZ2 (24 TB* net capacity), and you need to expand it, you have two safe ways to do that:
  • Replace the 6 TB disks, one at a time, with something larger. When the last disk is replaced, the pool capacity will automatically expand to reflect the larger disk size. Replace all six disks with 10 TB disks, and your pool expands to 40 TB*.
  • Add a second redundant vdev (RAID set). Ideally this would be of similar composition (six disks in RAIDZ2, and preferably the same size), though none of that is actually required--as you see in my sig, my pool consists of three vdevs, each six disks in RAIDZ2, but each of a different capacity.
  • Yes, it's possible to add a single disk to your pool. Don't do this. The technical name for that process is "hating your data", and it results in a single disk striped with your RAID set. When that disk fails, all your data goes away.
The first option doesn't take any more physical space in your enclosure, but involves throwing out perfectly good disks. That's why, when my 12-disk pool was filling up, I bought a whole "new" (used) server, moved the disks over, and added more.
* ignoring TB vs. TiB, ZFS overhead, etc.
Hey
Yeah, I'm slowly seeing that drawback of FreeNAS. No way of just expanding a pool with extra disks. But at least we can upgrade drive sizes, so I can go all the way to 12 TB (I think they are available now) or even more in the future.

I did read a few topics where people added a disk and lost the entire volume. Yeah, I'm aware of that hate :-))))
Yeah, in that case, disk speed will be more of an issue. You might want to be looking more at SSDs than spinning rust.
Yep, that's what I'm looking into. Is it true that 1 GB of RAM = roughly 5 GB of L2ARC cache? I hear that L2ARC is limited by RAM, meaning there is an upper limit to how much cache you can actually make... I have 16 GB of RAM, meaning around 75 GB of L2ARC that I could make, but that is a bit tiny... Any hints on this?

The first option doesn't take any more physical space in your enclosure, but involves throwing out perfectly good disks. That's why, when my 12-disk pool was filling up, I bought a whole "new" (used) server, moved the disks over, and added more.
Mmmm, now I'm lost. So you bought an empty enclosure? Put in your existing HDDs and then added more via a second pool vdev? What is a vdev, is it a zvol? I'm still struggling with what the heck that is.

  • Add a second redundant vdev (RAID set). Ideally this would be of similar composition (six disks in RAIDZ2, and preferably the same size), though none of those is actually required--as you see in my sig, my pool consists of three vdevs, all six disks in RAIDZ2, but each of different capacities.
So I will have
VolumeA = RAIDZ2, 5 HDDs
VolumeB = RAIDZ2, 5 HDDs

So if, let's say, volA has a read/write speed of 1000 MB/s and volB has 1000 MB/s, when I copy a file to the NAS, does it go at 2000? I'm a bit lost. Do these volumes merge and become visible as 1 folder? Or as 2 folders with 2 x 1000 MB/s transfer speeds? I think they will be 2 separate volumes/drives/folders with 1000 MB/s transfers each, and the only way of getting 2000 would be a total wipe + 8+2 RAIDZ2 or 7+3 RAIDZ3?

Thanks for help again!

Regards
Dariusz
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Yeah, I'm slowly seeing that drawback of FreeNAS.
It's less a "drawback of FreeNAS" than it is a "limitation of ZFS", but yes, it's pretty significant, especially for the home environment.
Is it true that 1 GB of RAM = roughly 5 GB of L2ARC cache?
I believe this is roughly accurate, but compressed ARC is supposed to be coming Real Soon Now (tm), which would change the numbers.
I have 16 GB of RAM, meaning around 75 GB of L2ARC that I could make, but that is a bit tiny
If you have only 16 GB of RAM, you shouldn't even be thinking about L2ARC. Max out your RAM first (at least 32 GB, and 64 GB would be better), then consider whether L2ARC will be helpful.
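As a rough sketch of that rule of thumb (`max_l2arc_gb` is a made-up name, and the 1:5 figure is a community guideline, not a hard ZFS limit):

```python
def max_l2arc_gb(ram_gb, ratio=5):
    """Rule-of-thumb ceiling for a useful L2ARC size.

    Every L2ARC block needs a header held in RAM, so oversizing
    L2ARC steals memory from the (much faster) ARC itself.
    The 1:5 RAM-to-L2ARC ratio is a community guideline.
    """
    return ram_gb * ratio

print(max_l2arc_gb(16))  # 80 -> roughly the "16 GB RAM ~ 75 GB L2ARC" figure above
print(max_l2arc_gb(64))  # 320
```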
So you bought empty enclosure?
I bought a pre-built 36-bay server (chassis, backplanes, PSUs, mobo, CPUs, RAM), moved the disks from my old 12-bay chassis (easily done; both chassis were Supermicro and used the same disk caddies), and then added six more disks.
second pool vdev? What is vdev is it zvol?
  • A "pool" is a volume. It consists of one or more vdevs. When a pool has more than one vdev, all vdevs are striped (which is why failure of any vdev results in loss of the entire pool).
  • A vdev consists of one or more disks. Vdevs are where RAID happens.
  • A zvol is a virtual block device. They're used for iSCSI and for VM storage--possibly other things, but those are the applications I'm familiar with.
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
It's less a "drawback of FreeNAS" than it is a "limitation of ZFS", but yes, it's pretty significant, especially for the home environment.

I believe this is roughly accurate, but compressed ARC is supposed to be coming Real Soon Now (tm), which would change the numbers.

If you have only 16 GB of RAM, you shouldn't even be thinking about L2ARC. Max out your RAM first (at least 32 GB, and 64 GB would be better), then consider whether L2ARC will be helpful.

I bought a pre-built 36-bay server (chassis, backplanes, PSUs, mobo, CPUs, RAM), moved the disks from my old 12-bay chassis (easily done; both chassis were Supermicro and used the same disk caddies), and then added six more disks.

  • A "pool" is a volume. It consists of one or more vdevs. When a pool has more than one vdev, all vdevs are striped (which is why failure of any vdev results in loss of the entire pool).
  • A vdev consists of one or more disks. Vdevs are where RAID happens.
  • A zvol is a virtual block device. They're used for iSCSI and for VM storage--possibly other things, but those are the applications I'm familiar with.

Amazing thank you!
I think I can max out at 64 GB; it's a Supermicro X11SSM-F-O model. I'll try making a 70 GB L2ARC for a start and see how it works. As far as I've read, L2ARC is an easy and non-destructive addition, so if it goes bad or something, it won't destroy my pool or anything.

  • A "pool" is a volume. It consists of one or more vdevs. When a pool has more than one vdev, all vdevs are striped (which is why failure of any vdev results in loss of the entire pool).
  • A vdev consists of one or more disks. Vdevs are where RAID happens.
  • A zvol is a virtual block device. They're used for iSCSI and for VM storage--possibly other things, but those are the applications I'm familiar with.

So I can have a pool named poolA, with vdevA as RAIDZ2. Then I can add another vdevB as RAIDZ2. They both become striped - I guess there is some kind of process of re-distributing the data across the 2 zdevs? And after that I have 2x RAIDZ2, meaning that in each zdev up to 2 HDDs can fail before I lose all the data? Well, in this case (if that is the case), this is pretty cool. As you can have 10 HDDs with 4 parity HDDs? And a lot of speed, as data is being striped across 2 redundant RAIDZ2 systems?

Thank you! That clears up a lot of the bugs in my head.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
So I can have a pool named poolA, with vdevA as RAIDZ2. Then I can add another vdevB as RAIDZ2. They both become striped
Correct.
I guess there is some kind of process of re-distributing the data across the 2 zdevs?
It's vdevs, and no, there is no such process.
And after that I have 2x RAIDZ2, meaning that in each zdev up to 2 HDDs can fail before I lose all the data?
Correct--you can lose up to four disks without data loss (though if you lose the wrong three disks--that is, three from the same vdev--you'll lose all data on the pool).
And a lot of speed, as data is being striped across 2 redundant RAIDZ2 systems?
Potentially, at least for sequential reads.
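The failure-tolerance logic above can be sketched as a toy model (the `pool_survives` name is made up for illustration; it counts failed disks per vdev):

```python
def pool_survives(failures_per_vdev, parity=2):
    """A pool of striped RAIDZ vdevs survives only if *every* vdev
    stays within its parity budget; losing any one vdev loses the
    whole pool, because data is striped across all vdevs."""
    return all(failed <= parity for failed in failures_per_vdev)

# Two 5-disk RAIDZ2 vdevs, two failed disks in each: pool survives
print(pool_survives([2, 2]))  # True
# Three failures total, but all in one vdev: pool is lost
print(pool_survives([3, 0]))  # False
```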
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
If I could buy a WD Red Pro for the same cost as a WD Red, I would do that; however, if the cost were $20USD more for the Pro version, I'd likely stick with the WD Reds and save the $100USD. You do get a longer warranty and faster throughput, and it is likely that the WD Reds will last 5+ years (I'm at 4.8 years on my drives right now with no failures). So what you benefit from is a faster scrub/resilver time. What you don't benefit from is more heat and louder drive operation.

If I really wanted 7200 RPM drives, I would take a serious look at the Seagate IronWolf lineup. 3-year warranty but 7200 RPM, and typically less cost than a WD Red. I myself am seriously considering purchasing some of these drives, but I'm waiting another 3 months until I hit the 5-year point on my drives. BTW, my drives run continuously and never park the heads; I disabled that feature.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
If I could buy a WD Red Pro for the same cost as a WD Red, I would do that; however, if the cost were $20USD more for the Pro version, I'd likely stick with the WD Reds and save the $100USD. You do get a longer warranty and faster throughput, and it is likely that the WD Reds will last 5+ years (I'm at 4.8 years on my drives right now with no failures). So what you benefit from is a faster scrub/resilver time. What you don't benefit from is more heat and louder drive operation.

If I really wanted 7200 RPM drives, I would take a serious look at the Seagate IronWolf lineup. 3-year warranty but 7200 RPM, and typically less cost than a WD Red. I myself am seriously considering purchasing some of these drives, but I'm waiting another 3 months until I hit the 5-year point on my drives. BTW, my drives run continuously and never park the heads; I disabled that feature.
Almost three years ago, before they renamed the product IronWolf, I bought three of the Seagate NAS drives to run for testing purposes, and they have been working for me just as well as the Seagate Desktop drives that I had before. Personally, I won't spend the extra money to buy an IronWolf versus a Seagate Desktop drive, but there's nothing wrong with the Seagate NAS drives. They work just great, and I would expect the Seagate IronWolf drives to work well also.
They are definitely a better value than the Western Digital drives.

 