1st TrueNAS build – Need pool, vdev, general install advice

ststrong

Cadet
Joined
Jan 6, 2023
Messages
3
Good day all and thanks in advance for any support.


I am in the process of building my first NAS with TrueNAS CORE. The computer is as follows:

- i5-7400 in an ASUS K31CD-K motherboard (only has one PCIe slot – to be used for a SAS controller in the future)
- 16GB RAM (the max the motherboard supports)

I installed TrueNAS on a 250GB SATA SSD and connected a single 500GB HDD to 'play around and make the mistakes' before setting it up ready for prime time. I managed to get the NAS, Plex server, and Nextcloud working, and I can even access it via my own domain with port forwarding on the router, etc. Shares are SMB since I have mostly Windows PCs (however, there are Android, iPhone, Samsung TV, and iMac devices on the network also).

My main question is on the target configuration of the pool / vdev / disks…

Before I get there – here is the final vision of the NAS:
- Same MB/RAM
- LSI 9210 SAS controller (IT mode) with 8x 4TB SAS HDDs
- (maybe) dual 250GB SATA SSDs as boot (only have 1 SSD today)
- (maybe) 2x 2TB SATA HDDs (depends on the advice below; I have the drives sitting around and room in the case)
- Run a Plex server for movies and photos, Nextcloud, and maybe a backup application for PCs and perhaps Android/iPhone…

NAS purpose:
- Have a storage space for important files and photos (2TB is enough) that should be as redundant as possible.
- Have a large storage space for media and streaming (I was thinking RAIDZ1, if I understand it correctly). I want to utilize the disk space as much as possible. (I have 8x 4TB SAS drives and one cold spare, if it makes any difference.)

Questions:
1) Looking for input on the pool configuration and the best way to set up the boot disk(s), important-storage disks, and regular storage…
2) Input on whether I can add the disks and 'fix' the current configuration, or if it would be best to blow it all away, install all the hardware, then reinstall fresh (I think I know the answer).
3) Related to the above – how do I mirror/protect the OS boot SSD? Is this something that is done after I install the TrueNAS OS, or before? Can it be done anytime?
4) Regarding apps and jails – should these have their own 'disk or pool', or be part of my 'big' dataset?

5) Any other ideas or considerations (especially regarding remote access – e.g. reverse proxy ideas, general security, etc…)



To throw into the mix – my 'main' PC is a Xeon 2698 v3 (16 cores / 32 threads) with 128GB RAM, lots of PCIe slots, a 1TB NVMe drive, and a few 2TB HDDs. It's running Win10 (I know I can't get the machine above 8% CPU utilization with what I am doing today). The plan for the machine is network virtualization (VMware/VirtualBox/GNS3 stuff – but these are hobbies, and the machine is not running those workloads all the time, or even often). This machine has other challenges: the Quadro video card is a beast in terms of size, so fitting 8 new drives will be a challenge, and the current case is not conducive to cooling/case fans (but I like my 4 monitors…). Power is another concern – it has an old 600W PSU; the other machine was just upgraded to a new 600W (single-rail) unit, but that PC uses the integrated GPU and a much less hungry CPU. (I could swap.)

TrueNAS gurus – help me sort it all out; I'm open to any ideas.

Cheers,
Steve

Thanks!
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Questions:
1) Looking for input on the pool configuration and the best way to set up the boot disk(s), important-storage disks, and regular storage…
Make two zpools:
  1. A 2-way mirror of your 2TB drives for your photos.
  2. Two RAIDZ1 vdevs of 4x 4TB each for your media.
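
For reference, a minimal CLI sketch of that layout (the pool names and da0…da9 device names are placeholders; on TrueNAS you would normally build the pools in the GUI under Storage → Pools):

  # Pool 1: a 2-way mirror of the 2TB drives for the irreplaceable data
  zpool create safe mirror da0 da1
  # Pool 2: two 4-disk RAIDZ1 vdevs striped together for the media
  zpool create media raidz1 da2 da3 da4 da5 raidz1 da6 da7 da8 da9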
2) Input on whether I can add the disks and 'fix' the current configuration, or if it would be best to blow it all away, install all the hardware, then reinstall fresh (I think I know the answer).
RAIDZ vdevs are fixed once created; the only layout you can "fix" after the fact is a mirror (you can attach or detach disks from a mirror vdev at any time).
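
For example, a lone data disk can be promoted to a mirror at any time (a sketch; the pool/disk names are placeholders):

  # Attach a second disk to an existing single-disk (or mirror) vdev;
  # ZFS turns it into a mirror and resilvers automatically
  zpool attach tank ada0 ada1
  zpool status tank   # watch the resilver progress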
3) Related to the above – how do I mirror/protect the OS boot SSD? Is this something that is done after I install the TrueNAS OS, or before? Can it be done anytime?
It isn't really necessary to protect the boot drive at all. What you should protect instead is the config file. Back it up and you can restore your boot drive easily, as long as you have the install media handy (which is easily downloadable).

Can you mirror the boot drive after install? Yes, see above (point number 2). Is it necessary? Not at all. The boot drive holds nothing important except the config file, which, again, can be easily backed up.
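
On CORE you can save the config from the GUI (System → General → Save Config), or copy the database file directly. A hedged sketch; the destination dataset is a placeholder, and the path is worth verifying on your version:

  # The CORE config is a small SQLite database
  cp /data/freenas-v1.db /mnt/tank/backups/config-$(date +%F).db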
4) Regarding apps and jails – should these have their own 'disk or pool', or be part of my 'big' dataset?
They can sit in your main pool. TrueNAS will segregate the data into its own dataset. If you're looking to run VMs, though, you should have faster disks (striped SSDs), but for apps or jails, in my experience, as long as you're not running some intensively used database, they're fine even sitting on an HDD pool.
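
For instance, on CORE the jail infrastructure gets its own iocage dataset on whichever pool you activate for jails (Jails → Activate Pool in the GUI); here tank is a placeholder pool name:

  # Jails and their data live under their own dataset tree,
  # separate from your SMB share datasets
  zfs list -r tank/iocage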
5) Any other ideas or considerations (especially regarding remote access – e.g. reverse proxy ideas, general security, etc…)
Look into setting up some kind of VPN.
To throw into the mix – my 'main' PC is a Xeon 2698 v3 (16 cores / 32 threads) with 128GB RAM, lots of PCIe slots, a 1TB NVMe drive, and a few 2TB HDDs. It's running Win10 (I know I can't get the machine above 8% CPU utilization with what I am doing today). The plan for the machine is network virtualization (VMware/VirtualBox/GNS3 stuff – but these are hobbies, and the machine is not running those workloads all the time, or even often). This machine has other challenges: the Quadro video card is a beast in terms of size, so fitting 8 new drives will be a challenge, and the current case is not conducive to cooling/case fans (but I like my 4 monitors…). Power is another concern – it has an old 600W PSU; the other machine was just upgraded to a new 600W (single-rail) unit, but that PC uses the integrated GPU and a much less hungry CPU. (I could swap.)
That PC is much better suited as a hypervisor than a "main PC". In my opinion it's wasted as a main PC, unless you're running some kind of crazy, non-typical home-user workload. My main server doesn't even have that many cores (Xeon Silver, 10c/20t), though it does have twice the RAM. Even with 9 VMs running 24/7, it still doesn't break 40% utilization, though it does use up all the RAM, thanks to ZFS's wonderfully aggressive caching.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I am in the process of building my first NAS with TrueNAS CORE. The computer is as follows:

- i5-7400 in an ASUS K31CD-K motherboard (only has one PCIe slot – to be used for a SAS controller in the future)
Depending on the number of SATA ports on the board, you can avoid the HBA. It really(!) needs proper cooling (i.e. a strong, server-like airflow) and draws a considerable amount of power (20-30 W).
- 16GB RAM (the max the motherboard supports)
Since this is the minimum required amount for TrueNAS, you may be limited in terms of how many/what apps you can run.
- (maybe) dual 250GB SATA SSDs as boot (only have 1 SSD today)
The main benefit of a mirrored boot device is that the configuration and especially the encryption keys are covered (in addition to a proper backup!). For boot failover you would need a dedicated RAID controller.
- Have a storage space for important files and photos (2TB is enough) that should be as redundant as possible.
That would be RAIDZ3 with a hot spare, or a 4-plus-way mirror. How many drives do you want to be able to lose without also losing data?
- Have a large storage space for media and streaming (I was thinking RAIDZ1, if I understand it correctly). I want to utilize the disk space as much as possible. (I have 8x 4TB SAS drives and one cold spare, if it makes any difference.)
For media, and assuming that we are not talking about any significant random read/write operations, I would recommend a single RAIDZ2 vdev. The 2x RAIDZ1 that @Whattteva suggested has roughly the same net storage capacity, but without "double parity" protection. Yes, it offers twice the IOPS, but if those were important I would go for mirrors.
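
To put numbers on that for 8x 4TB drives (raw, before ZFS overhead): 2x RAIDZ1 of 4 disks gives 2 x (4-1) x 4TB = 24TB usable and tolerates one failure per vdev, while a single 8-wide RAIDZ2 gives (8-2) x 4TB = 24TB usable and tolerates any two failures.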
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
For media, and assuming that we are not talking about any significant random read/write operations, I would recommend a single RAIDZ2 vdev. The 2x RAIDZ1 that @Whattteva suggested has roughly the same net storage capacity, but without "double parity" protection. Yes, it offers twice the IOPS, but if those were important I would go for mirrors.
My rationale for the 2x RAIDZ1 layout is more about avoiding such a wide vdev. And since the OP mentions a requirement of maximum storage capacity and the files aren't as important, I figured 2x RAIDZ1 is a good middle ground.
 

ststrong

Cadet
Joined
Jan 6, 2023
Messages
3
Thanks for the information. It makes sense:
2 pools:
1) a 2x 2TB mirror using the motherboard SATA ports for the important photos/documents
2) another pool with either 2x 4x4TB RAIDZ1 vdevs or 1x 8x4TB RAIDZ2 vdev (either option leaves me with ~20TB of usable ZFS space)

Since I know little about the inner workings – I assume it would be faster to rebuild a single failed drive in the 4-disk RAIDZ1 than the same disk failing in the 8-disk RAIDZ2?

Other answers heard. Once I get it set up, I am sure I will have additional questions :smile:

Regarding the point about my 'monster main PC' – agreed, it is a little overkill for web browsing, Netflix, and email :cool: . If I am able to get this PC into a case that can handle the cooling of 8 drives, the HBA, and the old Quadro K5000 – can anyone point me to a good starting place to virtualize Win10, etc.? (Or would it be simpler to leave the Win10 host and virtualize the NAS stuff?) I have experience setting up VMs with VMware and Oracle VBox, but from a Windows host. Or am I overthinking it all…

Would TrueNAS SCALE be part of the above?

Thanks!
Steve
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Since I know little about the inner workings – I assume it would be faster to rebuild a single failed drive in the 4-disk RAIDZ1 than the same disk failing in the 8-disk RAIDZ2?
Yes. The resilvering process gets more intense the more drives you have. For each block resilvered, a block has to be read from each surviving drive in the vdev. This means that in a vdev of 8 drives, a block must be read from the 7 remaining drives, vs. 3 in a vdev of 4. That's more than twice the total I/O load, and the parity calculation also becomes gnarlier with a higher Z number. Pool performance will also suffer far more while you're resilvering, since all the drives are being loaded, vs. only half the drives in the 2x vdev setup.
Regarding the point about my 'monster main PC' – agreed, it is a little overkill for web browsing, Netflix, and email :cool: . If I am able to get this PC into a case that can handle the cooling of 8 drives, the HBA, and the old Quadro K5000 – can anyone point me to a good starting place to virtualize Win10, etc.? (Or would it be simpler to leave the Win10 host and virtualize the NAS stuff?) I have experience setting up VMs with VMware and Oracle VBox, but from a Windows host. Or am I overthinking it all…
I wouldn't virtualize TrueNAS unless you really know what you're doing. It takes a certain special "sauce" of configuration to do successfully. Many attempts end in tears, and you can use the forum search function to find plenty of them.
 

oldtechie

Dabbler
Joined
Feb 13, 2023
Messages
18
Hi TrueNAS community,

I have a similar concern about the storage layout of my TrueNAS setup.

I just finished a new server build for virtualization. I wanted/needed to move away from VMware onto an open-source platform. I also need a new NAS for my home office and my tinker lab – you see, I'm a retired programmer who still loves technology. But I don't do anything really serious; that would take the fun out of it.

Please indulge me for a few minutes on my background.

  • Software developer for over forty years. I started with mainframes and assembler, then C on Unix.
  • Been retired for over fifteen years. Walked away from the industry for more than 10 years. During that time I traveled and did woodworking.
  • Got back into technology when I started to make my home a smart home, and down the rabbit hole I went.
So, please be nice if I ask a dumb question or two, or am too slow to catch onto a concept; I'm not as quick as I used to be.

My current environment is as follows:

Built this workstation in 2017 when I caught the technology bug again.

  • Windows 11 Pro
  • Ryzen 9 3900X
  • ASUS X570-PRO
  • 64GB DDR4
  • GeForce RTX 2070
  • 1TB NVMe boot drive
  • 2TB NVMe data drive for VM and personal file storage
  • VMware Workstation 16 for virtualization
  • Synology NAS with 8x 4TB drives in RAID 5 (very little space left)
I use this workstation for lightweight editing of personal videos and for ripping my very large movie collection; I also have over fifteen hundred albums that I have digitized. All media is stored on my NAS.

The new server that I have just built has Proxmox installed for my Windows and Linux VMs, plus TrueNAS. I live in the desert, so power usage and heat are a concern.

The new server is as follows:

  • Ryzen 7 5700G
  • ASUS X570-PRO
  • 128GB DDR4 dual-rank ECC memory
  • LSI 9211 SAS controller (IT mode), 2 ports, 4 drives per port (passed through from Proxmox to TrueNAS)
  • 1x 1TB Samsung 980 PRO NVMe M.2 SSD (boot and VM storage)
  • Case: Fractal Define R5 with 8 drive slots
  • The TrueNAS VM has its own 2TB Samsung 870 EVO SSDs in a 3-drive RAIDZ1
  • 10Gb NIC, but later
Here is the purpose of the server as it relates to TrueNAS:

  • A backup target for my aging Synology NAS, workstations, family laptops, and mobile devices.
  • If and when my Synology dies, TrueNAS will be my main household NAS. I will most likely get another Synology NAS, much smaller than my current one, for its apps (NVR, photos, and DS Video playback).
  • SMB auxiliary storage for all family devices.

Here are my questions:
  • I need 40TB to 60TB of space
  • Tradeoffs to consider are budget, drive size, and the number of drives.
  • I would like to create one big storage pool consisting of one big vdev in RAIDZ, is this a bad idea?
  • If I can create the one big vdev, I would then create datasets and share them as backup, media, and personal device storage, with sub-folders and permissions as required. Is this possible, and does it make sense?
You see, one of the mistakes I made with my Synology was to create volumes for backup, media, etc., so when the volumes ran out of space I was not able to expand just the volumes or shared folders that needed expansion. I want to avoid that situation.

I'm looking at either 10TB, 16TB, or 18TB Seagate NAS drives. Since I don't do any production work, I'm tolerant of lower performance to some degree.

Most of the media files are recoverable, although very painfully. After suffering a hard drive failure years ago without having a backup, I am paranoid and keep personal data backed up in several places.

Sorry for being so long-winded, but I wanted to provide as much info as possible about me and my requirements. So, if I did not bore you and you have some suggestions or ideas on how I should lay out the storage, I would greatly appreciate it.

regards
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Please indulge me for a few minutes on my background.
  • Software developer for over forty years. I started with mainframes and assembler, then C on Unix.
  • Been retired for over fifteen years. Walked away from the industry for more than 10 years. During that time I traveled and did woodworking.
  • Got back into technology when I started to make my home a smart home, and down the rabbit hole I went.
Cool! Like you, I'm also a software developer, though nowhere near as seasoned as you are (20 years). Started out in C and C++ writing drivers, but these days, I work at a much higher level (iOS apps).

  • I would like to create one big storage pool consisting of one big vdev in RAIDZ, is this a bad idea?
Depends on your definition of the word "big". Performance would definitely suffer big time if you make a really wide vdev. Generally, the ideal size is to dedicate around 1/3 of total storage to parity. This translates to 3-wide RAIDZ1, 6-wide RAIDZ2 and 9-wide RAIDZ3. Anything above that isn't recommended though no one will stop you if you want to do so.

  • If I can create the one big vdev, I would then create datasets and share them as backup, media, and personal device storage, with sub-folders and permissions as required. Is this possible, and does it make sense?
Your vdev layout and your dataset layout are independent of each other. You can have a pool of 1 RAIDZ vdev, 2 RAIDZ vdevs, or a bunch of striped mirrors, and you can still have the same dataset layout regardless.
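
As a quick sketch of that independence (pool and dataset names are placeholders), the same dataset tree works no matter how the pool's vdevs are laid out:

  # These commands are identical whether 'tank' is one RAIDZ2 vdev,
  # two RAIDZ1 vdevs, or a stack of striped mirrors
  zfs create tank/backup
  zfs create tank/media
  zfs create tank/personal
  zfs set quota=10T tank/backup   # optional: cap a single dataset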

You see, one of the mistakes I made with my Synology was to create volumes for backup, media, etc., so when the volumes ran out of space I was not able to expand just the volumes or shared folders that needed expansion. I want to avoid that situation.
ZFS doesn't have this problem. Every dataset can grow to the full capacity of your pool unless you set quotas. Also, your pool capacity can be expanded anytime you want, either by adding new disks in a new vdev or simply by upgrading your existing disks to bigger ones. Note that the latter method will take a considerable amount of time, as it requires you to resilver each disk as you replace them one by one, and you won't see the expanded capacity until you finish resilvering all the disks. This is probably another good reason to shy away from too wide a vdev, because a double-digit-disk vdev will take weeks or maybe months to completely upgrade.
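
Both expansion paths as a hedged sketch (pool and device names are placeholders):

  # Path 1: add a whole new vdev -- the extra capacity is available immediately
  zpool add tank raidz2 da8 da9 da10 da11 da12 da13
  # Path 2: replace each disk with a bigger one, waiting out the resilver
  # each time; capacity only grows after the LAST disk is replaced
  zpool set autoexpand=on tank
  zpool replace tank da2 da20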

I'm looking at either 10TB, 16TB, or 18TB Seagate NAS drives. Since I don't do any production work, I'm tolerant of lower performance to some degree.

Most of the media files are recoverable, although very painfully. After suffering a hard drive failure years ago without having a backup, I am paranoid and keep personal data backed up in several places.

Sorry for being so long-winded, but I wanted to provide as much info as possible about me and my requirements. So, if I did not bore you and you have some suggestions or ideas on how I should lay out the storage, I would greatly appreciate it.
Since you are paranoid about failures and not too performance-sensitive, you should probably go with RAIDZ2 at a minimum. RAIDZ3 might be overkill since you have multiple backups.
 

oldtechie

Dabbler
Joined
Feb 13, 2023
Messages
18
Cool! Like you, I'm also a software developer, though nowhere near as seasoned as you are (20 years). Started out in C and C++ writing drivers, but these days, I work at a much higher level (iOS apps).
Stuff is so cool nowadays. My last 10 years were in management and my team wouldn't let me touch the code anymore :frown:.
Depends on your definition of the word "big". Performance would definitely suffer big time if you make a really wide vdev. Generally, the ideal size is to dedicate around 1/3 of total storage to parity. This translates to 3-wide RAIDZ1, 6-wide RAIDZ2 and 9-wide RAIDZ3. Anything above that isn't recommended though no one will stop you if you want to do so.
Big to me now is 50TB-plus. When I was working, I dealt with databases in the 200TB-plus range as a DBA in the late '90s and early 2000s.
Your vdev layout and your dataset layout are independent of each other. You can have a pool of 1 RAIDZ vdev, 2 RAIDZ vdevs, or a bunch of striped mirrors, and you can still have the same dataset layout regardless.
Ok, I think I got it. I haven't bought the drives yet. So, let's say I get 3x 10TB drives and put them in a RAIDZ vdev; that would give me approx. 20TB for use by 1 to n datasets, correct?

Then if I later added a new vdev with 3x 16TB drives in RAIDZ, the pool size would increase by 32TB, making the pool 52TB in size, to be used by 1 to n datasets. If the budget allowed for RAIDZ2, I could have two drive failures before losing the vdev.
ZFS doesn't have this problem. Every dataset can grow to the full capacity of your pool unless you set quotas. Also, your pool capacity can be expanded anytime you want, either by adding new disks in a new vdev or simply by upgrading your existing disks to bigger ones. Note that the latter method will take a considerable amount of time, as it requires you to resilver each disk as you replace them one by one, and you won't see the expanded capacity until you finish resilvering all the disks. This is probably another good reason to shy away from too wide a vdev, because a double-digit-disk vdev will take weeks or maybe months to completely upgrade.
The bigger the drive, the longer the resilver will take.
Since you are paranoid about failures and not too performance-sensitive, you should probably go with RAIDZ2 at a minimum. RAIDZ3 might be overkill since you have multiple backups.
Thanks for the reply.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Big to me now is 50TB-plus. When I was working, I dealt with databases in the 200TB-plus range as a DBA in the late '90s and early 2000s.
I don't mean big as in capacity-wise. I mean big (how wide) as in the number of disks in the vdev. You generally don't want too many disks in one vdev because it will hurt performance.

Ok, I think I got it. I haven't bought the drives yet. So, let's say I get 3x 10TB drives and put them in a RAIDZ vdev; that would give me approx. 20TB for use by 1 to n datasets, correct?
Yes, though you will see a bit less in practice due to some overhead.

Then if I later added a new vdev with 3x 16TB drives in RAIDZ, the pool size would increase by 32TB, making the pool 52TB in size, to be used by 1 to n datasets. If the budget allowed for RAIDZ2, I could have two drive failures before losing the vdev.
Yes, with the same caveat as above. Do note that redundancy is at the vdev level, not at the pool level. So yes, you can have two drive failures, but not in the same vdev. To put it more precisely, you can survive 1 disk failure in each vdev.

The bigger the drive, the longer the resilver will take.
It's not just how big the drive is individually; it's also how WIDE the vdev is, which is why I stressed that above in the first point. This is because for each block resilvered, a block also has to be read from each of the surviving drives in the vdev. The calculation also takes longer the higher the RAIDZ number: a RAIDZ3 resilver takes longer than a RAIDZ2 resilver, which in turn takes longer than a RAIDZ1 resilver.
 

oldtechie

Dabbler
Joined
Feb 13, 2023
Messages
18
I don't mean big as in capacity-wise. I mean big (how wide) as in the number of disks in the vdev. You generally don't want too many disks in one vdev because it will hurt performance.
Thanks for clearing that up for me.
Yes, though you will see a bit less in practice due to some overhead.
Understood
Yes, with the same caveat as above. Do note that redundancy is at the vdev level, not at the pool level. So yes, you can have two drive failures, but not in the same vdev. To put it more precisely, you can survive 1 disk failure in each vdev.
Got it. Just want to make sure I'm understanding. I'm not a storage guy yet. :smile:
It's not just how big the drive is individually; it's also how WIDE the vdev is, which is why I stressed that above in the first point. This is because for each block resilvered, a block also has to be read from each of the surviving drives in the vdev. The calculation also takes longer the higher the RAIDZ number: a RAIDZ3 resilver takes longer than a RAIDZ2 resilver, which in turn takes longer than a RAIDZ1 resilver.
Thanks for clearing that up for me.

I have one more question before I drop a bunch of retirement dollars on drives.
Do you have any concerns with drives that are "512e and 4Kn FastFormat" capable? Seagate EXOS drives?

Whattteva, you have been great at helping me understand this new technology (new for me) and I greatly appreciated it.
I'm sure I will have more questions as I get deeper into the use of TrueNAS.
Thanks much
 
Joined
Jun 15, 2022
Messages
674
If you want to minimize your retirement expenditure, and don't mind used rack server equipment with a lot of life left in it, eBay is really, really affordable for:
- HGST 6TB and 8TB SAS drives ($24)
- LSI -16i (16 internal ports) ($36)

For cabling and power I head to Amazon.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Got it. Just want to make sure I'm understanding. I'm not a storage guy yet. :smile:
I just re-read the original statement on this and I want to clarify something. What I said about 1 drive per vdev relates to a 2x RAIDZ1 layout. In a single RAIDZ2 vdev, you can indeed have 2 drive failures.

I have one more question before I drop a bunch of retirement dollars on drives.
Do you have any concerns with drives that are "512e and 4Kn FastFormat" capable? Seagate EXOS drives?
I think the main concern you want to have is whether or not the drive is SMR or CMR. Do make sure whatever drive you get is CMR (which all EXOS are).
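
For what it's worth, 512e vs. 4Kn mainly changes the sector size the drive reports, and ZFS copes with 4K sectors via its ashift setting (TrueNAS creates pools 4K-aligned by default). One way to check what a drive reports, with da0 as a placeholder device:

  # Shows logical/physical sector sizes plus the exact model number,
  # which you can check against the vendor's CMR/SMR listings
  smartctl -i /dev/da0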

If you want to minimize your retirement expenditure, and don't mind used rack server equipment with a lot of life left in it, eBay is really, really affordable for:
- HGST 6TB and 8TB SAS drives ($24)
- LSI -16i (16 internal ports) ($36)

For cabling and power I head to Amazon.
This is actually a really good suggestion, especially if you already have a good backup strategy. In fact, probably around 80% of my current build is used enterprise gear off eBay. I myself run 4x 6TB used HGST drives (in my signature), but I didn't get them as cheaply. I got mine at $60 a piece, which is still pretty cheap, but with a 5-year warranty, so I think the extra price is worth it.
 
Joined
Jun 15, 2022
Messages
674
This is actually a really good suggestion, especially if you already have a good backup strategy. In fact, probably around 80% of my current build is used enterprise gear off eBay. I myself run 4x 6TB used HGST drives (in my signature), but I didn't get them as cheaply. I got mine at $60 a piece, which is still pretty cheap, but with a 5-year warranty, so I think the extra price is worth it.
I buy them in bulk. Look, I know they're going to fail; it's always a crapshoot. Brand new, gently used, beat like a racehorse... it doesn't matter: they're going to fail at whatever the worst time for them to slag off is, and they're going to do it low-level self-entitled-employee style: in small groups.

For every 12 working drives I buy 3 extras--that's my "warranty." When one fails, it's immediately replaced. Another fails during rebuild? Replaced. Another fails in three weeks? Replaced. Janice from accounting looks at my discretionary expense report sideways? Replaced.

That's how I can afford to run RAID-Z3. Sure, the drives fail "a little more often" than if they were new, requiring that I buy a new batch of used drives "a little more often," but the new batch is probably the next storage size up and just that much faster than the last, meaning we expand storage capacity without losing speed faster than other companies, there are no resource shortages, and everyone is continually happy--at least with the network. (Rajibd should shower more often.)

In the end the hard drive budget is about 1/7th the cost of new, which is why Janice should have chosen to enjoy that company-funded bourbon back at my place instead of her younger, cuter replacement. But hey, to each their own choices. Cheers.
 

oldtechie

Dabbler
Joined
Feb 13, 2023
Messages
18
I just re-read the original statement on this and I want to clarify something. What I said about 1 drive per vdev relates to a 2x RAIDZ1 layout. In a single RAIDZ2 vdev, you can indeed have 2 drive failures.
Got it.
I think the main concern you want to have is whether or not the drive is SMR or CMR. Do make sure whatever drive you get is CMR (which all EXOS are).


This is actually a really good suggestion, especially if you already have a good backup strategy. In fact, probably around 80% of my current build is used enterprise gear off eBay. I myself run 4x 6TB used HGST drives (in my signature), but I didn't get them as cheaply. I got mine at $60 a piece, which is still pretty cheap, but with a 5-year warranty, so I think the extra price is worth it.
Ok, well, now it is time to buy some drives. I will let the forum know how it's going after a month of testing. Chances are, if I'm back in a month, it's all good.
Thanks much
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Janice from accounting looks at my discretionary expense report sideways? Replaced.
which is why Janice should have chosen to enjoy that company-funded bourbon back at my place instead of her younger, cuter replacement.
I suspect this is the same Janice that works for John Oliver? :grin::tongue:
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
If you want to minimize your retirement expenditure, and don't mind used rack server equipment with a lot of life left in it, eBay is really, really affordable for:
- HGST 6TB and 8TB SAS drives ($24)
- LSI -16i (16 internal ports) ($36)

For cabling and power I head to Amazon.
Dear God - I wish they were that price in the UK
8TB HGST - £90.00

Actually, where do you buy 8TB HGST drives for $24? I can't see any on ebay.com.
 
Joined
Jun 15, 2022
Messages
674
Ok, well, now it is time to buy some drives. I will let the forum know how it's going after a month of testing. Chances are, if I'm back in a month, it's all good. Thanks much.
Oh, you'll be back sooner than that.

I suspect this is the same Janice that works for John Oliver? :grin::tongue:
Isn't he a historian on the interwebs? Who covers really old news from last week? I live in the present, having no interest in wasting time reminiscing.

But yes, that Janice, ironically. (not safe for work)

Dear God - I wish they were that price in the UK
8TB HGST - £90.00
On being overpriced, your working girls aren't much to speak of either, but your tailors are the bee's knees.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Isn't he a historian on the interwebs? Who covers really old news from last week? I live in the present, having no interest in wasting time reminiscing.

But yes, that Janice, ironically. (not safe for work)
Haha, that Janice compilation made my day!
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Generally, the ideal size is to dedicate around 1/3 of total storage to parity. This translates to 3-wide RAIDZ1, 6-wide RAIDZ2 and 9-wide RAIDZ3. Anything above that isn't recommended though no one will stop you if you want to do so.
I have never heard of this rule, and all the recommendations I am aware of basically allow 2 more disks: so roughly no more than 5 for RAIDZ1, 8 for RAIDZ2, and 11 for RAIDZ3; as always, the details depend on the workload.

Can you point me to a source for the numbers you mentioned?
 