PowerVault & TrueNAS

mav359aaa

Cadet
Joined
Jun 13, 2022
Messages
6
Hi guys,

I need some help expanding an existing array.

I have a Dell PowerEdge 530 running TrueNAS attached to a PowerVault MD3060e. It has 60 x 6TB disks in a single ZFS pool. It's all working well, but I need to expand it even further. I have another PowerVault MD3060e spare and 60 x 10TB disks on the way.

The first PowerVault has two controllers, with two ports each. Each controller has a single connection to a SAS card on the PowerEdge.

When I join the additional PowerVault, do I connect the second PowerVault to the controllers on the first (essentially daisy-chaining them), or does each PowerVault connect directly to the server?

Also, PowerVault 1 has 6TB disks and PowerVault 2 has 10TB disks. Do they need to be in separate pools, or can I just add the additional disks to the existing pool without losing the extra capacity of the larger disks?

Sorry for the crude drawing, but it might better explain what I'm trying to achieve. Any help would be appreciated.
 

Attachments

  • Archive.jpg (341.2 KB)

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
When I join the additional PowerVault, do I connect the second PowerVault to the controllers on the first (essentially daisy-chaining them), or does each PowerVault connect directly to the server?
Theoretically you can do either, but not quite as you have it in option 1, which would introduce a loop. That's the sort of thing that either is far more finicky than it has any right to be or a big no-no, depending on whom you ask. With SAS, simple is good.

There's also the question of how the individual controllers connect inside the chassis. There are typically several modes, such as a redundant mode and a split mode, but the manual for your model isn't loading for me. Again, the simpler the better, so my choice would be (in order of preference):
  1. Use each controller separately and wire it up directly to a SAS HBA (not great if you have limited PCIe slots, unless you go for -16e HBAs) [Option 2]
  2. Set up the controllers to daisy-chain within the chassis, connect one port from each chassis up to the HBA (works neatly with a single -8e HBA) [Sorta Option 1, minus the Yellow cable]
  3. Daisy-chain all the way, both internally and on to the second chassis (limited bandwidth, but not a huge deal, typically) [Sorta Option 1, minus the Blue cable]
Sidenote: Dell may enforce specific ports for specific uses. Definitely follow the manual for your unit to figure out what connections to make.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Also, the diagram is not crude at all! I wish everyone illustrated their questions with diagrams, using actual Dell back panels, even!
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
The best option, and the one we use here in house, is to attach the second MD3060e using a second HBA. It's better in a few ways. The first is that the MD3060e is a 6Gb/s SAS JBOD, and it's pretty easy to saturate the bus with 60 drives. By using a second HBA, your second MD3060e gets its own separate storage path and separate bandwidth. The second issue it solves is that Dell requires additional licensing to expand beyond 120 drives when you use the MD3060e as an "expansion", i.e. daisy-chained. If you are using a Dell server, it counts the server's local drives against that limit, and the server excludes drives over the 120 limit in a random fashion with each reboot. By configuring each JBOD with its own SAS HBA, you get around that limit.
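For a rough sense of scale on the saturation point (the throughput figures below are assumptions for illustration, not numbers from this thread):

```python
# Back-of-the-envelope SAS bandwidth check. Assumed figures:
# a 6Gb/s SAS lane carries roughly 600 MB/s of payload after
# 8b/10b encoding, and a typical x4 wide port has 4 lanes.
lanes = 4                        # one x4 wide port to the JBOD
lane_mb_s = 600                  # approx. usable MB/s per 6Gb/s lane
port_mb_s = lanes * lane_mb_s    # ~2400 MB/s for the whole port

drives = 60
drive_mb_s = 150                 # conservative sequential MB/s per HDD
aggregate_mb_s = drives * drive_mb_s  # what the disks could supply

# 60 drives can push far more than one x4 port can carry.
print(port_mb_s, aggregate_mb_s, aggregate_mb_s > port_mb_s)
```

Even with conservative per-drive numbers, the disks outrun a single x4 port several times over, which is why a dedicated HBA per JBOD helps for streaming workloads.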

If you are going to daisy chain them, you want to configure the cable paths like this:
[Attachment: cabling diagram]


Port 1 of the SAS HBA connects to the "In" port on MD3060e 1, controller 1. The "Out" port on controller 1 then connects to the "In" port on MD3060e 2, controller 1. You repeat the path with the second set of cables as well.

As far as your second question: You can technically add the 10TB drives to the 6TB pool, but you will lose 4TB of capacity per disk, amounting to 200-240TB of lost capacity depending on your pool config. You are far better off creating a second pool for the 10TB drives.
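As a sketch of the arithmetic behind that estimate (this assumes the 10TB disks land in vdevs where they are truncated to the 6TB member size, which is the scenario the warning applies to):

```python
# Raw capacity lost if each 10TB disk is clipped to the smallest
# (6TB) member of its vdev. Figures in TB; parity overhead ignored,
# which is why the usable loss quoted above is somewhat lower.
disks = 60
per_disk_loss_tb = 10 - 6            # TB clipped from each new disk
raw_loss_tb = disks * per_disk_loss_tb
print(raw_loss_tb)                   # 240 TB raw
```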
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
As far as your second question: You can technically add the 10TB drives to the 6TB pool, but you will lose 4TB of capacity per disk, amounting to 200-240TB of lost capacity depending on your pool config. You are far better off creating a second pool for the 10TB drives.
This is not correct. Or rather, it's not correct in the scenario we're discussing.

Within a vdev, the size of the vdev is determined by the smallest disk. However, you can totally have vdevs with different sizes, different configurations, etc. It's good practice to not go too crazy, but your scenario of adding vdevs with 10TB disks to a pool that has vdevs with 6TB disks should not be an issue. Of course, for better commentary, it's best if you post either a screenshot of the pool configuration or the output of zpool status.
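A minimal model of the rule above: each vdev's capacity is governed by its smallest member, but a pool is simply the sum of its vdevs, so mixing a 6TB vdev and a 10TB vdev loses nothing. The vdev widths and RAIDZ2 layout here are hypothetical, not taken from the thread:

```python
# Per-vdev data capacity is capped by the smallest disk in that vdev;
# pool capacity is the sum over vdevs, so differently-sized vdevs
# coexist without truncation across vdev boundaries.
def vdev_data_tb(disk_sizes_tb, parity=2):
    """Rough RAIDZ2-style estimate: (n - parity) * smallest disk, in TB."""
    return (len(disk_sizes_tb) - parity) * min(disk_sizes_tb)

old_vdev = [6] * 10    # hypothetical existing 10-wide RAIDZ2 of 6TB disks
new_vdev = [10] * 10   # hypothetical new 10-wide RAIDZ2 of 10TB disks

pool_tb = vdev_data_tb(old_vdev) + vdev_data_tb(new_vdev)
print(pool_tb)  # 48 + 80 = 128 TB of data capacity; the 10TB disks
                # contribute their full size because they form their own vdev
```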
 

mav359aaa

Cadet
Joined
Jun 13, 2022
Messages
6
Thanks for the replies, guys.

Just for further information's sake...

This is a video archive storage solution we are expanding. The videos are recorded from our IPTV network first to a VM, where they live for a month before being compressed and then written here. They can periodically be accessed, but for the most part it's just an archive, so read/write performance isn't critical. All of this is also backed up to Backblaze; however, I REALLY want to add the additional JBOD without disturbing the existing data and having to pull all of that back down.

As I'm understanding the replies, then, either config will work...

If I daisy-chain (Option 1) it will work fine, but if I go with Option 2 and a second dedicated HBA card in the server, I will get better performance and avoid the 120-disk limit, which I potentially might hit down the road.

Also, depending on how the vdev was set up, I may be able to add the 10TB disks to the existing pool. I will follow up here with some screenshots of the vdev setup a bit later (we didn't set that part up).

Thank you for all the help so far.
 

Attachments

  • Archive1.jpg (331.1 KB)