Recommended Memory to Boot Drive Ratio?

MotorN

Dabbler
Joined
Mar 30, 2021
Messages
20
Why?
Do you mean there are not enough drives in the box to connect to two storage controllers?
Or that this motherboard does not support running two controllers at the same time (which I really doubt)...?

An H330 HBA (and some others) has the same connectors as your flex adapter.
So, you could simply disconnect the cables from the flex adapter and connect them to the H330...

I don't know what the other end of the cable is.
Since it is capable of accommodating SAS and SATA drives, I suspect it's some sort of backplane...
I never thought of that; I just spent $50 on SATA power cables. It looks like the IBM M5110 can't connect directly. Will the 530-8i work for me? I found a good deal locally.



 

Attachments

  • IMG_20220813_201010.jpg (184.3 KB)
  • IMG_20220807_181900.jpg (508.2 KB)

diogen

Explorer
Joined
Jul 21, 2022
Messages
72
Will the 530-8i work for me?
Yes, it should.
Both the flex adapter and the 530-8i use a pair of SFF-8643 connectors (your first picture).

And the cable does end at a backplane (your second picture), so you'd need something like this (standard cables won't work)...
backplane.png



Drives that you install in the 5.25" slots can be connected to the on-board SATA connectors with standard cables...
 

MotorN

Dabbler
Joined
Mar 30, 2021
Messages
20
Yes, it should.
Both the flex adapter and the 530-8i use a pair of SFF-8643 connectors (your first picture).

And the cable does end at a backplane (your second picture), so you'd need something like this (standard cables won't work)...
backplane.png


Drives that you install in the 5.25" slots can be connected to the on-board SATA connectors with standard cables...
Once I concluded I needed to get rid of the flex adapter, I assumed selling it with the cables would be best, but it looks like it's better to keep them installed.

I picked up a couple of 5.25" drive adapters. My plan is to install 2 x 800GB SSDs in RAID1 off the onboard SATA controller, plus 1 x 2TB HDD for vCenter, then install ESXi, create vdisks for TrueNAS, vCenter, and pfSense, and pass the 530-8i through to TrueNAS. I have 4 x 8TB HDDs; would it be best to install them in RAID5 because of my 4-bay limit? For remaining storage options I'll have the Asus Hyper M.2 card, 4 x 2.5" bays on the 530-8i, and another dual M.2 NVMe flex adapter.
 

diogen

Explorer
Joined
Jul 21, 2022
Messages
72
would it be best to install them in RAID5 because of my 4-bay limit?
Die-hard TrueNAS/FreeNAS users would say you need RAIDZ2 (analogous to RAID6). I think RAIDZ1 over 4 drives should be OK...
Looking at the backplane, you can have 4 drives per bay, i.e. 8 drives total across the two bays. And it can even handle U.2 drives.
There are probably restrictions on what sizes (3.5" and 2.5") and types can be installed at the same time...
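
For a quick sanity check on the capacity trade-off, here's a back-of-the-envelope sketch in Python for the 4 x 8TB drives (it ignores metadata/padding overhead and the usual "keep pools under ~80% full" guideline, so treat the numbers as upper bounds):

```python
# Rough usable-capacity comparison for 4 x 8TB drives under the layouts
# discussed here. Ignores metadata, padding, and the ~80% fill
# guideline, so these are optimistic upper bounds.
DRIVES, SIZE_TB = 4, 8

layouts = {
    "RAIDZ1 (single parity)": (DRIVES - 1) * SIZE_TB,
    "RAIDZ2 (double parity)": (DRIVES - 2) * SIZE_TB,
    "2 x 2-way mirrors":      (DRIVES // 2) * SIZE_TB,
}

for name, usable_tb in layouts.items():
    print(f"{name}: ~{usable_tb} TB usable of {DRIVES * SIZE_TB} TB raw")
```

RAIDZ2 and striped mirrors both land at ~16TB here; the difference is that RAIDZ2 survives any two drive failures, while mirrors survive a second failure only if it hits the other pair.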
1 x 2TB HDD for vCenter
Why do you need vCenter? Do you run a cluster of P700s?
 

MotorN

Dabbler
Joined
Mar 30, 2021
Messages
20
Why do you need vCenter? Do you run a cluster of P700s?

No cluster; from what I understand, vCenter is needed for advanced administration. For my initial issue, ESXi was only giving me a "power on" error; I had to connect to vCenter and check the logs to see it was a swap disk problem.

There are probably restrictions on what sizes (3.5" and 2.5") and types can be installed at the same time...

I don't see how 2 x 2.5" SATA/SAS SSDs would fit without a 3.5"-to-2.5" drive adapter. I can get a good deal on 4TB 2.5" SAS SSDs that I am considering using in my 2nd P700 with Proxmox.
 

Attachments

  • HDDspec.jpg (35.5 KB)

diogen

Explorer
Joined
Jul 21, 2022
Messages
72
...vCenter is needed for advanced administration.
Yes, when you have more than one ESXi box, vMotion, vSAN, NSX, etc.
I have never used it on standalone ESXi servers... It won't report anything the hypervisor itself doesn't already log (you just need to find it)...

Run the VCSA as a VM if you really want.
But it definitely does not need its own 2TB drive (it has over a dozen VMDKs by itself, but IIRC just a few hundred GB altogether).
 

MotorN

Dabbler
Joined
Mar 30, 2021
Messages
20
Yes, when you have more than one ESXi box, vMotion, vSAN, NSX, etc.
I have never used it on standalone ESXi servers... It won't report anything the hypervisor itself doesn't already log (you just need to find it)...

Run the VCSA as a VM if you really want.
But it definitely does not need its own 2TB drive (it has over a dozen VMDKs by itself, but IIRC just a few hundred GB altogether).
I am only using a 2TB HDD because I have a few lying around. I have a 750GB 2.5" SSD, but that seems like a waste. The "tiny" vCenter deployment requires around 700GB to install, but you are right that it only utilizes about 200GB. Ideally I would like to set up a thin-provisioned NFS share on TrueNAS SCALE for vCenter; I had a similar setup before using my QNAP. I read that 22.02.2.1 has iSCSI/NFS issues, so I am staying on 22.02.2 for now.
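
If I do go the NFS route, here's a minimal sketch of the TrueNAS side (the pool/dataset name "tank/vcenter-nfs" is made up, and on TrueNAS you'd normally do this through the web UI rather than a script):

```python
# Minimal sketch: a thin-provisioned dataset to back the VCSA NFS share.
# The dataset name is hypothetical - adjust to your pool. ZFS datasets
# are thin by default: "used" grows with actual data, while the quota
# merely caps how large the share can get.
import subprocess

DATASET = "tank/vcenter-nfs"  # hypothetical pool/dataset name

subprocess.run(["zfs", "create", "-o", "quota=1T", DATASET], check=True)
subprocess.run(["zfs", "list", "-o", "name,used,avail,quota", DATASET],
               check=True)
```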

So from what I understand, the root cause of my initial problem was having the TrueNAS VM installed on a 120GB disk while assigning 128GB of RAM, leaving no space for the swap file. I would still like to assign 128GB for the ZFS cache, although it currently only utilizes 50%. I will install 192GB of memory (64GB for ESXi, 128GB for TrueNAS). What size vdisk should I assign to the TrueNAS VM? It looks like I will be using 2 x 800GB 2.5" SSDs mirrored, and I need room for vCenter, pfSense, and maybe a few more VMs. Proxmox will be my main VM server on the other P700.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
So from what I understand, the root cause of my initial problem was having the TrueNAS VM installed on a 120GB disk while assigning 128GB of RAM, leaving no space for the swap file. I would still like to assign 128GB for the ZFS cache, although it currently only utilizes 50%. I will install 192GB of memory (64GB for ESXi, 128GB for TrueNAS). What size vdisk should I assign to the TrueNAS VM? It looks like I will be using 2 x 800GB 2.5" SSDs mirrored, and I need room for vCenter, pfSense, and maybe a few more VMs. Proxmox will be my main VM server on the other P700.
Question before you go too far ahead:

If you have ESXi as the hypervisor on "P700 #1" and Proxmox/KVM as the hypervisor on "P700 #2" - is there a reason you've chosen SCALE for a nested storage solution? If you plan to use the "Apps" library in SCALE or Docker, then certainly that's justification - but if you just plan to serve storage back to your ESXi/Proxmox solution over NFS/iSCSI, then CORE is more mature (and will use more than half of the RAM for ARC by default).
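
(To put numbers on the ARC point, a quick sketch assuming the commonly cited defaults: roughly "RAM minus 1GB" for ZFS on FreeBSD/CORE versus half of RAM for ZFS on Linux/SCALE:)

```python
# Default ARC ceilings for a 128GB TrueNAS VM, using commonly cited
# defaults: FreeBSD/CORE allows roughly RAM - 1GB for ARC, while
# ZFS-on-Linux/SCALE caps ARC at half of RAM unless zfs_arc_max is raised.
RAM_GB = 128
print(f"CORE (FreeBSD) default ARC max: ~{RAM_GB - 1} GB")
print(f"SCALE (Linux) default ARC max:  ~{RAM_GB // 2} GB")
```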

But either way:

Yes, the problem is that ESXi will try to create a .vswp file for virtual memory swap equal in size to the VM's RAM. You can prevent this by opting to reserve all of the VM's memory (128GB), but this is done automatically when you pass a physical hardware device (like your HBA) into a VM. That's what tipped me off that you were probably using virtual disks, local RDM, or similar.
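
To make the arithmetic concrete, here's a small sketch of that sizing rule (the 120GB datastore and 128GB VM are the numbers from this thread):

```python
# ESXi sizes the per-VM .vswp file at configured memory minus the memory
# reservation. With no reservation, a 128GB VM needs a 128GB swap file,
# which cannot fit on a 120GB boot datastore - hence the power-on error.
def vswp_size_gb(vm_memory_gb: int, reservation_gb: int = 0) -> int:
    return max(vm_memory_gb - reservation_gb, 0)

DATASTORE_GB = 120  # boot datastore from the original problem
VM_RAM_GB = 128     # memory assigned to the TrueNAS VM

for reservation in (0, VM_RAM_GB):  # none vs. "reserve all guest memory"
    swap = vswp_size_gb(VM_RAM_GB, reservation)
    print(f"reservation={reservation}GB -> .vswp={swap}GB, "
          f"fits on {DATASTORE_GB}GB datastore: {swap < DATASTORE_GB}")
```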

So you'll use the 2 x 800GB SSDs as a mirrored datastore local to ESXi - on there, you can create a 16-32GB virtual disk for TrueNAS to install to. Then you set up hardware passthrough for the HBA, and you will see the 4 x 8TB disks connected in TrueNAS - although note that RAIDZ isn't recommended for block/VMFS storage; mirrors are preferred.
 

MotorN

Dabbler
Joined
Mar 30, 2021
Messages
20
Question before you go too far ahead:

If you have ESXi as the hypervisor on "P700 #1" and Proxmox/KVM as the hypervisor on "P700 #2" - is there a reason you've chosen SCALE for a nested storage solution? If you plan to use the "Apps" library in SCALE or Docker, then certainly that's justification - but if you just plan to serve storage back to your ESXi/Proxmox solution over NFS/iSCSI, then CORE is more mature (and will use more than half of the RAM for ARC by default).

But either way:

Yes, the problem is that ESXi will try to create a .vswp file for virtual memory swap equal in size to the VM's RAM. You can prevent this by opting to reserve all of the VM's memory (128GB), but this is done automatically when you pass a physical hardware device (like your HBA) into a VM. That's what tipped me off that you were probably using virtual disks, local RDM, or similar.

So you'll use the 2 x 800GB SSDs as a mirrored datastore local to ESXi - on there, you can create a 16-32GB virtual disk for TrueNAS to install to. Then you set up hardware passthrough for the HBA, and you will see the 4 x 8TB disks connected in TrueNAS - although note that RAIDZ isn't recommended for block/VMFS storage; mirrors are preferred.
That thought crossed my mind as well. I am actually enrolled in a Linux admin course I need to get started on, so I figured the more Linux the better, but with Proxmox and Unraid installed on my QNAP, maybe that's enough. I was also intrigued by SCALE's use of Kubernetes, especially if it's opened up more to user configuration in the future.

TrueNAS will be my main storage target. As you mentioned, CORE is more mature and its performance is better for my needs. I don't mind mirroring the 4 x 8TBs if that's recommended and sacrificing the extra 8TB; 16TB should be more than enough.

I should have most of my hardware tomorrow. I will flash the 530-8is to IT mode, then post what I have, and I would appreciate you advising me on the best way to configure the system.
 

MotorN

Dabbler
Joined
Mar 30, 2021
Messages
20
Question before you go too far ahead:

If you have ESXi as the hypervisor on "P700 #1" and Proxmox/KVM as the hypervisor on "P700 #2" - is there a reason you've chosen SCALE for a nested storage solution? If you plan to use the "Apps" library in SCALE or Docker, then certainly that's justification - but if you just plan to serve storage back to your ESXi/Proxmox solution over NFS/iSCSI, then CORE is more mature (and will use more than half of the RAM for ARC by default).

But either way:

Yes, the problem is that ESXi will try to create a .vswp file for virtual memory swap equal in size to the VM's RAM. You can prevent this by opting to reserve all of the VM's memory (128GB), but this is done automatically when you pass a physical hardware device (like your HBA) into a VM. That's what tipped me off that you were probably using virtual disks, local RDM, or similar.

So you'll use the 2 x 800GB SSDs as a mirrored datastore local to ESXi - on there, you can create a 16-32GB virtual disk for TrueNAS to install to. Then you set up hardware passthrough for the HBA, and you will see the 4 x 8TB disks connected in TrueNAS - although note that RAIDZ isn't recommended for block/VMFS storage; mirrors are preferred.
I couldn't figure out how to partition the boot drive; maybe diskpart or vCenter would do it. I installed ESXi on a mirrored dual M.2 SATA-to-2.5" RAID adapter, since I read ESXi doesn't support RAID with my onboard SATA controller, and installed a separate 750GB SSD for VMs. 64GB went to ESXi and 128GB to TrueNAS CORE. I seem to have passed the HBA through successfully and created a RAIDZ of 4 x 8TB. I also passed through one port of my X540 NIC. Let me know if you see anything wrong or anything I need to consider.
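
For anyone following along, a quick way to sanity-check that layout from a shell inside the TrueNAS VM (a sketch; the pool name "tank" is an assumption, and both commands are standard ZFS tooling):

```python
# Confirm the passed-through HBA's disks are visible and the pool has
# the expected single RAIDZ1 vdev. Pool name "tank" is an assumption -
# use whatever you named the 4 x 8TB pool.
import subprocess

POOL = "tank"

subprocess.run(["zpool", "status", POOL], check=True)  # vdev layout/health
subprocess.run(["zpool", "list",
                "-o", "name,size,alloc,free,health", POOL], check=True)
```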
 

Attachments

  • TN.jpg (81.5 KB)
  • TN2.jpg (119.3 KB)