Dual-socket HPE ProLiant Gen9 with SAS SSDs for VM Storage

Joined
Nov 5, 2022
Messages
8
Hi All,

I'm sorry if similar questions have been asked before; I'm new to the forums. We have been using TrueNAS Core as backup storage for a few years on Dell hardware. That setup was stable and we never had any issues.

But recently we got a requirement to find a storage solution for our VM storage, as we are decommissioning our small Ceph cluster (6 nodes).

I looked at multiple storage solutions, including Synology, Dell, Hitachi VSP (we use one at another site), and HPE Nimble. The big issue is that we cannot get any of these deployed quickly, and due to the cost we need approvals first. So our plan is to reuse the existing Ceph hardware and leverage TrueNAS Core for VM storage, then later move to an enterprise-grade storage server such as the TrueNAS Enterprise appliances.

We don't have much experience with TrueNAS as primary VM storage, so we quickly put together a single-node TrueNAS Core 12.0-U8.1 box for testing:
  • 2x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz (2 Sockets) - 28 cores 56 Threads
  • 128GB ECC RAM (HPE)
  • 6x SAS 3.2TB SSD Drives (HPE)
  • 2x HPE Ethernet 10Gb 2-port 560SFP+ Adapter
  • 2x PSU
We have 3 of these identical servers, but for now we have only set up one.

After the base installation we created an LACP bond across 2x Juniper QFX51xx switches and tested the networking; everything seems to be working well. We then created one large RAIDZ2 pool and exported an NFSv3 share to our Proxmox VE cluster, with the NFS "Number of servers" value set to 12. We have not added any tunables yet.
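For reference, a rough CLI equivalent of what we set up (we actually used the TrueNAS GUI, so the pool/dataset names, device names, and addresses below are just examples, not our real ones):

```shell
# Sketch only: tank, vmstore, da0..da5, and 10.0.0.50 are illustrative names.

# One RAIDZ2 vdev across the six SAS SSDs
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Dataset for VM images; LZ4 is the TrueNAS default compression
zfs create -o compression=lz4 tank/vmstore

# On the Proxmox VE side, attach the NFSv3 export as a storage backend
# (hypothetical server address and export path)
pvesm add nfs truenas-vm \
    --server 10.0.0.50 \
    --export /mnt/tank/vmstore \
    --content images \
    --options vers=3
```

The NFS "Number of servers" value maps to the nfsd thread count on the TrueNAS side and is set under the NFS service options in the GUI.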

We then moved clones of some high-IO VMs (12 VMs) to this storage in Proxmox. Everything seems to work fine; we also ran tests such as backups and live migrations. Capacity usage is still under 2%, thanks to LZ4 compression. I ran a scrub task and SMART tests while the VMs were running, and there seems to be no noticeable impact from those either.
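The scrub and SMART checks were the standard ones; roughly the following, again assuming the example pool name `tank` and device `da0`:

```shell
# Kick off a scrub of the pool and watch its progress
zpool scrub tank
zpool status tank

# Run a short SMART self-test on one drive, then read back the results
smartctl -t short /dev/da0
smartctl -a /dev/da0
```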

Now I have a few questions:
  • Is this hardware combination a good choice for TrueNAS Core 12?
  • iSCSI or NFS?
  • Mirrored vdevs or a RAIDZ2 pool?
  • What does the recovery process look like if the server crashes/reboots?
  • How badly do VM IOPS suffer if the pool is more than 50% full and scrubbing?
  • Is there a checklist I should follow before deploying TrueNAS as VM storage?
  • Are there any risks with this setup?
Appreciate your response.
Thanks
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
2x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz (2 Sockets) - 28 cores 56 Threads

You've failed to identify what you're using for a disk controller. The typical problem with HP is that people use a RAID controller; this is not going to work out correctly. Please see


Mirrored vdevs or raidz2 pool ?

Follow the recommendations.


How bad the VM iops if storage is more than 50% full and scrubbing ?

Kinda depends on the SSDs. For HDDs, you want to keep occupancy rates low in order to keep write speeds good; this has to do with fragmentation on the pool. SSDs do not eliminate this concern entirely, but high-quality SAS SSDs are going to be better than typical consumer SSDs.
 
Joined
Nov 5, 2022
Messages
8
The typical problem with HP is that people use a RAID controller; this is not going to work out correctly.

Sorry, I forgot to mention that I converted the HPE RAID controller to HBA mode.
 
Joined
Nov 5, 2022
Messages
8
And this is the exact model, converted to HBA mode from the HPE Smart Storage Administrator:

Embedded Smart Array P440ar Controller 749796-001
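The mode change was done through the Smart Storage Administrator GUI; the CLI equivalent with HPE's ssacli tool looks roughly like this (slot 0 is an assumption — check `ssacli ctrl all show` for the actual slot):

```shell
# Show the controller(s) and confirm the current mode
ssacli ctrl all show config

# Put the P440ar into HBA mode (takes effect after a reboot)
ssacli ctrl slot=0 modify hbamode=on
```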
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Those are known to not work well, HBA mode or not.
 
Joined
Nov 5, 2022
Messages
8
Does it make any difference if I use TrueNAS SCALE instead of TrueNAS Core? Since SCALE is Debian-based, are these array drivers properly supported there?

And how stable/safe is TrueNAS SCALE for production use?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I converted the HPE RAID controller to HBA mode

Only in some figment of an HP marketing person's imagination. In the real world, you converted a useless RAID controller into an imaginary HBA. Your product is still a RAID controller. This is all explained in the post I linked above.
 
Joined
Nov 5, 2022
Messages
8
Only in some figment of an HP marketing person's imagination. In the real world, you converted a useless RAID controller into an imaginary HBA.
Got it. It seems better to go with a proper enterprise-grade storage array than to keep tinkering with this hardware.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
difference if I use Truenas-scale instead of Truenas-core

No, not really. You're still proposing using an untested, unproven, unknown controller. You would be a guinea pig. RAID controllers are famously known for RAID-oriented behaviours even in so-called bull**** "HBA modes"; even if you choose to believe
these array drivers properly supported ?
there isn't any proof that they behave the way TrueNAS and ZFS need. We know from BILLIONS of run hours on the LSI HBA's that very particular firmware versions interoperate correctly with the FreeBSD system driver. Even having a firmware mismatch causes various problems.
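For illustration, on a FreeBSD-based system with a proper LSI HBA, the firmware/driver pairing can be checked like this (`mps` is the FreeBSD driver for the SAS2 generation; `mpr` covers SAS3 cards, where `sas3flash` replaces `sas2flash`):

```shell
# Show the firmware version each LSI controller is running
sas2flash -listall

# Show the driver and firmware versions FreeBSD detected at boot
dmesg | grep -i mps
```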

Now, on the flip side of that coin, I'm not telling you that your system cannot work. We've had reports of various problems with the PMC RAID controllers, for which we've universally recommended replacement with an LSI HBA. However, I will tell you that ZFS can certainly work fine with any controller that can cleanly communicate with the hardware. The risk becomes mostly what happens when there are hardware issues (failed drive, detection of hot insertion/removal of drives, SMART access, etc). If your data is not particularly important or irreplaceable, this might be a reasonable experiment for you to try. I definitely wouldn't bother on FreeBSD based TrueNAS Core though.
 