Update on my Epyc server build... (things not going to plan :/)

Droz

Dabbler
Joined
Apr 28, 2022
Messages
21
OK so update on my server build...

My plan is to replace the ReadyNAS 428 I have with eight 10TB drives. It's under 10% free space now, and the lack of anything faster than 1GigE makes transferring to/from it painfully slow.

Hardware I chose...

Motherboard: ASRock Rack ROMED6U-2L2T
48GB ECC RAM (six 8GB sticks)
Four 8TB hard drives to start
Fractal Node 804 case
Four NVMe SSDs (2TB, 2TB, 4TB, 8TB)

The motherboard is a great option. It has four PCIe Gen 4 slots, three Mini-SAS connectors, a couple of NVMe M.2 slots, and a few SlimSAS connectors that can support two NVMe drives each. So there are LOTS of upgrade paths for the future. Ultimately I want to use the HighPoint PCIe-to-M.2 cards (up to eight slots per card) and have all-SSD storage down the line. The idea is to start with hard drives to build out initially and get all my data off the ReadyNAS. I've been testing various configs/setups with ESXi, Win2k19, Ubuntu, Fedora, OpenMediaVault, and TrueNAS Scale.

Initially I had it running bare metal just so I could do testing and get used to TrueNAS. The idea was to have TrueNAS (or whatever I chose as my storage OS) as a VM and do PCIe passthrough in the hypervisor for the SAS controllers, and pass the Nvidia P1000 card to another VM for Plex. Well... the more I tested, the more things I found really frustrating with TrueNAS. I could not get the Plex container to reliably/consistently see the zvol/dataset for content. Nor could I get it to see the Nvidia card. For as much hype as there has been for TrueNAS in the past year or so on YouTube, from friends, etc., what none of them tell you is how horrible the UI/UX is in TrueNAS. When you're trying to add apps/containers, NOTHING IS CONSISTENT when configuring things. Now I'm no Linux expert; I haven't worked in it daily for almost ten years. But I know enough to be an intermediate-level admin.

I think I set up a Plex instance at least a couple dozen times, and probably 3 of those instances worked well enough that I thought I could use them for consistent testing. How wrong I was! Something would just stop working. I'd kill the instance and rebuild it the same as before, and it wouldn't see the dataset for videos despite having storage set up the same as the previous instance. After a couple weeks I gave up on TrueNAS. I went with Win2k19 as the base OS plus a Win10 Hyper-V instance, and that was doing what I needed.

My main need is Resilio Sync for project collaboration as well as off-site backups. Resilio, in their great wisdom, won't let you run the free version on Windows Server, and the Linux version doesn't have a UI. So it needs to be Win10. I decided to go back to my old setup with an 8th-gen i7 and do some more testing on the Epyc system. This is where I dug more into the issue of trying to pass through the SAS controller to a VM. I tried Proxmox and ESXi and could not get anything to work as desired. I had done some research previously but didn't really get anywhere. This time, though, I found something on my first search: someone with the exact same motherboard and CPU as me.



FML!!!!

There's no real fix. There's a potential patch, but stability and security may be an issue. So now I'm kind of pissed; I really wanted to have this build up and running by now. I got laid off, then found another job which turned out to be a complete disaster: the hiring manager was clueless and basically gave me access to nothing, so I couldn't do my job. So now I'm waiting on the next job so I can decide what I want to do.

I see my options as:

1. Continue with my original plan with the addition of an HBA and forego the built-in SAS controllers. This would mean I need to get a new rackmount case with higher airflow to keep the HBA cool.

2. Scrap the original plan. Keep the system in the case (or downsize to a smaller one) and use it as my desktop/workstation, and get a QNAP or Synology NAS with 10GigE.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Sorry to hear of your plight. So you are unable to run using ESXi 7.0? In my setup, which I've been using for many years, I pass through the controller card, run TrueNAS in a 16GB-RAM VM, and run any other VMs directly on ESXi. I never use TrueNAS as my VM server.
 

Droz

Dabbler
Joined
Apr 28, 2022
Messages
21
Sorry to hear of your plight. So you are unable to run using ESXi 7.0? In my setup, which I've been using for many years, I pass through the controller card, run TrueNAS in a 16GB-RAM VM, and run any other VMs directly on ESXi. I never use TrueNAS as my VM server.

I can run ESXi 7 just fine. I can make VMs and do all the testing I want in environments that aren't ideal.


ESXi and Proxmox both see the HBA and the video card, but they will not allow me to pass them through to a VM. So that kind of negates having the dedicated hardware for them.

If I take Plex out of the equation, I could run things as-is and be fine.

Or I could get a beefier Epyc and have the CPU do transcoding. But higher core counts make the price go up dramatically.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I guess I do not understand the goal of what you are trying to do and need to make happen. Are you trying to run Plex, and must it transcode? Why do you need to pass through a video card, for Plex? I understand passing through an HBA card, and if you cannot pass it through, you could do RDM to pass through each individual drive. I have done it, it works; many people do not like it, but I have helped others through the process. It's just not as nice and clean as passing an HBA through. But you can remove the drive pool, place the drives into any other machine, and they will work fine.
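Roughly, the RDM route looks like this from the ESXi shell (a sketch only; the naa.* device ID and datastore paths below are placeholders, not from any real build):

```shell
# Find the physical disk identifiers (naa.* / t10.*) on the host
ls -l /vmfs/devices/disks/

# Create a physical-compatibility-mode RDM pointer file for one drive
# (repeat per drive, substituting your own device ID and datastore path)
vmkfstools -z /vmfs/devices/disks/naa.5000c500xxxxxxxx \
    /vmfs/volumes/datastore1/truenas-rdm/drive1.vmdk
```

You then attach each resulting .vmdk to the TrueNAS VM as an existing disk. Physical mode (-z) passes the SCSI commands through to the raw drive, which is part of why the pool stays importable on other machines.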

Look at my system setup specs; I can transcode very well if desired, but I can also let my TV or smartphone do the work, which is how I try to have my content encoded. I have used simple DLNA too (Plex has a built-in DLNA server) and it all works great.

So are you just trying to create a video repository and Plex server? Is there more to it?

As for the Plex container... the best course of action for me was to create my own jail (on Core) and install Plex there. It makes it easy to manage. Maybe Scale can be easily managed too, but not if I'm creating a VM just for it.

I may not be able to answer all your questions, but I can help you explain your situation and what you want to do; then someone else might be able to assist, and you may not need to buy anything more, which would be a good thing.
 

Droz

Dabbler
Joined
Apr 28, 2022
Messages
21
I guess I do not understand the goal of what you are trying to do and need to make happen. Are you trying to run Plex, and must it transcode? Why do you need to pass through a video card, for Plex? I understand passing through an HBA card, and if you cannot pass it through, you could do RDM to pass through each individual drive. I have done it, it works; many people do not like it, but I have helped others through the process. It's just not as nice and clean as passing an HBA through. But you can remove the drive pool, place the drives into any other machine, and they will work fine.

Look at my system setup specs; I can transcode very well if desired, but I can also let my TV or smartphone do the work, which is how I try to have my content encoded. I have used simple DLNA too (Plex has a built-in DLNA server) and it all works great.

So are you just trying to create a video repository and Plex server? Is there more to it?

As for the Plex container... the best course of action for me was to create my own jail (on Core) and install Plex there. It makes it easy to manage. Maybe Scale can be easily managed too, but not if I'm creating a VM just for it.

I may not be able to answer all your questions, but I can help you explain your situation and what you want to do; then someone else might be able to assist, and you may not need to buy anything more, which would be a good thing.

I would prefer Plex not to transcode, but there are instances where people I share with have crap hardware and require transcoding. Even though I have Plex set to NOT transcode, it will still do it in certain cases. 99% of the time everyone is direct play. I don't want to have the CPU transcode on an Epyc, even one that's just 8 cores.

The few that can't direct play are in Hawai'i or Las Vegas, where I am not there, and it's not easy to tell them how to check settings to make sure they have the correct quality set.

I can't do RDM either. This particular combination of hardware does not allow me to present any device directly to the VM; I can only use datastores.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
1. Continue with my original plan with the addition of an HBA and forego the built in SAS controllers. This would mean I need to get a new rackmount case with higher airflow to keep the HBA cool
No, you can simply mount a small fan such that sufficient cooling is achieved. Why do you feel that you need a rackmount case for that?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I can't do RDM either. This particular combination of hardware does not allow me to present any device directly to the VM; I can only use datastores.
I've never heard of that one before but then again, I have limited experience with ESXi.
The few that can't direct play are in Hawai'i or Las Vegas, where I am not there, and it's not easy to tell them how to check settings to make sure they have the correct quality set.
Sounds like they should get their own servers. :wink:
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

Droz

Dabbler
Joined
Apr 28, 2022
Messages
21
No, you can simply mount a small fan such that sufficient cooling is achieved. Why do you feel that you need a rackmount case for that?

You mean like this one?

The CPU on the card runs 10 degrees C over its max operating limit with no drives attached to the card.

This is also with makeshift ducting to direct airflow past the card.

The card states 200 LFM needed for airflow. Pretty sure I'm not even hitting 100 LFM.
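For rough numbers (the duct dimensions below are illustrative guesses, not measured from my build): LFM is air speed, so the volume flow a fan has to deliver is that speed times the duct's cross-section.

```python
# Rough airflow math: CFM (volume flow) = LFM (air speed) x cross-sectional
# area in square feet. Duct dimensions here are illustrative, not measured.
def lfm_to_cfm(lfm: float, width_in: float, height_in: float) -> float:
    area_sq_ft = (width_in / 12.0) * (height_in / 12.0)
    return lfm * area_sq_ft

# A 2" x 4" duct only needs ~11 CFM of delivered flow to reach 200 LFM
print(round(lfm_to_cfm(200, 2, 4), 1))  # 11.1
```

The CFM number itself is tiny; the hard part is forcing that flow through the duct instead of letting it leak around the card.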

PXL_20221001_213308476.jpg
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The CPU on the card runs 10 degrees C over its max operating limit with no drives attached to the card.
This is one thing I dislike about many of the HBAs out there: they run terribly hot. This was a key factor in why I use the HBA I have installed. Very little heat and they work great. Well, for now they work. I'm waiting for the day they become unsupported; that will likely be the day I stop upgrading TrueNAS, or I guess I could build a new system.

As for the HBA with the fan, I'd recommend an 80mm fan on a custom built duct to blow onto the heatsink, assuming you want to keep the HBA.

Good luck on whatever your solution is.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
You mean like this one?
Whatever works for you.
The CPU on the card runs 10 degrees C over its max operating limit with no drives attached to the card.
With or without the fan construction shown below?
The card states 200 LFM needed for airflow. Pretty sure I'm not even hitting 100 LFM
I am not an expert on this topic, but I would assume that the airflow requirements in a server rackmount case are different from a custom fan mount. In that light, the measured temperature should be the only relevant aspect.
If that is the card you want to use for ZFS, you should change it. This is not a simple HBA but a hardware RAID controller.
 

Droz

Dabbler
Joined
Apr 28, 2022
Messages
21
Whatever works for you.

With or without the fan construction shown below?

I am not an expert on this topic, but I would assume that the airflow requirements in a server rackmount case are different from a custom fan mount. In that light, the measured temperature should be the only relevant aspect.

If that is the card you want to use for ZFS, you should change it. This is not a simple HBA but a hardware RAID controller.

As I said, it runs 10 degrees over max with the fan and with the ducted airflow. Airflow in a server case would be less of an issue, as those are designed for high CFM/LFM. The Fractal Node 804 (mentioned in my post) is not designed for high airflow, hence the heat issue with the Adaptec card.

I am aware the card is a RAID controller. I am also aware that you should not use drives in a RAID array with TrueNAS. I understand why everyone reiterates this constantly; kind of wish they didn't. That particular card has an HBA mode (aka IT mode).
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I am aware the card is a RAID controller. I am also aware that you should not use drives in a RAID array with TrueNAS. I understand why everyone reiterates this constantly; kind of wish they didn't. That particular card has an HBA mode (aka IT mode).
5) A RAID controller that supports "JBOD" or "HBA mode" isn't the same.
 

Droz

Dabbler
Joined
Apr 28, 2022
Messages
21

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

Droz

Dabbler
Joined
Apr 28, 2022
Messages
21
OK so update again.....

This issue has been bugging me a lot. I've been planning this build since last year and started to piece things together earlier this year.

I started poking around at IOMMU passthrough videos on YouTube, since IOMMU is present on Ryzen chips as well. In one video, the guy was talking about enabling AMD CBS options in the BIOS/UEFI to allow passthrough.

I poked around in the ASRock board's BIOS/UEFI and, sure enough, there's an option for it, and it was disabled.
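If anyone else hits this: on a Linux hypervisor like Proxmox, one way to sanity-check that the firmware change actually took is to list the IOMMU groups. A device generally has to sit in its own group to be passable. A sketch (assumes bash on the host; on a box without IOMMU enabled it just prints zero devices):

```shell
#!/usr/bin/env bash
# List every PCI device by IOMMU group; a device that shares a group with
# others generally cannot be passed through on its own.
shopt -s nullglob
count=0
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  path=${dev#/sys/kernel/iommu_groups/}
  group=${path%%/*}
  echo "IOMMU group ${group}: ${dev##*/}"
  count=$((count + 1))
done
echo "devices in IOMMU groups: ${count}"
```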

I killed off Win2k19 again, dumped ESXi on, and I'm not able to pass through the SATA controllers

HAPPY DAY!

Going to be conducting testing the next few days for performance, etc.

TrueNAS Scale on ESXi with two cores and 16GB RAM seemed OK. It was passing traffic at full GigE speed (limited by the NAS it's getting files from). The CPU seemed VERY busy though, and the management dashboard wouldn't load properly. Gave it four more cores and 6GB more RAM and it's happier now.
 

CDRG

Dabbler
Joined
Jun 12, 2020
Messages
18
Odd that you cannot pass through to VMs on ESXi. On older hardware that didn't explicitly have IOMMU options in the BIOS, I was able to pass both HBAs and GPUs to various VMs, including Free/TrueNAS and Plex in Windows on an ESXi 6 host.

I'm running a very similar setup to you in that I now have an ESXi 7 host with TrueNAS and Plex as VMs, where I am passing a Broadcom HBA to TrueNAS and my 1660 Ti to Plex for transcoding. Yes, transcoding sucks, but that's another story for another forum.

My previous setup was an ASRock Rack EP2C602-4L/D16. While I appreciate there are funny differences between Intel and AMD, I wouldn't expect you to have these types of issues.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
I killed off Win2k19 again, dumped ESXi on, and I'm not now able to pass through the SATA controllers

HAPPY DAY!
...
I'm guessing a typo: "not" = "now"...
 
Joined
Jun 15, 2022
Messages
674
I've never seen anyone use hot-glue instead of push-pins to mount a fan....until now.

Paired with a Fractal Node 804 micro-ATX chassis...

I wish this was all in one thread, because I have the feeling this is like watching a new driver who somehow got hold of a supercar and is about to dump all their data all over the freeway.

 