H/W configuration for 150 people working in 4K

JuhoN

Cadet
Joined
Jul 17, 2023
Messages
5

JuhoN

Cadet
Joined
Jul 17, 2023
Messages
5
OMG..

The plugin does automatic translation.

Below is the original English text.





Hardware Configuration

Supermicro A+ Server 2113S-WN24RT

AMD EPYC™ 7002-series (32-core) x 1
RAM Registered ECC DDR4 3200MHz > Total 1TB
SSD Micron 6500 ION 30.72TB x 26 > Total 798.72 TB (DATA)
SSD Micron 120GB x 2 > 120GB (TrueNAS boot)
NIC Mellanox ConnectX-5 EX > 100GbE x 2
HBA 9405W-16e

I would like to hear from the experts.

We are currently working in 4K.
We recently worked on a project and realized that our existing storage was too slow for us.
Luckily, we have a server that was left over from a test within the company and we are configuring it with all flash storage.

We have a 100G infrastructure and our workers have 10G connections.
The storage is not intended for permanent storage, but for use during the project.
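As a back-of-the-envelope check on the network side, using only the figures above (150 editors on 10 Gb/s links against 2 x 100 GbE on the server), the uplink is heavily oversubscribed if everyone transfers at once:

```python
# Aggregate client demand vs. server uplink, from the figures in the post.
# Illustrative only: real load depends on how many editors copy at once.
CLIENTS = 150
CLIENT_GBPS = 10          # per-editor link speed
SERVER_GBPS = 2 * 100     # two 100 GbE ports

peak_demand = CLIENTS * CLIENT_GBPS              # 1500 Gb/s worst case
oversubscription = peak_demand / SERVER_GBPS     # 7.5x
full_speed_clients = SERVER_GBPS // CLIENT_GBPS  # 20 clients at full 10 Gb/s
print(peak_demand, oversubscription, full_speed_clients)
```

In a copy-in/copy-out workflow this is usually acceptable, since only a fraction of editors transfer at any given moment.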

For the pool configuration, see
https://openzfs.github.io/openzfs-docs/Basic Concepts/RAIDZ.html#introduction
NVME 12 EA, NVME 12 EA, spare 2 EA --> Usable 614.4 TB (RAIDZ2 / 256-sector block size)
NVME 7 EA, NVME 7 EA, NVME 7 EA, spare 5 EA --> Usable 552.96 TB (RAIDZ1 / 256-sector block size)
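These usable-capacity figures can be sanity-checked with simple arithmetic — a sketch assuming raw drive capacity times data drives per vdev, ignoring ZFS metadata and padding overhead. (Note that with 26 drives total, the second option works out to three 7-wide vdevs plus 5 spares; that is the layout that matches the 552.96 TB figure.)

```python
DRIVE_TB = 30.72  # per-drive raw capacity of the Micron 6500 ION

def raidz_usable_tb(vdevs, width, parity):
    # Rough usable space: (width - parity) data drives per vdev;
    # ignores ZFS metadata, padding, and hot spares.
    return vdevs * (width - parity) * DRIVE_TB

opt1 = raidz_usable_tb(2, 12, 2)  # 2 x 12-wide RAIDZ2 -> ~614.4 TB
opt2 = raidz_usable_tb(3, 7, 1)   # 3 x 7-wide RAIDZ1  -> ~552.96 TB
print(round(opt1, 2), round(opt2, 2))
```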

I'm planning to go with one of these two configurations; if anyone has a better direction, please chime in.

The Mellanox ConnectX-5 EX NIC is working fine with TrueNAS-13.1.
I haven't tested the HBA yet, but I'm thinking of adding a SAS enclosure for a secondary backup (or hybrid).

If anyone has any experience using TrueNAS at the above scale, or better yet, any recommendations, please let me know.
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
Context: We currently have several NVME based TrueNAS core deployments based off of Micron 9300 and 9400 pro drives at both 15 and 30TB sizes for a total of approximately 1.6PB spread out between four systems


For your build, I would swap to a dual AMD Epyc CPU system. Primarily, it allows for enough PCIE lanes so that you don't need to use an HBA. In my experience, the tri-mode adapters, while fast, are an overall bottleneck and not necessary. Based off your use case, I wouldn't run 12 drive wide vdevs. CPU usage can scale out of control very rapidly with larger vdev sizes and given the kind of IO you are talking about, I would go with 4 x 6 or 3 x 8.
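For comparison, the narrower layouts suggested here work out roughly as follows, assuming RAIDZ2 vdevs of the 30.72 TB drives from the original build (the RAIDZ level is my assumption; the post does not specify one):

```python
DRIVE_TB = 30.72  # per-drive capacity from the original build list

def raidz_usable_tb(vdevs, width, parity):
    # Rough usable space: (width - parity) data drives per vdev;
    # ignores ZFS metadata/padding and any hot spares.
    return vdevs * (width - parity) * DRIVE_TB

four_by_six = raidz_usable_tb(4, 6, 2)     # ~491.52 TB, 24 drives used
three_by_eight = raidz_usable_tb(3, 8, 2)  # ~552.96 TB, 24 drives used
print(round(four_by_six, 2), round(three_by_eight, 2))
```

Either layout uses 24 of the 26 drives, leaving 2 free as spares.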

TrueNAS isn't tuned for this performance level, not even close. Do your homework around tuning for the higher performance NVME drives and networking, especially if you are going to use NFS to connect to these.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
At that scale, definitely contact iXsystems. Their prices are competitive and you can avoid many mistakes along the way. 100GbE is still somewhat uncharted for most of us. Especially with a video editing workload - by the way, is this a "move stuff to a workstation, work on it, dump it back" or a "edit directly from the server" sort of scenario?
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
You're not going to accomplish what you are trying to do without contacting the people who make TrueNAS.
While we did not contact IX specifically, we did make use of a TrueNAS consultant who we've worked with over the years. We wouldn't have been able to get anywhere near what the systems are capable of, especially the system with 24 x 30.72TB NVMe drives and 4 x 100GbE interfaces.
 

JuhoN

Cadet
Joined
Jul 17, 2023
Messages
5
Context: We currently have several NVME based TrueNAS core deployments based off of Micron 9300 and 9400 pro drives at both 15 and 30TB sizes for a total of approximately 1.6PB spread out between four systems


For your build, I would swap to a dual AMD Epyc CPU system. Primarily, it allows for enough PCIE lanes so that you don't need to use an HBA. In my experience, the tri-mode adapters, while fast, are an overall bottleneck and not necessary. Based off your use case, I wouldn't run 12 drive wide vdevs. CPU usage can scale out of control very rapidly with larger vdev sizes and given the kind of IO you are talking about, I would go with 4 x 6 or 3 x 8.

TrueNAS isn't tuned for this performance level, not even close. Do your homework around tuning for the higher performance NVME drives and networking, especially if you are going to use NFS to connect to these.

Thank you for your response.
I will take this into account in my system configuration.
 

JuhoN

Cadet
Joined
Jul 17, 2023
Messages
5
At that scale, definitely contact iXsystems. Their prices are competitive and you can avoid many mistakes along the way. 100GbE is still somewhat uncharted for most of us. Especially with a video editing workload - by the way, is this a "move stuff to a workstation, work on it, dump it back" or a "edit directly from the server" sort of scenario?

Yes, we will contact iXsystems.

The 100GbE has been tested and works fine.
The way we work is "move stuff to a workstation, work on it, dump it back".
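For a rough sense of what that workflow costs per transfer over a single 10 Gb/s client link (the 2 TB project size and 80% link efficiency are assumed example values, not figures from the thread):

```python
# Rough copy-time estimate for the "move to workstation, work on it,
# dump it back" workflow over one 10 Gb/s client link.
LINK_GBPS = 10
EFFICIENCY = 0.8   # assumed real-world protocol/disk overhead
PROJECT_TB = 2.0   # hypothetical project size

throughput_gb_per_s = LINK_GBPS * EFFICIENCY / 8  # gigabytes per second
seconds = PROJECT_TB * 1000 / throughput_gb_per_s
print(round(seconds / 60, 1), "minutes")          # ~33 minutes each way
```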
 