New/Old Build

Amiss

Cadet
Joined
Jan 9, 2024
Messages
4
Hey there,

Was an avid FreeNAS user a few years ago (around 15 years ago if my memory is correct, how time passes...). That server crashed because a crazy ex pulled the plug on the unit to get back at me, resulting in a total loss of the pool and all the photos I had ever shot (I used to be a professional photographer). This made me reluctant to spin up a new one, so I sold the server to a friend and just bought it back a few days ago for pennies. I'm planning to use the NAS for storage of personal files, media files (4K and 1080p material), and, if feasible, also for storage of VMs for my ESXi server. Let me know what you think of the build and whether it's feasible for the use case I'm planning.

To serve the files I'll use Emby through TrueNAS. Plex has gotten too expensive in my opinion; Emby offers a lifetime pass as Plex used to.

Hardware
Intel S1200BTL
Intel Xeon E3-1245
24 GB of ECC RAM (planning to upgrade to 32 GB total)
Fractal PSU with 400 W max output
LSI 9200-8i (SAS2008) in IT mode, or a Broadcom 9300 (due to the fact that the motherboard only has four 3 Gbit/s and two 6 Gbit/s SATA ports)
2 mini-SAS to SATA cables
8x Toshiba MG08ACA16TE 16 TB, SATA variant, in a RAIDZ2 or RAIDZ3 (I did not choose the SAS disks, as these disks can deliver at most 260 MB/s and the LSI controller only has 6 Gbit/s of throughput; but the question here is: would it be better to choose the SAS variant of the disk and the Broadcom 9300?)
2x Samsung Evo 2 TB serving as mirrored ZILs (cache)

Should I consider adding a GPU for transcoding?

What are your thoughts on the build? Is it good enough for what I'm planning? Is my reasoning valid?

Any input would be appreciated!

 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
storage of VMs for my ESXi server
This is not compatible with the following:
raidz2 or 3
24GB of ECC RAM (planning to upgrade to 32 gb total)
You need to use Mirrors for the IOPS of block storage, not RAIDZ.
64GB is considered the starting point for decent block storage.
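The mirrors-vs-RAIDZ point can be sanity-checked with a rough back-of-envelope calculation. This is only a sketch using assumed rule-of-thumb numbers (a RAIDZ vdev delivers roughly the random IOPS of one member disk; a pool of mirrors delivers roughly one disk's IOPS per mirror vdev; ~170 random IOPS is an assumed figure for a 7200 RPM drive):

```python
# Rough rule-of-thumb comparison for eight 16 TB disks (assumed numbers:
# a RAIDZ vdev ~ the random IOPS of one member disk; a pool of mirrors
# ~ one disk's IOPS per mirror vdev).
DISK_TB = 16
DISK_IOPS = 170        # assumed random IOPS of a 7200 RPM HDD

def raidz2(disks):
    usable = (disks - 2) * DISK_TB   # two disks' worth of parity
    iops = DISK_IOPS                 # one vdev -> ~one disk of random IOPS
    return usable, iops

def mirrors(disks):
    vdevs = disks // 2               # 2-way mirrors
    usable = vdevs * DISK_TB
    iops = vdevs * DISK_IOPS
    return usable, iops

print("8-wide RAIDZ2:", raidz2(8))      # (96, 170)
print("4x 2-way mirrors:", mirrors(8))  # (64, 680)
```

The trade-off is clear: mirrors give up a third of the usable space of RAIDZ2 here, but multiply random IOPS by the vdev count, which is what VM block storage needs.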

Separately to that:
2 Samsung Evo 2tb serving as mirrored zils (cache)
Don't do that. Use those disks as a mirrored pool for your VMs... but add memory if you don't want terrible performance.
You probably don't need SLOG if you're not serving block storage from the HDD pool, but even if you do, those are the wrong disks and mirroring the SLOG isn't really necessary if you have the proper drive.

Should I consider adding a gpu for transcoding?
If you're going to transcode, yes... but don't if you can avoid it, transcoding is crap (hard to get it going depending on the selected platform and then can mess with viewing quality unnecessarily in the name of lower bandwidth).

8x Toshiba MG08ACA16TE 16 TB, SATA variant, in a RAIDZ2 or RAIDZ3 (I did not choose the SAS disks, as these disks can deliver at most 260 MB/s and the LSI controller only has 6 Gbit/s of throughput; but the question here is: would it be better to choose the SAS variant of the disk and the Broadcom 9300?)
I don't think you're counting correctly there... each channel/cable from the SAS controller has 6 Gbit/s of capacity, which is rarely hit by a spinning disk. The only thing you gain from SAS3 is the ability to handle many SSDs. I don't think you need that at this point.
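The per-lane point can be checked with assumed figures: SAS2 runs at 6 Gbit/s per lane with 8b/10b line encoding, and the MG08 datasheet quotes roughly 262 MB/s sustained.

```python
# SAS2 per-lane budget vs. one fast 7200 RPM HDD (assumed figures below).
LANE_GBIT = 6.0          # SAS2 link rate per lane
ENCODING = 8 / 10        # 8b/10b line encoding overhead
lane_mb_s = LANE_GBIT * ENCODING * 1000 / 8   # usable MB/s per lane
hdd_mb_s = 262           # MG08 sustained transfer rate (datasheet figure)
print(round(lane_mb_s))            # 600
print(hdd_mb_s / lane_mb_s < 0.5)  # True: one HDD uses under half a lane
```

So a single spinning disk uses well under half of one lane's ~600 MB/s budget, which is why SAS2 is not the bottleneck for an HDD pool.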
 

Amiss

Cadet
Joined
Jan 9, 2024
Messages
4
Thanks for the reply @sretalla

Ah, alright, so RAIDZ is not suitable to store VMs on, because the writes are constant small random writes (high IOPS) instead of a write that then never changes. I thought from my reading that I could counter this by utilizing the ZIL for those I/O-intense operations. But this is not a recommended way to move forward, I guess?

Forgot to include in my description that I'll be running two SSDs (Intel Optane SSD 905P, 340 GB each) that I have left over, in a mirror for the OS and Emby to run off. So I have all these other drives (Toshiba & Samsung Evo) to build the ZFS pool. But if these are not the correct drives to build the ZIL with, what drives are recommended if I were to go down that route? :)

Alright, I'll start looking for a GPU. I saw that other people were post-processing their media files to avoid transcoding so they could play on any device, but I'm way too lazy for that. I do enough transcoding for work, so I want it to just work when I press play, no matter the content or filetype, on any device. Any recommendations when it comes to GPUs?

The thing is that the only jail/VM I'll be running on this machine is for Emby; no other VMs or jails will run on this server. It'll be dedicated to TrueNAS and Emby only, to give TrueNAS and ZFS as much of the hardware as possible, since the hardware is quite old by now and only supports 32 GB of RAM. Giving Emby a GPU would almost leave the CPU and RAM alone for TrueNAS, so it can consume as much as possible of the hardware.

Is 32 GB of RAM too little for a pool of this size? What's the max pool size, in your opinion, for 32 GB of RAM?

What I was thinking when it comes to speed was that no matter whether I choose the MG08 as a SAS or SATA disk, the disks are rated for 262 MB/s, meaning that I'll never surpass the throughput of the LSI 9200-8i SAS2008: (8 × 262 MB/s = 2,096 MB/s) + (2 × 2,600 MB/s = 5,200 MB/s) = 7,296 MB/s in total. The LSI card supports 6 Gbit/s per channel, so if I put the SSDs on one channel and the other disks on the other channel, there'll be enough bandwidth? Also, to double-check, am I doing the calculations correctly? And am I correct in my understanding that the LSI card supports 6 Gbit/s per channel, or is it in total? If the card is the bottleneck, I'll have to order another card, as the storage space is really needed.
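As a quick sanity check, the sum above can be reproduced while keeping the per-device figures quoted in the thread (262 per HDD, 2,600 per SSD). The part that matters for the question is that the "-8i" in "LSI 9200-8i" denotes 8 internal lanes, each with its own 6 Gbit/s SAS2 link:

```python
# Reproduce the aggregate sum from the post, keeping its per-device figures.
total = 8 * 262 + 2 * 2600
print(total)  # 7296

# The "-8i" in "LSI 9200-8i" denotes 8 internal lanes; each lane is its own
# 6 Gbit/s SAS2 link, so the 6 Gbit/s figure applies per lane, not per card.
print(8 * 6)  # 48 (Gbit/s of aggregate link capacity)
```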
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I thought from my reading that I could counter this by utilizing the ZILs for those I/O intense operations
Nope, that's not how it works... you can smooth out the low points a bit with a SLOG, but the overall suck can't be removed.

If you're prepared to risk data up to and/or including pool loss, you can run the VMs on ZVOLs set to sync=disabled, which takes that whole problem away, but lands you in the risky area of much wailing and gnashing of teeth.

I´ll be running two SSDs (Intel Optane SSD 905P) that I have left over that are 340gb in a mirror for the OS and Emby to run off
Those would be better candidates for SLOG if you're going to do that.

Is 32gb of ram too little for a pool of this size? What's a max pool size in your opinion for 32gb of ram?
That's not really how the relationship goes.

For block storage, it's about how much churn you're doing (how many VMs and what they are doing with disk), not as much about the total storage (which must remain less than 50% full if you want to have reasonable performance).
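Applied to the 2 TB SSD mirror proposed earlier in the thread, the "stay under 50% full" guideline works out as follows (a minimal sketch; the 50% figure is the guideline quoted above, not a hard limit):

```python
# Usable block-storage budget under the ~50%-full guideline (assumed rule
# of thumb from the reply above, applied to one 2 TB mirror vdev).
pool_tb = 2.0
max_fill = 0.5
print(pool_tb * max_fill)  # 1.0 TB of VM storage before performance degrades
```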

the disks are rated for 262 MB/s, meaning that I'll never surpass the throughput of the LSI 9200-8i SAS2008: (8 × 262 MB/s = 2,096 MB/s) + (2 × 2,600 MB/s = 5,200 MB/s) = 7,296 MB/s in total. The LSI card supports 6 Gbit/s per channel, so if I put the SSDs on one channel and the other disks on the other channel, there'll be enough bandwidth? Also, to double-check, am I doing the calculations correctly? And am I correct in my understanding that the LSI card supports 6 Gbit/s per channel, or is it in total?
Looks like you understood it OK. Those controllers are much more powerful (and power hungry... 20W or something like that) than you would think based on how old they are.

Any recommendations when it comes to GPUs?
Not really, I see that older generation nVidia aren't well supported on SCALE... not sure about AMD.

... oops... I missed that we're talking about CORE here... Plex dropped their support for transcoding entirely on FreeBSD. I guess that says something about driver support in some way. I wouldn't count on Emby keeping up with it for long, so reconsider that option (either go to SCALE for transcoding or drop transcoding, IMO).
 

Amiss

Cadet
Joined
Jan 9, 2024
Messages
4
Aha! Okay, then I understand. Not really willing to take that risk after what happened last time, haha. I will not run VMs on the RAIDZ in that case, and will create a separate pool with SSDs for the VMs, most likely utilizing the Evo SSDs as mirrors for the VMs and opting for the Intel drives as ZILs, as you suggested. Are those enough, though, if I'm concurrently writing 3 streams of raw 8K video to the NAS? 3 streams of raw 8K video equates to a few TB.
(One hour of 8K RedCode RAW 75 amounts to 7.29 TB; that's 121.5 GB per minute of raw 8K footage.) Should I opt for larger disks as the ZIL?
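Those RedCode figures can be turned into a sustained write rate, which is the number that actually matters here. A sketch using the post's own 7.29 TB/hour figure (decimal units assumed); note that as a rule of thumb a SLOG only ever buffers a few seconds of incoming sync writes, so at these rates the constraint is network and pool throughput, not ZIL capacity:

```python
# Convert the quoted 8K RedCode RAW 75 data rate into per-second terms.
tb_per_hour = 7.29                    # figure quoted in the post
gb_per_min = tb_per_hour * 1000 / 60  # 121.5 GB/min, matching the post
mb_per_s = gb_per_min * 1000 / 60     # sustained MB/s for one stream
print(round(gb_per_min, 1))           # 121.5
print(round(mb_per_s))                # 2025
print(round(3 * mb_per_s))            # 6075 MB/s for three concurrent streams
```

Roughly 6 GB/s for three streams is several times what even 10 GbE (~1,250 MB/s) or an 8-disk HDD pool can ingest, so larger ZIL disks would not change the outcome.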

Alright, great, so with the Evo SSDs in a mirror I'll only utilize 1 TB to keep optimal performance.

About the controllers, I just want to verify: is it 6 Gbit/s per channel on the LSI 9200-8i SAS2008, or in total? So I'm not shooting myself in the foot with a too-slow controller.

Uh oh, that sucks about the support for Nvidia. I read on the TrueNAS webpage for SCALE that it supports Alder Lake GPUs and GeForce 40xx GPUs, or is it just the applications for the GPUs that are supported? But if the applications are supported, wouldn't that mean there is driver support for these GPUs in FreeBSD? (I know I'm in the wrong subforum for the question... but thought I'd ask anyway; if that's not okay, please let me know and I'll ask in the SCALE subforum.)

Your answers are very appreciated!
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
The issue with consumer SSDs is that, generally, they suck (some more than others). Performance can fall off a cliff once the cache runs out, and that's not something that leads to good iSCSI performance.

I use 6 second-hand enterprise SSDs (SATA) in 3 × 2-way mirrors with an Optane SLOG as an iSCSI pool for my ESXi setup. The enterprise-grade SSDs provide consistent performance, whilst the SLOG improves pool performance, as I am using sync=always. This seems to work.
 