BUILD New 5028R Build - CPU and Array Recommendations

Joined
Jul 12, 2018
Messages
4
System: SuperMicro SuperStorage Server 5028R-E1CR12L

I read a post on these forums about the quiet operation of this system and acquired one on the cheap for $750.

CPU: Unsure

I'm eyeing a Xeon E5-1660 v4 for its 8 cores, but I'm a bit wary of the price-to-performance ratio.

RAM: 64GB DDR4 2133MHz ECC

The standard complement of RAM for this system. Unsure if I will need more.

Drives: 12 × 2TB WD Red

Enough capacity to cover the space I expect to need in the years to come.

Use Case:

This is going to be my bulk storage box and Plex media server. I want whatever CPU I pick to be able to handle Plex transcodes. I currently have around 2TB of media and 2TB of bulk documents and files, and this will also hold bulk VM storage. I'm not too set on array size, but I'd like to be able to saturate a 10 gig connection and get a decent amount of IOPS. I'm relatively new to storage, so I'm looking for pointers and suggestions. Thanks.
 
Joined
Dec 29, 2014
Messages
1,135
Are you going to be using iSCSI or NFS to share the VM storage? Are these production or lab VMs? Do any of them have a lot of disk I/O? If so, is it mostly read, write, or both? I will pass on the Plex questions, as that is not my area of expertise.

The basic hardware sounds OK, although you have not specified the drive controller, and that could make a huge difference. The different use cases may benefit from different pools built with different kinds of vdevs. Most people here seem to prefer mirrored vdevs for VM usage, but your more general storage would likely do well with RAIDZ2. There isn't complete agreement on the size of RAIDZ2 vdevs, but the general consensus is 6 or 8 drives per vdev. If you did a single pool of RAIDZ2, I would suggest 2 vdevs of 6 drives each, as in the sketch below. If you have the drive bays, I would always recommend putting a spare in the pool, just in case. Particularly if you are sharing the pool for VMs via NFS, you will want a SLOG (separate ZFS intent log on a REALLY FAST SSD) to get good performance. Oh, you also didn't mention what hypervisor you are using.
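For concreteness, those layouts would look roughly like this from the shell. This is an untested sketch; the pool name "tank" and the da* device names are placeholders for whatever your controller actually enumerates.

```
# Option A: one bulk pool made of two 6-drive RAIDZ2 vdevs
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# Add a hot spare if a bay is free (da12 is a placeholder)
zpool add tank spare da12

# Option B: mirrored pairs instead, the usual choice when VM IOPS matter
# zpool create vmpool mirror da0 da1 mirror da2 da3 mirror da4 da5
```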
 
Joined
Jul 12, 2018
Messages
4

I'll be using NFS via my 10 gig link. The hypervisor will be either VMware or Hyper-V once I finalize my VM needs. The controller is the onboard Broadcom 3008. This unit will mostly see infrequent bulk loads of data to VMs, but when I do load data I'd like it to be speedy.
 
Joined
Dec 29, 2014
Messages
1,135

It is well documented in the forums (and I can personally attest to it) that ESXi does synchronous writes over NFS. If you don't have some kind of fast device (SAS SSD, NVMe) as a SLOG (dedicated ZFS intent log), you will be very disappointed in the NFS write performance. With enough RAM and enough physical drives (your drive controller is good), you will be quite happy with the read performance. I was able to get sustained reads in the 7-8G range on my old FreeNAS box (two generations ago for me), which was an HP DL380 G5. Here is a good used one https://www.ebay.com/itm/HUSSL4010B...5-SAS-SSD-SOLID-STATE-HARD-DRIVE/263352574895 for $64 on eBay. If you have the bays and controller ports, get two and stripe them; then you will be very happy with the write performance.
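Attaching the SLOG is quick once the pool exists. Another hedged sketch; "tank" and the device names are placeholders for the SSD(s) you end up installing.

```
# Single SLOG device:
zpool add tank log da13

# Two devices striped (as suggested above) for more write throughput:
# zpool add tank log da13 da14

# Watch per-vdev traffic to confirm sync writes are landing on the log:
zpool iostat -v tank 5
```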
 
Joined
Jul 12, 2018
Messages
4

Can I use an enterprise-grade NVMe SSD? Do I need an L2ARC with my current configuration? I'm thinking of striping/mirroring two RAIDZ2 vdevs.
 
Joined
Dec 29, 2014
Messages
1,135

Absolutely on the NVMe question. My primary FreeNAS has an Intel Optane 900P and it is really fast. It doesn't have the power-failure protection that some enterprise SSDs like the S3700 do, but those were outside my budget. The 900P was under $400 and works great for me; I am willing to live with the risk. I am not entirely sure how to interpret your stripe/mirror statement. My primary pool has 2 RAIDZ2 vdevs in it, each 8 × 1TB 7.2k SATA drives, and the pool has a spare. There are differing views as to whether 6 or 8 physical drives is the optimal number for a RAIDZ2 vdev. I picked 8 because I found the risk acceptable with a spare, and I have ~75% of the physical storage space available, as opposed to ~66% with 6 drives per RAIDZ2 vdev. If what I am describing is what you meant, great! If not, please elaborate.
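The capacity math is just (drives - parity) / drives, and my layout would be built roughly like this. Sketch only; "tank" and the da* names are placeholders.

```
# RAIDZ2 spends 2 drives per vdev on parity:
#   8-wide: (8 - 2) / 8 = 75% of raw capacity usable
#   6-wide: (6 - 2) / 6 ~ 66% of raw capacity usable

# Two 8-drive RAIDZ2 vdevs plus a hot spare:
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
    raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
    spare da16
```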

EDIT: I would be inclined to say no on the L2ARC. I have seen case studies where an L2ARC makes a huge difference, but those involve repetitively reading the same data over and over (like imaging the hard drive on every new system). Otherwise, I think you are better off with more RAM for read performance and the SLOG for NFS write performance.
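If you want to experiment anyway, an L2ARC is easy to add and remove on a live pool. Hedged sketch; "nvd0" is a placeholder FreeBSD NVMe device name.

```
# Attach the NVMe device as a cache (L2ARC) vdev:
zpool add tank cache nvd0

# Back it out later if the hit rate doesn't justify it:
zpool remove tank nvd0
```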
 
Joined
Jul 12, 2018
Messages
4

What CPU do you recommend? I'm eyeing at least an 8-core for Plex purposes; maybe 4 cores for Plex and 4 for FreeNAS?
 
Joined
Dec 29, 2014
Messages
1,135

I only use my FreeNAS for storage, so the CPU isn't a huge deal for me. I think Samba (CIFS) is still single-threaded, so a higher clock speed helps there. Sorry, but I don't have any experience with Plex, so I can't comment on that.
 