Need some advice on moving FN box to different HW


ndboost

Explorer
Joined
Mar 17, 2013
Messages
78
Hey all, I need some advice/info/thoughts/comments/gripes on moving my FreeNAS box to hardware with a bigger memory footprint...

So my current build is below:

  • NORCO RPC-4224 24-bay 4U rackmount case
  • Supermicro X9SCM-F motherboard
  • 32GB DDR3 1333 memory
  • Intel Xeon E3-1220 v2 @ 3.10GHz CPU
  • 3x IBM M1015 in IT mode
  • Dual-port Intel 10G NIC, connected via LACP to a 10G switch
  • SSD zpool: 3x 4-disk RAIDZ1 vdevs (12 SSDs total), Samsung 840/850 Pro 128GB, ~700GB thin provisioned (280GB actually used) / 1.1TB usable
  • 24x 2TB Hitachi spinners (in a NetApp DS4243 DAS shelf), 25TB consumed / 36TB usable (capacity math sketched just below)
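
A quick back-of-envelope check on those usable figures. This is a sketch, not exact ZFS accounting: real usable space is lower after padding, metadata, and the usual ~80% fill ceiling, and the 3x 8-disk RAIDZ2 layout for the spinner pool is my assumption, since that pool's vdev layout isn't stated.

```python
def raidz_usable_tb(vdevs, disks_per_vdev, parity, disk_tb):
    """Usable TB for a pool of identical RAIDZ vdevs: data disks x disk size."""
    return vdevs * (disks_per_vdev - parity) * disk_tb

# SSD pool: 3 vdevs of 4-disk RAIDZ1 on 128GB SSDs
print(raidz_usable_tb(3, 4, 1, 0.128))  # ~1.15 TB, matching the ~1.1TB usable

# Spinner pool: 24x 2TB; assumed to be 3 vdevs of 8-disk RAIDZ2
print(raidz_usable_tb(3, 8, 2, 2.0))    # 36.0 TB, matching the stated 36TB usable
```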
Usage for the NAS is:
  • Homelab, but production use at home
  • ESXi 6.5 VMDK storage (SSD zpool): 20 VMs, a mix of Windows and Linux with relatively low IOPS requirements, mostly just Plex/SAB/Sonarr/Radarr/Deluge
  • The ESXi hosts and NAS are all connected via 10G DAC or fiber to a US-16-XG switch
  • General media/backups/slower-IOPS VMDK storage on the DS4243 (although no VMDKs are provisioned on it yet)
  • No plugins or jails; the NAS is used strictly as AD-connected storage
  • AD/CIFS/AFP/FTP/SMB/iSCSI/NFSv3 services used
I am getting decent IOPS and writes, but I noticed my ARC hit ratio is only 40-50%.
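
For anyone who wants to check that number themselves, here is a minimal sketch that computes the lifetime ARC hit ratio from the FreeBSD arcstats counters. It assumes a FreeBSD/FreeNAS host where the kstat.zfs.misc.arcstats sysctls are present, and note that lifetime counters will understate steady-state behavior right after a reboot:

```python
import subprocess

def sysctl_u64(oid: str) -> int:
    """Read a numeric sysctl value, e.g. kstat.zfs.misc.arcstats.hits."""
    return int(subprocess.check_output(["sysctl", "-n", oid]).strip())

hits = sysctl_u64("kstat.zfs.misc.arcstats.hits")
misses = sysctl_u64("kstat.zfs.misc.arcstats.misses")
print(f"lifetime ARC hit ratio: {100.0 * hits / (hits + misses):.1f}% "
      f"({hits} hits, {misses} misses since boot)")
```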

My New Build
  • DL380 G6
  • 96-144GB DDR3 ECC 1333 RAM
  • 1 M1015 in IT Mode
  • 1 Intel 10G NIC
  • Use the 8 ports from the onboard PERC to build out a new SSD zpool of ~500GB SSDs in two 4-disk RAIDZ1 vdevs (see the capacity sketch after this list)
  • Use the M1015 for the 24x 2TB Hitachi spinners (in the NetApp DS4243), 25TB consumed / 36TB usable
  • I will be getting another identical DS4243 in the near future and will daisy-chain them together, so my zpool03 storage will effectively double to 72TB
  • I also have an HP SAS expander sitting here that I could use instead; I'm just not sure how to mount it in the DL380, since it's only 1U and has just two PCIe slots
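
The same back-of-envelope math applied to the proposed pools (the two 4-disk RAIDZ1 SSD vdevs come from the list above; the spinner layout is again my assumption):

```python
def raidz_usable_tb(vdevs, disks_per_vdev, parity, disk_tb):
    # usable TB = vdev count x data disks per vdev x disk size
    return vdevs * (disks_per_vdev - parity) * disk_tb

print(raidz_usable_tb(2, 4, 1, 0.5))      # 3.0 TB on the proposed SSD pool
print(2 * raidz_usable_tb(3, 8, 2, 2.0))  # 72 TB once the second DS4243 is chained on
```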

Time-limited graphs - I just bounced the box because of a power outage, so the stats aren't more than a few days old, unfortunately.


My Thoughts
I have a spare DL380 G6 sitting here with 96GB of memory and dual Xeon X5650 (?) processors; I could possibly add another 48GB if needed, bringing the total RAM to 144GB. With my ARC hit ratio at 40-50%, I began thinking about increasing the memory footprint of my NAS and realized I had this spare DL380 sitting here powered off. Originally I was going to use it as a third ESXi host, but I have enough compute as it is. I know the recommendation is to add memory before adding a ZIL (SLOG) device or L2ARC. I don't have any direct complaints about performance, but I'm always trying to get more out of my setup.
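
To put rough numbers on that memory-first rule of thumb, here is a sketch. Both figures in it are assumptions that vary by ZFS version: that FreeBSD's stock tuning lets the ARC grow to roughly all RAM minus 1GB, and that each L2ARC record pins on the order of ~100 bytes of header in the ARC (commonly cited figures range from ~70 to ~180 bytes):

```python
GIB = 2**30

def default_arc_max_gib(ram_gib):
    # assumption: stock FreeBSD ARC ceiling of roughly RAM minus 1GB
    return max(ram_gib - 1, 0)

for ram in (32, 96, 144):
    print(f"{ram:>3} GB RAM -> ARC can grow to ~{default_arc_max_gib(ram)} GB")

# RAM cost of indexing a hypothetical 500GB L2ARC at an average 64KB
# record size, assuming ~100 bytes of ARC header per L2ARC record
records = 500 * GIB // (64 * 2**10)
print(f"500GB L2ARC @ 64KB records -> ~{records * 100 / GIB:.2f} GB of ARC headers")
```

Which is the usual argument for growing RAM from 32GB to 96GB+ first: it enlarges the cache that actually holds data, rather than spending RAM on bookkeeping for a second-level cache.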

So what are your thoughts? Should I swap over to the DL380, or leave the NAS in place and be happy with what I've got?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I think that you are severely limiting yourself by moving to a 1U server. Unless there is some very compelling reason for a 1U (or 2U) server, you should use 3U or 4U servers, because most (Supermicro, anyhow) 3U and 4U servers have more expansion slots available.
If you are only looking to run VMs on a system with only some sort of SAN access to storage, great, use a blade center or whatever makes you happy; but for your storage controller, you need interfaces to connect the drives. If you go to only one M1015 in IT mode to connect your drives, you limit your data rate to the drives to the max data rate of that one interface. Also, I wouldn't count on being able to flash an integrated storage controller, and if you can't flash it to IT mode, don't anticipate using it with FreeNAS.
I would not choose the hardware you are looking at as a FreeNAS disk controller, although it might be super for running ESXi.
I have had situations where I needed to use 1U or 2U servers to fit all that processing into the available rack space (at work), but I don't have those space constraints at home, so I always go with the thickest chassis so I have more room to cram things into it.
More connectivity, more capability.
I also hate that high-pitched whine from the tiny little fans.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS. I would think the simple solution is to just put a new(er) system board in the NORCO chassis you already have.
 

ndboost

Explorer
Joined
Mar 17, 2013
Messages
78
I think that you are severely limiting yourself by moving to a 1U server. Unless there is some very compelling reason for a 1U (or 2U) server, you should use 3U or 4U servers, because most (Supermicro, anyhow) 3U and 4U servers have more expansion slots available.
If you are only looking to run VMs on a system with only some sort of SAN access to storage, great, use a blade center or whatever makes you happy; but for your storage controller, you need interfaces to connect the drives. If you go to only one M1015 in IT mode to connect your drives, you limit your data rate to the drives to the max data rate of that one interface. Also, I wouldn't count on being able to flash an integrated storage controller, and if you can't flash it to IT mode, don't anticipate using it with FreeNAS.
I would not choose the hardware you are looking at as a FreeNAS disk controller, although it might be super for running ESXi.
I have had situations where I needed to use 1U or 2U servers to fit all that processing into the available rack space (at work), but I don't have those space constraints at home, so I always go with the thickest chassis so I have more room to cram things into it.
More connectivity, more capability.
I also hate that high-pitched whine from the tiny little fans.

That's a very good point; I'm not even sure if the PERC 6i (?) in the DL380 can be flashed or not. I never thought a single M1015 would be I/O constrained, but that's another very good point. Right now I have my 12-SSD zpool split across two M1015s. I already have two DL380 G7s running ESXi, so the whine is not that bad with the side panels on my rack, plus I'm used to it and usually have headphones on anyway in the office, jamming out to music lol.

PS. I would think the simple solution is to just put a new(er) system board in the NORCO chassis you already have.
This then becomes a $$ constraint: the only reason I was considering the DL380 as a FreeNAS box is that I have this hardware lying here not being utilized. Right now it's about $900-1k to swap to a new board and CPU combo that will support the additional memory but still fit the NORCO RPC-4224 case. The X9SCM-F (Intel C204 PCH chipset) maxes out at 32GB of memory (which you should know given your current builds), and unfortunately I've already hit that cap.

Thanks for your input and thoughts. Would you agree that I need more memory, given my current build and ARC hit ratio?
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
...the only reason I was considering the DL380 as a FreeNAS box is that I have this hardware lying here not being utilized.
Selling these off on eBay might recover some of the cost of your upgrade to a better board.
 

ndboost

Explorer
Joined
Mar 17, 2013
Messages
78
Selling these off on eBay might recover some of the cost of your upgrade to a better board.
That's not a bad idea, actually. I'd really only be interested in selling it off locally (I hate dealing with shipping servers). Thanks for the idea.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
I actually have a board similar to yours, the X9SCL-F. Currently I am using it as my pfSense router, but when my current NAS fills up, I intend to buy a 3U or 4U chassis, put the X9SCL-F into it, and use my current NAS board as my pfSense router.
 

ndboost

Explorer
Joined
Mar 17, 2013
Messages
78
I actually have a board similar to yours, the X9SCL-F. Currently I am using it as my pfSense router, but when my current NAS fills up, I intend to buy a 3U or 4U chassis, put the X9SCL-F into it, and use my current NAS board as my pfSense router.
I think the boards are nearly identical; I just have a few more PCIe slots than you. Both have the 32GB memory constraint, though :(.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
This then becomes a $$ constraint: the only reason I was considering the DL380 as a FreeNAS box is that I have this hardware lying here not being utilized. Right now it's about $900-1k to swap to a new board and CPU combo that will support the additional memory but still fit the NORCO RPC-4224 case. The X9SCM-F (Intel C204 PCH chipset) maxes out at 32GB of memory (which you should know given your current builds), and unfortunately I've already hit that cap.
Yes, I have the same system board in both of my systems and I also feel the memory crunch. The cost of a new system board, processor, and memory is the holdup for me moving to the Xeon E5 system I would like to have. I will have to make do with what I have for another year or so, but even though I have not updated the data in my signature, I recently purchased more hard drives and a 24-bay Supermicro chassis to upgrade my storage. For the way I use my systems I am alright on the compute side for now, but I would likely see better performance with more modern hardware.
As for the theoretical limitation on bandwidth, take a look at this article; it makes some comparisons that might help explain what I was thinking with regard to going from the three SAS controllers you are using now to just one:
http://www.tested.com/tech/457440-theoretical-vs-actual-bandwidth-pci-express-and-thunderbolt/
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Most of the LSI 2008 controllers use eight PCIe 2.0 lanes. Although they have 8x 6Gbps SAS lanes to the drives across all their ports, 48Gbps total, they only have 32Gbps of PCIe bandwidth.

Still, that's 3GB/s+ :)

I'd be pretty happy to have a system where that was the bottleneck.
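
The arithmetic behind those figures, for anyone following along (assuming PCIe 2.0's 5 GT/s per lane with 8b/10b encoding, which is standard for that generation):

```python
sas_gbps = 8 * 6             # eight 6Gbps SAS lanes toward the drives: 48 Gbps
pcie_gbps = 8 * 5 * 8 / 10   # eight PCIe 2.0 lanes at 5 GT/s, 8b/10b: 32 Gbps usable

print(f"SAS side : {sas_gbps} Gbps")
print(f"PCIe side: {pcie_gbps:.0f} Gbps = {pcie_gbps / 8:.1f} GB/s")
# The host link, not the SAS ports, is the ceiling: ~4 GB/s raw, in line
# with the "3GB/s+" of real-world throughput mentioned above.
```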
 