Migrated my FreeNAS from a VM to a Dell R510. Will dual CPUs make a difference or just waste energy?


Roveer

Dabbler
Joined
Feb 22, 2018
Messages
40
I migrated my FreeNAS installation from a VM with a wonky SAS cable hanging out of the box, to an old PC with an 8-drive cage, and finally to a nice clean Dell R510. I'm using an IBM M1015 flashed to IT mode so it doesn't impose any RAID functions.

Migration went pretty smoothly.

I've got 3 questions.

1. My FreeNAS boot device is a single internal 16GB USB stick, and I think I want to go to dual USB sticks for some redundancy. Question: in order to go from one USB stick to two, do I have to do a fresh FreeNAS install? I'm thinking I do. Not a big problem if I do.

2. The R510 has dual Xeon X5560 CPUs, but the FreeNAS screen is only showing one. Does FreeNAS make use of the second CPU? If not, shouldn't I remove it to save energy? This is a dedicated box for holding backups; I won't be running any other apps or VMs on it.

3. I'm getting close to 500MB/s on SMB copies across the 10Gb link, but I see the drives surging: writes to all the drives seem to come in bursts, all at the same time. I'm thinking more memory would smooth this out? Right now I've got 16GB but I'm thinking about 32GB. Is there anything I can look at in Reporting that would show a memory bottleneck?

Thanks,

Roveer
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
1. My FreeNAS boot device is a single internal 16GB USB stick, and I think I want to go to dual USB sticks for some redundancy. Question: in order to go from one USB stick to two, do I have to do a fresh FreeNAS install? I'm thinking I do. Not a big problem if I do.
No.

5.3.1. Mirroring the Boot Device
http://doc.freenas.org/11/system.html#mirroring-the-boot-device
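
Under the hood, the GUI's Attach is just a ZFS attach on the boot pool. A minimal sketch of what happens, assuming the boot pool has its default name (freenas-boot); the device names below are hypothetical, and the GUI is preferable since it handles the partitioning for you:

Code:
# Show the current boot pool layout first.
zpool status freenas-boot

# Attach the new stick's partition to the existing boot device; this
# converts the single-disk pool into a two-way mirror and resilvers.
zpool attach freenas-boot da0p2 da1p2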
2. The R510 has dual Xeon X5560 CPUs, but the FreeNAS screen is only showing one. Does FreeNAS make use of the second CPU? If not, shouldn't I remove it to save energy? This is a dedicated box for holding backups; I won't be running any other apps or VMs on it.
Sorry, I didn't skip this on purpose. Yes, the system will use both CPUs. If you remove the second one, some of the PCIe slots and memory slots will not work.
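
If you want to confirm FreeBSD sees both sockets, a quick check from the shell (assuming two X5560s at 4 cores each plus Hyper-Threading, this should report 16 logical CPUs):

Code:
sysctl hw.model   # CPU model string
sysctl hw.ncpu    # logical CPU count; 16 expected for dual 4-core HT Xeons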
3. I'm getting close to 500MB/s on SMB copies across the 10Gb link, but I see the drives surging: writes to all the drives seem to come in bursts, all at the same time. I'm thinking more memory would smooth this out? Right now I've got 16GB but I'm thinking about 32GB. Is there anything I can look at in Reporting that would show a memory bottleneck?
The surges are most likely transaction groups being committed to disk.
Whether more memory would help depends on a few things. It might help if you described your hardware in more detail.
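
You can watch the transaction group behavior directly from the shell. By default ZFS commits a transaction group every few seconds, which is exactly the burst pattern you describe; the ARC counters will also show how much of the 16GB is actually being used for caching:

Code:
# Seconds between transaction group commits (default 5).
sysctl vfs.zfs.txg.timeout

# Per-vdev activity during a copy; the write bursts show up here.
zpool iostat -v 1

# Current ARC size vs. its cap, for the memory question.
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max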


Roveer

Dabbler
Joined
Feb 22, 2018
Messages
40

Thanks for the reply. You got me. I was being lazy and didn't bother to search or consult the manual. Guess it was just a long day.

1. Read the section on USB boot. Waiting for my devices to arrive from Amazon.

2. Based on that, I'm not going to monkey with the CPUs. I'm using two PCIe slots, so I just don't need the trouble. I'm going to throw my Kill-A-Watt on it to see exactly how much energy I'm using.

3. Hardware is a Dell R510 with dual Xeon X5660 CPUs, 12 bays, and 16GB of memory (4x4GB). Eight 2TB drives in a RAIDZ2 configuration, the onboard 1Gb NIC to the network, and a 10Gb Mellanox 19x to my other server for high-speed backup transfers.

Here's something worth mentioning: I'm using an IBM M1015 HBA, but I've got it in a PCIe x4 slot, because this server only has one x8 slot and that was needed for the 10Gb NIC. Is this a possible culprit?

On the 1Gb link I'm getting a steady 112-115MB/s over SMB.
On the 10Gb link I'm getting 400-500MB/s over SMB.

Anything else I can provide? It's quite possible this is totally normal, but it seems like it's bottlenecking. I can post a video if that will help.
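
If it helps, I can also check what link width the HBA actually negotiated (assuming the M1015 attaches via the mps driver, which I understand is typical for a card cross-flashed to IT mode):

Code:
# List PCI devices with capabilities; the PCI-Express line shows the
# negotiated vs. maximum link width, e.g. "link x4(x8)".
pciconf -lvc | grep -A 8 mps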
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm using an IBM M1015 HBA, but I've got it in a PCIe x4 slot, because this server only has one x8 slot and that was needed for the 10Gb NIC. Is this a possible culprit?
The x4 slot just reduces the available bandwidth, but with only 8 mechanical drives there isn't enough throughput for it to be an issue.
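
Back-of-the-envelope numbers, assuming a PCIe 2.0 slot and typical 7200rpm sequential speeds, plus a way to watch the disks live:

Code:
# PCIe 2.0 x4 : 4 lanes x ~500 MB/s  = ~2000 MB/s slot bandwidth
# 8 x 7200rpm : 8 drives x ~150 MB/s = ~1200 MB/s aggregate
# so the slot still has headroom. Watch the physical disks during a
# copy to see how close each one runs to its sequential ceiling:
gstat -p -I 1s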
On the 10Gb link I'm getting 400-500MB/s over SMB.
That speed limit comes from the low number of drives. If you want more speed, you need more drives; specifically, you need more vdevs.
On the 1Gb link I'm getting a steady 112-115MB/s over SMB.
That is maxing out the line speed. No problems there.
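
To make the vdev point concrete: ZFS stripes writes across vdevs, so performance (especially random I/O) scales with vdev count rather than raw drive count. A hypothetical layout if all 12 bays were populated, with made-up device names:

Code:
# Two 6-disk RAIDZ2 vdevs instead of one wide one; writes stripe
# across both vdevs.
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11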
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
3. Hardware is a Dell R510 with dual Xeon X5660 CPUs, 12 bays, and 16GB of memory (4x4GB). Eight 2TB drives in a RAIDZ2 configuration, the onboard 1Gb NIC to the network, and a 10Gb Mellanox 19x to my other server for high-speed backup transfers.

On the 1Gb link I'm getting a steady 112-115MB/s over SMB.
On the 10Gb link I'm getting 400-500MB/s over SMB.

In this calomel.org article, 429MB/s writes and 488MB/s reads are reported for six Western Digital Black 4TB 7200rpm SATA drives in RAIDZ2, benchmarked locally using Bonnie++. That's not far from your result, though you didn't mention the make and model of your HDDs or the exact benchmarking method.

Memory bandwidth is usually much higher, on the order of tens of GB/s per CPU for Nehalem-EP CPUs.
https://ark.intel.com/products/37109/Intel-Xeon-Processor-X5560-8M-Cache-2_80-GHz-6_40-GTs-Intel-QPI
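
If you want a number directly comparable to the calomel.org figures, a local Bonnie++ run takes SMB and the network out of the picture. A sketch, assuming a scratch dataset at /mnt/tank/bench (hypothetical path); the size should be at least twice RAM so the ARC can't cache the whole test:

Code:
# Local sequential benchmark: -d test directory, -s test size,
# -u user to run as (required when running as root).
bonnie++ -d /mnt/tank/bench -s 32g -u root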
 