Asrock C2550D4I with FreeNAS 9.2


Madd Martigan

Dabbler
Joined
Jan 31, 2014
Messages
11
I wanted to share with the community my experience with the new ASRock C2550D4I motherboard running FreeNAS 9.2.

First, the hardware:

Lian Li Mini ITX Tower (6 3.5" drive bays and one 5.25" drive bay which I populated with a 4x2.5" HDD fixed bracket that is vented in the front)
RAIDMax 730W PSU
Asrock C2550D4I CPU/Motherboard Mini ITX
16GB (2x8GB) Kingston 240 pin DDR3 RAM (KVR16N11H/8)
4x Western Digital Black 2TB SATA 3.5" HDD
2x Western Digital Red 2TB NAS SATA 3.5" HDD
3x 500GB Hitachi/Toshiba 2.5" HDD

I realize that I have a mishmash of drives, but for what I want/need this will work fine. I am running this as an iSCSI target for my VMware ESXi host. I have configured the 3.5" drives as a single RAIDZ disk group with just over 8TB usable in the ZVOL, and the 2.5" drives as a single RAIDZ disk group with around 850GB in the ZVOL.
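For reference, I built everything through the FreeNAS volume manager rather than the shell, but my understanding is that what it does under the hood is roughly equivalent to something like this (pool names, device names, and sizes below are just placeholders, not my exact setup):

Code:
# Rough equivalent of the GUI steps (placeholder pool/device names and sizes)
# 6-disk RAIDZ pool for the 3.5" drives, with a large zvol carved out for iSCSI
zpool create tank raidz da0 da1 da2 da3 da4 da5
zfs create -V 8T tank/iscsi0
# 3-disk RAIDZ pool for the 2.5" drives, with a smaller zvol
zpool create tank2 raidz da6 da7 da8
zfs create -V 850G tank2/iscsi1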

I have the network interfaces configured on two different VLANs but connected to a single iSCSI portal. On the ESXi host side I have two vSwitches set up, each with a single NIC assigned to its VMkernel port, and both are bound to the software iSCSI adapter. (I am using an Intel NIC on that side too. I tried a Broadcom dual-port NIC with its hardware-assisted iSCSI capabilities, but I got purple screens every time I tried, so I gave up.) Both ends are configured for 9000-byte jumbo packets, and I am using Round Robin for path management, so I am getting active I/O on all assigned paths.
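In case it helps anyone reproduce this, I did the ESXi side through the vSphere client, but I believe the equivalent esxcli commands (5.x syntax) look roughly like the following; the vSwitch/vmk names and the naa device ID are placeholders, not my actual ones:

Code:
# Jumbo frames on the iSCSI vSwitch and its VMkernel port (names are examples)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# Round Robin pathing on the FreeNAS-backed LUN (device ID is a placeholder)
esxcli storage nmp device set --device=naa.600000000000000000000000000000 --psp=VMW_PSP_RR
esxcli storage nmp device list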

All of my testing/migration to this configuration so far has shown about 500Mbps of utilization on each of the NICs on each end, and right around 95MB/s read and write overall. However, the testing/migration basically involves migrating from local SATA spinning disk to the FreeNAS server.

The entire configuration has proven to be very functional. CPU on the FreeNAS server seems to be hovering right around 45% to 55%.

Overall it seems to be a very good performer. Just wanted to post it up here so that everyone knows that it does work fine.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Thanks for sharing! Any chance you can do some CIFS speed tests to see how fast the system goes and what the CPU utilization looks like with Samba?
 

Madd Martigan

Dabbler
Joined
Jan 31, 2014
Messages
11
Unfortunately I don't have any disks in it right now that aren't iSCSI block devices. One of the things I was considering testing was connecting an eSATA drive to the machine, so I may try to dig up my eSATA drive. I know it won't be as fast as the RAIDZ set, but it should be fast enough to test. I'll get back to you.
 

morxy49

Contributor
Joined
Jan 12, 2014
Messages
145
Really great to hear some reviews on this mobo! I'm going to build a NAS with the same mobo soon, so I hope it's stable :p

EDIT: Would you in any case recommend the more expensive C2750D4I? If I run a Plex server, will this cheaper one run fine, or is the more expensive version what I need?
 

Madd Martigan

Dabbler
Joined
Jan 31, 2014
Messages
11
Last thing first: I did some checking, and the transcoder for Plex Media Server is multithreaded. If I were going to use the Plex plugin, I would probably buy the C2750D4I since it has more cores. Also, since I don't think that config would rely on CIFS (which is single-threaded), and from what I've read ZFS itself is multithreaded, that should be a good combo. I know the Avoton doesn't support all of the same things that the Core i3/i5/i7 or the E3 and E5 do, but I would still think it would be worthwhile.

On to CIFS performance.

I connected a 2.5" 500GB drive to my server via eSATA, since my RAIDZ drives are allocated for iSCSI block use for an ESXi host.

I used a batch file to copy files from a Windows Home Server 2011 VM to the NAS server's CIFS share, and here are the results that I got:

Code:
------------------------------------------------------------------------------

               Total    Copied   Skipped  Mismatch    FAILED    Extras
    Dirs :        62        62         0         0         0         0
   Files :       496       496         0         0         0         0
   Bytes :   2.165 g   2.165 g         0         0         0         0
   Times :   0:01:20   0:01:14                       0:00:00   0:00:05

   Speed :            31199491 Bytes/sec.
   Speed :            1785.249 MegaBytes/min.

   Ended : Sun Feb 09 10:12:31 2014

------------------------------------------------------------------------------

               Total    Copied   Skipped  Mismatch    FAILED    Extras
    Dirs :        46        46         0         0         0         0
   Files :      8294      8294         0         0         0         0
   Bytes :  22.293 g  22.293 g         0         0         0         0
   Times :   0:23:07   0:21:32                       0:00:00   0:01:34

   Speed :            18521280 Bytes/sec.
   Speed :            1059.796 MegaBytes/min.

   Ended : Sun Feb 09 10:35:38 2014
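For the record, those summaries are robocopy output; the batch file is basically one line like this per source folder (the paths here are made up, not my real ones):

Code:
REM Mirror a source tree to the FreeNAS CIFS share (placeholder paths)
robocopy D:\Shares\Videos \\freenas\testshare\Videos /E /NP /R:1 /W:1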

A quick note about my network config: it doesn't affect CIFS performance, but I added a dual-port Intel NIC and have the ports configured on additional VLANs. So, for iSCSI access via the single portal, I now have four NICs on four different VLANs providing target access for a VMware ESXi 5.5 host, which has four dedicated NICs on their own VLANs with Round Robin pathing. I have turned on 9000-byte frames and flow control on the switch. iSCSI access to the VMFS seems to be a solid 60-65MB/s, but I have seen bursts up to around 200MB/s.

I may try adding an SSD to the FreeNAS box, but I'll have to read about how that actually works. I have one free disk slot internally that might help with such a thing. I'm not sure how that works with iSCSI and ZFS though. Hopefully someone who is interested will reply.
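From the little reading I've done so far, it sounds like the SSD would get attached to an existing pool either as an L2ARC read cache or as a dedicated log (SLOG) device for sync writes, which I gather is the part that matters for iSCSI. If I understand correctly, the underlying commands would be along these lines (pool and device names are placeholders, and I'd do it through the GUI anyway):

Code:
# Attach the SSD as an L2ARC read cache device
zpool add tank cache da9
# ...or attach it as a dedicated log (SLOG) device for synchronous writes
zpool add tank log da9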
 

Starpulkka

Contributor
Joined
Apr 9, 2013
Messages
179
When you scrub, could you post the scrub speeds too? I keep doubting myself that I'm mixing up bits and bytes, but I can't get it out of my mind that those transfers seem kind of slow.
But thanks for the information, as this Avoton thing is a hot topic around the coffee table at the moment.
 

Madd Martigan

Dabbler
Joined
Jan 31, 2014
Messages
11
OK, so I'm fairly new to all of the other things that FreeNAS offers. I'm not familiar with the scrub process, since my experience has been limited to making it an iSCSI block device for ESXi. Give me some insight on that process and I'll see what I can do.

As for the speed, remember, I'm running a 500GB 5400rpm 2.5" drive through an eSATA enclosure, and it's probably running at SATA2 at best.
 

Starpulkka

Contributor
Joined
Apr 9, 2013
Messages
179
History
Old hardware RAID cards had a scrub command so the RAID card would check your HDDs for parity errors. If you never scrubbed a hardware RAID, the card would not fix bit-flipped data on the disks, so after going a long time without a scrub, even on RAID5 you might end up with uncorrectable errors. But that's history...

Today
Software RAIDZ checksums your data every time you read it, but if you don't read all of your data over the course of a year, it can't find and fix errors in the sections of the HDDs you never read. A scrub, however, reads all of the data, checks it for errors, and corrects them wherever parity data is available, for example on RAIDZ1. Where there is no data, the scrub obviously has nothing to check. (Of course you can create extra redundancy on a plain ZFS dataset with the copies=3 option if you want, but that won't protect you from a full HDD failure.) That's why RAIDZ2 is usually the smart option: you can lose two HDDs in the pool at the same time and still get decent speed (RAIDZ1 is of course faster than RAIDZ2). As for scrub frequency, I think once a month is OK; there's no need to scrub more often, it just wears your HDDs.
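If you want to try one by hand and watch the speed, it is just these two commands (put your own pool name in place of tank):

Code:
# Start a scrub on the pool, then check its progress and speed
zpool scrub tank
zpool status tank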

Edit:
As for the iSCSI block device for ESXi, I'm not going to say anything, except that I would not dare to scrub if I were using iSCSI. It might be better not to scrub at all; I think ESXi or something has an fsck-type tool for checking the drive. That's why you have not done scrubs yet: it's dangerous if you get a hiccup.

Command
Code:
zpool status

Shows when your last scrub occurred and whether it found errors.

http://doc.freenas.org/index.php/ZFS_Scrubs

Edit: So I'm not interested in your scrub speeds anymore. :)
 

Madd Martigan

Dabbler
Joined
Jan 31, 2014
Messages
11
So, I decided to revisit this because I finally have a form of backup in place that matters to me. Anyway, it appears that in v9.2 you get automatically scheduled ZFS scrubs at midnight every Sunday. I ran 'zpool status' and it doesn't show any detected errors on any of my volumes (2 zvols for iSCSI and 2 datasets for CIFS). However, it doesn't show when a scrub last ran. I'm willing to run a scrub manually if my output looks like it's actually happening against the zvols. Here is my output:

Code:
zpool status
  pool: CIFS-4TB
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        CIFS-4TB                                      ONLINE       0     0     0
          gptid/3e504f53-9526-11e3-9172-001b785cf1b4  ONLINE       0     0     0

errors: No known data errors

  pool: CIFS-500GB
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        CIFS-500GB                                    ONLINE       0     0     0
          gptid/403683b8-9197-11e3-bf83-001b785cf1b4  ONLINE       0     0     0

errors: No known data errors

  pool: R5-9TB-WD
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        R5-9TB-WD                                       ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/fd3b75a9-8a11-11e3-b4ef-bc5ff4ee38e0  ONLINE       0     0     0
            gptid/fd906337-8a11-11e3-b4ef-bc5ff4ee38e0  ONLINE       0     0     0
            gptid/fe5d466e-8a11-11e3-b4ef-bc5ff4ee38e0  ONLINE       0     0     0
            gptid/ff31c827-8a11-11e3-b4ef-bc5ff4ee38e0  ONLINE       0     0     0
            gptid/ff95196e-8a11-11e3-b4ef-bc5ff4ee38e0  ONLINE       0     0     0
            gptid/ffead647-8a11-11e3-b4ef-bc5ff4ee38e0  ONLINE       0     0     0

errors: No known data errors

  pool: R5-LT-900GB
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        R5-LT-900GB                                     ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/58725b47-7ba2-11e3-b256-94de80ae6ef9  ONLINE       0     0     0
            gptid/58f74abf-7ba2-11e3-b256-94de80ae6ef9  ONLINE       0     0     0
            gptid/597967d3-7ba2-11e3-b256-94de80ae6ef9  ONLINE       0     0     0

errors: No known data errors
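One other thing I'll try while I'm at it: I believe the pool history should show whether a scheduled scrub has ever actually been kicked off, so something like this (against one of the pools above) ought to confirm it either way:

Code:
# List past pool operations and filter for scrub entries
zpool history R5-9TB-WD | grep scrub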
 

Cyanon

Dabbler
Joined
Jan 14, 2014
Messages
10
I upgraded my NAS this week to the ASRock C2750D4I (I ended up going for the 8-core for when I get around to setting up more jails, plus the fact that it has a warranty that is 2 years longer).
There were no issues transferring my HDD array (6x4TB Z2) and flash drive (9.2.0 RELEASE) straight over; it booted up right away and recognized everything just fine.

I haven't really tested anything on it other than transferring ~1TB from a Windows machine to the NAS; it averaged about 80-90MB/s. I'd be glad to run any (non-destructive) benchmark tests if anyone has some ideas.
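The simplest non-destructive test I can think of running on my own is a sequential dd write and read against a scratch file on the pool, something like this (the pool path is a placeholder for mine, and zeros will compress, so the numbers are only a rough ceiling):

Code:
# Sequential write test to a scratch file, then read it back, then clean up
dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=8192
dd if=/mnt/tank/ddtest of=/dev/null bs=1m
rm /mnt/tank/ddtest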

I was wondering: when 16GB DIMMs become available at a decent price, is anyone aware of any issues with running two 16GB DIMMs alongside my current 8GB DIMMs?

Temps so far: CPU has been idling at 45C, Hard drives at 28-35C
Just using the 2 case fans that came with the case.

Here's my NAS:

ASRock C2750D4I
16GB (2x8GB) Kingston ECC ValueRAM 1600
6x4TB HDD (5 WD Red, 1 Hitachi Deskstar)
Fractal Define R4 case (Amazing case, thanks for the recommendation thread jgreco)
Rosewill Silent Night 500 Platinum 80 Plus power supply (I saw it recommended on silentpcreview.com; it's a re-branded Kingwin and has a 5-year warranty)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Nobody here has used the 16GB DIMMs, so nobody has any real experience with them. They are absurdly expensive; I think they're like $300+ apiece! But I wouldn't expect there to be a problem.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Eh, fewer and fewer are using AFP now. Not surprisingly, he got good speed with AFP. AFP isn't a CPU hound like CIFS is. :P
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Ah, I didn't see the edit. So few AFP users exist that I think if AFP were removed from FreeNAS tomorrow, the number of people who would be upset could be counted on my fingers and toes.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Ah, I didn't see the edit. So few AFP users exist that I think if AFP were removed from FreeNAS tomorrow, the number of people who would be upset could be counted on my fingers and toes.
I think quite a few people use it for Time Machine. If not for that, I probably wouldn't use AFP.
 

J Taylor

Cadet
Joined
Dec 7, 2014
Messages
1
Good news, gentlemen,

I'm new to these forums, but I've been using FreeNAS since version 8. I was finally able to max out a gigabit Ethernet port using CIFS on my ASRock C2550D4I with 16GB RAM.

Preliminary tests only got me 30MB/s over CIFS, at 100% utilization of a single CPU core, with everything at defaults and the onboard SATA controller. I conceded defeat after reading many posts with these same findings.

I rebuilt anyway, expecting to live with these limitations. For the rebuild, I disabled all of the onboard SATA ports and used an IBM ServeRAID M1015 crossflashed to an LSI 9211 HBA. When setting up the CIFS service, I went into the advanced options and set the minimum protocol to SMB2 and the maximum to SMB3. I disabled zeroconf, hostname lookups, and unix extensions, and set the log level to none. Permissions type is set to Windows everywhere. Whereas before I had tried striped, mirrored, and RAIDZ layouts, all with the same lackluster performance, this time I settled on a RAIDZ of 3 drives plus a mirror of 2 drives in one pool using the auto configuration, so I'm not sure what is helping and what is not.

However, it's now moving as fast as the hard drives and the onboard gigabit Ethernet adapter can manage. Couldn't be happier. smbd is also now using 75% of 4 cores.
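For anyone wanting to replicate the protocol part, my understanding is that those GUI settings boil down to smb4.conf lines roughly like the following (treat this as a sketch; the exact parameter spellings depend on the Samba version shipped with 9.2.1.9, and zeroconf is a separate FreeNAS service toggle rather than an smb4.conf line):

Code:
# Approximate smb4.conf equivalents of the GUI settings described above (sketch)
min protocol = SMB2
max protocol = SMB3
unix extensions = no
hostname lookups = no
log level = 0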

edit: FreeNAS-9.2.1.9-RELEASE-x64 btw.... before somebody asks.
edit2: Yes encryption. Yes compression.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The default minimum is the lowest protocol, and your client and server should autonegotiate the highest mutually supported one. So if you are using Vista or later, you already got SMB2. So no, it's not the champion setting you think it is. But you've also broken SMB/CIFS for all the clients out there that don't do SMB2. So no, not a good idea to change the default and not something I'd recommend people do "just because". We've seen dozens of threads from people who have had problems *because* they changed the defaults, and *very* few (any?) that have actually posted a benefit from changing them.

SMB3 also isn't recommended (it's disabled if you use the defaults) because SMB3 can create a whole new set of problems for some users.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
But you've also broken SMB/CIFS for all the clients out there that don't do SMB2. So no, not a good idea to change the default and not something I'd recommend people do "just because".
And if you are brave/dumb enough to go against this recommendation, be smart and save your configuration BEFOREHAND!
 