BUILD: Newbie's First Build -- Feedback please

Status
Not open for further replies.

zetoniak

Dabbler
Joined
May 14, 2015
Messages
27
Hello everyone,

I've been reading a lot about FreeNAS for the past two weeks, and I decided to start my new build. Since then I've been learning a lot about hardware and how all this computing stuff is handled.

Our Scenario is:

We were working with a Synology, but many times we needed to reboot it due to strange CIFS/Active Directory issues. We use it for Proxmox iSCSI, NFS, CIFS and FTP, but right now we are limited on space. Our need is more storage capacity on the network: some servers can't operate at 100% due to limited disk capacity, so we need more iSCSI/NFS space, plus CIFS for all the Windows clients. We will also start using VMware connected to FreeNAS (I still have to read more about that, since I've read about performance issues with iSCSI/NFS from VMware due to sync writes or something similar, but it's not fully clear to me yet).

I've read that each CIFS connection requires high per-core speed.
-How much does a CIFS connection cost FreeNAS in CPU terms?

Would CIFS connections affect NFS/iSCSI performance if they are running together? I think it would be better to make different volumes (zpools?) for each sharing service, but how much CPU do NFS, iSCSI and CIFS each require? There is not much documentation about this.

The configuration I decided on is the following:

-Case:
Supermicro SC846-R1200B (Backplane is a BPN-SAS2-846EL1)

-Motherboard:
X10SRi-F

-CPU:
E5-1650v3 (6C/12T, 3.5 GHz, 15 MB cache)
E5-1630v3 (4C/8T, 3.7 GHz, 10 MB cache)
Question: which would be better for me? I couldn't decide whether I prefer more speed or more cores for my configuration.

-RAM
32GB
2x Samsung 16GB DDR4 PC4-17000 (2133)- ECC, Registered...

-SATA DOM
2x 16GB (I would configure these in RAID 1)
Is it worth using these for the FreeNAS installation? If not, how would you install FreeNAS to make the installation more reliable?

-HBA

1x M1015
Here I have some doubts about the HBA and my backplane.
Case 1: Can I install two HBAs, each connected with one SFF-8087 to my backplane, so that if one HBA fails the other takes over automatically?
Case 2: If I connect two SFF-8087 cables from one HBA, does that double the bandwidth? I think the limitation there would be the PCIe x8 link, since one SFF-8087 carries 4x 6 Gb/s = 24 Gb/s, right? In this configuration, if one of my backplane connectors stops working, would the other take over transparently for FreeNAS?
What would be the most reliable configuration for the HBA (or HBAs) and the backplane connection?
Many doubts here, please help.
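My rough link math so far, to double-check my units (I'm assuming the M1015 runs at PCIe 2.0 x8 with roughly 500 MB/s usable per lane; please correct me if that's wrong):

```shell
# One SFF-8087 cable carries 4 SAS2 lanes at 6 Gb/s each (gigabits, not gigabytes):
sff8087_gbps=$((4 * 6))    # 24 Gb/s, i.e. roughly 3 GB/s
# The HBA's PCIe 2.0 x8 slot, at about 500 MB/s usable per lane:
pcie_mbps=$((8 * 500))     # ~4000 MB/s = ~4 GB/s
echo "SFF-8087: ${sff8087_gbps} Gb/s (~3 GB/s); PCIe 2.0 x8: ~${pcie_mbps} MB/s"
```

So if that's right, one cable is the tighter limit, and two cables from a single HBA would be capped by the PCIe slot before the cables.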

-DISK

Still to be decided; probably WD Red, maybe mixed with some cheaper drives of the same capacity...

Sorry if my Spanglish is confusing; if you have any doubt about what I'm saying, please tell me and I will try to explain it in other words.

Waiting for opinions. Thank you very much for this great platform, and congratulations to everyone for keeping this forum so active.

Best regards.
 

jtonthebike

Dabbler
Joined
Apr 17, 2015
Messages
10
Hi. I am new to FreeNAS too, but from what I have learned so far, I would select only one DOM, not two, since the FreeNAS OS is loaded into RAM. It's a big chassis you're getting, and if you're going to fill it, I would suggest that you at least double the amount of RAM. Also, and please tell me if I have this wrong, I am under the impression that more cores work better than outright speed when using FreeNAS/ZFS?

The M1015 HBA has a maximum throughput (I don't recall it offhand), so perhaps 2x M1015s would be a better option given your performance requirements.

thanks.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
He clearly wants reliability, so using two boot devices is a cheap way to get that.

32 GB should be OK for the start, but if you want to run some VMs you'll probably need more ;)

With Reds I highly doubt that the M1015 will be the bottleneck; I'd bet on the drives first. Actually, if the network is only 1 GbE, then it will be the bottleneck for sure :)
 
Last edited:

zetoniak

Dabbler
Joined
May 14, 2015
Messages
27
Thank you very much, jtonthebike.
- I'll then select just one DOM.

- At the moment I won't fill it, but my intention is to fill it little by little as the company's needs grow, since I could get it for a good price. One of my goals with this hardware is to last a few years with the same box. For now I will just populate it enough for a RAIDZ2 or RAIDZ3 with 8TB of usable space; 4 or 5 disks are the most urgent, then I will add another set of disks for iSCSI and another for NFS.

- I've read that I would need 1GB of RAM per TB. I forgot to ask: is that per TB of all the mounted disks, or per TB of the usable capacity after RAID?

- From your last comment, if I plug two M1015s into my backplane (it includes an expander chip), could I get double the performance from the backplane?

Thanks again.
 

zetoniak

Dabbler
Joined
May 14, 2015
Messages
27
Hi Bidule0hm,

I think I'm going the right way; I'm learning so much here. I will start ordering the components. Since I'm in Europe, some of them, like the SATA DOM or the case, are a bit difficult to source, so it will take a while to get them here, but as soon as I have them I'll start reporting the build here.

I'll research more about how VMs affect RAM usage before deploying more than 3 or 4 VMs. I read a bit about iSCSI, NFS, sync and async writes, but I still mix up which configuration would be right for us.

On the other hand, I'd better not mix the available apps with our needs, so as not to hurt performance; better to deploy them on another machine and mount a share from FreeNAS if I need disk capacity.

About the HBA:

I found this in this document http://www.supermicro.com/manuals/other/bpn-sas2-846el.pdf :

<<The SAS-836EL backplane may be configured for failover with multiple HBAs using either RAID controllers or HBAs to achieve failover protection.
RAID Controllers: If RAID controllers are used, then the failover is accomplished through port failover on the same RAID card.
HBAs: If multiple HBAs are used to achieve failover protection and load balancing, Linux MPIO software must be installed and correctly configured to perform the load balancing and failover tasks.>>

Do you know if this applies to a single expander chip with primary and secondary ports?

Best regards :) Awesome Forum.
 
Joined
Oct 2, 2014
Messages
925
Hey zetoniak, if you check out my profile and search through my posts you'll come across my build, or @DataKeeper's build. We both use the SAS2 backplanes with a single HBA (I think for him too) with 2 SAS connections; in THEORY I suppose you could add another HBA and connect it to the last SAS connection. There is a forum post on [H]ardforum with questions similar to yours about the backplane.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
No, select 2 DOMs. Or do you want downtime just to reinstall FreeNAS to a boot device if you get a checksum error (which gets repaired automatically with 2 DOMs in a ZFS mirror)? Also, rather use 64GB ones instead of 16GB. FreeNAS isn't going to stay that small, and it may get the possibility to move more stuff, like the .system dataset, over to the boot disks, etc.

A single HBA is enough. You've still got the expander on the backplane as a SPOF. There are (expensive) dual-expander backplanes around, which require (expensive) SAS HDDs to utilize both, and even then the throughput doesn't double, as SAS multipathing in BSD is only for redundancy. Also, other components are more likely to fail.

Rather, get a TrueNAS Z20 with dual controllers and SAS HDDs; that base model offers 16+24 SAS hot-swap bays with full failover capabilities between the two controller boards, plus full business support, 24x7 with 4-hour reaction time if you must.
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
Get 2 of the 64GB SATA DOMs. During install you simply select them both and done: mirrored boot devices directly from the install.
The RAM is quad-channel, so you'll want to order 4 modules for best performance. Stick with 16GB modules for 64GB right off the bat while remaining expandable down the road.

Check my build link in my signature below as I've built pretty much what you have listed.
 

zetoniak

Dabbler
Joined
May 14, 2015
Messages
27
Hi again, some months later...

I finally got the budget and all the components; this is the list:

Chassis
1x -- Supermicro SC846-R1200B(Backplane is a BPN-SAS2-846EL1)
Motherboard
1x -- X10SRi-F
Processor
1x -- E5-1650v3 (6C/12T 3,5GHz 15MB cache)
RAM
2x -- 16GB DDR4-2133 ECC REG
Storage
2x -- 64GB SATA DOMs (for FreeNAS installation)
7x -- WD Red 2TB (RAIDZ2 purpose)
6x -- SAS 15k 146GB (for virtual machines over NFS)
HBA
1x - M1015
Today I assembled all the components and flashed the IT firmware onto the HBA. Here are some pics and how I did it.

First of all, I plugged my HBA into an old PC to flash the newest firmware and BIOS and switch it to IT mode:

I followed the instructions from "Pieter" in this post, which referred me to this other post, plus some Googling.
IF YOU FOLLOW THESE INSTRUCTIONS, DO IT AT YOUR OWN RISK; I TAKE NO RESPONSIBILITY FOR ANYTHING GOING WRONG!!

USB Creation
I created a DOS-bootable USB with Rufus for Windows, and then just copied the contents of this .rar to the bootable USB. I also copied the files in the "firmware" and "sasbios_rel" folders from this other link (the LSI, now Avago, official page; maybe you can find newer versions there in the future).
From that last link, I stored "2118ir.bin" and "2118it.bin" in a new folder on the USB called "firmware", and "mptsas2.rom" in a new folder called "bios".

  • Boot from USB
  • First of all, we need to gather some info from our adapter...
sas2flsh.exe -listall

(photo: sas2flsh -listall output)


If we have just one on the list, the index number will be "0", so no problem; here we can also see our BIOS and FW versions.


If we have more than one on the list, to be sure which index number we'll use, we can run:
sas2flsh.exe -list
(I think we can append the index number here; I didn't try it, but it's a read-only command, so we can experiment.)

(photo: sas2flsh -list output)


Here we can check the controller's "SAS Address", which is printed on a green tag on the board, and write down the "Controller Number" for the next commands... IMPORTANT: WRITE IT DOWN.



    • Now we continue with the next command...
megarec -writesbr 0 sbrempty.bin
Where "0" is the index number of our controller, and "sbrempty.bin", as its name says, is an empty image of the SBR. I think it's the part of the RAID card which holds some instructions and data when it's in RAID mode, and we want an HBA, so... (if I'm wrong, please someone tell me, I'm not an expert).
I don't have this picture; I missed this step at the beginning and got nervous...
  • We wipe our flash...

megarec -c 0 cleanflash
Don't forget: "0" is our controller index number.

(photo: megarec cleanflash output)


Mouse initialization failed?? No way...

  • At this point I rebooted and booted from the USB again; this is a warm restart, as I couldn't find a way to do a cold restart from FreeDOS.
ctrl+alt+del


    • Now we write our new IT firmware and BIOS to the controller. Remember, we stored "2118ir.bin" and "2118it.bin" in our firmware folder, and "mptsas2.rom" in our bios folder.

sas2flsh.exe -c 0 -f firmware/2118it.bin -b bios/mptsas2.rom

(photo: firmware and BIOS flash output)

  • Now we have to assign the SAS address to our controller again... Notice the -o here is a letter, not a number.
sas2flsh.exe -o -sasadd <500605bxxxxxxxxx>
(x = our SAS address; in my case it was "500605b005555e00")
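Just to recap, here is the whole sequence in order (these are the same commands as above, nothing new; "0" and the SAS address are from my board, so yours will differ, and as I said, at your own risk):

```
sas2flsh.exe -listall                    rem note the index and current FW/BIOS
megarec -writesbr 0 sbrempty.bin         rem blank the SBR
megarec -c 0 cleanflash                  rem wipe the flash region
rem reboot here (ctrl+alt+del) and boot the USB again
sas2flsh.exe -c 0 -f firmware/2118it.bin -b bios/mptsas2.rom
sas2flsh.exe -o -sasadd <500605bxxxxxxxxx>   rem restore the address from the green tag
```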


Tomorrow I will continue the post with the build, since I missed the disk pictures, and then I will install FreeNAS.

I want to install FreeNAS on a redundant RAID across my SATA DOMs; how can I?
EDIT: @DataKeeper said:
Get 2 of the 64GB Sata DOMs. During install you simply select them both and done.. mirrored boot devices directly from the install.
I've read a lot about NFS and sync writes. Since I will use NFS for VMware and other backups, would you recommend creating a RAIDZ of SSDs and partitioning it for a SLOG/ZIL and L2ARC?
EDIT: @cyberjock said:
"First, don't even consider an L2ARC until you have 64GB of RAM, period. End of that short discussion."
Do I risk my pools if I lose power while using a SLOG/ZIL?
Do I risk my pools if I lose power without a SLOG/ZIL on async writes? (With sync writes I understood that I could lose my entire pool.)
Does anyone have a list of which protocols use sync and which use async writes?
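EDIT: For reference, it looks like the sync behavior can be checked and set per dataset from the FreeNAS shell (the dataset name here is just an example from my planned layout, not a real one yet):

```
zfs get sync tank/vmware-nfs            # standard | always | disabled
zfs set sync=always tank/vmware-nfs     # treat every write as synchronous (safest, slowest)
zfs set sync=disabled tank/vmware-nfs   # ack writes before they are on stable storage (fast, risky on power loss)
```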

Thank you!





All Pictures here
 
Last edited:

zetoniak

Dabbler
Joined
May 14, 2015
Messages
27
OK, here is my assembled server, just before being racked. I'm still testing it.

DISKS
Pool-1
Left column: 6x WD Red in RAIDZ2, with one spare in the second column.
Will contain CIFS, media, NFS (VM backup server), FTP and backups



Pool-2
Right column: 6x SAS 15k 146GB in RAIDZ2.
Will contain NFS for the VMs that need more performance



Motherboard General View
-X10SRI-F-O




An unplugged connector... I couldn't find where it goes :( By the way, does anyone know the purpose of the 3-pin connector coming from the power supplies?




I didn't know exactly how to connect the front panel; since I couldn't find the front-panel pinout I had to guess, and miraculously it worked on the first try!




SATA DOM and case-open detector cable

64GB SATA DOMs with the FreeNAS boot mirror; thanks for the recommendation!




RAM & CPU
E5-1650v3 6C 3.5G 15M
2x Samsung 16GB DDR4-2133 2Rx4 LP ECC Registered (future upgrade to 64GB when the budget allows...)




HBA & BACKPLANE
HBA - M1015 IT Mode





Final Setup
-MB: X10SRI-F-O
-CPU: E5-1650v3
-RAM: 2x - 16GB DDR4-2133 2R4
-HBA: M1015 IT Mode
-SATA: 7x 2TB WD Red
-SAS: 6x 15k 146GB
-Case: SC846-R1200B


I'm worried about reliability: if I use NFS for VMs and a VM's OS does O_SYNC writes, and I lose power while it's working, will I lose my entire pool?
I'm planning to buy an APC UPS, but is the sentence above correct?
If so, would you recommend waiting before putting FreeNAS into a production environment?
Is there any chance of losing my entire pool using CIFS in this case?

Thanks to everyone! ;)
 
Last edited:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I think the RAM should go in either both blue sockets or both black sockets, but I could be wrong, as I didn't read the manual for this board (the color code has always worked like this, though...).

I don't think you lose the pool, just the data that was on the SLOG device, IIRC; that's why you need a proper drive (battery or super-cap protected). But you shouldn't use the server in production without a UPS anyway.
 

zetoniak

Dabbler
Joined
May 14, 2015
Messages
27
Fascinating! Thank you for sharing!

Did you try to compare the performance of both pools?

Tomorrow I'll post it; at the moment I have only run the SATA pool performance test, which I posted here
 

zetoniak

Dabbler
Joined
May 14, 2015
Messages
27
I think the RAM should go in either both blue sockets or both black sockets, but I could be wrong, as I didn't read the manual for this board (the color code has always worked like this, though...).

I don't think you lose the pool, just the data that was on the SLOG device, IIRC; that's why you need a proper drive (battery or super-cap protected). But you shouldn't use the server in production without a UPS anyway.

I've looked in the manual here, but I didn't understand it well; it says to start with the first DIMM slot.

I had read somewhere about what you say: interleaving them gives better performance, while not interleaving gives better reliability. But I think I will do it the way you say; I don't think RAM is a component that fails often.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Others might have much better ideas, but if I may ask, I would be interested in comparing the following:
  • a single file copy (read/write of more than 65GB of incompressible content, e.g. a couple of MP4 files concatenated together)
  • gcc compiling itself over NFS
  • gcc compiling itself over NFS while that large file is being written into the pool at the same time
I mean doing each of the tests on the large SATA pool and on the small SAS pool :D
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
AFAIK the order for populating the sockets is: A1, B1, C1, D1, A2, B2, C2, D2.
 

zetoniak

Dabbler
Joined
May 14, 2015
Messages
27
Others might have much better ideas, but if I may ask, I would be interested in comparing the following

OK, for NFS I'll use an OpenVZ container on Proxmox. Just some questions, between the lines:
  • a single file copy (read/write of more than 65GB of incompressible content, e.g. a couple of MP4 files concatenated together)
OK, I'll find out how to make this file. Do you mean copying within the same pool?
  • gcc compiling itself over NFS
I don't know how to do this; if you tell me how, I will.
I read some strange things like... compiling a C compiler written in C with a C compiler?! Sorry, I got lost hehehe :) but tell me how and I'll do it.
  • gcc compiling itself over NFS while that large file is being written into the pool at the same time
I'll do it. Should I write the file from the other pool, or copy it within the same pool?
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
What I am proposing should give you some idea of how different storage systems behave under certain well-understood loads.

I have no idea what your target load is going to be... Here we are playing with two distinct load types: lots of small files and a single large file. So at least you would get these two covered (out of many possible ones).

Let's talk about streaming a large file. The numbers would be like doing dd, except that you have compression turned on, so we have to use a file that is already compressed.

Disk speed is different on outer and inner tracks; the difference can be as much as twofold. That is difficult to control, and I do not want you to suffer, so I will pretend that such a difference does not exist!!!

============================================================
Since these are performance tests, doing them as close to hardware as possible is preferred.

Can you perform the tests from within Proxmox? If not from within Proxmox, then maybe from within OpenVZ; if that is not possible either, then of course from the Linux running in the container.

============================================================
Creating a huge file: on any Linux, get over 64GB of MP4 files (from YouTube, your smartphone, a car dashboard camera, etc.), then
cat *.mp4 > MyBigFile

Results will only be realistic when the file is sufficiently larger than RAM, since otherwise we are just measuring the speed of your RAM.

I had assumed that Proxmox has some local storage. If it does not have local storage..., then does the system you are using for posting here do NFS (for copying MyBigFile)?
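If you don't have enough MP4s at hand, random data is equally incompressible; something like this would also work (SIZE_MB is kept tiny here just to illustrate -- raise it well above your RAM, e.g. 65536 for 64GB):

```shell
# Generate an incompressible test file; random bytes defeat ZFS compression
# just like already-compressed MP4s do. Increase SIZE_MB past your RAM size.
SIZE_MB=64
dd if=/dev/urandom of=MyBigFile bs=1M count="$SIZE_MB"
ls -l MyBigFile
```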

============================================================
Here are the gcc steps for you :D
  • on the machine where the compilation is being done (again, closer to the hardware = better), execute gcc --version
  • go to the page https://gcc.gnu.org/install/index.html and perform steps 1, 2, 3 and 4
  • in step 2 you will be downloading the source code for GCC from https://gcc.gnu.org/releases.html ; please use the same version that your machine has (see above)
  • I am interested in comparing the time it takes to perform step 4 when using NFS served from the SATA pool with the time it takes when using the SAS pool over NFS. I think the difference will be significant. By the way, all you need to do is type time make :D
  • I think there would also be a visible difference when performing step 3 on SATA vs. SAS; that process is just a lot shorter
Congratulations! You made GCC compile itself (multiple times)!

============================================================
The final hurdle...

For doing the compilation while the copying is taking place, can you please create an infinite loop that keeps copying MyBigFile while steps 3 (configure) and 4 (make) are running...
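A sketch of that loop could look like this (DEST is a stand-in directory here; point it at the NFS mount of the pool under test, and the dd line is only a tiny placeholder in case your real 64GB MyBigFile isn't in place yet):

```shell
# Keep copying the big file in a background loop while the build is timed.
DEST=./nfs_under_test && mkdir -p "$DEST"
[ -f MyBigFile ] || dd if=/dev/urandom of=MyBigFile bs=1M count=8  # tiny stand-in file
( while :; do cp MyBigFile "$DEST/MyBigFile.copy"; done ) &
LOOP_PID=$!
sleep 1                        # give the copy loop a head start
time make || true              # step 4, run inside the GCC build directory
kill "$LOOP_PID"
wait "$LOOP_PID" 2>/dev/null || true
```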


Thank you a lot !!!!!!
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
You are the only one around here who has storage pools with two distinctly different types of hard drives, and they are not in use yet, so they can be used for testing!
 