BUILD My Go Big Build 48 TB

Status
Not open for further replies.

alkyred

TrueNAS User
Joined
Aug 31, 2016
Messages
21
NAS Build Research

Primary Objectives:
  • iSCSI to be used for guest OS storage on XenServer 6.5 and 7.0
  • Shared storage to eliminate the need for a Windows file server
  • Connect to Active Directory for file server authentication
  • LACP-bond two 10 GigE NICs for 20 GigE storage access
Secondary Objectives:
  • Build Second NAS to be live backup of primary NAS
  • Backup Location 1 NAS to Location 2 NAS for Disaster Recovery
Test Equipment
  • 4 SuperMicro X8DTI-F servers, each with 24 SAS drive bays
    • 2 at each location
  • 16 SAS drives configured as 8 RAID 1 virtual disks
  • 20 GB RAM*
  • Dual 10 GigE NICs**
  • LSI MegaRAID 9260-8i RAID cards***
* Memory will be upgraded to 64 GB ECC once testing has passed.
** NICs are going to be replaced with 40 GigE modules.
*** RAID cards will be replaced with HBAs once we can prove the build will work.
 

alkyred

TrueNAS User
Joined
Aug 31, 2016
Messages
21
I have hit my first issue.

The switch has been set up with LACP, and I can create a link aggregation from the GUI or from the CLI, but it never gets an IP from the DHCP server. In fact, if I turn on DHCP for the link, the GUI stops responding, and I have to use the CLI to delete the link and reboot the server. If I manually set an IP address, I cannot ping that address.
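
For reference, this is roughly what creating a LACP lagg looks like from the FreeBSD shell that FreeNAS runs on (ix0/ix1 and the test address are placeholders for the actual NIC names; treat this as a sketch, not the supported GUI workflow):

  ifconfig ix0 up
  ifconfig ix1 up
  ifconfig lagg0 create
  ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1
  # try a static test address instead of DHCP, to rule the DHCP server out
  ifconfig lagg0 inet 192.168.1.50/24
  # both ports should show ACTIVE,COLLECTING,DISTRIBUTING if LACP negotiated
  ifconfig lagg0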

I also noticed a lot of errors in the CLI whenever I turn on DHCP for the link. See below.
freenas mDNSResponder: mDNSPlatformSendUDP got error 51 (Network is unreachable) sending packet to 224.0.0.251 on interface 0.0.0.0/lagg0/6


I have another server running XenServer 7.0 set up with LACP, and it also does not get an IP from the DHCP server; however, I can assign an IP manually and ping it.

Looking for advice.

Thanks
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Not to sound harsh, but if you are thinking about building as-is without the other parts (64 GB RAM, HBA) and running iSCSI, I would not recommend demoing it to prove anything to anyone.

For decent/tolerable iSCSI performance you want:
  • 64 GB RAM (honestly, in your case I would think 128 GB is the minimum)
  • Mirrored vDevs
  • No hardware RAID (unless you are thinking of hanging a fast, proper SSD off of this for a SLOG)
I appreciate your enthusiasm, but question your tactics. ;)

Of course, this is just my opinion....
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Also, I cringe when anyone starts talking about LACP.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I have never used LACP, but I suspect the steps would be:
  1. Configure the switch
  2. Use the FreeNAS CLI to set up the interfaces
  3. Reboot and see if you get an IP
  4. Get 5-ish clients and test your connection

Also, since you are using a RAID card, your performance might not be good enough to saturate those 10 Gbit cards. Don't expect miracles from this hardware.
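
As a quick sanity check of the network path (assuming iperf3 is available on both ends; it is not part of the stock install), run a server on the NAS and hit it from a client with parallel streams:

  iperf3 -s                          # on the FreeNAS box
  iperf3 -c 192.168.1.50 -P 4 -t 30  # on a client: 4 parallel streams for 30 seconds

The address is a placeholder. Keep in mind that with LACP a single flow only ever uses one physical link, so you need multiple clients or streams to see aggregate throughput.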

 

alkyred

TrueNAS User
Joined
Aug 31, 2016
Messages
21

Mirfster-
I completely understand your opinion; this is R&D using what I have. In order to get funding, I need to prove that FreeNAS can do the job. I'm all ears in this forum for ideas on how I can best accomplish my objectives using what I have. The only variable is the LACP connection. I was using this project to get more familiar with LACP, but I can forgo LACP in order to move this forward.

Thanks
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
*** These are just my thoughts, I am no "expert" on iSCSI so take all of this with a grain of salt. ***

If the others see some stupidity on my part, please feel free to correct me and whop me over the head...

So this is what I would do if I were in your situation (see the sketch after this list for what steps 3, 4, and 6 look like at the command line):
  1. Ditch the hardware RAID
  2. Forget about LACP right now (unless your switches can handle it and you are fully familiar with how to configure it). You don't want that interfering with the demo if it is done wrong.
  3. Out of the 16 available drives you have, take 10 of them and create a Pool/Volume using 5 x Mirror vDevs
    • This will be for the iSCSI demo
    • You can always tear it down and rebuild later with 3-way mirrors or add a couple of "hot spares" (design is up to you)
    • Create the zVol
  4. Take the remaining 6 SAS drives and make a Pool/Volume using RaidZ2 (all 6 drives in one vDev)
    • Create the Dataset
  5. Scrounge/beg/borrow/steal for more RAM to give this demo half a chance
  6. Grab a good SSD (an Intel DC S3710 would be nice) that has fast read/write speeds
    • Add this to the iSCSI Pool/Volume as a SLOG (called a "Log" in the GUI)
  7. Set up/configure Windows (CIFS) and iSCSI (block) shares
  8. Connect and test (one at a time; don't try to do too much with this server or else it's gonna puke)
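
A rough command-line sketch of that layout (device names da0-da16, pool names, and the zVol size are placeholders; in practice you would build all of this through the FreeNAS GUI):

  # step 3: five 2-way mirror vDevs for the iSCSI pool, plus a sparse zVol
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 \
      mirror da6 da7 mirror da8 da9
  zfs create -s -V 2T tank/iscsi01
  # step 4: six-drive RaidZ2 pool with a dataset for the CIFS share
  zpool create backup raidz2 da10 da11 da12 da13 da14 da15
  zfs create backup/shares
  # step 6: add the SSD as a SLOG to the iSCSI pool
  zpool add tank log da16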

*** Some additional info:
  1. You can "cheat" a bit here by setting "sync=disabled" on the iSCSI Pool/Volume (see the sketch after these notes). That gives a huge performance gain, but it is NOT RECOMMENDED since it is very unsafe: any writes in flight are lost if the box crashes or loses power. The actual recommendation is "sync=always". I was really questioning whether I should even mention it...
  2. You can tune CIFS so that it performs a little better; there are threads all over this forum. One good read that comes to mind is @cyberjock's "CIFS directory browsing slow? Try this".
  3. You NEVER want to fill your iSCSI Pool/Volume higher than 50%
    • Performance will tank if you do so
    • Take that into serious consideration when planning
  4. Since you have two Xen Servers, count on only using one right now. You want the iSCSI zVol to be attached by only one system; if you want two systems, you would want two different iSCSI zVols.
  5. This should ALL be considered Experimental and NO VITAL DATA should be housed on it.
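
For reference, the sync settings from note 1 (same placeholder names as above; the valid values of the ZFS "sync" property are standard, always, and disabled):

  zfs set sync=always tank/iscsi01      # safe: every write commits through the SLOG
  zfs get sync tank/iscsi01             # verify the current setting
  # zfs set sync=disabled tank/iscsi01  # the unsafe "cheat" - demo only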
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
You should play with replication between locations too.
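
A minimal sketch of what that replication looks like underneath (the dataset names, snapshot label, and backup-nas hostname are placeholders; in the GUI you would set up a Periodic Snapshot Task plus a Replication Task instead):

  zfs snapshot backup/shares@2016-09-15             # point-in-time snapshot
  zfs send backup/shares@2016-09-15 | \
      ssh backup-nas zfs receive -F tank2/shares    # push it to the Location 2 NAS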
 

alkyred

TrueNAS User
Joined
Aug 31, 2016
Messages
21

Mirfster-
Thanks for the detailed write-up. I am working on the beg/borrow/steal right now. It will take some time, but I will report back with my progress.
 

DaveY

Contributor
Joined
Dec 1, 2014
Messages
141
I have pretty much the same setup as you. I use LACP for my FreeNAS across a quad-port 1 Gb card and it works fine. Since you said you have another server set up with LACP having the same problem, I would first confirm you have LACP configured correctly on the switch side and not accidentally set as a static LAG; 802.3ad is what you want. Then create the link aggregation on FreeNAS afterwards. What type of 10 GbE NICs? Intel?
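
For example, on a Cisco IOS switch the difference comes down to the channel-group mode (port numbers are placeholders; other vendors have equivalent knobs):

  interface range TenGigabitEthernet1/0/1 - 2
   channel-group 1 mode active   ! "active" = LACP (802.3ad); "on" = static LAG, which FreeNAS will not negotiate with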

The freenas mDNSResponder error is a message introduced in 9.10 with the switch to mDNSResponder. There's a bug report out there detailing it, but it's supposedly harmless and unrelated to your LACP problem.

We use iSCSI and get good performance out of it. You'll probably want an SSD SLOG, though, if you're doing heavy reads/writes.
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
I can't think of one legitimate reason to ever use LACP with iSCSI.

Use MPIO with iSCSI; it was designed for availability and bandwidth aggregation.

No, really... use MPIO. Don't even consider LACP for communication with hosts. Also, just stick with the 10 GbE NICs.
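
A sketch of the FreeNAS side of MPIO (interface names and addresses are placeholders): instead of bonding, each 10 GbE NIC gets its own IP on its own subnet, both IPs go into the iSCSI portal, and the initiator multipaths across them:

  ifconfig ix0 inet 10.10.10.2/24   # path A
  ifconfig ix1 inet 10.10.20.2/24   # path B

On the XenServer side you would enable multipathing on the host and point the software iSCSI SR at both portal addresses.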
 

DaveY

Contributor
Joined
Dec 1, 2014
Messages
141
@diehard, agreed about MPIO and iSCSI.

I think the OP is also trying to use it for shares. My guess is the LACP is for network redundancy.
 