Big project - Tips and references

Status
Not open for further replies.

kspare

Guru
Joined
Feb 19, 2015
Messages
508
What are you using for NICs?

Also, if you want to do an SLOG, get the Intel 750 NVMe drive; don't use SATA drives for L2ARC or SLOG (ZIL).
 

Mateus Bapista

Dabbler
Joined
Jul 21, 2015
Messages
21
The Intel 750 NVMe drive is better for me because it is cheaper than the "Intel Hard Drive SSD 800GB SATA 6Gb/s 2.5in".

Does anybody else agree with kspare?
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
The 750 is fast and cheap, but endurance may be a problem: it is only rated at 70GB a day, 127 TBW total. Something like an Intel DC S3700 will do 10x the drive size per day for 5 years, at least 10x the endurance.

Since SLOGs do nothing but write, you need to keep that in mind. For read-heavy loads, no worries.
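A rough sketch of that endurance comparison, using the 127 TBW figure for the 750 from above; the DC S3700 number assumes its commonly quoted rating of 10 drive writes per day for 5 years (a 400GB model is used here for illustration):

```python
# Back-of-the-envelope SLOG endurance comparison.
intel_750_tbw = 127           # Intel 750 1.2TB: total TB written, per the post above

s3700_capacity_tb = 0.4       # assumed 400GB DC S3700 model
s3700_dwpd = 10               # assumed rating: 10 drive writes per day
s3700_years = 5
s3700_tbw = s3700_capacity_tb * s3700_dwpd * 365 * s3700_years  # ~7300 TBW

print(f"Intel 750: {intel_750_tbw} TBW")
print(f"DC S3700:  {s3700_tbw:.0f} TBW")
print(f"Ratio:     {s3700_tbw / intel_750_tbw:.0f}x")
```

Even with approximate numbers, the gap is large enough that a write-only device like an SLOG clearly favors the high-endurance part.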
 
Joined
Jul 3, 2015
Messages
926
Are you going to multipath the JBOD to the 2 HBAs?

Just noticed you've changed the spec to 4x HBAs. Why, for 1 JBOD?
 

Mateus Bapista

Dabbler
Joined
Jul 21, 2015
Messages
21
I will not do multipath. My JBOD has 4-host HDD zoning, and if I don't use 4 HBAs I will create a bottleneck on the bus.
 

Mateus Bapista

Dabbler
Joined
Jul 21, 2015
Messages
21
New update
I will use a SAS2 controller.

Server - SYS-6028U-TR4T+
2x Xeon E5-2620 v3 2.40GHz
24x 32GB DDR4 PC4-17000 ECC Registered (768GB)
4x HBA 8 Ports external SAS2 6Gb/s - AOC-SAS2-9207-8E
1x SATA DOM 16GB SATA 6Gb/s internal
4x RJ45 10GBase-T LAN ports (with LACP)

JBOD - CSE-946ED-R2KJBOD
2x SAS Expander 8 Ports SAS3 12Gb/s
88x ST6000NM0034 Hard Drive 6TB SAS 12Gb/s 7200RPM 3.5in (528TB RAW) - 264TB with striped mirrors
2x Intel 750 Series 1.2TB PCIe NVMe 3.0 SSD (for L2ARC)
2x Toshiba Hard Drive SSD SLC 400GB SAS 6Gb/s 2.5in (for ZIL)

Networking
Netgear ProSafe M7100
 
Joined
Jul 3, 2015
Messages
926
I will not do multipath. My JBOD has 4-host HDD zoning, and if I don't use 4 HBAs I will create a bottleneck on the bus.

Ok I understand. You must be expecting some pretty good network speeds to saturate those 6Gb/s HBAs.
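A rough sanity check of where the ceilings sit, assuming ~600 MB/s usable per 6Gb/s SAS2 lane (after 8b/10b encoding) and ~150 MB/s sequential per 7200RPM drive; both per-lane and per-disk figures are assumptions, not from the spec:

```python
# Back-of-the-envelope bandwidth ceilings for this build.
sas2_lane_gbs = 0.6      # ~600 MB/s usable per 6Gb/s SAS2 lane (assumption)
hbas, lanes   = 4, 8     # 4x AOC-SAS2-9207-8E, 8 external lanes each
disks         = 88
disk_mbs      = 150      # ~150 MB/s sequential per 7200RPM drive (assumption)
nic_ports     = 4        # 4x 10GBase-T with LACP
nic_gbs       = 10 / 8   # 10 Gb/s -> 1.25 GB/s raw per port

hba_total  = hbas * lanes * sas2_lane_gbs   # aggregate HBA ceiling, GB/s
disk_total = disks * disk_mbs / 1000        # aggregate disk ceiling, GB/s
net_total  = nic_ports * nic_gbs            # aggregate network ceiling, GB/s

print(f"HBA ceiling:  {hba_total:.1f} GB/s")
print(f"Disk ceiling: {disk_total:.1f} GB/s")
print(f"Net ceiling:  {net_total:.1f} GB/s")
```

By these numbers the network, not the HBAs, is the likely bottleneck. Also note that LACP hashes per flow, so a single client stream still tops out at one 10Gb/s link.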
 

Mateus Bapista

Dabbler
Joined
Jul 21, 2015
Messages
21
New update

Server - SYS-6028U-TR4T+
2x Xeon E5-2620 v3 2.40GHz
24x 32GB DDR4 PC4-17000 ECC Registered (768GB)
4x HBA 8 Ports external SAS2 6Gb/s - AOC-SAS2-9207-8E
1x SATA DOM 16GB SATA 6Gb/s internal
4x RJ45 10GBase-T LAN ports (with LACP)

JBOD - CSE-946ED-R2KJBOD
2x SAS Expander 8 Ports SAS3 12Gb/s
88x ST6000NM0034 Hard Drive 6TB SAS 12Gb/s 7200RPM 3.5in (528TB RAW) - 264TB with striped mirrors
2x Intel DC S3500 1.2TB PCIe NVMe 3.0 SSD MLC (for L2ARC)
2x Toshiba Hard Drive SSD SLC 400GB SAS 6Gb/s 2.5in (for ZIL)

Networking
Netgear ProSafe M7100 - 10GBase-T switch/router

Please comment.
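A quick sanity check of the capacity figures in the spec above (88 drives of 6TB arranged as 44 two-way mirror vdevs striped together):

```python
# Verify the raw and mirrored capacity numbers in the spec.
drives   = 88
drive_tb = 6

raw_tb    = drives * drive_tb   # total raw capacity
mirror_tb = raw_tb // 2         # two-way mirrors halve usable space

print(f"Raw:             {raw_tb} TB")
print(f"Striped mirrors: {mirror_tb} TB")
```

Note that actual usable space will come in below 264TB once ZFS metadata and the usual free-space headroom for pool performance are accounted for.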
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Also, if you want low latency, SFP+ with copper connections is best, but you are limited to 5m cables; 10GBase-T has the highest latency.
 

Mateus Bapista

Dabbler
Joined
Jul 21, 2015
Messages
21
Thanks for the reply @kspare

Are you sure the limit is 5m? I think it is 15m. However, the connection is from the workstations to the servers, and I need more than 15m of cable.

Does somebody have observations about bottlenecks?
Has anybody ventured to estimate the read/write bandwidth when the storage uses the ARC, L2ARC, ZIL, and disks?

Thanks, community
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Yes, you are correct: 15m with an active cable and 7m with a passive cable. It's also known as direct attach. If you need more than 15m, I'd suggest using fibre over 10GBase-T.
 