
SLOG benchmarking and finding the best SLOG

macross_dyrl

Cadet
Joined
Nov 26, 2017
Messages
2
Intel 900P Firmware 0x02

Code:
=== START OF INFORMATION SECTION ===
Model Number:                       INTEL SSDPED1D280GA
Serial Number:                      PHMB7392012H280CGN
Firmware Version:                   E2010325
PCI Vendor/Subsystem ID:            0x8086
IEEE OUI Identifier:                0x5cd2e4
Controller ID:                      0
Number of Namespaces:               1
Namespace 1 Size/Capacity:          280,065,171,456 [280 GB]
Namespace 1 Formatted LBA Size:     512
Local Time is:                      Sat Dec  8 06:40:16 2018 EST
Firmware Updates (0x02):            1 Slot
Optional Admin Commands (0x0007):   Security Format Frmw_DL
Optional NVM Commands (0x0006):     Wr_Unc DS_Mngmt
Maximum Data Transfer Size:         32 Pages

nvd0p1
    512             # sectorsize
    280065085440    # mediasize in bytes (261G)
    547002120       # mediasize in sectors
    0               # stripesize
    65536           # stripeoffset
    34049           # Cylinders according to firmware.
    255             # Heads according to firmware.
    63              # Sectors according to firmware.
    INTEL SSDPED1D280GA    # Disk descr.
    PHMB7392012H280CGN    # Disk ident.
    Yes             # TRIM/UNMAP support
    0               # Rotation rate in RPM

Synchronous random writes:
     0.5 kbytes:     14.7 usec/IO =     33.2 Mbytes/s
       1 kbytes:     14.9 usec/IO =     65.6 Mbytes/s
       2 kbytes:     15.1 usec/IO =    129.1 Mbytes/s
       4 kbytes:     12.6 usec/IO =    309.4 Mbytes/s
       8 kbytes:     14.2 usec/IO =    548.7 Mbytes/s
      16 kbytes:     18.6 usec/IO =    840.5 Mbytes/s
      32 kbytes:     26.2 usec/IO =   1191.1 Mbytes/s
      64 kbytes:     41.2 usec/IO =   1515.3 Mbytes/s
     128 kbytes:     72.1 usec/IO =   1732.7 Mbytes/s
     256 kbytes:    132.9 usec/IO =   1880.7 Mbytes/s
     512 kbytes:    259.3 usec/IO =   1928.2 Mbytes/s
    1024 kbytes:    522.7 usec/IO =   1913.1 Mbytes/s
    2048 kbytes:   1029.5 usec/IO =   1942.8 Mbytes/s
    4096 kbytes:   2043.6 usec/IO =   1957.3 Mbytes/s
    8192 kbytes:   4062.0 usec/IO =   1969.5 Mbytes/s
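These tables come from FreeBSD's `diskinfo -wS` sync-write benchmark (the full command line appears in later posts). For anyone sanity-checking the numbers, the Mbytes/s column is just the block size divided by the reported per-IO latency; a quick sketch using the 4 KiB row above:

```python
# Sanity-check the diskinfo -wS arithmetic: throughput is block size
# divided by the reported per-IO latency. Values from the 4 KiB row above.
block_bytes = 4 * 1024   # "4 kbytes" row
usec_per_io = 12.6       # reported usec/IO

mib_per_sec = block_bytes / (usec_per_io * 1e-6) / (1024 * 1024)
print(f"{mib_per_sec:.1f} Mbytes/s")  # close to the reported 309.4
```

The tiny discrepancy vs. the table comes from the latency being rounded to one decimal place in the output.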
 

drros

Dabbler
Joined
Aug 27, 2018
Messages
10
Installed a new Optane 800P 58 GB. For some reason my result is almost half of the result mjt5282 posted on a previous page.

Code:
root@freenas[~]# smartctl -a /dev/nvme0
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       INTEL SSDPEK1W060GA
Serial Number:                      PHBT8072002C064Q
Firmware Version:                   K4110410
PCI Vendor/Subsystem ID:            0x8086
IEEE OUI Identifier:                0x5cd2e4
Controller ID:                      0
Number of Namespaces:               1
Namespace 1 Size/Capacity:          58,977,157,120 [58.9 GB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            5cd2e4 ffd6180100
Local Time is:                      Tue Dec 11 13:07:00 2018 +04
Firmware Updates (0x02):            1 Slot
Optional Admin Commands (0x0006):   Format Frmw_DL
Optional NVM Commands (0x0046):     Wr_Unc DS_Mngmt Timestmp
Maximum Data Transfer Size:         32 Pages

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     3.60W       -        -    0  0  0  0  1000000   50000
 1 +     2.50W       -        -    0  1  0  1  1000000   50000
 2 +     1.80W       -        -    0  2  0  2  1000000   50000
 3 -   0.0080W       -        -    0  0  0  0  1150000   50000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         2

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02, NSID 0xffffffff)
Critical Warning:                   0x00
Temperature:                        47 Celsius
Available Spare:                    100%
Available Spare Threshold:          0%
Percentage Used:                    0%
Data Units Read:                    4 [2.04 MB]
Data Units Written:                 0
Host Read Commands:                 302
Host Write Commands:                0
Controller Busy Time:               0
Power Cycles:                       5
Power On Hours:                     0
Unsafe Shutdowns:                   0
Media and Data Integrity Errors:    0
Error Information Log Entries:      0

Error Information (NVMe Log 0x01, max 64 entries)
No Errors Logged


Code:
root@freenas[~]# diskinfo -wS /dev/nvd0
/dev/nvd0
        512             # sectorsize
        58977157120     # mediasize in bytes (55G)
        115189760       # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        INTEL SSDPEK1W060GA     # Disk descr.
        PHBT8072002C064Q        # Disk ident.
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM

Synchronous random writes:
         0.5 kbytes:     18.9 usec/IO =     25.8 Mbytes/s
           1 kbytes:     18.8 usec/IO =     51.9 Mbytes/s
           2 kbytes:     19.7 usec/IO =     99.2 Mbytes/s
           4 kbytes:     23.4 usec/IO =    166.7 Mbytes/s
           8 kbytes:     28.8 usec/IO =    270.9 Mbytes/s
          16 kbytes:     42.2 usec/IO =    370.6 Mbytes/s
          32 kbytes:     68.9 usec/IO =    453.6 Mbytes/s
          64 kbytes:    122.9 usec/IO =    508.4 Mbytes/s
         128 kbytes:    239.2 usec/IO =    522.6 Mbytes/s
         256 kbytes:    461.3 usec/IO =    541.9 Mbytes/s
         512 kbytes:    924.5 usec/IO =    540.8 Mbytes/s
        1024 kbytes:   1854.1 usec/IO =    539.3 Mbytes/s
        2048 kbytes:   3645.1 usec/IO =    548.7 Mbytes/s
        4096 kbytes:   7149.9 usec/IO =    559.5 Mbytes/s
        8192 kbytes:  14190.5 usec/IO =    563.8 Mbytes/s
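For what it's worth, taking the 4 KiB rows from the two tables posted in this thread, the 58 GB 800P really does land at roughly half the 900P's sync-write throughput:

```python
# Compare the 4 KiB sync-write rates posted above (Mbytes/s).
optane_900p_4k = 309.4  # from the 900P table earlier in the thread
optane_800p_4k = 166.7  # from the 800P table above
ratio = optane_800p_4k / optane_900p_4k
print(f"800P runs at {ratio:.2f}x the 900P's 4 KiB rate")  # ~0.54
```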
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
Installed a new Optane 800P 58 GB. For some reason my result is almost half of the result mjt5282 posted on a previous page.

If you look at the tests at ServeTheHome, etc., you'll see that this is expected behavior. Their SLOG recommendation skips over M.2-based Optane offerings altogether - the one option they mention is an Optane SSD that happens to have an adapter cable. Their in-depth test here, comparing various SLOG devices, shows more detail. The key issue apparently being that motherboards expose only a PCIe x2 link to the M.2 port, which naturally handicaps transfer speed vs. the x4- or x8-based competition.

That said, I bet you paid a lot less for that drive than for a PCIe-based solution, and for small files especially you should experience a significant speed-up. There are some potential power-loss issues to consider with that SLOG - see the STH article I reference above. Intel (perhaps just for marketing reasons) steers customers to the 4800X series for a "real" SLOG because it allegedly has more layers of protection. Whether any of us would experience them in real life is another question.

Anyhow, the M.2 bottleneck is one reason I considered getting a Supermicro X10SDV-4C-7TP4F to replace my Asrock C2750: a Xeon D-1518 in a Flex ATX form factor, with two "real" PCIe x8 expansion ports instead of the many convoluted solutions the C3000 series plays with. A real HBA - an LSI 2116 - etc. It will never break Dhrystone records and consumes about 10 W more than the Avoton, but it's likely 100% adequate for my purposes, and if I ever want a "real" PCIe-based Optane solution, it's plug and play.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
The key issue apparently being that motherboards expose only a PCIe x2 link to the M.2 port, which naturally handicaps transfer speed vs. the x4- or x8-based competition.
That is a rather rare thing. Most M.2 slots wired for PCIe actually provide four PCIe 3.0 lanes. And I don't think anyone sells x8 SSDs outside of very niche applications.

The form factor does impose envelope restrictions, but that's easily solved with a U.2 adapter and SSD.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
That is a rather rare thing. Most M.2 slots wired for PCIe actually provide four PCIe 3.0 lanes. And I don't think anyone sells x8 SSDs outside of very niche applications.

The form factor does impose envelope restrictions, but that's easily solved with a U.2 adapter and SSD.
Apologies, you are correct; I should have qualified that: the C3000 series and other low-power motherboards typically expose only x2 on their M.2 connectors. I was looking at too many C3000-based boards and was frustrated by the lack of M.2 connections above x2 speed. x4 is likely more than fast enough for most uses - and the X10SDV-4C-7TP4F offers an M.2 at x4 and two PCIe x8 interfaces at a ~$150 premium vs. the Avoton C2750 series. The higher-cored X10SDV-7TP4F offers the same feature set but uses a D-1537 instead. TDP is still only 35 W.
 

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
This is my cheap / lab / test 128 GB NVMe on a Dell R720xd with 96 GB of RAM:

Code:
root@freenas[~]# diskinfo -citvSw /dev/nvd0
/dev/nvd0
        512             # sectorsize
        128035676160    # mediasize in bytes (119G)
        250069680       # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        ADATA SX8000NP  # Disk descr.
        2H1420012524    # Disk ident.
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM

I/O command overhead:
        time to read 10MB block      0.014223 sec  =  0.001 msec/sector
        time to read 20480 sectors   1.728052 sec  =  0.084 msec/sector
        calculated command overhead                =  0.084 msec/sector

Seek times:
        Full stroke:      250 iter in   0.007574 sec =   0.030 msec
        Half stroke:      250 iter in   0.007221 sec =   0.029 msec
        Quarter stroke:   500 iter in   0.015620 sec =   0.031 msec
        Short forward:    400 iter in   0.009570 sec =   0.024 msec
        Short backward:   400 iter in   0.010907 sec =   0.027 msec
        Seq outer:       2048 iter in   0.045166 sec =   0.022 msec
        Seq inner:       2048 iter in   0.043860 sec =   0.021 msec

Transfer rates:
        outside:  102400 kbytes in 0.119977 sec = 853497 kbytes/sec
        middle:   102400 kbytes in 0.117730 sec = 869787 kbytes/sec
        inside:   102400 kbytes in 0.124413 sec = 823065 kbytes/sec

Asynchronous random reads:
        sectorsize:   916230 ops in  3.000332 sec = 305376 IOPS
        4 kbytes:     909212 ops in  3.000314 sec = 303039 IOPS
        32 kbytes:    189230 ops in  3.001815 sec =  63039 IOPS
        128 kbytes:    52811 ops in  3.007165 sec =  17562 IOPS

Synchronous random writes:
         0.5 kbytes:   1020.7 usec/IO =      0.5 Mbytes/s
           1 kbytes:   1004.5 usec/IO =      1.0 Mbytes/s
           2 kbytes:   1016.0 usec/IO =      1.9 Mbytes/s
           4 kbytes:   1002.7 usec/IO =      3.9 Mbytes/s
           8 kbytes:   1014.0 usec/IO =      7.7 Mbytes/s
          16 kbytes:   1004.0 usec/IO =     15.6 Mbytes/s
          32 kbytes:   1015.0 usec/IO =     30.8 Mbytes/s
          64 kbytes:   1026.8 usec/IO =     60.9 Mbytes/s
         128 kbytes:   1343.9 usec/IO =     93.0 Mbytes/s
         256 kbytes:   1992.4 usec/IO =    125.5 Mbytes/s
         512 kbytes:   1913.6 usec/IO =    261.3 Mbytes/s
        1024 kbytes:   2422.1 usec/IO =    412.9 Mbytes/s
        2048 kbytes:   4255.9 usec/IO =    469.9 Mbytes/s
        4096 kbytes:   8180.9 usec/IO =    488.9 Mbytes/s
        8192 kbytes:  14565.4 usec/IO =    549.2 Mbytes/s
 

dak180

Patron
Joined
Nov 22, 2017
Messages
307
So I am looking at getting a SLOG in the not too terribly distant future and was hoping for some insight into the functional trade-offs of my two main contenders, given my use case.

So, first the use case: among other things, my NAS acts as a Time Machine backup server. With the latest version of macOS, SMB is the preferred way these backups happen; however, Apple requires a set of extensions on Time Machine shares that make all writes to them synchronous. These should be the only synchronous writes happening (initially 2 computers will back up this way, and as many as 4 in the future).

1 Gb/s line speed is the most this system will be expected to handle for at least 5 years, at which time I will reassess.

Now for the two that I am looking at: first is the Intel® Optane™ SSD 800P (58 GB), which is my budget option; second is the Intel® Optane™ SSD DC P4801X (100 GB), which is the professional-grade option. The 800P goes for about $100 and the P4801X goes for about $300.

The P4801X is unarguably better than the 800P; the real question is whether, given my use case, it will be better in ways that matter (and/or are noticeable) and be worth 3x the price.
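As a back-of-envelope check (my numbers, not from the posts above: 125 MB/s is the theoretical payload of a saturated 1 Gb/s link, ~540 MB/s is the large-block sync-write rate measured for the 800P earlier in this thread, and 1000 MB/s is Intel's spec-sheet write rate for the P4801X), either device has far more sync-write headroom than a 1 GbE network can deliver:

```python
# Rough headroom estimate: sustained sync-write rate vs. a saturated 1 Gb/s link.
line_rate = 1000 / 8                      # 1 Gb/s ~ 125 MB/s of payload, ignoring framing overhead
devices = {"800P": 540, "P4801X": 1000}   # MB/s (measured above / spec sheet)

for name, rate in devices.items():
    print(f"{name}: {rate / line_rate:.1f}x line rate")
```

On this arithmetic the difference between the two only shows up once the network is much faster than 1 GbE.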
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
I doubt the SLOG will make much of a difference for Time Machine. In my experience, each set of transfers is usually limited to 50MB/s at most, usually below 20MB/s. If data integrity is of utmost importance, I'd investigate using a dual (mirrored) SLOG instead. That gives you the benefit of the SSD speed increase yet reduces the probability of a missed/garbled/etc. write as well. I use two older Intel SSDs for that purpose.

I also wonder to what extent you may benefit from an L2ARC that is dedicated to metadata, to help Time Machine with its directory traversals. IIRC, research from @Cyberjock suggested that SMB benefits from L2ARC, as long as your server already has plenty of RAM. The L2ARC drive should be an SSD but does not need to be durable, power-protected, and so on.
 
Last edited:

dak180

Patron
Joined
Nov 22, 2017
Messages
307
I doubt the SLOG will make much of a difference for Time Machine.
I do not doubt that was the case with Time Machine over AFP, but there are quite a few differences in how it works over SMB (AFP did not support sync writes at all), so I am not sure that is still how it operates when it uses SMB, particularly on a machine using APFS (I believe it works on a snapshot rather than the live filesystem there).
 
Last edited:

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
I'm waiting for full Time Machine over SMB support on FreeNAS before I take the plunge. For now, I'm keeping my system at macOS Sierra, as APFS gives me the willies in the absence of a bullet-proof backup plan.

My interpretation of the information posted at or linked to from Macintouch is that Apple has been hilariously out to lunch re: informing the hard-drive-utility developers how APFS works and what options recovery folks have to recover anything at all if things go sideways.
 

seanm

Guru
Joined
Jun 11, 2018
Messages
570
dak180, it's "APFS", not "AFS".

I doubt anyone really knows if a SLOG will help much, since basically only a few people have even tried Time Machine over SMB.

Constantin, you don't *need* to stay at 10.12 like that; you can update to 10.13 without converting your disk from HFS+ to APFS, but you have to run the installer from the command line with an extra option to do that. That option was removed in 10.14, though.

It should be noted that, even in 10.14, Time Machine disks still have to be HFS+, as APFS does not yet support Time Machine.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
I doubt anyone really knows if a SLOG will help much, since basically only a few people have even tried Time Machine over SMB.
@anodos has done a ton of work to bring full TM support over SMB to FreeNAS, and I'm very thankful for his efforts. The next revision of FreeNAS will include them, which is when I will likely start the transition to SMB for my file share transfers.

Plus, if one has old Macs in the stable, it's nice to have a file server that can talk AFP to them. I plan on setting up an AFP share to store my 68K- and PowerPC-based assets, to allow me to access those files and programs in the future. It's unlikely that I'll be able to have my Mac SE/30 work with them, due to the difficulty of making it play nice with Ethernet, but I do have the ability to transfer stuff via SCSI.

Constantin, you don't *need* stay at 10.12 like that, you can update to 10.13 without updating your disk from HFS to APFS, but you have to run the installer from the command line with an extra option to do that. That option was removed in 10.14 though.
Given the many horror stories at Macintouch re: High Sierra, it is highly unlikely I'd ever try that revision of the OS. For me, it's Mojave or bust, subject to a very robust backup plan. Sierra serves me well, is still officially supported, etc. much like FreeNAS 9.10.x still enjoys strong support on the Forum here.

Most of the early teething pains associated with Mojave seem to have been addressed by now, leaving the quasi-requirement to use APFS and TM over SMB as the two biggest elephants in the room for me. I strongly disagree with Apple's decision to convert all system SSDs to APFS silently and without user consent; the steps to undo that are numerous and may result in system instability.

So I'd rather jump with both boots into a system that is as close to the intended OEM spec as possible, to ensure that I don't trigger an unforeseen data-loss event. But a strong backup system has to be part of that plan. The reviews for Mojave suggest that Apple has tried to address at least some of the quality-control issues found in High Sierra, but the company seems to be struggling with a lot of pots and not enough dedicated cooks.
 

Andrii Stesin

Dabbler
Joined
Aug 18, 2016
Messages
43
Gentlemen, maybe it's a bit off the topic of this exact thread, but would you mind sharing your knowledge, please? Here in this other thread, I asked two questions but got no definite answer so far.
1) LZ4 makes the user-visible write speed much higher compared to the "physical" speed of the array. When we add a SLOG into the game, is the SLOG written already compressed, or is it written uncompressed, with compression happening later, when (in case of a failure, reboot, and ZIL recovery) the SLOG is read and transferred into the transaction group?
2) Is a "complex" vdev (say, raid 0+1 made of 4 SSDs on a separate HBA) suitable for SLOG?
Any clues? Thanks in advance!
P.S. For this exact project, an enterprise-class NVMe drive backed with (super)capacitors is not an option. I am limited to SATA-3. Sorry.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
@Andrii Stesin - I answered in more detail in your other thread, but the short answers are:

1) Data is compressed before it hits the SLOG.
2) A complex (striped mirror) vdev is suitable for SLOG. If you have enough 6 Gbps SATA ports, consider using those instead, as there is a small amount of latency added when connecting to drives behind an HBA, and in the SLOG game latency needs to be minimized wherever possible.
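To put rough numbers on the latency point: sync-write IOPS is approximately the inverse of per-IO latency, so every microsecond added in the path comes straight off the top. In the sketch below, the 15 µs device latency is in line with the small-block results earlier in the thread, but the 25 µs HBA penalty is purely an illustrative assumption, not a measured figure:

```python
# Sketch: how added path latency erodes sync-write IOPS (IOPS ~ 1 / latency).
device_us = 15.0        # per-IO latency, in line with small-block results above
hba_penalty_us = 25.0   # hypothetical extra round trip through an HBA

direct_iops = 1e6 / device_us
behind_hba_iops = 1e6 / (device_us + hba_penalty_us)
print(f"direct: {direct_iops:.0f} IOPS, behind HBA: {behind_hba_iops:.0f} IOPS")
```

Even a modest fixed penalty cuts the achievable sync-write rate by more than half at these latencies, which is why direct-attached ports are preferred for SLOG.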
 

dak180

Patron
Joined
Nov 22, 2017
Messages
307
@anodos has done a ton of work to bring full TM support over SMB to FreeNAS, and I'm very thankful for his efforts. The next revision of FreeNAS will include them, which is when I will likely start the transition to SMB for my file share transfers.
And as @anodos explained in a recent post, all of the writes will be sync writes, which would be made better by a SLOG, especially with a Z3 pool; which is why I am rather interested in the answer to the question on the merits of the 800P vs. the P4801X for such a use case.
 

xhoy

Dabbler
Joined
Apr 25, 2014
Messages
39
Intel 760p 512 GB, no power-loss protection so we use it as an L2ARC. Oh, and it's only on x2 since there was no open x4 slot.

Code:
root@SIJN-NAS04:~ # smartctl -x /dev/nvme0
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       INTEL SSDPEKKW512G8
Serial Number:                    
Firmware Version:                   004C
PCI Vendor/Subsystem ID:            0x8086
IEEE OUI Identifier:                0x5cd2e4
Controller ID:                      1
Number of Namespaces:               1
Namespace 1 Size/Capacity:          512,110,190,592 [512 GB]
Namespace 1 Formatted LBA Size:     512
Local Time is:                      Thu Feb 21 19:19:46 2019 CET
Firmware Updates (0x14):            2 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Maximum Data Transfer Size:         64 Pages
Warning  Comp. Temp. Threshold:     75 Celsius
Critical Comp. Temp. Threshold:     80 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
0 +     9.00W       -        -    0  0  0  0        0       0
1 +     4.60W       -        -    1  1  1  1        0       0
2 +     3.80W       -        -    2  2  2  2        0       0
3 -   0.0450W       -        -    3  3  3  3     2000    2000
4 -   0.0040W       -        -    4  4  4  4     6000    8000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02, NSID 0xffffffff)
Critical Warning:                   0x00
Temperature:                        38 Celsius
Available Spare:                    100%
Available Spare Threshold:          12%
Percentage Used:                    0%
Data Units Read:                    11 [5.63 MB]
Data Units Written:                 161 [82.4 MB]
Host Read Commands:                 671
Host Write Commands:                831
Controller Busy Time:               0
Power Cycles:                       5
Power On Hours:                     1
Unsafe Shutdowns:                   2
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0

Error Information (NVMe Log 0x01, max 256 entries)

Code:
root@SIJN-NAS04:~ # diskinfo -wS /dev/nvd0
/dev/nvd0
        512             # sectorsize
        512110190592    # mediasize in bytes (477G)
        1000215216      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        INTEL SSDPEKKW512G8     # Disk descr.
        PHHH849402T4512H        # Disk ident.
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM

Synchronous random writes:
         0.5 kbytes:    999.6 usec/IO =      0.5 Mbytes/s
           1 kbytes:    999.7 usec/IO =      1.0 Mbytes/s
           2 kbytes:    999.3 usec/IO =      2.0 Mbytes/s
           4 kbytes:   1001.6 usec/IO =      3.9 Mbytes/s
           8 kbytes:   1001.1 usec/IO =      7.8 Mbytes/s
          16 kbytes:    999.9 usec/IO =     15.6 Mbytes/s
          32 kbytes:    999.6 usec/IO =     31.3 Mbytes/s
          64 kbytes:   1004.4 usec/IO =     62.2 Mbytes/s
         128 kbytes:   1003.0 usec/IO =    124.6 Mbytes/s
         256 kbytes:   1010.5 usec/IO =    247.4 Mbytes/s
         512 kbytes:   1139.7 usec/IO =    438.7 Mbytes/s
        1024 kbytes:   1892.3 usec/IO =    528.4 Mbytes/s
        2048 kbytes:   2366.7 usec/IO =    845.1 Mbytes/s
        4096 kbytes:   3691.4 usec/IO =   1083.6 Mbytes/s
        8192 kbytes:  15328.3 usec/IO =    521.9 Mbytes/s
 
Last edited:

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
The 800P is limited to a PCIe 3.0 x2 interface (1450 MB/s read / 640 MB/s write) while the P4801X features an x4 interface (2200 MB/s read / 1000 MB/s write). Based on the interface and the much higher write speeds, I'd opt for the P4801X in a SLOG application. However, I have yet to see a test showing real-life performance with the P4801X in a FreeNAS rig.
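For context on those interface limits, PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so each lane carries roughly 985 MB/s before protocol overhead; the raw link math:

```python
# Usable PCIe 3.0 bandwidth per lane: 8 GT/s with 128b/130b encoding.
per_lane_mb = 8e9 * (128 / 130) / 8 / 1e6   # ~985 MB/s per lane
print(f"x2: {2 * per_lane_mb:.0f} MB/s, x4: {4 * per_lane_mb:.0f} MB/s")
```

Notably, even an x2 link tops out near 2 GB/s, well above the 800P's quoted 640 MB/s write rate, so its sync-write ceiling looks like a device limit rather than a link limit.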
 
Last edited:

dak180

Patron
Joined
Nov 22, 2017
Messages
307
The 800P is limited to a PCIe 3.0 x2 interface (1450 MB/s read / 640 MB/s write) while the P4801X features an x4 interface (2200 MB/s read / 1000 MB/s write). Based on the interface and the much higher write speeds, I'd opt for the P4801X in a SLOG application.
That gets to a big part of my question: with 1 Gb/s line speed and a majority of the clients on Wi-Fi, will the difference in speed between them matter?
 
Joined
Dec 29, 2014
Messages
1,135
That gets to a big part of my question: with 1 Gb/s line speed and a majority of the clients on Wi-Fi, will the difference in speed between them matter?
I would say that it likely would not matter with 1G max clients. With 10G connections to FreeNAS, it definitely matters!
 