10GbE - 8Gbps with iperf, 1.3Mbps with NFS

Joined Apr 26, 2015 | Messages 320
Is the SLOG device something I can add later?
 

Samuel Tai ("Never underestimate your own stupidity") | Moderator | Joined Apr 24, 2020 | Messages 5,399
You can add SLOG after the installation.
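(For reference, from the CLI this is a single command; the pool and device names below are placeholders, and on TrueNAS the supported way is to extend the pool with a Log vdev in the GUI.)

Code:

zpool add tank log nvd0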
 
Joined Apr 26, 2015 | Messages 320
Great, so it looks like I can go back to the first page and start going through the suggestions again.
I'll deal with the IBM stuff later; I know they do JBOD, but I don't think they hand the drives off as individual disks.
 

Kailee71 | Contributor | Joined Jul 8, 2018 | Messages 110
What exactly are you referring to when you say IBM stuff? And... JBOD? Are you back to using a RAID controller? I'm getting really confused...
 
Joined Apr 26, 2015 | Messages 320
This thread is me learning where I went wrong and trying to better understand TN.
I've been using FN and TN for years but, up until now, never knew I was doing it wrong.
As for the IBMs, I posted about that earlier in the thread and someone gave some thoughts on how to handle it.
And this really should be a different thread, because it's going to keep extending an already long one in a way that might confuse someone looking for something along the lines of my original post.

Years ago, I set up FN with a bunch of IBM DS3524 storage chassis.
These can be used standalone with iSCSI, or they can be connected to devices that handle LUNs.
In my case, I put a QLogic card in the FN server and connected the IBMs to that.
I created logical (RAID) drives on the IBMs and FN could see them.
I added the LUNs as pools, then sliced them up using sharing.
While everyone here seems to say you'll lose data for sure, it has never failed. There has never been any loss at all; it simply works, and I can add as many storage devices as I want.

Anyhow, apparently I've been doing that wrong, even though it works very well, so I'd like to take another look at it once I'm done with this.
 
Joined Apr 26, 2015 | Messages 320
BTW, for those who suggested this device... I ordered a PCIe M.2 adapter and a 32GB Optane M10.
I also ordered a couple of 100GB SSDs to use for the OS mirror, so I'll be rebuilding this next week.
 
Joined Apr 26, 2015 | Messages 320
Then seriously consider striped mirrors instead of raidz if you can afford the small cost in capacity.

Not sure what is being suggested. The first suggestion was to create one pool using all the individual storage devices as a raidz2, which is what I did. That used the eight remaining 1TB drives as one 5TB pool. You are suggesting striped mirrors, so I need to review the entire thread again because maybe I've missed something.
 
Joined Apr 26, 2015 | Messages 320
A quick test without tuning or SLOG. Sync is disabled.
I have a shut-down VM on ESXi host 1 and I'm transferring it to the TN NFS storage.
Looking at the TN dashboard and NIC status, the transfer maxes out at a bit over 200Mbps on the 10GbE NIC.
There is barely any CPU usage; memory shows 4GB+ for services and 2GB for ZFS cache.
The ESXi host says it's hitting 900Mbps or so.
From a VM on the host, I get under 1Gbps.
Better than 1.3Mbps when I started at least :).

Update
I added the tunables found here, which helped a bit, but I'm still missing something since I can barely get over 1Gbps.
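(The exact tunables from that link aren't reproduced here, but they are sysctls along these lines; the values below are illustrative examples for 10GbE socket buffers, not a recommendation.)

Code:

sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216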

$ pv 50gb-file > /mnt/50gb-file
47.1GiB 0:01:03 [ 762MiB/s] [============================================================>] 100%

Oddly, when I copy a VM from an ESXi datastore to the shared NFS, it's much slower than copying the 50GB file from a VM.
 
Joined Apr 26, 2015 | Messages 320
Well, I have no idea what I've done wrong.
I followed most if not all of the suggestions, but transfer rates are still nowhere near the 10Gb mark; I'm getting at most a bit over 1Gbps.
I even upgraded the memory to 90GB since I had a mix of 8GB and 4GB chips. No change. Surely it can't be solely because I'm not using a SLOG yet?

Is it obvious to someone in this thread what I've missed?
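(One way to narrow it down is a local write test on the TN box itself, to separate pool speed from NFS/network speed. The dataset path below is just an example, and /dev/zero compresses away if compression is enabled, so the result will be optimistic.)

Code:

dd if=/dev/zero of=/mnt/pool01/ds01/testfile bs=1M count=20000 status=progress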

[Screenshots: 2021-12-18_131927.jpg, 2021-12-18_131955.jpg, 2021-12-18_132028.jpg]
 

Samuel Tai ("Never underestimate your own stupidity") | Moderator | Joined Apr 24, 2020 | Messages 5,399
How have you set up your pool?
 
Joined Apr 26, 2015 | Messages 320
How have you set up your pool?
I created one pool made up of the eight 1TB drives the system can see once installed, with sync disabled; otherwise, all default options.
I then created a couple of datasets from it and shared one over NFS.
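(For reference, the sync setting can be checked and changed from the shell per pool or per dataset; datasets inherit it unless overridden.)

Code:

zfs get sync pool01
zfs set sync=disabled pool01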
[Screenshots: 2021-12-18_155721.jpg, 2021-12-18_155923.jpg]
 

Samuel Tai ("Never underestimate your own stupidity") | Moderator | Joined Apr 24, 2020 | Messages 5,399
Please provide zpool status -v pool01.
 
Joined Apr 26, 2015 | Messages 320
Code:

# zpool status -v pool01
  pool: pool01
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        pool01                                          ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/47dedc29-5f4c-11ec-9d7d-90b11c1dd891  ONLINE       0     0     0
            gptid/47ed6a12-5f4c-11ec-9d7d-90b11c1dd891  ONLINE       0     0     0
            gptid/47b17d6a-5f4c-11ec-9d7d-90b11c1dd891  ONLINE       0     0     0
            gptid/48206d79-5f4c-11ec-9d7d-90b11c1dd891  ONLINE       0     0     0
            gptid/4814e29c-5f4c-11ec-9d7d-90b11c1dd891  ONLINE       0     0     0
            gptid/482bf623-5f4c-11ec-9d7d-90b11c1dd891  ONLINE       0     0     0
            gptid/484c0b36-5f4c-11ec-9d7d-90b11c1dd891  ONLINE       0     0     0
            gptid/48b1fab0-5f4c-11ec-9d7d-90b11c1dd891  ONLINE       0     0     0

errors: No known data errors
 

Samuel Tai ("Never underestimate your own stupidity") | Moderator | Joined Apr 24, 2020 | Messages 5,399
Try rebuilding your pool as a 4-way stripe of 2-way mirrors.

Also, did you implement the tunables for TCP Cubic?
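(The CLI equivalent of that layout is shown below, with placeholder device names; on TrueNAS you'd actually build it through the pool manager in the GUI so it handles partitioning and gptids for you.)

Code:

zpool create pool01 \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7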
 
Joined Apr 26, 2015 | Messages 320
I'll do that next, and yes, as documented above, I did try with and without the tunables (this one).

Should I try the CUBIC one now or after the pool change?

This has to be for testing only since I won't have the amount of space I'll need with only 8 1TB drives.
 

Samuel Tai ("Never underestimate your own stupidity") | Moderator | Joined Apr 24, 2020 | Messages 5,399
You can try sysctl-type tunables any time. Loader-type tunables only take effect at the next boot.
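(Assuming the CUBIC tunables in question are the standard FreeBSD ones, the split looks like this: a loader tunable loads the module at boot, and a sysctl selects the algorithm at runtime.)

Code:

# loader-type tunable (System -> Tunables, type "loader"); needs a reboot
cc_cubic_load="YES"

# sysctl-type tunable; takes effect immediately
sysctl net.inet.tcp.cc.algorithm=cubic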
 
Joined Apr 26, 2015 | Messages 320
Yes, I tried it after reboot.
I'm having a hard time creating this configuration. Could you walk me through it, since the system keeps telling me I'm doing the wrong thing?
 

Samuel Tai ("Never underestimate your own stupidity") | Moderator | Joined Apr 24, 2020 | Messages 5,399
Add 2 drives to a vdev; this creates a mirror. Click repeat to create another 2-way mirror vdev. Repeat another 2 times.
 
Joined Apr 26, 2015 | Messages 320
Add 2 drives to a vdev; this creates a mirror. Click repeat to create another 2-way mirror vdev. Repeat another 2 times.
OK, done. The problem is that this gives me 3.5TB of space and I have 5TB of backups, so I'm going to have to miss my deadline again and either order larger drives or configure something a bit different.
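(Roughly, the capacity works out like this, ignoring swap and metadata overhead:)

Code:

raidz2 of 8 x 1TB drives:   6 x 1TB of data space  ~= 5.4TiB  (shows as roughly 5TB)
4 x 2-way mirrors of 1TB:   4 x 1TB of data space  ~= 3.6TiB  (shows as roughly 3.5TB)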

[Screenshot: 2021-12-18_185430.jpg]
 

Samuel Tai ("Never underestimate your own stupidity") | Moderator | Joined Apr 24, 2020 | Messages 5,399
Yes, unfortunately, stripes of mirrors are the only way to reach high IOPS with spinners; this necessarily sacrifices capacity. Please run zpool status -v pool01 again to verify you have the correct topology.
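(The output should show four mirror vdevs at the top level, roughly along these lines; the gptids will differ.)

Code:

        NAME                STATE     READ WRITE CKSUM
        pool01              ONLINE       0     0     0
          mirror-0          ONLINE       0     0     0
            gptid/...       ONLINE       0     0     0
            gptid/...       ONLINE       0     0     0
          mirror-1          ONLINE       0     0     0
            gptid/...       ONLINE       0     0     0
            gptid/...       ONLINE       0     0     0
          mirror-2          ONLINE       0     0     0
            ...
          mirror-3          ONLINE       0     0     0
            ...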
 