10GbE - 8Gbps with iperf, 1.3Mbps with NFS

NugentS (MVP · Joined Apr 16, 2020 · Messages 2,947)
You can boot from just about anything; as long as the system dataset is moved off the boot pool, almost anything will do. Just remember the boot pool uses the entire space of the boot disk(s).

Do not use any old SSD as a SLOG. It needs, as you have been told several times, low latency, high speed, PLP, and high endurance. There are very few SSDs that tick all of these boxes. Optane does, and there is some other, rarer stuff (RMS-375, although I suspect you may need to be sitting down when the price is mentioned).
 
Joined Apr 26, 2015 · Messages 320
I'm not using anything yet, and I did look into Optane, but the R620 doesn't seem to have anything internal I can use. Still looking around before I rebuild the box. I don't need the most performance possible, as this will mainly be for backups, nothing real-time.
 
Joined Apr 26, 2015 · Messages 320
Well, I give up, at least for today.
Maybe I bricked the card, no idea right now.
I can see the drives during the install and pick the 2 146GB drives.
I get no options to mirror or anything else.
No matter whether I pick UEFI or BIOS as the boot mode, no boot device is seen after the install.
I found some threads on this but nothing that helps so far.
When I boot, I get an F1 to continue, F2 to setup and F11 to pick the boot device.

Done for now, bad day.

[Attachment: photo_2021-12-15_17-31-11.jpg · 84.3 KB]

Samuel Tai (Moderator · Joined Apr 24, 2020 · Messages 5,399)
Note, you may also need to flash the boot ROMs (legacy and UEFI) to enable booting from downstream drives.
 

Kailee71 (Contributor · Joined Jul 8, 2018 · Messages 110)
> Do not use any old SSD as a SLOG. It needs, as you have been told several times, low latency, high speed, PLP, and high endurance. There are very few SSDs that tick all of these boxes. Optane does, and there is some other, rarer stuff (RMS-375, although I suspect you may need to be sitting down when the price is mentioned).
Actually, RMS-200s are fairly readily available and getting quite cheap now; you just need an available PCIe slot. Only 8GiB, but for 10GbE this fits the 5s txg timer perfectly. And these *rock* for performance, have PLP, and virtually unlimited endurance (one of mine has a ridiculous write volume, hundreds of PiB!). If you have a few minutes you can read up on different options here. The little 32GB Optanes also work very well, but then you need an M.2 slot or an adapter to connect to SATA/SAS; even that worked very well for me.

Hope this helps.

Kai.
 

NugentS (MVP · Joined Apr 16, 2020 · Messages 2,947)
I thought you would need more than 8GB. A single (unmodified) txg on 10Gb would max out at 6.25GB, but aren't there three transaction groups in existence at any one time? At full chat (probably highly unlikely, depending on workload) that would be a maximum of about 19GB.

I may be overthinking this
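The back-of-envelope sum above can be checked in a few lines of shell. This assumes the default 5-second txg flush interval and decimal units, as in the post:

```shell
# Rough SLOG sizing for a saturated 10Gb link (default 5s txg interval assumed).
LINK_GBPS=10
TXG_SECONDS=5
TXGS_IN_FLIGHT=3   # open, quiescing, syncing

BYTES_PER_SEC=$(( LINK_GBPS * 1000000000 / 8 ))          # 1.25e9 B/s
ONE_TXG_MB=$(( BYTES_PER_SEC * TXG_SECONDS / 1000000 ))  # 6250 MB = 6.25GB
WORST_CASE_MB=$(( ONE_TXG_MB * TXGS_IN_FLIGHT ))         # 18750 MB, i.e. ~19GB

echo "one txg at line rate: ${ONE_TXG_MB} MB"
echo "worst case (${TXGS_IN_FLIGHT} txgs in flight): ${WORST_CASE_MB} MB"
```

Which is where both the 6.25GB and ~19GB figures come from; real workloads rarely sustain line rate for a full interval.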
 

Kailee71 (Contributor · Joined Jul 8, 2018 · Messages 110)
Well @NugentS, if you consider a saturated 10GbE link, at most ~1GiB/s (rounded up) will come in. If you have slow storage behind it (like me, with only a few striped mirrors), you could tune ZFS to fill an RMS-200 for a period of 8s before transfers start to stall. All the while, though, that data is being stored sync, so it is *safe*, even across a cord pull. So yes, while it's of course not a giant SLOG device, it really boosts the perceived performance of the filer in certain use cases. If you need longer-duration sync transfers, then you should be looking at improving the storage architecture that sits "behind" (do note the quotations!!!) the SLOG. Actually, before @jgreco picks me up and dumps me in the bin, I'll say it explicitly: a SLOG is not a write cache. A larger SLOG helps, but it is not the answer.

K.
 
Joined Apr 26, 2015 · Messages 320
I was looking at PCIe based solutions as well but everything I found was hundreds of dollars.
An RMS-200 was mentioned above; something like this? Radian RMS-200?

This storage is mainly just for backing up and copying vms around when needed so I'm not sure I need super high performance.
There will also be two shares for two sets of load-balanced web services for their web pages. Currently I'm just using a Linux box running an NFS server to accomplish this. Of course, if I can get a little higher performance, I could end up using this for other functions.

I'm also under a time crunch and now cannot install the OS so have to solve this first.

>you may also need to flash the boot ROMs (legacy and UEFI) to enable booting from downstream drives

How do I confirm if this is the case? I don't want to completely trash this server by flashing all sorts of things rendering it useless and have to start over. Need to start searching on this now so I can move forward again.

BTW, did anyone in this thread notice my iperf tests to the server without any tuning? It was hitting over 8Gbps. The server has 32GB.
 

Kailee71 (Contributor · Joined Jul 8, 2018 · Messages 110)
So, just to be clear: iperf has literally nothing to do with your storage, on either end. Getting 8Gbit/s is a respectable start, and a little tuning might push this over 9Gbit/s. But I recommend getting the thing operational first; 8 vs 9Gbit/s is not a show stopper, while not being able to boot is.
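For anyone wanting to reproduce the raw-network numbers: a test like this exercises only the NICs and TCP stack, never the disks. The address below is a placeholder, and iperf3 syntax is shown (the older iperf2 flags differ slightly):

```shell
# On the TrueNAS box (server side), listen for test traffic:
iperf3 -s

# On the client: 30-second run with 4 parallel streams, which usually
# gets closer to line rate than a single stream on 10GbE:
iperf3 -c 192.0.2.10 -t 30 -P 4
```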

I'm not sure what is included in the ISO you used, but usually when flashing, boot images should be included in the process. Were they during what you did?
 

NugentS (MVP · Joined Apr 16, 2020 · Messages 2,947)
SLOG
Your best cheap option is probably the smaller Optanes in gumstick format. @Kailee71 thinks 8GB is enough for a 10Gb NIC; I would prefer one of the 16GB or 32GB units myself (well, preferably two, mirrored).
More importantly, in your use case - backup and moving VMs around - whilst it may make a difference (NFS or iSCSI), it almost certainly doesn't actually matter unless you are very much time-constrained. So at least initially I wouldn't bother with any SLOG; just test and see what happens and how it works out for you.
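One cheap way to "test and see" is to measure the gap a SLOG would close before buying one: compare an NFS copy with sync honored against one with sync disabled. The dataset name below is a placeholder; note that sync=disabled is for benchmarking only, as it makes in-flight writes unsafe across a power loss.

```shell
DS=tank/backups                # placeholder - substitute your own dataset

zfs get sync "$DS"             # default is 'standard' (honor sync requests)
zfs set sync=disabled "$DS"    # benchmark only: writes now skip the ZIL
# ... rerun the NFS copy test here; this is roughly what a very fast SLOG buys ...
zfs set sync=standard "$DS"    # restore safe behavior immediately afterwards
```

If the two runs are close, a SLOG won't help this workload; if sync=disabled is dramatically faster, a proper PLP device would be worth the money.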

As for the boot problems - I have no idea, I'm afraid. Wish I could help.
 
Joined Apr 26, 2015 · Messages 320
Correct, I was talking about iperf from the OS, not through NFS. My point is simply that, network-wise at least, aside from tuning to do better, there aren't any obvious issues. I can't wait to get to the point where I can test again, through NFS this time.

I'm trying to get a SLOG sorted now if I can, because once the machine is deployed it will be hard to get at and costly to change.
However, I wanted to get to a full working system, even without SLOG for now so I could complete and document all of the steps.
I didn't see any options to flash the boot ROMs, so I'll go back to the documentation on flashing the card to start with.
 

Kailee71 (Contributor · Joined Jul 8, 2018 · Messages 110)
Not so quick - I think with the *RMS* 8GiB is enough, but only because that's what it has, and no more. The Optane sticks you should get in as big a size as you can afford; the smaller ones also have lower performance. You can always use two 16GiB sticks to get the performance of one 32GiB stick (assuming you can connect them somehow). However, all of these are really affordable: you should be able to find the RMS for $150 or less, and a 32GiB Optane for less than that. Trawl through eBay and you'll probably find something applicable.
 
Joined Apr 26, 2015 · Messages 320
The only options on this system are one internal SATA port - with no power available - or a PCIe card. There are some SD slots, which aren't of any value. I was also looking at a PCIe SAS card, because I have room to put a 15K SAS drive in the chassis, but again, no power. For power, I was then looking for maybe something like a USB-to-SAS power adapter/cable. There might be a 5VDC output on the motherboard, but the tech docs I have don't get into that.

While I'm doing the above, I'd like to get to a build that works so I can do it again once I confirm doing a SLOG device or being good without one if I am simply out of time.

First I need to solve this boot problem today while also having to work on the problem of our giant pine tree falling into our neighbors yard and causing damage.
 
Joined Apr 26, 2015 · Messages 320
> Note, you may also need to flash the boot ROMs (legacy and UEFI) to enable booting from downstream drives.

OK, so how do I confirm whether that is my next step?
 
Joined Apr 26, 2015 · Messages 320
I have a free PCIe slot in the chassis. What about something like this:
https://www.ebay.com/itm/304240858816 with an M.2 card like an Optane M10, or an H10 combo?
I would have these by next week, and should have the booting issue solved by then.
 

NugentS (MVP · Joined Apr 16, 2020 · Messages 2,947)
What size slot, and which version of PCIe?
Also, looking through the BIOS and/or manual - can you find any mention of bifurcation?

BTW - looking at the H10 specs, that looks like a cynical attempt by Intel to persuade you to buy an Optane that isn't actually an Optane (or has only a tiny bit of Optane as a cache). Just don't.

Ebay Link to 32GB Optane as an example of what I believe is a good unit
That ebay card you linked to ought to work
 

Samuel Tai (Moderator · Joined Apr 24, 2020 · Messages 5,399)
> Ok, so how do I confirm if that is my next step?

If you're trying to boot from downstream drives, but they don't appear, it's a pretty good sign you'll need to flash the boot ROMs back onto your H710.
 
Joined
Apr 26, 2015
Messages
320
To me, nothing is that clear right now.

I read the entire doc, seeing this part over and over:

> If you need to boot from drives connected to this adapter, you'll need to flash a boot image to it. Otherwise, skip it. This is what gives you the "press blahblah to enter the LSI boot configuration utility" text when the server boots. To flash the regular BIOS boot image:

While this seemed to be the problem, I wasn't sure whether it was safe to do at this point, but decided to go ahead and try it:

# flashboot /root/Bootloaders/x64sas2.rom

Then I was able to see Avago and the drives when the system boots.
I then installed TrueNAS and it finally boots, but that was a test using just one drive, so now I'll see if I can get a mirror.
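For anyone following along without that `flashboot` helper script, the equivalent is usually done with LSI's sas2flash utility directly. The file names below come from the typical LSI/Avago firmware package for SAS2008-family cards such as the crossflashed H710; verify them against your own package before flashing:

```shell
# -o enables advanced mode, -f flashes firmware, -b adds a boot ROM.
# 2118it.bin  = IT-mode firmware image (name varies by package - verify yours)
# mptsas2.rom = legacy BIOS boot ROM
# x64sas2.rom = UEFI boot ROM
sas2flash -o -f 2118it.bin -b mptsas2.rom -b x64sas2.rom

# Afterwards, confirm firmware and boot ROM versions on the card:
sas2flash -list
```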
 

Kailee71 (Contributor · Joined Jul 8, 2018 · Messages 110)
Do not get the H10 Optanes for SLOG. They're hybrid drives that use some 3D XPoint as cache; they're not the kind you want. You're after the M10 devices, with 16GiB, 32GiB, or better yet 64GiB total capacity. They all need only 2 lanes, but double in transfer bandwidth with each step up in size; the 64GiB parts will quite happily sustain upwards of 600MiB/s. All of them have the very low latency you want for SLOG, though the smaller you go, the higher the latency. Patrick@sth did a great intro here. Then after that, go back to the SLOG thread that was linked earlier (it was, right?).

In one of my boxes I use two of this kind of adapter; it's quite clever because power comes from the PCIe bus for both M.2 drives - the top drive links via SATA to any HBA you like, while the bottom one does traditional NVMe over PCIe (the Optanes). This could maybe help with your power issue if you don't have SATA power anywhere (I have the same issue in my DL380s and DL560s). Do check whether your system can boot off NVMe, and whether it supports PCIe bifurcation. If it does both, things get simpler still, because you can circumvent SATA completely.
 
Joined Apr 26, 2015 · Messages 320
Up and running, I see the eight 1TB drives now when wanting to create a pool. I thought they were 600GB.
Anyhow, during the install I picked the two 146GB drives, but the installer never asked or mentioned anything about mirroring.
Therefore, I assume that by picking both drives, they will be kept in sync by the OS.
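Rather than assuming, whether the installer actually mirrored the two boot drives can be confirmed from a shell once the system is up. The boot pool is named boot-pool on recent TrueNAS releases (freenas-boot on older ones):

```shell
# Expect a 'mirror-0' vdev with both 146GB disks listed as ONLINE;
# two separate top-level disks instead would mean a stripe, not a mirror:
zpool status boot-pool
```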

It is mentioned that boot time is longer and it certainly is. For production, this could be a problem.

This is before adding a SLOG device, which I'm not sure I'll have time to do just yet, though I'm still trying to confirm what will work. I really cannot justify more than $100.00 on a SLOG solution, as I've already gone way over budget on a lot of other hardware for this small remote network I needed to build.

Before I do anything else, am I at the correct setup so far, at this point?
 