SOLVED Now an X11SPi-TF / Intel Xeon Silver 4214 / Logic Case SC-4316

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
Hi all,

I've been following this forum and reading advice and posts over the last couple of months on multiple topics, from hardware recommendations to ZFS, vdevs, controllers, SLOG, L2ARC, etc.

Our current use case is mostly data archiving, but we are also exploring new technologies: Kubernetes, OpenShift, and VMs (Windows and Linux), which, from some testing, bhyve will be more than sufficient to handle.

The new build targets a small business / home office scenario of up to 10 users max. File sharing and file sync (SMB/NFS/AFP) will be the primary use, with some virtualization running for research and development.

Below is our bill of materials, on which I would like to get your feedback.


Asset type | Description | Quantity | Reasoning
case | SilverStone RM43-320-RS Rackmount Storage, 4HE | 1 | go big for future upgrades
mb | Supermicro X13SAE-F | 1 |
cpu | Intel Core i9-13900T, 8C+16c/32T, 1.10-5.30GHz, tray (32 threads) | 1 |
ram | Kingston Server Premier DIMM 32GB, DDR5-4800, CL40-39-39, ECC, on-die ECC | 4 |
cpu fan | Noctua NH-D12L | 1 |
psu | Seasonic Prime TX-1000 1000W ATX 2.4 | 1 |
boot ssd | Solidigm SSD D3-S4520 240GB, 2.5", SATA | 2 |
slow storage | Western Digital WD Gold 2TB (~7 TiB practical storage capacity using RAID-Z2) | 6 | current data storage is using 1 TiB
fast storage | Solidigm SSD D3-S4520 1.92TB, 2.5", SATA (mirrored vdev) | 3 | current virtualization storage is using 500 GiB
controller | Broadcom SAS 9300-8i, PCIe 3.0 x8 | 1 |
optane | Intel Optane SSD 900P 280GB, PCIe 3.0 x4 | 2 | optional, this is no longer showing as available on geizhals.eu
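For the record, the "~7 TiB practical" figure for the slow storage is just back-of-the-envelope arithmetic (assuming RAID-Z2 over the six drives, and before ZFS metadata and slop space overhead):

Code:
6 x 2 TB in RAID-Z2  -> 2 drives' worth of parity
usable raw: 4 x 2 TB = 8 TB = 8 * 10^12 bytes ≈ 7.3 TiB
minus ZFS metadata/slop overhead -> roughly 7 TiB practical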


I also have a list of questions/doubts on which I would like to get feedback from the community.
  1. for the chassis we selected SilverStone, what is the feedback on this and other brands like InterTech and Fantec? Which has the best quality?
  2. for MB/CPU/RAM we are planning to use the latest Intel i9 CPU with ECC support and a total of 32 threads; however, this is a desktop CPU. What other options exist that support ECC and are energy efficient?
  3. for magnetic storage we have used WD Gold enterprise-class drives in the past and would like to continue using them for the critical data archive
  4. for VMs and technology research the proposed option is Solidigm, since it is enterprise class (includes PLP and high TBW)
  5. on the NICs and SLOG topic: since most of the traffic is file sharing, with some generated by the VMs, is there any point in having a SLOG / Intel Optane SSD? Knowing also that the NIC available on the MB is limited to 2.5GbE and, at the moment, our switches are 1GbE
  6. for L2ARC, given the use cases we have enumerated, my understanding is that there is probably no gain at all
  7. last but not least, for such a system and demand, does it make sense to get a quote from iXsystems, or will this always come down to a TrueNAS Mini, which according to the website is limited to 32GB RAM? (Also, I wouldn't have half the fun of building it)

Thank you for your time and inputs.

Regards,
MrJames.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
for the chassis we selected SilverStone, what is the feedback on this and other brands like InterTech and Fantec? Which has the best quality?

SilverStone makes some reasonable stuff, but you really should consider a Supermicro. These are head and shoulders above the rest: the best non-vendor chassis you can get.

for MB/CPU/RAM we are planning to use the latest Intel i9 CPU with ECC support and a total of 32 threads; however, this is a desktop CPU. What other options exist that support ECC and are energy efficient?

Why not just get the equivalent Xeon? Shockingly, people seem to think that there's some big difference between server and desktop CPU's. There isn't. The Xeons do not contain miniature space heaters that cause them to eat 4x the energy. People just see that Dell sells all these dual CPU heavy hitters that take a thousand watts plus, then look at their desktop that idles at 50 watts, and fail to notice all the differences. Lower end Xeon single socket CPU's like the Xeon E-2388G have a TDP of 95W, 5GHz+ core speed, 8 cores, integrated GPU, up to 128GB RAM, etc.

I would also recommend avoiding X13 gen. Go a bit older. CPU support for latest gen hardware is always a bit of a boggle, and you may be opening yourself up for unnecessary pain by going cutting edge.
 

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
Hi @jgreco, appreciate your feedback.
SilverStone makes some reasonable stuff, but you really should consider a Supermicro. These are head and shoulders above the rest: the best non-vendor chassis you can get.
I think I will stick with SilverStone because, when looking at Supermicro chassis with at least 12 hot-swap bays, I see prices start at 1.2k EUR.

Why not just get the equivalent Xeon? Shockingly, people seem to think that there's some big difference between server and desktop CPU's. There isn't. The Xeons do not contain miniature space heaters that cause them to eat 4x the energy. People just see that Dell sells all these dual CPU heavy hitters that take a thousand watts plus, then look at their desktop that idles at 50 watts, and fail to notice all the differences. Lower end Xeon single socket CPU's like the Xeon E-2388G have a TDP of 95W, 5GHz+ core speed, 8 cores, integrated GPU, up to 128GB RAM, etc.

I would also recommend avoiding X13 gen. Go a bit older. CPU support for latest gen hardware is always a bit of a boggle, and you may be opening yourself up for unnecessary pain by going cutting edge.
The option you presented, the Xeon E-2388G, is 200 EUR above the i9 in terms of price, but it has half the threads, 16 vs 32.
I know that for real data centres this is not relevant, but for our use cases I believe it is more efficient to have lower base clocks with multiple threads available. So when there is demand for intensive work it will burst to the max clock while still presenting multiple CPU threads to the running processes. I would say that for our use case the server will be idle 12 out of 24 hours. Not sure if you share the same view regarding CPU threads and base clocks.

Regarding the latest generation, I did read some posts about not going cutting edge, but they were from back in April 2022. I guess I need to explore older options, which will also save me some money.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Nothing has fundamentally changed since 2022: X13 is still the latest generation, and Intel's hybrid architecture is still not supported by TrueNAS.
X12 or older means DDR4, and savings. If you do want more than 8C/16T (it's not clear whether your workload actually needs it), look into X11 with either a mid-range 1st/2nd generation Xeon Scalable, or an X11SRL-F/X11SRM-F with a Xeon W-2100/2200; either case means RDIMM, and further savings on RAM.
 

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
Nothing has fundamentally changed since 2022: X13 is still the latest generation, and Intel's hybrid architecture is still not supported by TrueNAS.
X12 or older means DDR4, and savings. If you do want more than 8C/16T (it's not clear whether your workload actually needs it), look into X11 with either a mid-range 1st/2nd generation Xeon Scalable, or an X11SRL-F/X11SRM-F with a Xeon W-2100/2200; either case means RDIMM, and further savings on RAM.
Cheers @Etorix, I got your point, and that's also why @jgreco was recommending an older CPU. This is something I had overlooked: Intel's hybrid architecture (P and E cores) isn't supported on TrueNAS 13, and very likely won't be on the upcoming FreeBSD 14 either, from what I read on their forums.

Looks like I need to redo my research and look at the X11 generation. Or maybe I can revisit an old draft that I had using AMD Ryzen.

Once I am done with my research, I will reply to this post.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think I will stick with SilverStone because, when looking at Supermicro chassis with at least 12 hot-swap bays, I see prices start at 1.2k EUR.

Depends on local availability. Regrets. Perhaps not entirely in favor of SilverStone, but I will say that I buy stuff like their SDP11 units with zero issues or complaints. Clean design, well engineered and clearly not just slapped together in some Asian sweatshop. It's just that the Supermicro stuff has the advantage of immense flexibility and generations of evolution. Your SilverStone probably won't start on fire or anything like that. :smile:

The option you presented, the Xeon E-2388G, is 200 EUR above the i9 in terms of price, but it has half the threads, 16 vs 32.
I know that for real data centres this is not relevant, but for our use cases I believe it is more efficient to have lower base clocks with multiple threads available. So when there is demand for intensive work it will burst to the max clock while still presenting multiple CPU threads to the running processes. I would say that for our use case the server will be idle 12 out of 24 hours. Not sure if you share the same view regarding CPU threads and base clocks.

CPU burst speeds do not apply when more than a certain number of cores are active. My perspective is that for most NAS workloads, including SMB, NFS, etc., higher base core speed with a lower core count is preferable. However, if doing other work such as dedup, high compression, jails/containers/VM's, you might well benefit from a larger number of cores. In such cases I am mostly interested in the peak GHz * core product (2.7GHz * 12 cores = 32.4 GHz, etc) but rarely do I find sheer number of cores alone to be meaningful. That is mostly just useful in virtualization environments where you are selling dedicated cores or are locking jobs to cores (same thing more or less). Intel's CPU selection has exploded in recent years so that you can really optimize for various aspects of workload-oriented CPU sizing. At a price, of course.
 

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
Thanks to both of you for the feedback. Meanwhile, I have another proposal, which I will add below.

Depends on local availability. Regrets. Perhaps not entirely in favor of SilverStone, but I will say that I buy stuff like their SDP11 units with zero issues or complaints. Clean design, well engineered and clearly not just slapped together in some Asian sweatshop. It's just that the Supermicro stuff has the advantage of immense flexibility and generations of evolution. Your SilverStone probably won't start on fire or anything like that. :smile:

Tip, buy a used version!

Looks like this build is not going to be as "pro" as I wanted. I am still looking at SilverStone. I understand it is not going to be as shiny and polished as Supermicro, but the fact that the SilverStone won't start on fire is reassuring.

Maybe it is just my lack of knowledge about the market, but I couldn't find any European shops that sell used Supermicro chassis. Some shops list used rack servers, but they are mostly complete builds, not just the chassis. Also, most of them don't carry Supermicro, just the HP, Dell and Lenovo brands.

And since this is still quite an investment, I really want to avoid all the customs clearance.


CPU burst speeds do not apply when more than a certain number of cores are active. My perspective is that for most NAS workloads, including SMB, NFS, etc., higher base core speed with a lower core count is preferable. However, if doing other work such as dedup, high compression, jails/containers/VM's, you might well benefit from a larger number of cores. In such cases I am mostly interested in the peak GHz * core product (2.7GHz * 12 cores = 32.4 GHz, etc) but rarely do I find sheer number of cores alone to be meaningful. That is mostly just useful in virtualization environments where you are selling dedicated cores or are locking jobs to cores (same thing more or less). Intel's CPU selection has exploded in recent years so that you can really optimize for various aspects of workload-oriented CPU sizing. At a price, of course.

Agreed, but at the same time I don't see, for my use case, a very specific need for high peak GHz. I would prefer a few more threads since I will be doing some virtualization; not to the level where a certain number of cores is pinned to a VM, but I still think it is preferable.


Meanwhile, I also re-evaluated an old proposal from a couple of months ago to use AMD Ryzen, and went through a lot of threads on this forum about it again. I also read about a test in which someone overclocked a build to exercise ECC on consumer AMD hardware: it corrected a single-bit error but failed to detect a double-bit error.
So in the end, my conclusion is simple: if you care about your data and want to move to server-grade memory, use Intel (or AMD, but with server-class CPUs, which is way more expensive). The alternative would be to continue using non-ECC memory.


So I am now planning to go with X11 and a Xeon Silver. I have selected Samsung RDIMMs because they are still available on the market and are listed by Supermicro as tested successfully.

Asset type | Description | Quantity | Reasoning
case | SilverStone RM43-320-RS Rackmount Storage, 4HE | 1 | go big for future upgrades
mb | Supermicro X11SPi-TF retail | 1 |
cpu | Intel Xeon Silver 4214, 12C / 24T, 2.20-3.20GHz | 1 |
ram | Samsung RDIMM 32GB, DDR4-3200, CL22-22-22 - M393A4K40EB3-CWE | 8 |
cpu fan | Noctua NH-D9 DX-4189 4U | 1 |
psu | Seasonic Prime TX-1000 1000W ATX 2.4 | 1 |
boot ssd | Solidigm SSD D3-S4520 240GB, 2.5", SATA | 2 |
slow storage | Western Digital WD Gold 2TB (~7 TiB practical storage capacity using RAID-Z2) | 6 | current data storage is using 1 TiB
fast storage | Solidigm SSD D3-S4520 1.92TB, 2.5", SATA (mirrored vdev) | 3 | current virtualization storage is using 500 GiB
controller | Broadcom SAS 9300-8i, PCIe 3.0 x8 | 1 |


Any additional comments / different points of view?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Looks fine… if you're aware that you have an LGA3647 CPU and an LGA4189 cooler, which will require an adapter kit from Noctua.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
There is quite some room for interpretation when it comes to your workload/use-case. On the other hand the latter is crucial to determine how good a fit a certain hardware configuration is. Especially virtualization, containers, etc. that you mentioned are not described in detail. But that detail would be very important.

Also, when you write about file sharing this can mean very different things. Having ten people work on office documents that are saved every minute to the NAS is very(!) different from ten people doing video editing and scrubbing an 8k timeline. So that level of detail would help.
 

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
Looks fine… if you're aware that you have an LGA3647 CPU and an LGA4189 cooler, which will require an adapter kit from Noctua.
Thanks for pointing it out. I just confirmed the compatibility of the NH-D9 DX-3647 4U on the Noctua Compatibility Center. You just saved me the money of ordering an additional adapter kit.


There is quite some room for interpretation when it comes to your workload/use-case. On the other hand the latter is crucial to determine how good a fit a certain hardware configuration is. Especially virtualization, containers, etc. that you mentioned are not described in detail. But that detail would be very important.

Also, when you write about file sharing this can mean very different things. Having ten people work on office documents that are saved every minute to the NAS is very(!) different from ten people doing video editing and scrubbing an 8k timeline. So that level of detail would help.
Agree. I should have clarified earlier.

The main aim of this system is definitely building a resilient NAS solution for home and small business. This includes backups of ISO images, documentation (PDF, PPTX, etc.), financial information, video and other media storage (drone captures).

The main use case won't be, for sure, having users editing or scrubbing video or modifying documents directly from the NAS.

As for the virtualization and containerization, it won't have any services running for clients. The purpose is internal learning; it will be used as a playground.

We already have a system running KVM for virtualization with 10 VMs, and its CPU usage sits around 10-15%, mostly idle; this is where we have some Rocky Linux machines running Kubernetes and Node.js, plus some Windows machines. The idea is to consolidate onto the NAS solution to save some energy.



I am planning to start ordering a few of the components already, e.g. the disks and the SSDs.
Hopefully, if there is any major issue with my BOM, particularly on the CPU/MB/RAM side, there will still be some time for debate.
 

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
Hi there community, it's been a while since my last post.

A few things have changed in my BOM. For the case, I moved away from the expensive and more complete solution like Supermicro and just bought something from Logic Case. I will share my feedback once it is ready, but replacing the stock fans is a must.

I started buying the components; I am just waiting on the Ultrastar drives and the Solidigm D3-S4620.

On the HBA topic: back in June I opted for the Broadcom HBA 9405W-16i, but it's no longer available (with no ETA). Can someone comment on whether the Supermicro AOC-S3616L-L16iT is a good option? Its chipset is the Broadcom SAS3616, so in principle it should be fine?

My current BOM is now as follows:
Components
1x Logic Case SC-4316
1x Supermicro X11SPi-TF retail
1x Intel Xeon Silver 4214, 12C / 24T, 2.20-3.20GHz
8x M393A4K40CB2-CVF - Supermicro (Samsung) 32GB 288-Pin DDR4 2933 (PC4 24300) Server Memory (MEM-DR432LC-ER29)
1x Seasonic Prime TX-1600 1600W ATX 2.4
1x Noctua NH-D9 DX-3647 4U
2x Solidigm D3-S4520 480GB, 2.5" (6.4cm), SATA 6Gb/s, 3D-NAND TLC (SSDSC2KB480GZ01)
6x Western Digital 4TB Ultrastar DC HC310 3.5" SATA - HUS726T4TALN6L4
3x Solidigm SSD D3-S4620 1.92TB, 2.5", SATA
1x Broadcom HBA 9405W-16i, PCIe 3.1 x16, or
1x Supermicro AOC-S3616L-L16iT, PCIe 3.0 x16


Getting the components one by one takes time, but hopefully I should be ready to start the build phase before Christmas and will share some photos of the build details.

Thank you.
 

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
I didn't do my homework on the last post... The Supermicro AOC-S3616L-L16iT, PCIe 3.0 x16, uses the Broadcom SAS3616 chipset.

According to this post from jgreco:
"Any of them that run the IT firmware and show up under the MPR driver should be fine."

and the supported-devices list from mpr(4):
(...)
• Broadcom Ltd./Avago Tech (LSI) SAS 3616 (16 Port SAS/PCIe)


Supermicro AOC-S3616L-L16iT should be fine.
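Once the card arrives, a quick way to double-check which driver claims it should be something along these lines (just a sketch; the exact dmesg wording may differ):

Code:
# see whether the mpr(4) driver attached to the card
dmesg | grep -i mpr

# the driver's supported-controller list is also in its man page
man 4 mpr | grep -i 3616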
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
"Any of them that run the IT firmware and show up under the MPR driver should be fine."

I will note that this is somewhat speculative in nature; most of the forum users here use 9200 or 9300 silicon (SAS200x/SAS230x/SAS300x), which has been tested to death. There is some ambiguity in that list because stuff like the SAS3108 shows up there, which doesn't seem to support IT firmware, yet MPR claims to support it. Quite possibly a typo or summarization error. Newer controllers apparently have firmware that can safely do HBA mode via the MRSAS driver when the card is configured correctly, but you won't find lots of people here with experience with this since most of us are cheapskates and shoot for the cheapest viable option.
 

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
I will note that this is somewhat speculative in nature; most of the forum users here use 9200 or 9300 silicon (SAS200x/SAS230x/SAS300x), which has been tested to death. There is some ambiguity in that list because stuff like the SAS3108 shows up there, which doesn't seem to support IT firmware, yet MPR claims to support it. Quite possibly a typo or summarization error. Newer controllers apparently have firmware that can safely do HBA mode via the MRSAS driver when the card is configured correctly, but you won't find lots of people here with experience with this since most of us are cheapskates and shoot for the cheapest viable option.
Hi @jgreco, understood.

I would also like to apologize for the long hiatus, but between waiting for hardware to come back in stock and the Christmas season delaying deliveries, it took me until the beginning of the year to get most of the pieces.

In the meantime, this month I finally received the cables I was missing (I had to order additional Molex cabling as well as the MiniSAS HD to MiniSAS HD cables).

Everything is mounted and ready to start! I didn't want to ask for a shortcut, but (:smile:), are there any community guides or walkthroughs that help to:
  • confirm the SAS controller is in IT mode and run a few tests to ensure it is fully compatible with TrueNAS?
  • run a few performance tests on the disks and SSDs to ensure they are up to spec (and not some box with an SD card inside)?

The bill of materials ended up like the list below. I replaced all stock fans with similar Noctua. The SAS controller was really hard to get; even this one took ages to arrive, and stock disappeared very fast between the end of October and November last year.

Component | Notes
1x Logic Case SC-4316 |
1x Supermicro X11SPi-TF retail |
1x Intel Xeon Silver 4214, 12C / 24T, 2.20-3.20GHz |
8x M393A4K40CB2-CVF - Supermicro (Samsung) 32GB 288-Pin DDR4 2933 (PC4 24300) Server Memory (MEM-DR432LC-ER29) |
1x Seasonic Prime TX-1600 1600W ATX 2.4 |
1x Noctua NH-D9 DX-3647 4U |
2x Solidigm D3-S4520 480GB, 2.5" (6.4cm), SATA 6Gb/s, 3D-NAND TLC (SSDSC2KB480GZ01) | boot ssd, mirrored
6x Western Digital 4TB Ultrastar DC HC310 3.5" SATA - HUS726T4TALN6L4 | planned for 2 vdevs with 3-way mirror
3x Solidigm SSD D3-S4620 1.92TB, 2.5", SATA | planned for 1 vdev with 3-way mirror
1x Supermicro AOC-S3616L-L16iT, PCIe 3.0 x16 | only SAS controller available
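For reference, the planned pool layout would be roughly equivalent to the following (just a sketch; device names are illustrative, and I will actually create the pools through the TrueNAS UI):

Code:
# HDD pool: two vdevs, each a 3-way mirror of the HC310s
zpool create tank mirror da0 da2 da4 mirror da5 da7 da8

# SSD pool: one vdev, a 3-way mirror of the D3-S4620s
zpool create fast mirror da1 da3 da6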


Meanwhile, I am posting some pictures to show how it is looking currently.

IMG_3655.jpg

IMG_3654.jpg


Cheers.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
confirm the SAS controller is in IT mode
sas3flash -list should do
run a few performance tests on the disks and SSDs to ensure they are up to spec (and not some box with an SD card inside)?
I replaced all stock fans with similar Noctua.
This is potentially an issue if the quiet fans do not pull enough air.
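For a quick first pass on both points, something along these lines should work (a rough sketch; device names are examples and the numbers will vary with your hardware):

Code:
# list adapters and firmware as seen by sas3flash
sas3flash -list

# identity and SMART health per drive
smartctl -a /dev/da0

# rough, non-destructive sequential read check, one drive at a time
dd if=/dev/da0 of=/dev/null bs=1m count=20000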
 

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
sas3flash -list should do
This returns "No Avago SAS adapters found".

Although I found through dmesg that there is a match for mpr0 instead: "Avago Technologies (LSI) SAS3616".

In the meantime, I flashed the firmware to the latest version available, which mentions 3616IT24.ROM.

And I found the following commands by looking through other threads:
Code:
# mprutil show adapter
mpr0 Adapter:
       Board Name: AOM-S3616-S
   Board Assembly:
        Chip Name: LSI SAS3616
    Chip Revision: ALL
    BIOS Revision: 9.47.00.00
Firmware Revision: 24.00.00.00
  Integrated RAID: no
         SATA NCQ: ENABLED
 PCIe Width/Speed: x16 (8.0 GB/sec)
        IOC Speed: Full
      Temperature: 55 C

# storcli show
CLI Version = 007.1207.0000.0000 Sep 25, 2019
Operating system = FreeBSD 13.1-RELEASE-p9
Status Code = 0
Status = Success
Description = None

Number of Controllers = 1
Host Name = (removed)
Operating System  = FreeBSD 13.1-RELEASE-p9
StoreLib IT Version = 07.1300.0200.0000

IT System Overview :
==================

--------------------------------------------------------------------------
Ctl Model       AdapterType   VendId DevId SubVendId SubDevId PCI Address
--------------------------------------------------------------------------
  0 AOM-S3616-S   SAS3616(B0) 0x1000  0xD1    0x15D9   0x1C15 00:b3:00:00
--------------------------------------------------------------------------


But I am still not absolutely sure whether this confirms IT mode or not.

I will do a deep read of this thread and will post my results once I have a conclusion.

This is potentially an issue if the quiet fans do not pull enough air.
OK, good that I still have those stock fans around. But I can also increase the airflow in the motherboard's fan configuration. I will monitor the temperature sensors for any sudden increase.
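A minimal way to keep an eye on that from the OS (assuming ipmitool is available and the board's BMC exposes the sensors) would be something like:

Code:
# dump the temperature sensors from the onboard BMC
ipmitool sdr type Temperature

# disk temperatures straight from SMART
smartctl -a /dev/da0 | grep -i temperature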

Thanks for all the help.
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
But I am still not absolutely sure whether this confirms IT mode or not.
Configure the system BIOS of the motherboard to load storage option ROMs, and make sure that the slot into which the card is placed also has the "load option ROM" setting enabled (BIOS varies, so exact settings vary). Basically, do your best to make sure that the option ROM for the card gets loaded.

Boot the system, and if you see a legacy screen for the card, press the keystroke listed to enter its config. If you don't see anything, look in the UEFI BIOS config of the motherboard for the config of the card.

Look around in those configs, and if you see anything about "disk groups" or "virtual disks" or "clear configuration", then you have RAID functionality. Otherwise, you don't, and it's IT mode.

But, based on the storcli output, you are not in IR or MR mode, which only leaves IT.
 

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
Configure the system BIOS of the motherboard to load storage option ROMs, and make sure that the slot into which the card is placed also has the "load option ROM" setting enabled (BIOS varies, so exact settings vary). Basically, do your best to make sure that the option ROM for the card gets loaded.

Boot the system, and if you see a legacy screen for the card, press the keystroke listed to enter its config. If you don't see anything, look in the UEFI BIOS config of the motherboard for the config of the card.

Look around in those configs, and if you see anything about "disk groups" or "virtual disks" or "clear configuration", then you have RAID functionality. Otherwise, you don't, and it's IT mode.

But, based on the storcli output, you are not in IR or MR mode, which only leaves IT.
Thanks for the tip @nabsltd.

I will check this on the next reboot. At the moment, I am still waiting for the solnet-array-test-v3.sh run to finish.


Code:
Array's average speed is 316.944 MB/sec per disk

Disk    Disk Size  MB/sec %ofAvg
------- ---------- ------ ------
da0      3815447MB    241     76 --SLOW--
da1      1831420MB    464    146 ++FAST++
da2      3815447MB    243     77 --SLOW--
da3      1831420MB    466    147 ++FAST++
da4      3815447MB    244     77 --SLOW--
da5      3815447MB    244     77 --SLOW--
da6      1831420MB    464    146 ++FAST++
da7      3815447MB    243     77 --SLOW--
da8      3815447MB    243     77 --SLOW--

This next test attempts to read all devices in parallel.  This is
primarily a stress test of your disk controller, but may also find
limits in your PCIe bus, SAS expander topology, etc.  Ideally, if
all of your disks are of the same type and connected the same way,
then all of your disks should be able to read their contents in
about the same amount of time.  Results that are unusually slow or
unusually fast may be tagged as such.  It is up to you to decide if
there is something wrong.

Performing initial parallel array read
Sat Feb 24 16:46:46 WET 2024
The disk da0 appears to be 3815447 MB.
Disk is reading at about 242 MB/sec
This suggests that this pass may take around 263 minutes

                   Serial Parall % of
Disk    Disk Size  MB/sec MB/sec Serial
------- ---------- ------ ------ ------
da0      3815447MB    241    242    100
da1      1831420MB    464    483    104
da2      3815447MB    243    243    100
da3      1831420MB    466    481    103
da4      3815447MB    244    242     99
da5      3815447MB    244    242     99
da6      1831420MB    464    484    104
da7      3815447MB    243    241     99
da8      3815447MB    243    241     99


So far so good.

Initially I didn't want to mix the HDDs and SSDs, but since the full test takes a while to complete, I decided to test them all together.

The three fast ones are the SSDs. The numbers look pretty much in line with what the drives are rated for, so they appear to be the real deal.
 

mrjames

Dabbler
Joined
Feb 27, 2023
Messages
11
It was only today, after having a second look at the script code, that I realized it will run forever (in burn-in mode). :rolleyes:

For some reason I had the idea it would stop after the 5th or 6th pass.

Anyway, more than a week later, I am ready to set up the mirrors and start the migration from the old server.

Code:
Performing burn-in pass 5 parallel array read
Sat Mar  2 20:04:46 WET 2024
The disk da0 appears to be 3815447 MB.
Disk is reading at about 241 MB/sec
This suggests that this pass may take around 263 minutes

                   Serial Parall % of
Disk    Disk Size  MB/sec MB/sec Serial
------- ---------- ------ ------ ------
da0      3815447MB    241    241    100
da1      1831420MB    464    483    104
da2      3815447MB    243    243    100
da3      1831420MB    466    481    103
da4      3815447MB    244    242     99
da5      3815447MB    244    242     99
da6      1831420MB    464    484    104
da7      3815447MB    243    241     99
da8      3815447MB    243    240     99

Awaiting completion: burn-in pass 5 parallel array read


Sun Mar  3 01:40:17 WET 2024
Completed: burn-in pass 5 parallel array read

Disk's average time is 14525 seconds per disk

Disk    Bytes Transferred Seconds %ofAvg
------- ----------------- ------- ------
da0         4000787030016   19906    137 --SLOW--
da1         1920383410176    3792     26 ++FAST++
da2         4000787030016   19780    136 --SLOW--
da3         1920383410176    3811     26 ++FAST++
da4         4000787030016   19873    137 --SLOW--
da5         4000787030016   19667    135 --SLOW--
da6         1920383410176    3793     26 ++FAST++
da7         4000787030016   20131    139 --SLOW--
da8         4000787030016   19972    138 --SLOW--

This next test attempts to read all devices while forcing seeks.
This is primarily a stress test of your hard disks.  It does thhis
by running several simultaneous dd sessions on each disk.

Performing burn-in pass 5 parallel seek-stress array read
Sun Mar  3 01:40:17 WET 2024
The disk da0 appears to be 3815447 MB.
Disk is reading at about 221 MB/sec
This suggests that this pass may take around 287 minutes

                   Serial Parall % of
Disk    Disk Size  MB/sec MB/sec Serial
------- ---------- ------ ------ ------
da0      3815447MB    241    220     91
da1      1831420MB    464    541    117
da2      3815447MB    243    218     90
da3      1831420MB    466    541    116
da4      3815447MB    244    219     90
da5      3815447MB    244    215     88
da6      1831420MB    464    541    117
da7      3815447MB    243    215     88
da8      3815447MB    243    218     90

Awaiting completion: burn-in pass 5 parallel seek-stress array read

Mon Mar  4 14:57:41 WET 2024
Completed: burn-in pass 5 parallel seek-stress array read

Disk's average time is 70111 seconds per disk

Disk    Bytes Transferred Seconds %ofAvg
------- ----------------- ------- ------
da0         4000787030016  117104    167 --SLOW--
da1         1920383410176   17674     25 ++FAST++
da2         4000787030016   91621    131 --SLOW--
da3         1920383410176   17164     24 ++FAST++
da4         4000787030016   94620    135 --SLOW--
da5         4000787030016   91100    130 --SLOW--
da6         1920383410176   16371     23 ++FAST++
da7         4000787030016   92847    132 --SLOW--
da8         4000787030016   92498    132 --SLOW--

This next test attempts to read all devices in parallel.  This is
primarily a stress test of your disk controller, but may also find
limits in your PCIe bus, SAS expander topology, etc.  Ideally, if
all of your disks are of the same type and connected the same way,
then all of your disks should be able to read their contents in
about the same amount of time.  Results that are unusually slow or
unusually fast may be tagged as such.  It is up to you to decide if
there is something wrong.

Performing burn-in pass 6 parallel array read
Mon Mar  4 14:57:41 WET 2024
The disk da0 appears to be 3815447 MB.
Disk is reading at about 242 MB/sec
This suggests that this pass may take around 263 minutes

                   Serial Parall % of
Disk    Disk Size  MB/sec MB/sec Serial
------- ---------- ------ ------ ------
da0      3815447MB    241    241    100
da1      1831420MB    464    483    104
da2      3815447MB    243    242    100
da3      1831420MB    466    481    103
da4      3815447MB    244    242     99
da5      3815447MB    244    242     99
da6      1831420MB    464    483    104
da7      3815447MB    243    240     99
da8      3815447MB    243    240     99

Awaiting completion: burn-in pass 6 parallel array read
 