SUPERMICRO 4U CSE-846 - X9DRI-F Build

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
Good Day wonderful iX Community,

I've been a lurker for a while and figured I'd take a shot at this new build I'm planning. It's based on a lot of the hardware recommendations I've seen posted here in the forum, along with some tweaks of my own based on my requirements.

I would like to buy the following used server:
https://www.ebay.com/itm/Supermicro...-24-x-HDD-Storage-Server-W-Rails/173688157339

System: Supermicro SuperChassis 846E1-R1200B Dual 6 Core Xeon 24 x HDD Storage Server
Processor: Dual Intel Xeon E5-2620 2.0GHz Six Core Processors (15MB cache)
Memory: 64GB RAM (8x 8GB PC3-12800R)
RAID Controller : LSI 9266-8i Array *(To be replaced with one or more HP HBA https://www.ebay.com/itm/660088-001...8I-PCI-E-HBA-HOST-BUS-ADAPTER-US/132757577271)
Network: Intel I350T4BLK (4-port gigabit network adapter) AND some sort of dual-port 10Gb SFP+ card
System Board: X9DRi-F
Power Supply: Dual PWS-1K21P-1R 1200W 80 Plus Gold
Hard Drive Configuration: Tentative; I am considering two configurations (a rough layout sketch follows below):
  • Data pool of 3 RAIDZ1 vdevs across 9x 6TB drives (just under 36TB usable), plus a VM pool of 3 mirrored vdevs on 6x 2TB SSDs (just under 6TB usable). This leaves 9 spare drive bays to add another vdev to the data pool later, plus 2 or 3 more mirror vdevs, depending on whether I use hot-spare drives.
  • One pool of 5 mirrored vdevs on 10x 6TB drives (just under 30TB usable), plus a dedicated SLOG device (SSD or NVMe) for the ZIL. This configuration leaves 13-14 empty drive bays that I can use to slowly expand with additional mirror vdevs.

***This assumes my understanding of VDEVs via RAIDZ1 and mirrors is correct***
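To make the two options concrete, here is a rough sketch of the vdev layouts expressed as zpool commands (pool and device names are placeholders; in practice FreeNAS would build these through the GUI):

Code:
# Option 1: data pool of three 3-wide RAIDZ1 vdevs, plus a separate SSD mirror pool for VMs
zpool create data  raidz1 da0 da1 da2  raidz1 da3 da4 da5  raidz1 da6 da7 da8
zpool create vms   mirror ada0 ada1  mirror ada2 ada3  mirror ada4 ada5

# Option 2: one pool of five mirrored vdevs with a dedicated SLOG device
zpool create tank  mirror da0 da1  mirror da2 da3  mirror da4 da5 \
                   mirror da6 da7  mirror da8 da9  log nvd0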

This server will be used to store 20TB of active content, with files ranging from small to large (up to roughly 20GB each), for 6-10 machines, and to serve iSCSI storage to three ESX servers over dual 10Gb SFP+ links, hosting 30-35 VMs for standard operations: AD, management systems, firewall, CA, IDS, a few databases, etc.

My current concern is making sure VM performance does not become a bottleneck: new systems are stood up, imaged, or deployed most months, and they can consume noticeable bandwidth with unclear IOPS requirements.

NOTE: Content will be backed up to a Synology NAS populated with 8x 8TB drives over 10Gb SFP+.

I would appreciate some feedback on the system and drive configurations.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
RAID Controller : LSI 9266-8i Array *(To be replaced with one or more HP HBA
You only need one. I would suggest spending a little bit extra to get one that is already flashed to the correct firmware. It is worth a little extra cash to save the time and trouble:
https://www.ebay.com/itm/HP-H220-6G...0-IT-Mode-for-ZFS-FreeNAS-unRAID/162862201664
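If you end up flashing one yourself, or just want to confirm what you bought, the LSI sas2flash utility will report the firmware type (controller index 0 is assumed here, and the exact output fields vary by firmware and tool version):

Code:
sas2flash -listall       # list every LSI SAS2 controller and its firmware version
sas2flash -c 0 -list     # details for controller 0; the firmware product ID should read "IT"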
Network: Intel I350T4BLK (4 Port Gigabit Network Adapter) AND some sort of dual port 10GB SPF+
Why the 4 port 1Gb card? There are two 1Gb ports integrated in the Supermicro board, if I recall correctly, but you don't even need those unless you are going to dedicate the SFP+ to a dedicated storage network.
This is the 10Gb card that iX systems sells and it is a good one:
https://www.amazon.com/FreeNAS-Dual-Port-Upgrade-Ports-Twinax/dp/B011APKCHE/
Data pool with 3 VDEVs RAIDZ1 on 9x 6TB Drives
RAIDz1 is not advisable with drives that large; you should be thinking RAIDz2.
I may or may not use hot-spare drives.
Hot spares are a waste of a hard drive. Cold spares are good to have on hand.
This server will be utilized to store 20TBs of active content files ranging from small to large files (20Gigs in size) for 6-10 machines and serve up iSCSI storage to three ESX servers via dual 10GB SFP+ links to host 30~35 VMs for standard operation AD, Management systems, Firewall, CA, IDS, few databases etc.…
If you are going to host iSCSI (block storage) you should be looking at all mirrors and no RAIDz of any kind.
You should read this article by @jgreco on why to use mirrors with block storage:
https://www.ixsystems.com/community...and-why-we-use-mirrors-for-block-storage.112/

Current concern is making sure VM performance is not a bottleneck as new systems are often stood up each month, imaged or deployed and can take up some bandwidth with unclear IOPS requirements.
I would suggest going with 12 disks to start, in six mirror vdevs, and going with larger drives, as they usually have a faster transfer rate.
I have recently purchased a batch of the Seagate 10TB drives for a server at work and I have been very happy with the performance:
https://www.amazon.com/Seagate-256MB-Cache-Enterprise-ST10000NM0086/dp/B01LXXV880

I would recommend them because of the speed, regardless of the capacity you expect to need. Also, keep in mind that you can't fill block storage above 50% without losing significant performance. So, six mirror vdevs of 10TB drives would give you 60TB of pool capacity, but you could only use 30TB of it.
As you add more mirror vdevs, you add more potential for IOPS. In very rough terms, more vdevs equals more IOPS.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Pool with 5 VDEV Mirrors on 10x 6TB Drives total of under 30TBs of usable space, a ZIL and maybe SLOG SSD or NVMe drive(s).
PS: Yes, you will need a SLOG, and you would not have 30TB usable; you cut usable capacity in half when you do iSCSI / block storage.

Some insights into SLOG/ZIL with ZFS on FreeNAS
https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

Testing the benefits of SLOG using a RAM disk!
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561/

SLOG benchmarking and finding the best SLOG
https://forums.freenas.org/index.php?threads/slog-benchmarking-and-finding-the-best-slog.63521/
 

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
Chris,

Thanks a lot for the detailed response.
You only need one. I would suggest spending the little bit extra to get one that is already flashed to the correct firmware. It is worth a little extra cash to save the time and trouble:
https://www.ebay.com/itm/HP-H220-6G...0-IT-Mode-for-ZFS-FreeNAS-unRAID/162862201664
I was thinking I might need more if I went with two separate pools, one of standard spinning drives and the other of SSDs, which could potentially need more speed on the back end.

Why the 4 port 1Gb card? There are two 1Gb ports integrated in the Supermicro board, if I recall correctly, but you don't even need those unless you are going to dedicate the SFP+ to a dedicated storage network.
This is the 10Gb card that iX systems sells and it is a good one:
https://www.amazon.com/FreeNAS-Dual-Port-Upgrade-Ports-Twinax/dp/B011APKCHE/
The original used server came with it, so I figured I could leave it in. Thanks for the 10Gb card link.

RAIDz1 is not advisable with drives that large, you should be thinking RAIDz2.
Noted. The additional RAIDz2 drive cost will need to be calculated, though it may depend on the feedback you provided about block storage. My original thoughts on the drive configuration were:
1) Create two separate pools: one for data storage using RAIDz1 or RAIDz2 to serve content via SMB/CIFS, and another pool of SSDs in multiple mirror vdevs to serve iSCSI content.
2) Create one large pool of mirror vdevs and serve a few TBs via iSCSI and the rest via SMB/CIFS.

Hot spares are a waste of a hard drive. Cold spares are good to have on hand.
Makes sense, I had a similar thought

If you are going to host iSCSI (block storage) you should be looking at all mirrors and no RAIDz of any kind.
You should read this article by @jgreco on why to use mirrors with block storage:
https://www.ixsystems.com/community...and-why-we-use-mirrors-for-block-storage.112/
I definitely did read that post a week ago while coming up with build ideas, which is why I was thinking of separating the pools if I went the RAIDz1-plus-mirrors route noted above. This of course assumes my understanding of how pools and vdevs can be configured is close to correct.

I would suggest going with a 12 disks to start, in six mirror vdevs, and go with larger drives as they usually have faster transfer rate.
I have recently purchased a batch of the Seagate 10TB drives for a server at work and I have been very happy with the performance:
https://www.amazon.com/Seagate-256MB-Cache-Enterprise-ST10000NM0086/dp/B01LXXV880


I would recommend them because of the speed, regardless of the capacity you expect to need. Also, keep in mind that you can't fill block storage above 50% without losing significant performance. So, six mirror vdevs of 10TB drives would give you 60TB of pool capacity, but you could only use 30TB of it.
As you add more mirror vdevs, you add more potential for IOPS. In very rough terms, more vdevs equals more IOPS.
The 12-disk build with 10TB drives is a bit steeper in price than originally expected, but it may be an option since I can expand it with additional vdevs. I'll check out a few comparable drives; it's been a while since I used the enterprise Seagate drives.

I did not know about the performance hit at 50% or higher consumption with mirrors; this may explain some prior issues I had several years ago with an old FreeNAS build using iSCSI on mirrors.

PS. Yes, you will need a SLOG and you would not have 30TB usable, you cut usable in half when you do iSCSI / block storage.
Excellent. I have been reading nonstop about NVMe SLOG drives, and I'm starting to feel that the "Intel Optane SSD 900P Series - 280GB" looks awesome. Just to clarify: are you saying that the iSCSI/block storage portion should not exceed 50% of the 30TB, or that total consumption of the 30TB pool (part of which iSCSI will use) should not exceed 50% due to performance issues?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I was thinking I may need more if I went for two separate pools one being standard spindle drives and the other SSDs which could potentially need more speed on the back end.
The SAS backplane in that system can't be split between two controllers. It is a SAS expander backplane, and adding a second controller to it would add redundancy, not speed. Dual-linking it to a single controller is the fastest access you can get, and that would be quite fast.
If you want to put the SSDs on one controller and the spinning disks on another, you could put all your spinning disks in this chassis and use an external controller to connect a disk shelf with the SSDs.
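Once everything is cabled, all of the drives behind the expander show up on that one controller. A quick sanity check from the FreeNAS shell looks something like this (camcontrol is standard FreeBSD; sas2ircu is the LSI utility, assuming it is present on your build):

Code:
camcontrol devlist       # lists each disk and the SES enclosure device seen through the HBA
sas2ircu 0 display       # controller, enclosure, and per-slot drive details for controller 0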
Just to clarify, are you referring that the iSCSI/block storage should not be 50% of 30TB or that the drive consumption of the 30TB pool where iSCSI will utilize some of it should not be 50% of 30TB due to performance issues?
You were estimating 30TB of capacity from the 6TB drives but you were expecting to be able to fill them all the way up. You must not fill the pool above 50% if you want to still have good performance from block storage. For that reason, even though you might have 30TB, you could only use 15 of it. That is part of the reason I suggested the larger disks. If you want to have 30TB usable, you will need to have 60TB of raw storage capacity. Storage costs. Where I work, we spent over $100k in the last six months just for the servers I manage. Some of that was a new 10Gb switch and interface cards, but we are also buying 80 of these 10TB hard drives to upgrade a server (just one server with 80 drives) that is using 4TB drives right now.
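If it helps, the 50% rule is easy to bake in when sizing the zvol that backs the iSCSI extent. A rough sketch with made-up names, assuming a pool with about 30TB of capacity (the FreeNAS zvol dialog accomplishes the same thing):

Code:
zfs create -V 15T -o volblocksize=16K tank/esxi-lun0   # cap the extent at roughly half the pool
zpool list -o name,size,alloc,free,cap tank            # keep the CAP column under ~50%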
 

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
The SAS backplane in that system would not be able to be split between two controllers. It is a SAS expander backplane and adding a second controller to it would add redundancy, not speed. Dual link to a single controller is the fastest access you can get, and that would be quite fast.
If you want to put the SSDs on one controller and the spinning disks on another, you could put all your spinning disks in this chassis and use an external controller to connect a disk shelf with the SSDs.

Understood. I did not know this SAS expander was rated up to 7880MB/sec.

You were estimating 30TB of capacity from the 6TB drives but you were expecting to be able to fill them all the way up. You must not fill the pool above 50% if you want to still have good performance from block storage. For that reason, even though you might have 30TB, you could only use 15 of it. That is part of the reason I suggested the larger disks. If you want to have 30TB usable, you will need to have 60TB of raw storage capacity. Storage costs. Where I work, we spent over $100k in the last six months just for the servers I manage. Some of that was a new 10Gb switch and interface cards, but we are also buying 80 of these 10TB hard drives to upgrade a server (just one server with 80 drives) that is using 4TB drives right now.

Got it. I was reading too much into your previous post; I was thinking of 30TB for my original mirror set, which would be about 85% consumed. Based on these findings, that would tremendously impact performance, so a change in configuration is necessary.

My budget isn't quite that potent, as you can tell from the used hardware I'm considering, but you are definitely giving me some great ideas. I'll be crunching numbers over the next few weeks to see what the best route is on my end.

Appreciate the feedback and advice!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
did not know this SAS expander was rated up to 7880MB/sec
I did not say a speed. Where did you get that number from?
My budget isn't quite that potent since you see i'm considering used hardware configuration, but you are definitely giving me some great ideas. I'll be crunching some numbers over the next few weeks to see what is the best route on my end.
If you would share the limits, I can attempt to find something that would fit into your budget.
 

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
I did not say a speed. Where did you get that number from?
Hey Chris,

Thanks again for your constant assistance. After reading your post I decided to read up more on the HBA; the number actually came from the seller's description on the eBay listing, quoted below.

"These HBA controllers are based on the LSI SAS2308 SAS controller chipset and support PCI-E 3.0 specifications with x8 lanes, providing up to 7880MB/sec bandwidth between this card and your system! The controller has 2 SFF-8087 connectors, each carrying 4 SAS lanes. You will need a compatible cable to connect these to your hard drives or a backplane. "

If you would share the limits, I can attempt to find something that would fit into your budget.

I actually made a new post about an hour ago with a thought that came out of this discussion, about re-purposing hardware instead of buying a new storage server; it's linked below:
https://www.ixsystems.com/community/threads/convert-dell-poweredge-t620.76114/

Either way, for the server build (not including drives) I am trying to keep the cost below $1,400. That leaves me room to purchase a set of drives and an Intel Optane SLOG drive.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You might want to look at this system.
https://www.ebay.com/itm/Chenbro-Cl...8-Bay-Enclosure-Chassis-16gb-RAM/382905905586
I use one like this with a bit more memory and I am very happy with it. It has two 24 slot SAS expander backplanes that allow up to 48 data drives.
You can connect both backplanes to one SAS controller OR you can put each backplane on a separate SAS controller if you wanted a bit more potential for IOPS.
 

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
You might want to look at this system.
https://www.ebay.com/itm/Chenbro-Cl...8-Bay-Enclosure-Chassis-16gb-RAM/382905905586
I use one like this with a bit more memory and I am very happy with it. It has two 24 slot SAS expander backplanes that allow up to 48 data drives.
You can connect both backplanes to one SAS controller OR you can put each backplane on a separate SAS controller if you wanted a bit more potential for IOPS.

Wow, that's a lot of drives. Will a single CPU be sufficient? On a side note, I kind of liked the Supermicro build you previously linked, which was still way under budget.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The system I am using for my home NAS is almost identical. It's working fine for me.
I have an NVMe SSD that I partitioned for swap, SLOG, and L2ARC. I also have 64GB of RAM, so there are a few small differences. I have 24x 4TB drives and 8 more 6TB drives.
I also added a 10Gb network card.
Yes, a single CPU is plenty, unless you want a lot of virtual machines.
 

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
So I ended up pulling the trigger on the following:

Supermicro SuperChassis CSE-846 4U Rackmount Server with X9DRi-F Motherboard
24-Port 4U SAS 6Gbps Backplane SAS2-846EL1
Dual Intel Xeon E5-2630 v2 2.6GHz 6 Core 15MB Cache Processor
192GB DDR3 ECC Registered Mem (12 x 16GB)
LSI 9210-8i IT Mode 6Gbps SAS/SATA RAID Controller
2x PWS-1200p-SQ Power Supply Units since these are apparently very quiet
A handful of Supermicro 80mm hot-swappable middle axial fans (FAN-0074L4) to replace the standard jet-engine fans
Intel Optane SSD 900P Series - 280GB
2x Kingston A400 -120GB SATA - intended to host OS

I have to determine which 10Gb network adapter to move into it, since I have two choices already available:

Mellanox ConnectX-3 Pro
Dual-port 10Gb Intel adapter (X540-T2)

I also need to decide how I am going to populate it, since I currently have the following drives available:

8x 4TB Dell branded drives
4x 1TB Samsung SSD (840pro, 850Pro, 850EVO, 860Evo)

I haven't started looking at new, larger drives yet; I may do that later, after I figure out how I'm going to fund a new hypervisor server to replace my T620, maybe with a Supermicro 6027TR-DTRF 2-node.
 

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
So I got the 2x PWS-1200P-SQ power supplies, but it doesn't seem this CSE-846 unit is actually compatible with them; all I can find is that it is compatible with the PWS-1K21P-1R model, which is super loud from what I've been hearing.

Anyone know if there is a way to actually get the SQ models working without returning them?
 

nikalai2

Dabbler
Joined
Jan 6, 2016
Messages
40
Most likely those are not compatible with your power distributor (PDU). Can you please check which model it is?
I bought two 920SQs some time ago, but before buying them I did some research and found out that there was no need to change the power distributor in my case.
 

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
So apparently the server came with the following power distributor: PDB-PT846-8824 (SC846 24-pin redundant power distributor).

After looking online for a replacement unit and not being pleased with the price, I took a different turn and did something probably not recommended: I modified the two PWS-1200P-SQ units by cutting the extra pins.

I got this solution from the ServeTheHome forums, where a user called katit had the Frankenstein idea of cutting the pins to get them to fit. It has worked without issues thus far; both units power on and are fairly quiet, so all I need to do now is replace the stock fans with quieter models.

https://forums.servethehome.com/index.php?threads/cse-846-questions.6295/page-3

Outside of that, I just got the base OS running. I'm still tinkering with the network adapters, but I think I'm going with the Mellanox ConnectX-3 Pro dual 10Gb adapter. Next I need to start playing with the drive configurations, followed by as much iSCSI tuning as I can figure out and some burn-in testing.

Anything else I should be looking for?
 

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
So I configured my drives as follows: 6x 4TB drives as a RAIDZ2 vdev, with 2 drives held offline while I scavenge for 4 more to add another vdev. I also set up another pool with 4x 1TB SSDs configured as mirrors. Lastly, the Intel Optane 900P 280GB drive was partitioned into two 30GB partitions, one attached to each pool as a SLOG. (I know some people don't recommend sharing one SLOG device between pools, but I don't think my workloads are heavy enough for it to matter.)

Code:
  pool: Data
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Data                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/25e2403a-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/29bf6a46-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/2de4b5c8-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/31f93619-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/33028656-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/36dbc51f-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
        logs
          gptid/8fd33c99-a77b-11e9-9c5a-002590e66376    ONLINE       0     0     0

errors: No known data errors

  pool: VMs
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        VMs                                             ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/d0d4d9f7-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
            gptid/d14d41ee-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/d1ab40eb-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
            gptid/d1facaa5-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
        logs
          gptid/1f13156b-a776-11e9-a597-002590e66376    ONLINE       0     0     0


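For reference, the Optane split described above boils down to roughly the following from the shell (partition labels here are just examples, and the FreeNAS GUI equivalents would differ slightly):

Code:
gpart create -s gpt nvd0                           # GPT scheme on the Optane
gpart add -t freebsd-zfs -s 30G -l slog-data nvd0  # 30G partition for the Data pool's SLOG
gpart add -t freebsd-zfs -s 30G -l slog-vms nvd0   # 30G partition for the VMs pool's SLOG
zpool add Data log gpt/slog-data
zpool add VMs  log gpt/slog-vms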
Some dd testing with both pools looks like the following:

Code:
Writes

dd if=/dev/zero of=/mnt/VMs/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 40.536689 secs (2,648,814,795 bytes/sec)

dd if=/dev/zero of=/mnt/Data/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 41.088641 secs (2,613,232,776 bytes/sec)

Reads

dd if=/mnt/VMs/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 14.702960 secs (7,302,895,867 bytes/sec)

dd if=/mnt/Data/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 14.915569 secs (7,198,798,782 bytes/sec)

As for actual performance, the NAS is delivering excellent iSCSI speeds from the SSD mirror pool so far, with no tuning yet. The following quick test was done from my existing Hyper-V server. I can also maintain nearly identical results even while copying data over a separate network interface via SMB at around 300-400MB/s.

[Screenshot: NewNAS-MirrorSSDs-With_ZIL-JumboFrames_OnSwitchOnly.PNG]


For the moment I will keep looking into what can improve my speeds and start planning to pick up a few more 4TB drives and another 2 SSDs to add to the pools.
 

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
Another update:

Just recently I started receiving email alert notifications informing me that one of my drives appears to be failing some of the self-tests I configured, noted below. (Not sure yet if I can RMA this drive since it's Dell-branded, but I will get a Seagate as an immediate replacement.)

* Device: /dev/da3, Self-Test Log error count increased from 0 to 1

I checked a bit more and saw a few more errors, so I'm ordering a few drives to replace the failing one and to expand my configuration with another vdev. My main pool is just now over 50% utilization and is currently configured as follows:

Code:
  pool: Data
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Data                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/25e2403a-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/29bf6a46-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/2de4b5c8-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/31f93619-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/33028656-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/36dbc51f-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
        logs
          gptid/8fd33c99-a77b-11e9-9c5a-002590e66376    ONLINE       0     0     0


Since I have 2 spare 4TB drives, I will replace the failing drive and add 5 newly purchased drives (Seagate Constellation ES.3) to build another identical vdev. From what I've played with, I don't expect any issues, but it will be the first time I do a real expansion on a live FreeNAS system.
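At the pool level the plan amounts to something like this (disk names and the gptid are placeholders; in practice the FreeNAS GUI handles both the replace and the vdev addition):

Code:
zpool replace Data gptid/<failing-disk-gptid> da10   # swap in the healthy disk and let it resilver
zpool status Data                                    # watch the resilver progress

zpool add Data raidz2 da11 da12 da13 da14 da15 da16  # then extend the pool with a second 6-wide raidz2 vdev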

Outside of that, I also plan on expanding my SSD mirror configuration by adding a new mirror vdev, possibly with 2x 2TB SSDs instead of 1TB drives. At this point my consumption is just at 49%, and I have read about enough performance degradation issues after exceeding 50% that I would rather add another two drives than suffer any penalties.

Code:
  pool: VMs
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        VMs                                             ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/d0d4d9f7-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
            gptid/d14d41ee-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/d1ab40eb-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
            gptid/d1facaa5-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
        logs
          gptid/1f13156b-a776-11e9-a597-002590e66376    ONLINE       0     0     0


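For keeping an eye on that utilization, a quick check from the shell looks something like this (the CAP column is the percentage of the pool that is allocated):

Code:
zpool list -o name,size,alloc,free,cap,health
zfs list -o name,used,avail,refer VMs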
In the end, I am more than happy to say this has been a very painless few months; the server just runs without issues. Little tweaking was required on my part. Maybe I could tune more performance out of it, but as it is I am getting good performance.
 

colmconn

Contributor
Joined
Jul 28, 2015
Messages
174
* Device: /dev/da3, Self-Test Log error count increased from 0 to 1

That error is coming from smartd (smartmontools), not the filesystem. You should run smartctl -a /dev/da3 to see which SMART test failed. It might be worth running a long or offline SMART test on the drive.
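Something along these lines, using the standard smartmontools commands (da3 taken from the alert above):

Code:
smartctl -a /dev/da3            # full SMART data, including the self-test log
smartctl -t long /dev/da3       # start a long (extended) self-test in the background
smartctl -l selftest /dev/da3   # check the result once the test has finished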
 

drinking12many

Contributor
Joined
Apr 8, 2012
Messages
148
So I configured my drives as follows: 6x 4TB drives as a RAIDZ2 vdev, with 2 drives held offline while I scavenge for 4 more to add another vdev. I also set up another pool with 4x 1TB SSDs configured as mirrors. Lastly, the Intel Optane 900P 280GB drive was partitioned into two 30GB partitions, one attached to each pool as a SLOG. (I know some people don't recommend sharing one SLOG device between pools, but I don't think my workloads are heavy enough for it to matter.)

Code:
  pool: Data
state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Data                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/25e2403a-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/29bf6a46-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/2de4b5c8-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/31f93619-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/33028656-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
            gptid/36dbc51f-a82a-11e9-924a-002590e66376  ONLINE       0     0     0
        logs
          gptid/8fd33c99-a77b-11e9-9c5a-002590e66376    ONLINE       0     0     0

errors: No known data errors

  pool: VMs
state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        VMs                                             ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/d0d4d9f7-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
            gptid/d14d41ee-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/d1ab40eb-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
            gptid/d1facaa5-a77a-11e9-9c5a-002590e66376  ONLINE       0     0     0
        logs
          gptid/1f13156b-a776-11e9-a597-002590e66376    ONLINE       0     0     0


Some dd testing with both pools looks like the following:

Code:
Writes

dd if=/dev/zero of=/mnt/VMs/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 40.536689 secs (2,648,814,795 bytes/sec)

dd if=/dev/zero of=/mnt/Data/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 41.088641 secs (2,613,232,776 bytes/sec)

Reads

dd if=/mnt/VMs/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 14.702960 secs (7,302,895,867 bytes/sec)

dd if=/mnt/Data/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 14.915569 secs (7,198,798,782 bytes/sec)

As for actual performance, the NAS is delivering excellent iSCSI speeds from the SSD mirror pool so far, with no tuning yet. The following quick test was done from my existing Hyper-V server. I can also maintain nearly identical results even while copying data over a separate network interface via SMB at around 300-400MB/s.



I think at the moment I will continue checking what can improve my speeds and start planning on getting a few more 4TB drives and another 2x more SSDs to add onto the pools.


Gotta be careful using dd and the like for performance numbers; it has to be writing something big enough to really tax the pool. I have 64GB of RAM behind what is pretty much a hodgepodge of disks, and I get essentially the same numbers. I do have a 60GB SLOG and two mirrored vdevs of regular hard drives on this pool.

[Screenshot: dd.JPG]


I am sure it is fast, but you could be getting a lot of that from RAM caching with dd; the iSCSI numbers are probably a lot more accurate, I would think.
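A rough way to make dd numbers more meaningful (a sketch; the dataset name is made up, and the count should comfortably exceed installed RAM so ARC caching can't hide the disks):

Code:
zfs create -o compression=off Data/ddtest                        # /dev/zero compresses away otherwise
dd if=/dev/zero of=/mnt/Data/ddtest/temp.dat bs=1M count=400k    # ~400GB, well past 192GB of RAM
dd if=/mnt/Data/ddtest/temp.dat of=/dev/null bs=1M               # read back, ideally after clearing the ARC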
 
Last edited:

miercoles131

Dabbler
Joined
Apr 17, 2019
Messages
15
I got the following on the questionable drive; it looks to me like it's starting to fail.

Code:
=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Current Drive Temperature:     43 C
Drive Trip Temperature:        64 C

Manufactured in week 02 of year 2015
Specified cycle count over device lifetime:  1048576
Accumulated start-stop cycles:  40
Specified load-unload count over device lifetime:  1114112
Accumulated load-unload cycles:  70403
Elements in grown defect list: 0

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:   97606992        2        30  97606994          2      45020.210           0
write:  106156849        0     14211  106156849          0      31872.818           0
verify:  1403512        8       333   1403520          9       2388.786           0

Non-medium error count:      284

SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background short  Completed                  48   31035                 - [-   -    -]
# 2  Background long   Failed in segment -->      56   30924        7814034605 [0x3 0x11 0x0]
# 3  Background short  Completed                  48   30867                 - [-   -    -]
# 4  Background long   Failed in segment -->      56   30757        7814034605 [0x3 0x11 0x0]
# 5  Background short  Completed                  48   30699                 - [-   -    -]
# 6  Background long   Failed in segment -->      56   30589        7814034605 [0x3 0x11 0x0]
# 7  Background short  Completed                  48   30531                 - [-   -    -]
# 8  Background long   Failed in segment -->      56   30422        7814034605 [0x3 0x11 0x0]
# 9  Background short  Completed                  48   30364                 - [-   -    -]
#10  Background long   Completed                  48   30254                 - [-   -    -]
#11  Background short  Completed                  48   30196                 - [-   -    -]
#12  Background long   Completed                  48   30086                 - [-   -    -]
#13  Background short  Completed                  48   30028                 - [-   -    -]
#14  Background long   Completed                  48   29918                 - [-   -    -]
#15  Background short  Completed                  48   29860                 - [-   -    -]
#16  Background long   Completed                  48   29750                 - [-   -    -]
#17  Background short  Completed                  48   29692                 - [-   -    -]
#18  Background long   Completed                  48   29583                 - [-   -    -]
#19  Background short  Completed                  48   29524                 - [-   -    -]
#20  Background long   Completed                  48   29415                 - [-   -    -]

Long (extended) Self Test duration: 31120 seconds [518.7 minutes]
 