
Help/Guidance - SSD Pool performance is poor compared to rotational pool


cyth

Cadet
Joined
Sep 21, 2022
Messages
8
my hardware:
Motherboard msi tomahawk b550 https://www.msi.com/Motherboard/MAG-B550-TOMAHAWK/Specification
CPU AMD Ryzen 5 3600 https://www.amd.com/en/products/cpu/amd-ryzen-5-3600
RAM 32GB Kit 2x16GB DDR4-3200 PC4-25600 ECC 2Rx8 Unbuffered Server Memory
Controller ADAPTEC ASR-71605 SFF8643 16 PORT SAS SATA 6Gb/s HBA/RAID in HBA mode
network 2 mobo nics + Intel X520-DA2 10GB Dual Port PCI-E E10G42BTDA

2 pools, primary and secondary.

primary 1 vdev raidz2, 6 ssd, connected to mobo 6 sata ports:
vdev1 3x Samsung_SSD_870_QVO_2TB
vdev1 Samsung_SSD_870_EVO_2TB
vdev1 Samsung_SSD_870_QVO_2TB
vdev1 SPCC_Solid_State_Disk

secondary, 2 vdevs mirrored rotational, connected to adaptec controller:
vdev1 ST8000VN004-2M2101 8tb
vdev1 WDC_WD80EDAZ-11TA3A0 8tb
vdev2 TOSHIBA_HDWE140 4tb
vdev2 WDC WD4000FYYZ-01UL1B3 4tb

software:
TrueNAS-SCALE-22.02.3

non-standard settings:
secondary has zfs set primarycache=metadata secondary

Being used as a backup server for a few desktop PCs, an IP camera NVR, and a VM disk storage repository for xcp-ng (12 VMs, moderate to low disk IO). The VM traffic goes over the 10G NIC via an NFS share with sync disabled, directly connected to TrueNAS with jumbo frames enabled; NIC speeds are as expected.
The primary pool hosts the VM storage repository and the backups for the desktops. The secondary pool holds the storage frigate writes the IP camera recordings to. The primary pool also replicates to the secondary pool as a backup.

The problem... my rotational array performs better than my SSD array. This was originally set up as a 5-disk RAID 6 array on an HP server. After a year of use the drives started acting slow, so I decided to stand up TrueNAS to get better insight into what was going on, centralize the storage, and feel safer with my data.

What I am experiencing:
Transfers of large VM disks to TrueNAS error out and fault drives in the array if the copy runs longer than 20 minutes. It is usually the QVO drives crapping out. I can zpool clear them and they resilver and jump back in.
Moves of applications or drives to the secondary pool perform 3x better than on the primary SSD pool.
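For anyone in the same spot, a minimal sequence for inspecting and clearing a faulted drive looks roughly like this (the pool name `primary` matches this thread; device names and output will vary on your system):

```shell
# Show which vdev members are FAULTED and the read/write/checksum error counts
zpool status -v primary

# Check the kernel log for the underlying SATA/controller errors at fault time
dmesg | grep -i -E 'ata[0-9]|sd[a-z]' | tail -n 50

# Clear the error state; ZFS will resilver the device back in
zpool clear primary

# Watch the resilver progress
zpool status primary
```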

I did not buy my hardware with TrueNAS in mind; I stood up the server with what I had available. After a lot of forum searching, it seems people have problems with the Samsung QVO drives. I also see the controller I have doesn't seem to be a favorite either. It is in HBA mode, but when the controller was driving the SSD array I could not trim the drives, which is why I moved them to the onboard SATA controller. I am happy to switch out parts gradually to make this a much better storage server; I can buy roughly one part or disk a month to upgrade/fix. I had 4 QVO drives, but I used this month's budget to replace one with the EVO.

What I am asking is: how do I troubleshoot this with my existing hardware, and/or what should I replace to get my SSD pool performing? I mostly care about the VMs running in my lab, and I don't have a huge budget for costly changes. My largest concern is that when the disk workload is high for 20 minutes, the array degrades.

Any help is greatly appreciated.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
16,530
It is usually the qvo drives crapping out

This is generally expected; in performance terms the QVO drives are closer to thumb drives than SSDs, and especially once the free page pool has been depleted, performance will tank.
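That cache-exhaustion cliff can be reproduced with a sustained sequential write using fio; this is a sketch, with a placeholder test-file path that should point at the suspect pool. Expect full speed until the drive's SLC cache fills, then a sharp drop in the reported bandwidth:

```shell
# Sustained 1MiB sequential writes, sized large enough to exhaust the QVO's
# SLC cache; watch the bandwidth fall off once the cache is depleted
fio --name=sustained-write --filename=/mnt/primary/fio-test.dat \
    --rw=write --bs=1M --size=100G --ioengine=libaio --direct=1 \
    --status-interval=10
```

Remember to delete the test file afterwards.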

Controller ADAPTEC ASR-71605 SFF8643 16 PORT SAS SATA 6Gb/s HBA/RAID in HBA mode

This is completely unacceptable, see


RAM 32GB Kit 2x16GB DDR4-3200 PC4-25600 ECC 2Rx8 Unbuffered Server Memory

This is probably too low especially for SCALE.


primary 1 vdev raidz2, 6 ssd, connected to mobo 6 sata ports:

If this is being used for VM storage, it is expected to perform poorly. See

 

cyth

Cadet
Joined
Sep 21, 2022
Messages
8
TY @jgreco for your response. If I were to transform this server into a better one a part at a time, what would you suggest first? Swapping out the QVO drives, or the RAM, or something else? I am leaning towards eliminating the QVO drives and swapping in EVO or Pro models. My main concern is stability of the array; copying a 100GB file and having the drives choke really worries me.

The Adaptec controller is in HBA mode and not presenting virtual drives, but I see from many posts that this doesn't seem to matter and it is just a no-no. This is somewhat last on my improvement list, since the secondary pool is really just there for replication as a local backup. I originally had the SSDs on the controller, but I couldn't trim the drives; not knowing enough about the lack of trim, and seeing all these scary posts about write amplification, made me move them to the onboard SATA controller.

Question about the VM storage after reading your link. Mirrors seem to be the most performant layout for VM/database storage. Would a 3-vdev mirrored setup be the way to achieve better performance? If so, can I just replicate my current pool to my backup pool, destroy the array, and replicate back, even though the new array will be smaller? I am currently consuming 1.12TiB on the primary pool, so there is lots of room for reconfiguration. Another thought: I could really configure this pool with no redundancy, since I replicate nightly to the other pool and losing a day on these VMs isn't the end of the world. I could build it with the 2 non-QVO devices and add an EVO each month as I build up to the 6 drives.
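The replicate-out / destroy / replicate-back cycle described above can be done with ZFS replication; here is a rough sketch with placeholder pool, dataset, and snapshot names (double-check everything before destroying a pool, and keep the backup copy until the rebuilt pool is verified):

```shell
# 1. Take a recursive snapshot of the primary pool and replicate it to the
#    backup pool as a full replication stream
zfs snapshot -r primary@migrate
zfs send -R primary@migrate | zfs recv -F secondary/primary-copy

# 2. Destroy and re-create 'primary' with the new mirror layout
#    (via the TrueNAS GUI, so the middleware knows about the new pool)

# 3. Replicate back; the data only needs to fit - the new pool can be smaller
zfs send -R secondary/primary-copy@migrate | zfs recv -F primary
```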

Sorry for so many questions, the storage world is a new place for me.
 

ChrisRJ

Guru
Joined
Oct 23, 2020
Messages
1,075
The adaptec controller is in hba mode and not presented as virtual drives but I see from many posts this doesn't seem to matter and is just a no no. This is somewhat last on my improvement list since the secondary pool is really just there for replication as a local backup.
This is a line of argument seen relatively often: "My TrueNAS is only used for backup, therefore it is ok to run a configuration that is known to be high risk."

As far as I'm concerned this is utterly wrong. On the contrary: Your backup is your last line of defense. If there is a place to be paranoid, it is the hardware underpinning your backup.
 

cyth

Cadet
Joined
Sep 21, 2022
Messages
8
@ChrisRJ thank you for your response. Do you have any suggestions for the order of improving my system? Also, how do I know whether my storage controller is a true HBA? Is there some test I can do? I can't find any actual information saying it is not, and I am not saying it is one way or the other; I just can't pin down how to know.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
16,530
how do I know if my storage controller isn't a true hba? Is there some test I can do?

You read the article I linked to above. 'Cuz, y'know, this question gets asked daily, so there is already an answer available, it's not like it has never been asked before. :smile:

Does it have a battery? Does it have cache RAM? Then it's a RAID controller and it isn't suitable.
Does it have the letters "LSI", "Avago", or "Broadcom" stamped on the card or chipset? Does the firmware version include the string "-IT" in there? Then it might be a suitable HBA.
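Those checks can be run from the SCALE shell; a non-authoritative sketch (controller and chip names will vary, and `sas2flash` only recognizes LSI/Avago/Broadcom SAS2 cards, not the Adaptec):

```shell
# Identify the storage controller chipset on the PCI bus
lspci | grep -i -E 'lsi|avago|broadcom|adaptec|raid|sas'

# On LSI/Avago/Broadcom SAS2 cards, list the adapter and firmware;
# an "-IT" firmware string indicates initiator-target (plain HBA) mode
sas2flash -listall
```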
 

c77dk

Patron
Joined
Nov 27, 2019
Messages
419
You have an AMD system - try searching the forums for BIOS tweaks (I can't remember if they are for CORE or SCALE). People have had strange behavior in the past which was resolved by tweaking the BIOS (primarily C-states, if I remember correctly).
Besides that, the QVOs might need some looking into (and maybe replacing, but make sure first that they're the problem) - have you taken a closer look at the SMART data?
 

cyth

Cadet
Joined
Sep 21, 2022
Messages
8
@jgreco @ChrisRJ
Cool. Any suggestions on the order of replacement of the parts? Y'know, other than the dumpster. I'm looking for the quickest way to get the SSD pool more stable; it is attached to the mobo SATA ports. I am focused on the SSD pool since this seems to be the biggest danger zone.

@c77dk ty so much for responding with something I can research/try/do. I will definitely look into the C-states and see what I can find on the forum.

Smart data from last faulted drive:

smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.131+truenas] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Samsung based SSDs
Device Model:     Samsung SSD 870 QVO 2TB
Serial Number:    S5VWNJ0R406193V
LU WWN Device Id: 5 002538 f31440ebb
Firmware Version: SVQ02B6Q
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
TRIM Command:     Available, deterministic, zeroed
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-4 T13/BSR INCITS 529 revision 5
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Thu Sep 22 09:05:46 2022 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed without error
                                        or no self-test has ever been run.
Total time to complete Offline data collection: (    0) seconds.
Offline data collection capabilities:  (0x53) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine recommended polling time:    (   2) minutes.
Extended self-test routine recommended polling time: ( 160) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033 100   100   010    Pre-fail Always  -           0
  9 Power_On_Hours          0x0032 097   097   000    Old_age  Always  -           11100
 12 Power_Cycle_Count       0x0032 099   099   000    Old_age  Always  -           71
177 Wear_Leveling_Count     0x0013 096   096   000    Pre-fail Always  -           39
179 Used_Rsvd_Blk_Cnt_Tot   0x0013 100   100   010    Pre-fail Always  -           0
181 Program_Fail_Cnt_Total  0x0032 100   100   010    Old_age  Always  -           0
182 Erase_Fail_Count_Total  0x0032 100   100   010    Old_age  Always  -           0
183 Runtime_Bad_Block       0x0013 100   100   010    Pre-fail Always  -           0
187 Uncorrectable_Error_Cnt 0x0032 100   100   000    Old_age  Always  -           0
190 Airflow_Temperature_Cel 0x0032 065   049   000    Old_age  Always  -           35
195 ECC_Error_Rate          0x001a 200   200   000    Old_age  Always  -           0
199 CRC_Error_Count         0x003e 100   100   000    Old_age  Always  -           0
235 POR_Recovery_Count      0x0012 099   099   000    Old_age  Always  -           62
241 Total_LBAs_Written      0x0032 099   099   000    Old_age  Always  -           92853482601

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline     Completed without error  00%        11031            -
# 2  Short offline     Completed without error  00%        11007            -
# 3  Short offline     Completed without error  00%        10983            -
# 4  Short offline     Completed without error  00%        10960            -
# 5  Short offline     Completed without error  00%        10936            -
# 6  Short offline     Completed without error  00%        10912            -
# 7  Short offline     Completed without error  00%        10890            -
# 8  Short offline     Completed without error  00%        10866            -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
  256        0    65535  Read_scanning was never started
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Before I cleared the status, the pool showed 8 read errors, 32 write errors, and 0 checksum errors.
 

ChrisRJ

Guru
Joined
Oct 23, 2020
Messages
1,075
@ChrisRJ Also how do I know if my storage controller isn't a true hba?
Because you told us so :wink:: "ADAPTEC ASR-71605 SFF8643 16 PORT SAS SATA 6Gb/s HBA/RAID in HBA mode". Also, HBA mode != HBA

It seems(!) that you are only doing short SMART tests. What is the result with long ones?

Are you doing regular scrubs?

I do not consider myself an expert on this particular aspect, but I would get rid of the QVO drives. Also, your description of the SSD pool seems inconsistent to me. Can you please double-check?
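For reference, long self-tests and scrubs can be kicked off by hand like this (the device name is a placeholder, and TrueNAS can also schedule both from the GUI, which is the usual approach):

```shell
# Start a long (extended) SMART self-test; smartctl above estimates ~160
# minutes for this drive. The test runs in the background on the drive.
smartctl -t long /dev/sda

# Later, review the self-test log for the result
smartctl -l selftest /dev/sda

# Start a scrub of the pool and check its progress
zpool scrub primary
zpool status primary
```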
 

NugentS

Wizard
Joined
Apr 16, 2020
Messages
1,669
Your QVO drives are doing exactly what I would expect them to do - shit themselves as soon as a serious workload comes along. They are strictly consumer, light-use drives.

You would do better buying used Intel 36nn or, better, 37nn 1.6TB SSDs from eBay (I don't even know what the current equivalent is). But do not buy 35nn drives, which are read-intensive.
36nn = mixed load - these are what I use for a lab - I have a stripe of three mirrors
37nn = write intensive - but hard to find except in much smaller sizes
As an example URL: Ebay (UK) Link
Note that the seller doesn't state the wear on the drives, so you would have to ask. However, the auction does state "Used: An item that has been previously used. The item may have some signs of cosmetic wear, but is fully operational and functions as intended. This item may be a floor model or an item that has been returned to the seller after a period of use. See the seller's listing for full details and description of any imperfections", which in my view means that if they are badly worn, they are returnable. Other listings may be better - especially as you are in the US.

An enterprise SSD has so many advantages over consumer drives that it is a real no-brainer. They don't balk at high write loads, they don't slow down suddenly because the RAM cache has been exhausted - they just work. And they last a hell of a lot longer. Honest drives rather than highly marketed garbage. [Can you tell I don't like QVOs in particular, and wouldn't even use EVOs for this use either?]

For backups, use spinning disks; as long as they aren't SMR they will probably just work. But for running VMs, use enterprise SSDs, in mirrors, with (if you are using sync writes) an Intel Optane SLOG (or an RMS-200 or 300). Some will tell you to mirror the SLOG, but for a lab I wouldn't bother, as the failure condition where a failed SLOG can cause data loss is kinda unlikely. Basically, the power would have to fail suddenly (or a kernel panic occur), followed by a SLOG failure during the fail/boot process, AND you would have to have data in transit through the SLOG but not quite written yet.

BTW - I also run a few VMs from a stripe of mirrors of HDDs (8x EXOS 12TB in 4 vdevs) with an Optane SLOG and loads of RAM. They aren't high-performance VMs, but they run well, although I do prefer the SSDs.
 

cyth

Cadet
Joined
Sep 21, 2022
Messages
8
@NugentS I am definitely on the anti-QVO drive team now :) Thank you for the drive recommendation. Since I am replacing those terrible drives I will definitely take your advice and go enterprise. I also like your 3 striped mirrors approach; I will do the same. And I have a spare Samsung NVMe Pro lying around here somewhere for a log device.

Looking at what @c77dk said, my motherboard is a year behind on firmware; I plan on running through the options after flashing, disabling C-states and any SATA power-saving features.

Again, thanks all for the input. I'll update after tonight's firmware update; I'll try to push some load to the array and see if it dies.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
16,530
nvme pro laying around here somewhere for a log device.

Not suitable for a SLOG. Do not add this to your pool. If you cannot come up with a proper SLOG SSD with PLP (power-loss protection), you are better off simply disabling sync writes and letting things work at full speed. Adding a bad SSD as a SLOG will only slow down your pool without gaining the benefits of a SLOG.
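Sync behaviour is set per dataset, so it can be disabled just for the VM share rather than pool-wide; a sketch with a hypothetical dataset name (the OP's actual dataset path isn't stated):

```shell
# Disable sync writes on the VM dataset only. Data in flight can be lost on a
# power failure or crash, but the pool itself stays consistent.
zfs set sync=disabled primary/vmstore

# Verify the setting
zfs get sync primary/vmstore
```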
 

cyth

Cadet
Joined
Sep 21, 2022
Messages
8
@jgreco advice taken. I already have the VM storage repo dataset set to sync disabled. Researching SLOG options leads me to conclude that I don't have the right hardware for it, and the proper hardware is far out of my budget.
 

NugentS

Wizard
Joined
Apr 16, 2020
Messages
1,669
Another ebay link
This is a link to an Intel 900P Optane device. Technically it doesn't have PLP, but as there is no DRAM cache the likelihood of data loss is low. It works extremely well as a SLOG, having extremely high endurance and extremely low latency. For a lab this will do very nicely.

For production I would move to the 4801X or similar.

Note that a SLOG needs only a very small amount of space. For 10Gb networking you need to be able to hold about 5 seconds of writes: 10Gb/s = 1.25GB/second, x 5 seconds = 6.25GB. So I allow 20GB just in case. I have even split one of these into two and used it as a SLOG on two pools, which also worked well (just not recommended/supported, as it required hacking around in the GUI - which was a lot easier in Core than Scale).
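As a sanity check on that arithmetic, here is the sizing rule as a tiny calculation (the 5-second figure is the rule of thumb from this thread, roughly covering ZFS transaction-group flush intervals):

```python
# Back-of-envelope SLOG sizing: line rate times the number of seconds of
# in-flight sync writes the log device must be able to absorb.
def slog_size_gb(link_gbit_per_s: float, seconds: float) -> float:
    bytes_per_s = link_gbit_per_s * 1e9 / 8   # 10 Gb/s -> 1.25 GB/s
    return bytes_per_s * seconds / 1e9        # result in GB

size = slog_size_gb(10, 5)
print(f"{size:.2f} GB")  # 6.25 GB - so rounding up to ~20 GB is generous
```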

Do remember, however, that sync disabled is much faster than sync enabled, and sync enabled + SLOG is somewhere in the middle, depending on the SLOG device itself. You can mirror the SLOG, but that doesn't improve speeds.
 

cyth

Cadet
Joined
Sep 21, 2022
Messages
8
No luck with the BIOS settings; I think the drives are just to blame. I do feel better running the latest mobo firmware, though. After upgrading the mobo firmware, one of the QVOs faulted an hour later. Removing the drives, putting them in my desktop, and checking them with Samsung Magician says they are all perfectly fine (I ran all the tests). It seems that when the workload is too high, these QVOs just fold. The current plan is to replace the QVOs with used enterprise SSDs and go from raidz2 to striped mirrors, see how performance is, and re-evaluate the next upgrades after that. Thanks again for everyone's input. It will take me some time to get the drives replaced.
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
4,032
I posted this in another thread just today, but it's a graph showing the sustained write performance of the 870 QVO - hopefully it works when cross-linked here. The drives "soiling themselves" under workload, as @NugentS put it, is fairly accurate.

If the embed doesn't work, please check the source post:

https://www.truenas.com/community/t...ions-and-possible-bad-ssd.104038/#post-717071

[Image: Tom's Hardware 870 QVO sustained write performance graph]


Ref: https://www.tomshardware.com/reviews/samsung-870-qvo-sata-ssd/2
 

cyth

Cadet
Joined
Sep 21, 2022
Messages
8
I rebuilt the pool with just 4 drives as striped mirrors, no QVOs in the mix. Everything is super snappy now. Previously it would take 10-15s to create a dataset; now it is instant. It was nerve-racking and nice at the same time to destroy the pool, rebuild it, and replicate back to it. I am still going to add 2 more enterprise SSDs when I am able to.
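For anyone following along, the layout described (two mirrored pairs, striped, grown later) looks roughly like this at the CLI, with placeholder device names; on TrueNAS the GUI pool manager builds the same structure and is the supported route:

```shell
# Two 2-way mirror vdevs; ZFS stripes writes across the vdevs
zpool create primary mirror sda sdb mirror sdc sdd

# Adding the next pair later extends the stripe with a third vdev
zpool add primary mirror sde sdf
```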
 