Pool Offline - Failing Integrity Check

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
Hello everyone,

Today, I woke up to a failed HDD in the "Media" pool in my TrueNAS machine. As I seem to have stumbled across a bad batch of WD Red Pro drives, I figured I'd start an RMA process on this and remove the failed drive (the drive was registering OFFLINE already). Upon booting back into the OS, the pool is now offline and will not mount. Running "zpool import" from the shell yields the following results:

Code:
        Media                                           DEGRADED
          raidz1-0                                      DEGRADED
            gptid/2b85c0f3-f873-11ea-90a1-50e5495b05a6  ONLINE
            gptid/017b7cf2-8d97-11ea-a47e-50e5495b05a6  UNAVAIL  cannot open
            gptid/7e59b147-cb8d-11ea-b20a-50e5495b05a6  ONLINE
            gptid/38f3dd57-1d25-11eb-9963-50e5495b05a6  ONLINE
            gptid/786ddc6d-dd8e-11ea-b9ec-50e5495b05a6  ONLINE
            gptid/020d0519-8d97-11ea-a47e-50e5495b05a6  ONLINE


Obviously this pool is degraded as it is missing a disk now. When putting the failed disk back in, it still won't show up at all or register as installed. Running "zpool import Media" yields:

Code:
internal error: cannot import 'Media': Integrity check failed
zsh: abort (core dumped)  zpool import Media


The WebGUI is also indicating the following error:

Code:
Failed to check for alert VolumeStatus:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/alert.py", line 706, in __run_source
    alerts = (await alert_source.check()) or []
  File "/usr/local/lib/python3.8/site-packages/middlewared/alert/source/volume_status.py", line 31, in check
    for vdev in await self.middleware.call("pool.flatten_topology", pool["topology"]):
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1206, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.8/site-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 438, in flatten_topology
    d = deque(sum(topology.values(), []))
AttributeError: 'NoneType' object has no attribute 'values'


Searching this forum, there seem to be a few threads started on this matter with no replies. I'm hoping someone will be able to assist. If the pool can import, I can replace/resilver the drive, but in its current state I am not able to access anything.
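
For reference, if the pool does come back, my understanding is the replacement would look something like this (the new device name below is a placeholder):

Code:
# Replace the failed member once the pool is imported; the old gptid is the
# UNAVAIL one from zpool import, the new partition uuid is hypothetical:
zpool replace Media gptid/017b7cf2-8d97-11ea-a47e-50e5495b05a6 gptid/<new-partition-uuid>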

The system has the following specs:

AMD Athlon(tm) II X4 631 Quad-Core Processor
8GB RAM
2 pools: one consisting of 4 disks and one consisting of 6 disks (this is the degraded pool)
Pool in question is all 4TB WD Reds
Running TrueNAS-12.0-U1.1

The system was upgraded from FreeNAS 11.3 to TrueNAS 12 about three days ago, with no problems encountered during the upgrade.

Being a casual user, I'm not well versed in what the WebGUI error message indicates. Is it possible I have a second failed drive and the pool is toast? Thank you for the assistance!
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Does the GPTID of the unavailable drive correspond to the disk that was removed?
 

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
That's a really good question. How would I go about checking that? Running "glabel status" shows 9 drives connected, but none of the GPTIDs match any of the drives listed in either of the pools.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
No, run gpart list, and look at the rawuuids for the p2 partitions.
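
Something like this will pull out just the partition names and rawuuids (exact output varies by release):

Code:
# Print each partition name and its rawuuid; compare the p2 rawuuids
# against the gptids shown in your zpool import output:
gpart list | grep -E 'Name:|rawuuid'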
 

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
Using "gpart list," all online disks match with the "UNAVAIL" not listed at all since it is disconnected. So to answer your original question, yes, the GPTID does correspond to the removed drive.
 

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
I suppose this means there's an integrity issue with the pool itself, hence the error, since all of the drives are healthy except the failed and removed one. How would I go about investigating what is wrong with the integrity of this pool?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Most likely there was some data in flight that can't be reconstructed. What does zpool import -F -n Media say? This will show what the barriers to an import are, but won't actually do anything irreversible.
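
For clarity, the flags here are:

Code:
# -F  attempt recovery by rewinding the pool to an earlier, consistent txg
# -n  dry run: report whether -F could make the pool importable,
#     without actually performing the recovery
zpool import -F -n Media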
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Try rebooting again to see if the pool imports on boot. Since no specific barrier to import was reported, the pool should be importable.
 

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
I have rebooted about six times since this issue manifested, including just now, with no change. The pool still will not mount.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
OK, try zpool import -F Media to attempt to correct the errors preventing import, then zpool import Media.
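
That is, roughly:

Code:
zpool import -F Media   # recovery import: may discard the last few transactions
zpool import Media      # then a normal import, if the recovery pass didn't mount it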
 

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
Here are the outputs:

Code:
internal error: cannot import 'Media': Integrity check failed
zsh: abort (core dumped)  zpool import -F Media


Code:
internal error: cannot import 'Media': Integrity check failed
zsh: abort (core dumped)  zpool import Media
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Can you attach your /var/log/console.log?
 

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
What would be the best way to extract those logs? I'm able to view the file in the console with nano and more, but I'm not seeing an easy way to export it. Does this need to be done via FTP? I don't have FTP set up, but can do that.

Also, on a whim, I decided to go ahead, put the old drive back in the system, and replace the SATA cable. I'm not sure what happened here, but the pool is now online and healthy, and the drive even passes a SMART test. It is no longer degraded, which doesn't make sense, as I did change the contents of the pool by copying files over this morning after it had already degraded. I feel like something else is definitely up here, and I will work on getting that log file. Here are the current zpool status output and the SMART summary for the re-seated drive:

Code:
root@freenas[~]# zpool status
  pool: Data
 state: ONLINE
  scan: scrub repaired 0B in 05:46:17 with 0 errors on Fri Jan  1 05:46:17 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        Data                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/13098032-8d97-11ea-a47e-50e5495b05a6  ONLINE       0     0     0
            gptid/135c339e-8d97-11ea-a47e-50e5495b05a6  ONLINE       0     0     0
            gptid/136203b9-8d97-11ea-a47e-50e5495b05a6  ONLINE       0     0     0
            gptid/1369541e-8d97-11ea-a47e-50e5495b05a6  ONLINE       0     0     0

errors: No known data errors

  pool: Media
 state: ONLINE
  scan: resilvered 222M in 00:00:05 with 0 errors on Fri Jan 29 18:47:47 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        Media                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/2b85c0f3-f873-11ea-90a1-50e5495b05a6  ONLINE       0     0     0
            gptid/017b7cf2-8d97-11ea-a47e-50e5495b05a6  ONLINE       0     0     0
            gptid/7e59b147-cb8d-11ea-b20a-50e5495b05a6  ONLINE       0     0     0
            gptid/38f3dd57-1d25-11eb-9963-50e5495b05a6  ONLINE       0     0     0
            gptid/786ddc6d-dd8e-11ea-b9ec-50e5495b05a6  ONLINE       0     0     0
            gptid/020d0519-8d97-11ea-a47e-50e5495b05a6  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:00:56 with 0 errors on Tue Jan 26 03:45:56 2021
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors


Code:
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red Pro
Device Model:     WDC WD4003FFBX-68MU3N0
Serial Number:    V1GU2NJG
LU WWN Device Id: 5 000cca 0bccb6745
Firmware Version: 83.00A83
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Fri Jan 29 19:06:13 2021 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
scp is the easiest way to get the log, but there's no need, now that your pool is back.
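
For future reference, something like this run from another machine would do it (the address below is a placeholder for your NAS's IP):

Code:
# Pull the console log off the NAS over SSH:
scp root@192.168.1.100:/var/log/console.log .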
 

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
Ah ha! My command-line skills are a bit rusty! I've attached the console.log in a zip since the forum didn't accept the raw log file.

On another note, now would be an excellent time to get this data backed up! I've been spending time organizing it with the intention of backing it up when finished. I shouldn't wait; I'll be responsible and back it up now.
 

Attachments

  • console.zip
    23.5 KB

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
OK, the console log explains why you were having problems importing the Media pool.

Code:
Jan 29 18:47:51 freenas Beginning pools import
Jan 29 18:47:51 freenas Importing Media
Jan 29 18:47:51 freenas spa.c:6138:spa_tryimport(): spa_tryimport: importing Media
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADING
Jan 29 18:47:51 freenas vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/017b7cf2-8d97-11ea-a47e-50e5495b05a6': best uberblock found for spa $import. txg 2254233
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): using uberblock with txg=2254233
Jan 29 18:47:51 freenas spa.c:8187:spa_async_request(): spa=$import async request task=2048
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADED
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): UNLOADING
Jan 29 18:47:51 freenas spa.c:6138:spa_tryimport(): spa_tryimport: importing Data
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADING
Jan 29 18:47:51 freenas vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/13098032-8d97-11ea-a47e-50e5495b05a6': best uberblock found for spa $import. txg 1991272
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): using uberblock with txg=1991272
Jan 29 18:47:51 freenas spa.c:8187:spa_async_request(): spa=$import async request task=2048
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADED
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): UNLOADING


When the pool status is first read for import, the representative device for Media is gptid/017b7cf2-8d97-11ea-a47e-50e5495b05a6, which is the one that was unavailable. (For Data, the representative device is gptid/13098032-8d97-11ea-a47e-50e5495b05a6.)

Then during the actual import:

Code:
Jan 29 18:47:51 freenas spa.c:5990:spa_import(): spa_import: importing Media
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load(Media, config trusted): LOADING
Jan 29 18:47:51 freenas vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/017b7cf2-8d97-11ea-a47e-50e5495b05a6': best uberblock found for spa Media. txg 2254233
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load(Media, config untrusted): using uberblock with txg=2254233
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load(Media, config trusted): read 118 log space maps (118 total blocks - blksz = 131072 bytes) in 720 ms
Jan 29 18:47:51 freenas mmp.c:241:mmp_thread_start(): MMP thread started pool 'Media' gethrtime 45864379785
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254234, spa Media, vdev_id 0, ms_id 80, smp_length 183848, unflushed_allocs 7774208, unflushed_frees 6356992, freed 0, defer 0 + 0, unloaded time 45864 ms, loading_time 37 ms, ms_max_size 5660490137
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254234, spa Media, vdev_id 0, ms_id 84, smp_length 400784, unflushed_allocs 1351680, unflushed_frees 1294336, freed 0, defer 0 + 0, unloaded time 45901 ms, loading_time 61 ms, ms_max_size 4019449856
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254234, spa Media, vdev_id 0, ms_id 88, smp_length 437792, unflushed_allocs 5562368, unflushed_frees 5390336, freed 0, defer 0 + 0, unloaded time 45963 ms, loading_time 43 ms, ms_max_size 6126748467
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254234, spa Media, vdev_id 0, ms_id 90, smp_length 135688, unflushed_allocs 1425408, unflushed_frees 1433600, freed 0, defer 0 + 0, unloaded time 46007 ms, loading_time 39 ms, ms_max_size 5546781900
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254234, spa Media, vdev_id 0, ms_id 104, smp_length 287952, unflushed_allocs 6651904, unflushed_frees 6701056, freed 0, defer 0 + 0, unloaded time 46047 ms, loading_time 42 ms, ms_max_size 343682416
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254234, spa Media, vdev_id 0, ms_id 105, smp_length 458544, unflushed_allocs 1089536, unflushed_frees 1187840, freed 0, defer 0 + 0, unloaded time 46090 ms, loading_time 52 ms, ms_max_size 468327792
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254234, spa Media, vdev_id 0, ms_id 117, smp_length 12808, unflushed_allocs 6815744, unflushed_frees 6774784, freed 0, defer 0 + 0, unloaded time 46143 ms, loading_time 18 ms, ms_max_size 5050230374
Jan 29 18:47:51 freenas spa.c:8187:spa_async_request(): spa=Media async request task=1
Jan 29 18:47:51 freenas spa.c:8187:spa_async_request(): spa=Media async request task=16
Jan 29 18:47:51 freenas spa.c:8187:spa_async_request(): spa=Media async request task=2048
Jan 29 18:47:51 freenas spa_misc.c:411:spa_load_note(): spa_load(Media, config trusted): LOADED
Jan 29 18:47:51 freenas spa_history.c:309:spa_history_log_sync(): txg 2254235 open pool version 5000; software version unknown; uts  12.2-RELEASE-p2 1202000 amd64
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254235, spa Media, vdev_id 0, ms_id 118, smp_length 14568, unflushed_allocs 1236992, unflushed_frees 1425408, freed 0, defer 0 + 0, unloaded time 46375 ms, loading_time 14 ms, ms_max_size 5488205004
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254235, spa Media, vdev_id 0, ms_id 119, smp_length 12936, unflushed_allocs 1359872, unflushed_frees 1400832, freed 8192, defer 221184 + 0, unloaded time 46389 ms, loading_time 26 ms, ms_max_size 44
Jan 29 18:47:51 freenas spa.c:8187:spa_async_request(): spa=Media async request task=32
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254236, spa Media, vdev_id 0, ms_id 120, smp_length 10280, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 46676 ms, loading_time 49 ms, ms_max_size 68719345664, max size
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254237, spa Media, vdev_id 0, ms_id 121, smp_length 13096, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 46676 ms, loading_time 50 ms, ms_max_size 68719435776, max size
Jan 29 18:47:51 freenas spa_history.c:309:spa_history_log_sync(): txg 2254237 import pool version 5000; software version unknown; uts  12.2-RELEASE-p2 1202000 amd64
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254237, spa Media, vdev_id 0, ms_id 124, smp_length 8448, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 46727 ms, loading_time 13 ms, ms_max_size 67178217472, max size e
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254237, spa Media, vdev_id 0, ms_id 122, smp_length 92120, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 46726 ms, loading_time 84 ms, ms_max_size 68469997568, max size
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254237, spa Media, vdev_id 0, ms_id 125, smp_length 124904, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 46741 ms, loading_time 201 ms, ms_max_size 46275559424, max siz
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254237, spa Media, vdev_id 0, ms_id 127, smp_length 15352, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 46942 ms, loading_time 50 ms, ms_max_size 68719403008, max size
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254237, spa Media, vdev_id 0, ms_id 126, smp_length 103312, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 46810 ms, loading_time 190 ms, ms_max_size 65327792128, max siz
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254237, spa Media, vdev_id 0, ms_id 128, smp_length 9104, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 46993 ms, loading_time 63 ms, ms_max_size 68719353856, max size e
Jan 29 18:47:51 freenas dsl_scan.c:1112:dsl_scan_restart_resilver(): restarting resilver txg=2254239
Jan 29 18:47:51 freenas metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 2254238, spa Media, vdev_id 0, ms_id 129, smp_length 11600, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 47001 ms, loading_time 124 ms, ms_max_size 68719312896, max size


gptid/017b7cf2-8d97-11ea-a47e-50e5495b05a6 seems to be thrashing at txg 2254235 and the following txgs 2254236-8.

Even though your pool is back and resilvered, this disk still seems to be laboring and will likely need to be replaced.
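
Before sending it off for RMA, a long SMART self-test would give you more confidence either way (the device node below is a placeholder; you can map gptids to device nodes with glabel status):

Code:
# Kick off an extended self-test on the suspect drive:
smartctl -t long /dev/ada1
# Check the result once the test completes (several hours on a 4TB disk):
smartctl -a /dev/ada1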

The console log shows something similar may be going on in the Data pool with gptid/13098032-8d97-11ea-a47e-50e5495b05a6, at txgs 1991281-4.
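
You can pull the relevant entries out of the log yourself with something like this (gptids abbreviated for the match):

Code:
# Show all console.log lines mentioning either suspect disk:
grep -E '017b7cf2|13098032' /var/log/console.log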
 

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
Looks like it would be prudent for me to replace these drives then. Thanks for the help!
 

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
Looking at the console logs again, I'm seeing similar lines for gptid/2b85c0f3-f873-11ea-90a1-50e5495b05a6 on the Media pool at about 18:56:27. Does this mean that disk is struggling too?
 

rhosch

Dabbler
Joined
Jan 29, 2021
Messages
12
Here's the excerpt:

Code:
Jan 29 18:56:27 freenas Beginning pools import
Jan 29 18:56:27 freenas Importing Media
Jan 29 18:56:27 freenas spa.c:8187:spa_async_request(): spa=$import async request task=2048
Jan 29 18:56:27 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADED
Jan 29 18:56:27 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): UNLOADING
Jan 29 18:56:27 freenas spa.c:6138:spa_tryimport(): spa_tryimport: importing Media
Jan 29 18:56:27 freenas spa.c:6143:spa_tryimport(): spa_tryimport: using cachefile '/data/zfs/zpool.cache.saved'
Jan 29 18:56:27 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADING
Jan 29 18:56:27 freenas vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/2b85c0f3-f873-11ea-90a1-50e5495b05a6': best uberblock found for spa $import. txg 2254257
Jan 29 18:56:27 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): using uberblock with txg=2254257
Jan 29 18:56:27 freenas spa.c:8187:spa_async_request(): spa=$import async request task=2048
Jan 29 18:56:27 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADED
Jan 29 18:56:27 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): UNLOADING
Jan 29 18:56:27 freenas spa.c:5990:spa_import(): spa_import: importing Media
Jan 29 18:56:27 freenas spa_misc.c:411:spa_load_note(): spa_load(Media, config trusted): LOADING
Jan 29 18:56:27 freenas vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/2b85c0f3-f873-11ea-90a1-50e5495b05a6': best uberblock found for spa Media. txg 2254257
Jan 29 18:56:27 freenas spa_misc.c:411:spa_load_note(): spa_load(Media, config untrusted): using uberblock with txg=2254257
Jan 29 18:56:27 freenas spa_misc.c:411:spa_load_note(): spa_load(Media, config trusted): read 118 log space maps (118 total blocks - blksz = 131072 bytes) in 846 ms
Jan 29 18:56:27 freenas mmp.c:241:mmp_thread_start(): MMP thread started pool 'Media' gethrtime 42031487921
 