I have a Mini 3.0 X+ with 64 GB of RAM and five IronWolf Pro 18 TB drives in a raidz2 pool. The dataset in question uses a 128K recordsize, lz4 compression, and AES-256 encryption. The test runs by itself; there is no other load on the system.
There is a directory with 42,000 subdirectories, each of which has just a few file entries. The whole structure was created at the same time with a recursive `cp`. Using a minimal file walker written in Go (to avoid any excess overhead of `find | wc` or the like), traversing this structure and counting the 68,000 total files takes many minutes. It processes fewer than 30 directories per second, and the disks are seeking like crazy. (If I run the count again, so it uses cached metadata, it executes instantly, as one would expect.)
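For context, the walker does nothing beyond a `filepath.WalkDir` over the tree while counting entries; a minimal sketch of it looks roughly like this (the root path is a placeholder):

```go
package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
)

func main() {
	root := "/mnt/tank/testdir" // placeholder path to the directory tree

	var dirs, files int

	// WalkDir works from ReadDir entries, so it only needs directory
	// metadata -- no per-file stat beyond what the walk itself requires.
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			dirs++
		} else {
			files++
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%d directories, %d files\n", dirs, files)
}
```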
Simple question: this seems ridiculously slow. Is this behavior expected? Thanks for any wisdom/advice.