jpaetzel
Guest
We were recently asked how many files ZFS can support. The theoretical limits of ZFS are impressive, but as the saying goes, in theory there is no difference between theory and practice, but in practice there is.
We decided to build a test rig to see how ZFS would fare with lots and lots of files.
After two days of running a shell script we had a pool with one million directories, with one thousand files in each directory, for a total of one billion files.
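The post doesn't include the script itself, but a minimal sketch of one way to generate that layout looks like the following. The directory and file counts are parameters (the test rig used 1,000,000 directories and 1,000 files each; the defaults here are scaled way down so the sketch runs quickly), and all paths and names are illustrative, not from the original test.

```shell
#!/bin/sh
# Create NUM_DIRS directories, each holding NUM_FILES empty files.
# The actual test used NUM_DIRS=1000000 and NUM_FILES=1000; these
# defaults are tiny placeholders for demonstration.
NUM_DIRS=${NUM_DIRS:-10}
NUM_FILES=${NUM_FILES:-10}
BASE=${BASE:-./bigdir}

mkdir -p "$BASE"
i=0
while [ "$i" -lt "$NUM_DIRS" ]; do
    d="$BASE/dir$i"
    mkdir -p "$d"
    j=0
    while [ "$j" -lt "$NUM_FILES" ]; do
        : > "$d/file$j"   # create an empty file
        j=$((j + 1))
    done
    i=$((i + 1))
done
```

At the full scale, creating a billion inodes one at a time like this is why the run took two days: each file creation is a synchronous metadata operation.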
Our test system had 192 GB of RAM, and even with a 150 GB ARC the metadata cache was blown out of the water. An ls -f in the directory with one million directories took about 30 minutes. The same operation over gigE NFS took around 7 hours. A find . -type f took several hours locally, and ran overnight via NFS.
I guess we expected that sort of behavior. The good news is that operations which didn't touch that big directory weren't painful at all. Snapshot creation and deletion operated normally, and metadata operations on the 1,000-file subdirectories had decent performance. All in all I'd say ZFS handles the pathological situation fairly well.