Chris Moore
Hall of Famer
- Joined
- May 2, 2015
- Messages
- 10,080
I have not created many threads here and I don't know if this is the best spot for this one, but I figured I would give it a try. Here is the problem I have.
I have a server that houses several ArcGIS "File geodatabase" containers. This is a special ArcGIS thing that is not really a database in the sense of something like SQL, where all the data is inside a single file. It is a folder (directory structure) that contains thousands of small files. When the application is accessing these files, it opens many of them and relates the data in the individual files to produce composite imagery in the form of 3D maps that have many layers of data. Some of the files are as small as 1 KB.
Here is a link to ArcGIS info on it: http://desktop.arcgis.com/en/arcmap/10.3/manage-data/geodatabases/what-is-a-geodatabase.htm
When a user makes changes, it may involve updates to many of the files, and this is apparently working a bit too slowly on my server. According to users, "Access to the database on the server 'freezes', whereas the same database, when moved to a local drive, works fine." It is not convenient to move the database to a local drive because it can take quite a while to do that, as there are so many small files to copy, and it would need to be moved back to the server after work is complete.
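To show just how lopsided the file sizes are (and why copying the geodatabase to a local drive takes so long), here is a quick sketch that buckets file counts by size. The bucket limits and the path are only illustrative; point it at an actual geodatabase folder:

```python
import os

def file_size_histogram(root, buckets=(1024, 16 * 1024, 1024 * 1024)):
    """Walk `root` and count files into size buckets.

    Returns (total_files, counts) where counts[i] is the number of files
    no larger than buckets[i], and counts[-1] is everything bigger than
    the last bucket.
    """
    counts = [0] * (len(buckets) + 1)
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            size = os.path.getsize(os.path.join(dirpath, name))
            total += 1
            for i, limit in enumerate(buckets):
                if size <= limit:
                    counts[i] += 1
                    break
            else:
                counts[-1] += 1  # larger than the biggest bucket
    return total, counts
```

A large count in the smallest bucket would confirm that the workload is dominated by tiny-file metadata operations rather than bulk throughput.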
I am wondering what I can do with ZFS to accelerate the updates. I am currently using a RAIDZ2 pool with 4 vdevs, and I am willing to make changes; I am just not sure what would give me the best results.
The server in question is running on a Supermicro X10 dual-Xeon board with two 8-core processors at 2.6 GHz and 256 GB of memory. The network connection between the workstation and the server is 1 Gb/s, but it is not fully utilized during the transactions. The slowdown appears to be on the server side. Server processor utilization is usually less than 25%, and memory utilization has stabilized around 75%. The system has been running for about a year, and this is really the only complaint. It is a little slow, and I would like to make it faster.
Any suggestions?