[Archivesspace_Users_Group] Problems working with archival object with large number of direct children
Joshua D. Shaw
Joshua.D.Shaw at dartmouth.edu
Tue Nov 15 15:46:44 EST 2016
Hi all –
We at Dartmouth have experienced similar issues. We have some large resources as well (one has 60K+ objects in the tree), and anything that involves a save or rearrangement (moving a file around, etc.) can take a *lot* of time (many minutes) and may cause an error – typically of the “another user is modifying this record” type.
If we have to make any modifications to a resource of that size, we a) budget a lot of time and b) work in small increments – i.e., don’t move more than a couple of files around at a time. It’s not a great solution, but it does minimize some of the headache.
I *think* (but haven’t had the time to really dig into this) that one reason the error comes about is that the indexer collides with the process that the save/rearrangement kicked off. We are still running 1.3 and hope that some of our issues will be mitigated when we move to 1.5.1, though we know that not all of them have been resolved yet.
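For anyone unfamiliar with that error: it is the usual symptom of optimistic concurrency control, where a record carries a version counter and a save with a stale version is rejected. The sketch below is a minimal illustration of that mechanism in general, not ArchivesSpace’s actual code; the names (`lock_version`, `save`) are illustrative.

```python
# Minimal sketch of optimistic locking, the mechanism that typically
# produces "another user is modifying this record" when two processes
# (e.g. an editor and a background indexer) race on the same record.
# Illustrative only -- not ArchivesSpace's implementation.

class ConflictError(Exception):
    pass

class Record:
    def __init__(self):
        self.lock_version = 0
        self.data = {}

def save(record, edits, expected_version):
    # A save submitted with a stale version number is rejected rather
    # than silently overwriting the other writer's changes.
    if record.lock_version != expected_version:
        raise ConflictError("another user is modifying this record")
    record.data.update(edits)
    record.lock_version += 1

r = Record()
save(r, {"title": "Series 1"}, expected_version=0)      # succeeds, bumps version to 1
try:
    save(r, {"title": "Series I"}, expected_version=0)  # stale version: rejected
except ConflictError as e:
    print(e)  # prints "another user is modifying this record"
```

If the indexer (or any background process) bumps the version between the time the edit form loads and the time the save lands, the save fails in exactly this way.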
One other data point is that we’ve got a plugin that runs as a background job doing a bunch of importing. This background job touches some of the larger resources, but does *not* cause the errors and long save times, which leads me to believe that a lot of the problem is in the frontend – perhaps with the way the tree is populated - as Jason pointed out.
From: <archivesspace_users_group-bounces at lyralists.lyrasis.org> on behalf of Jason Loeffler <j at minorscience.com>
Reply-To: Archivesspace Users Group <archivesspace_users_group at lyralists.lyrasis.org>
Date: Tuesday, November 15, 2016 at 3:25 PM
To: Archivesspace Users Group <archivesspace_users_group at lyralists.lyrasis.org>
Cc: "archivesspace at googlegroups.com" <archivesspace at googlegroups.com>
Subject: Re: [Archivesspace_Users_Group] Problems working with archival object with large number of direct children
Definitely, yes. We have many resources with 5,000 or more archival object records. We've deployed on some pretty decent Amazon EC2 boxes (16GB memory, burstable CPU, etc.) with negligible improvement. I have a feeling that this is not a resource allocation issue. Looking at the web inspector, most of the time is spent negotiating jstree (http://jstree.com/) and/or loading all JSON objects associated with a resource into the browser. Maybe an ASpace dev can weigh in.
From the sysadmin side, Maureen Callahan at Yale commissioned Percona to evaluate ArchivesSpace and MySQL performance. I've attached the report. Let me know if you need any help interpreting the report.
At some point, and quite apart from this thread, I hope we can collectively revisit the staff interface architecture and recommend improvements.
On Tue, Nov 15, 2016 at 2:37 PM, Sally Vermaaten <sally.vermaaten at nyu.edu> wrote:
We're running into an issue with a large resource record in ArchivesSpace and wonder if anyone has experienced a similar issue. In one resource record, we have a series/archival object with around 19,000 direct children/archival objects. We've found that:
· it takes several minutes to open the series in the 'tree' navigation view and then, once opened, scrolling through the series is very slow / laggy
· it takes a couple of minutes to open any archival object in the series in edit mode and
· it takes a couple of minutes to save changes to any archival object within the series
Does anyone else have a similarly large archival object in a resource record? If so, have you observed the same long load/save time when editing the component records?
The slow load time does not seem to be affected by memory allocation; we've tried increasing the speed / size of the server and it seemed to have no effect. We'd definitely appreciate any other suggestions for how we might fix or work around the problem.
We also wonder if this performance issue is essentially caused by the queries being run to generate the UI view - i.e., perhaps in generating the resource 'tree' view, all data for the whole series (all 19k archival objects) is being retrieved and stored in memory? If so, we wondered if it would be possible, and would make sense, to change the queries run during tree generation to retrieve only a batch at a time, lazy-loading style?
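[The batched loading suggested above could be sketched roughly as follows. This is a hedged illustration of the idea, not an actual ArchivesSpace endpoint: `fetch_children` stands in for a paginated backend call with offset/limit parameters, and the batch size is arbitrary.]

```python
# Sketch of lazy/batched tree loading: instead of pulling all 19k
# children of a node at once, fetch them a page at a time on demand.
# fetch_children is a stand-in for a paginated HTTP call, not a real
# ArchivesSpace API.

def fetch_children(all_children, offset, limit):
    # Stand-in for a backend request like ?offset=N&limit=M.
    return all_children[offset:offset + limit]

def iter_children(all_children, batch_size=200):
    """Yield children one batch at a time; stop when a fetch comes back empty."""
    offset = 0
    while True:
        batch = fetch_children(all_children, offset, batch_size)
        if not batch:
            return
        yield from batch
        offset += batch_size

# The UI would only consume as many batches as the user scrolls through,
# so opening the series never materializes the full 19k-row payload.
children = list(range(19000))
first_page = list(fetch_children(children, 0, 200))  # only 200 rows fetched up front
```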
Weatherly and Sally
Project Manager, Archival Systems
New York University Libraries
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group at lyralists.lyrasis.org