[Archivesspace_Users_Group] related records

Chris Fitzpatrick Chris.Fitzpatrick at lyrasis.org
Tue Jun 2 07:19:00 EDT 2015


Hi,


So, there are a few separate issues being conflated in this thread...


This ticket ( https://archivesspace.atlassian.net/browse/AR-707 ) is related to an issue where a resource with lots of instances takes a long time to load. With regard to instances, ASpace currently loads all the associated instances for a record. The ticket was closed because this isn't a bug; it's just the outcome of how the UI was built. There is a feature request to paginate the instance records here => ( https://archivesspace.atlassian.net/browse/AR-982 ). Sorry the links got lost when we moved to Jira.


There is a similar, but separate, issue involving records with lots of "siblings", for example an archival object that shares a single parent with a lot of other archival objects. Here's an example: http://sandbox.archivesspace.org/resources/7 . This is a resource with 3,000 archival object children, and it takes about 30 seconds to load.


Looking at the screenshot Ben sent, this looks like a similar issue, since it's getting a timeout after ~60s when trying to resolve the tree. You can bump up the timeout on your proxy, maybe to 300s, which might let the object resolve. The ideal permanent fix would probably be a feature that initially loads only a limited number of children and loads more on request. Alternatively, the trees could be cached in some manner. But, yes, currently the UI isn't really built to handle many thousands of children attached to a single parent.
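For example, if you happen to be fronting the staff interface with nginx, the relevant directives would look something like the sketch below. This is only a sketch: the proxied port (8080) and the 300s value are assumptions, so adjust them for your own setup, and use the equivalent directives (e.g. Apache's ProxyTimeout) if you run a different proxy.

    # nginx reverse-proxy sketch: give the ASpace staff UI more time
    # to return large resource trees before the proxy gives up.
    location / {
        proxy_pass http://localhost:8080;    # assumes the staff UI listens on 8080
        proxy_read_timeout    300s;          # wait up to 5 minutes for the backend response
        proxy_connect_timeout 300s;
        proxy_send_timeout    300s;
    }

Keep in mind this only buys time for the existing behavior; it doesn't make the tree load any faster.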


best, chris.






Chris Fitzpatrick | Developer, ArchivesSpace
Skype: chrisfitzpat  | Phone: 918.236.6048
http://archivesspace.org/
________________________________
From: archivesspace_users_group-bounces at lyralists.lyrasis.org <archivesspace_users_group-bounces at lyralists.lyrasis.org> on behalf of Ben Goldman <bmg17 at psu.edu>
Sent: Monday, June 1, 2015 10:54 PM
To: Archivesspace Users Group
Subject: Re: [Archivesspace_Users_Group] related records

Chris,

Thank you for calling my attention to this ticket. I admit to being somewhat confused by the conclusions reached there, and would welcome some clarification from ASpace. There seems to be some indication that the challenges associated with large numbers of related records are the same as those for resource records with large numbers of sibling archival object children. The cases where this is likely to exist at Penn State are numerous, particularly for institutional record collections. It would be nice to have a clearer sense of what ASpace can currently accommodate (so we know how widespread this problem is likely to be), and what the plans are to address it.

Thanks,
Ben




________________________________
From: "Christopher John Prom" <prom at illinois.edu>
To: "Archivesspace Users Group" <archivesspace_users_group at lyralists.lyrasis.org>
Sent: Monday, June 1, 2015 4:28:11 PM
Subject: Re: [Archivesspace_Users_Group] related records

Brian,

We have had quite a few issues with this "many sibling records" problem as well, and it is one of the ‘deal breaker’ issues (along with the public interface and a few other things) that are preventing Illinois from going live in a production environment.

The problem occurs in multiple areas, not just resource components, but also where a lot of digital objects are related to a resource or resource component record. I am hoping at some point it can be prioritized for fixing. There is background information in the comments on the closed (but unaddressed) request I had previously submitted regarding this known issue.

https://archivesspace.atlassian.net/browse/AR-707

Thanks,

Chris

Christopher Prom, PhD
Professor, University Library
Assistant University Archivist
1408 W. Gregory Drive
Urbana, IL 61820
(217) 244-2052
prom at illinois.edu

http://archives.library.illinois.edu
Blog: http://e-records.chrisprom.com

On Jun 1, 2015, at 3:46 PM, Brian Hoffman <brianjhoffman at gmail.com> wrote:

I think that is probably the issue. The tree is designed to load as you navigate through it, but the logic is based on the assumption that there aren’t lots of siblings to load at once. That will need to be revisited in order to fix the loading problems you are seeing.




On Jun 1, 2015, at 3:43 PM, Ben Goldman <bmg17 at psu.edu> wrote:

Hi Brian,

Yes, in this particular case, there are several thousand siblings, all children of the same parent (the resource record). In a few other cases, we have an equivalent number of objects, but arranged into parent series, so a little more structured.

-Ben

________________________________
From: "Brian Hoffman" <brianjhoffman at gmail.com<mailto:brianjhoffman at gmail.com>>
To: "Archivesspace Users Group" <archivesspace_users_group at lyralists.lyrasis.org<mailto:archivesspace_users_group at lyralists.lyrasis.org>>
Sent: Monday, June 1, 2015 3:15:27 PM
Subject: Re: [Archivesspace_Users_Group] related records

Hi Ben,

Does the collection that is causing this have a large number of sibling components? In other words, what is the maximum number of components that all share the same parent?

Brian


On Jun 1, 2015, at 2:37 PM, Ben Goldman <bmg17 at psu.edu> wrote:

Aaaaand... the screenshots now.

-Ben

________________________________
From: "Ben Goldman" <bmg17 at psu.edu<mailto:bmg17 at psu.edu>>
To: "Archivesspace Users Group" <archivesspace_users_group at lyralists.lyrasis.org<mailto:archivesspace_users_group at lyralists.lyrasis.org>>
Sent: Monday, June 1, 2015 2:30:42 PM
Subject: Re: [Archivesspace_Users_Group] related records

Hi Chris,

Looks like the issues are on the client side in Chrome. A lot of 200s. Harder to tell in Firefox, though the Network Monitor in Firefox appears to be a little more detailed. I've attached screenshots here.

I don't want to belabor this issue at everyone's expense. I just want to get an idea of what ASpace can handle in terms of tree size and attached records. Obviously, we're all going to have a few outlying, extremely large resource records to deal with, though.

Thanks,
Ben



________________________________
From: "Chris Fitzpatrick" <Chris.Fitzpatrick at lyrasis.org<mailto:Chris.Fitzpatrick at lyrasis.org>>
To: "Archivesspace Users Group" <archivesspace_users_group at lyralists.lyrasis.org<mailto:archivesspace_users_group at lyralists.lyrasis.org>>
Sent: Monday, June 1, 2015 6:12:12 AM
Subject: Re: [Archivesspace_Users_Group] related records

Hi Ben,

This could be the result of a few things.
First off, on the server side, you can adjust the Java heap space settings by setting ASPACE_JAVA_XMX (or adding an -Xmx setting to your JAVA_OPTS). How high you can go will depend on how much RAM you have on your system. Performance can also vary depending on the number of processors you have on the server.
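For instance, bumping the heap might look like the sketch below. The 2g value is just an example, and the exact way the variable is consumed can vary by version, so check your archivesspace.sh launcher before copying it.

    # Sketch: give the ArchivesSpace JVM a larger heap before starting it.
    # Size the value to the RAM actually available on the server.
    export ASPACE_JAVA_XMX="-Xmx2g"
    # ...then start ArchivesSpace the way you normally do, e.g.:
    ./archivesspace.sh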

All that said, what you're probably seeing is actually on the client side. That is, ASpace is giving your browser a bunch of data that is overwhelming your browser's JavaScript engine.

So, are the 5000 AOs directly associated with a single parent, or is this a tree of objects with multiple parents? And do the individual records have a lot of subrecords associated with them, such as instances, agents, or subjects?

You can see this by looking at the network activity (Tools => Web Developer => Network in Firefox; View => Developer => Developer Tools => Network in Chrome). You should see all the requests the browser is sending there... if all the requests are getting a "200", then it's the browser's handling of the JS that's causing the issue; if you see a request that just hangs in "Pending" status, then the issue is on the server side.
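If it's easier, you can make a similar server-side check from the command line with curl. This is only a sketch: the host, port, resource path, and session cookie value below are placeholders for whatever record you're testing and however you authenticate.

    # Time a single request to the staff interface without the browser in the way.
    # A quick 200 suggests the slowness is client-side JS; a long wait or a timeout
    # points at the server or proxy.
    curl -s -o /dev/null \
         --cookie "archivesspace_session=PLACEHOLDER" \
         -w "HTTP %{http_code} in %{time_total}s\n" \
         "http://your-aspace-host:8080/resources/7"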

Does that make sense?

b,chris.






Chris Fitzpatrick | Developer, ArchivesSpace
Skype: chrisfitzpat  | Phone: 918.236.6048
http://archivesspace.org/
________________________________
From: archivesspace_users_group-bounces at lyralists.lyrasis.org <archivesspace_users_group-bounces at lyralists.lyrasis.org> on behalf of Ben Goldman <bmg17 at psu.edu>
Sent: Thursday, May 28, 2015 2:37 PM
To: Archivesspace Users Group
Subject: Re: [Archivesspace_Users_Group] related records

Hello,

We are still having this issue in 1.2 on collections with large numbers of archival objects and/or large numbers of related records. What I'd like to know is whether any local configurations could be adjusted to increase performance.

Thanks,
Ben


Ben Goldman
Digital Records Archivist
Penn State University Libraries
University Park, PA
814-863-8333
http://www.libraries.psu.edu/psul/speccolls.html


________________________________
From: "Kevin Clair" <Kevin.Clair at du.edu<mailto:Kevin.Clair at du.edu>>
To: "Archivesspace Users Group" <archivesspace_users_group at lyralists.lyrasis.org<mailto:archivesspace_users_group at lyralists.lyrasis.org>>
Sent: Wednesday, May 6, 2015 8:13:04 PM
Subject: Re: [Archivesspace_Users_Group] related records

Hello,

We've had similar issues at DU with our collections that have large numbers of archival objects attached. Any sort of editing we want to do in those collections is greeted with the same "Loading..." message that never resolves. We notice it generally in collections with more than 5,000 archival objects, though we've had problems with smaller collections if we have many active users at a time.  -k
________________________________
From: archivesspace_users_group-bounces at lyralists.lyrasis.org [archivesspace_users_group-bounces at lyralists.lyrasis.org] on behalf of Ben Goldman [bmg17 at psu.edu]
Sent: Wednesday, May 06, 2015 1:40 PM
To: Archivesspace Users Group
Subject: [Archivesspace_Users_Group] related records


Hi All,

At Penn State we've noticed some issues with opening resource records that have a large number of related records. I would have to do some further digging to know how common such a case is, but just to provide one example: we have an institutional records collection with a couple hundred name records and maybe 80 accession records attached. When we click to Edit the record, it seems stuck on "Loading..."

I should mention that we're currently in the middle of migrating to v1.2, so maybe these issues will be addressed, but I am wondering if anyone has encountered challenges with an excessive number of related resources, or whether there is an optimal number or limit.

Thanks,
Ben

Ben Goldman
Digital Records Archivist
Penn State University Libraries
University Park, PA
814-863-8333
http://www.libraries.psu.edu/psul/speccolls.html








<Screen Shot 2015-06-01 at 2.28.01 PM.png> <Screen Shot 2015-06-01 at 10.16.02 AM.png>






_______________________________________________
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group at lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group


