<div dir="ltr">Thanks, Andrew,<div><br></div><div>So far, I've not been able to capture anything in the logs which shows errors, though I'm still trying. I've bumped up the logging to try to give a better idea of what is happening.<br><br>One thing I have seen is that the server load really jumps when we are trying to do the OAI harvest. Something is taking a lot more resources though the only process on the server which I can really see is the "java" process running ArchivesSpace. </div><div><br></div><div>I anticipate having to come back to this and to the group after we have some more data. If anyone is running 3.2.0 (with Java 11) and has ideas for me, that would be great.</div><div><br></div><div>Tom</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 30, 2022 at 5:33 AM Andrew Morrison <<a href="mailto:andrew.morrison@bodleian.ox.ac.uk">andrew.morrison@bodleian.ox.ac.uk</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>If you post the error messages in your log files from around the
time when you get an "Internal Server Error", it would help
diagnose the problem. But here are some observations that might be
relevant.<br>
</p>
<p>Exporting EAD, whether from the staff interface or via the
OAI-PMH service, uses both MySQL and Solr: MySQL to retrieve
the resource and the IDs of its archival objects, and Solr to
retrieve the archival objects themselves, although the exporter
checks whether the version in Solr matches the one in MySQL and
fetches from the database if not. So your problem could lie with
either, or both.
Also, if Solr and MySQL are out of sync on your 3.2 system but
in sync on the 2.8.1 one, that could explain some of the
difference in response time. You could try a soft reindex and see
if that has any effect:</p>
<p><a href="https://archivesspace.github.io/tech-docs/administration/indexes.html" target="_blank">https://archivesspace.github.io/tech-docs/administration/indexes.html</a><br>
</p>
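<p>For reference, a soft reindex just means removing the indexer's state files while ArchivesSpace is stopped, then starting it again so the indexer re-checks every record. Here is a minimal sketch, assuming the state files live under <i>data/indexer_state</i> in your install directory (adjust the path for your setup):</p>
<p># Run this with ArchivesSpace stopped; restart it afterwards to trigger the reindex.<br>
indexer_state = '/path/to/archivesspace/data/indexer_state'  # adjust to your install<br>
Dir.glob(File.join(indexer_state, '*')).each do |f|<br>
  File.delete(f) if File.file?(f)<br>
end</p>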
<p>Wherever it gets the records from, they're retrieved in batches
of 20 at a time. Those are then converted from JSON to EAD. That
is the CPU-intensive part, and single-threaded, so typically takes
up most of the overall runtime. But if something about
your infrastructure makes retrieval slow, you can reduce the
total waiting time by increasing the batch size. To do that, put
the following in <i>backend/plugin_init.rb</i> in a
local plugin and restart ArchivesSpace:</p>
<p># Raise the EAD exporter's batch size from the default of 20.<br>
# (Ruby may log an "already initialized constant" warning when this overrides the default; it can be ignored.)<br>
module ASpaceExport<br>
  module LazyChildEnumerations<br>
    PREFETCH_SIZE = 50<br>
  end<br>
end</p>
<p>Your mileage may vary. Bigger batches will increase memory usage.
And the change might make a big difference for some collections but
none at all for others, because the exporter only ever requests
siblings. A collection with a deeply nested structure
can therefore require hundreds more batches than a shallow one
with the same total number of archival objects,
regardless of how high you set the prefetch size.<br>
</p>
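<p>To put rough numbers on that, here is a purely illustrative sketch comparing two hypothetical collections with the same 1,000 archival objects, one flat and one split into many small sibling groups, using the default batch size of 20:</p>
<p>batch_size = 20<br>
# 1,000 siblings directly under the resource:<br>
flat = (1000.0 / batch_size).ceil          # => 50 batches<br>
# 200 sibling groups of 5 objects each (every group fetched separately):<br>
nested = 200 * (5.0 / batch_size).ceil     # => 200 batches<br>
# Raising PREFETCH_SIZE to 50 would cut the flat case to 20 batches, but the nested case stays at 200.<br>
puts "flat: #{flat} batches, nested: #{nested} batches"</p>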
<p>The OAI-PMH service has an additional issue: it cannot stream
its output. See here:</p>
<p><a href="https://archivesspace.atlassian.net/browse/ANW-1270" target="_blank">https://archivesspace.atlassian.net/browse/ANW-1270</a></p>
<p>Andrew.<br>
</p>
<p><br>
</p>
<div>On 29/03/2022 17:37, Andy Boze wrote:<br>
</div>
<blockquote type="cite">Just to
elaborate a bit on what Tom wrote, we are harvesting EAD records.
I've done a bit of comparison, making OAI requests for the same
records on 2.8.1 and 3.2. A record on 2.8.1 that took about 10
seconds , took about 3 minutes on 3.2. A record that took about 3
minutes to respond on 2.8.1 timed out on 3.2 after 20 minutes with
an "Internal Server Error" message.
<br>
<br>
Andy
<br>
<br>
On 3/29/2022 11:57 AM, Tom Hanstra wrote:
<br>
<blockquote type="cite">We have set up a test server running
ArchivesSpace 3.2.0. As required, that means a separate Solr
instance which I've installed on the same server.
<br>
<br>
Most things have gone OK, but we are seeing some timeout issues
with our OAI harvesting tests. The harvest will retrieve a few of the
records but regularly receives "Internal Server Error" messages.
What seems to be happening is that we are hitting certain
records which time out. We've tried skipping over such records
to see if it was just a bad record, but that simply causes a
failure a bit further down the line. Our timeout is set to 20
minutes, which should be plenty of time. So these timeouts don't
make much sense.
<br>
<br>
These same records harvest without issue from our 2.8.1
instance, so I would not expect this to be a problem with the
records themselves. Could it be something about how we have set up Solr? I
see no errors in any of our ArchivesSpace or Solr logs, so I'm
not sure how to debug this. Any suggestions?
<br>
<br>
Thanks,
<br>
Tom
<br>
<br>
-- <br>
*Tom Hanstra*
<br>
/Sr. Systems Administrator/
<br>
<a href="mailto:hanstra@nd.edu" target="_blank">hanstra@nd.edu</a> <a href="mailto:hanstra@nd.edu" target="_blank"><mailto:hanstra@nd.edu></a>
<br>
<br>
<br>
<br>
</blockquote>
<br>
</blockquote>
</div>
_______________________________________________<br>
Archivesspace_Users_Group mailing list<br>
<a href="mailto:Archivesspace_Users_Group@lyralists.lyrasis.org" target="_blank">Archivesspace_Users_Group@lyralists.lyrasis.org</a><br>
<a href="http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group" rel="noreferrer" target="_blank">http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group</a><br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div><b style="font-family:arial,helvetica,sans-serif;font-size:12.7273px;color:rgb(136,136,136)">Tom Hanstra</b><br></div><div style="color:rgb(136,136,136);font-size:12.8px"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div style="font-size:12.7273px"><div><div><i style="font-size:12.7273px;font-family:arial,helvetica,sans-serif">Sr. Systems Administrator</i></div><div><a href="mailto:hanstra@nd.edu" style="color:rgb(17,85,204);font-size:12.7273px;font-family:arial,helvetica,sans-serif" target="_blank">hanstra@nd.edu</a><br></div></div><div><span style="font-family:arial,helvetica,sans-serif"><br></span></div></div><div style="font-size:12.7273px"><img src="https://docs.google.com/uc?export=download&id=1GFX1KaaMTtQ2Kg2u8bMXt1YwBp96bvf0&revid=0B7APN9POn6xAQ244WWFYMFU3aVJwZ0lxbmVHK3FxNXlCd0RRPQ"><br></div></div></div></div></div></div></div></div></div></div></div>