[Archivesspace_Users_Group] (re)indexing in 2.8.1

Blake Carver blake.carver at lyrasis.org
Fri May 20 12:00:37 EDT 2022


It should look something like the lines below, where you should see a "GET whatever" request, and the crash is happening right at that "GET".

I've found it usually happens around the archival objects, so look for a "GET /repositories/2/archival_objects" line (or similar) followed by a FATAL, or maybe ERROR, entry.

Sometimes running it 1x1 will make it stop crashing, though. And on large sites it takes a heckuva long time to get through all the records.

I, [2022-05-19T13:32:55.617646 #93541]  INFO -- : Thread-2952: Staff Indexer [2022-05-19 13:32:55 -0400] Indexed 1 records in 2 seconds
D, [2022-05-19T13:32:55.700686 #93541] DEBUG -- : Thread-2960: GET /repositories/2/digital_objects?all_ids=true&modified_since=0 [session: #<Session:0x75e8ced0 @store={:user=>"search_indexer", :login_time=>2022-05-19 13:32:52 -0400, :expirable=>false}, @system_mtime=2022-05-19 17:32:52 UTC, @id="238c65ef0b6ba04ccf6120ce85df6d8daa77135ec9a972eaca6deb521c74714b">]

I, [2022-05-19T13:32:57.817094 #93541]  INFO -- : Thread-2952: Staff Indexer [2022-05-19 13:32:57 -0400] Indexed 1 records in 1 seconds
D, [2022-05-19T13:32:57.899146 #93541] DEBUG -- : Thread-2960: GET /agents/people?all_ids=true&modified_since=0 [session: #<Session:0x5b92f7d4 @store={:user=>"search_indexer", :login_time=>2022-05-19 13:32:52 -0400, :expirable=>false}, @system_mtime=2022-05-19 17:32:56 UTC, @id="238c65ef0b6ba04ccf6120ce85df6d8daa77135ec9a972eaca6deb521c74714b">]

I, [2022-05-19T13:33:02.485591 #93541]  INFO -- : Thread-2952: Staff Indexer [2022-05-19 13:33:02 -0400] ~~~ Indexed 1 of 1 classification records in repository Blake
D, [2022-05-19T13:33:02.544059 #93541] DEBUG -- : Thread-2970: GET /repositories/2/classifications?id_set=1&resolve%5B%5D=location_profile&resolve%5B%5D=container_profile&resolve%5B%5D=container_locations&resolve%5B%5D=subjects&resolve%5B%5D=places&resolve%5B%5D=linked_agents&resolve%5B%5D=linked_records&resolve%5B%5D=classifications&resolve%5B%5D=digital_object&resolve%5B%5D=agent_representation&resolve%5B%5D=repository&resolve%5B%5D=repository%3A%3Aagent_representation&resolve%5B%5D=related_agents&resolve%5B%5D=top_container&resolve%5B%5D=top_container%3A%3Acontainer_profile&resolve%5B%5D=related_agents&resolve%5B%5D=records&resolve%5B%5D=collections&resolve%5B%5D=surveyed_by&resolve%5B%5D=reviewer&resolve%5B%5D=creator&resolve%5B%5D=related_accessions [session: #<Session:0x17f0011f @store={:user=>"search_indexer", :login_time=>2022-05-19 13:32:52 -0400, :expirable=>false}, @system_mtime=2022-05-19 17:33:01 UTC, @id="238c65ef0b6ba04ccf6120ce85df6d8daa77135ec9a972eaca6deb521c74714b">]
D, [2022-05-19T13:33:02.568091 #93541] DEBUG -- : Thread-2970: Post-processed params: {"id_set"=>[1], "resolve"=>["location_profile", "container_profile", "container_locations", "subjects", "places", "linked_agents", "linked_records", "classifications", "digital_object", "agent_representation", "repository", "repository::agent_representation", "related_agents", "top_container", "top_container::container_profile", "related_agents", "records", "collections", "surveyed_by", "reviewer", "creator", "related_accessions"], "repo_id"=>2, "modified_since"=>0, "sort_field"=>:id, "sort_direction"=>:asc}

D, [2022-05-19T13:33:23.296930 #93541] DEBUG -- : Thread-2970: GET /repositories/2/archival_objects?all_ids=true&modified_since=0 [session: #<Session:0x1fc1fb33 @store={:user=>"search_indexer", :login_time=>2022-05-19 13:33:22 -0400, :expirable=>false}, @system_mtime=2022-05-19 17:33:22 UTC, @id="28669392a202b2532b3e9029ee3582888c889fe2c43b697ad61b04d6adfca7c3">]
D, [2022-05-19T13:33:23.310005 #93541] DEBUG -- : Thread-2970: Post-processed params: {"all_ids"=>true, "modified_since"=>0, "repo_id"=>2, "sort_field"=>:id, "sort_direction"=>:asc}
D, [2022-05-19T13:33:23.376931 #93541] DEBUG -- : Thread-2970: Responded with [200, {"Content-Type"=>"application/json", "Cache-Control"=>"private, must-revalidate, max-age=0", "Content-Length"=>"10"}, ["[1,2,3,4]\n"]]... in 104ms
I, [2022-05-19T13:33:23.389366 #93541]  INFO -- : Thread-3270: PUI Indexer [2022-05-19 13:33:23 -0400] ~~~ Indexed 4 of 4 archival_object records in repository Blake


________________________________
From: archivesspace_users_group-bounces at lyralists.lyrasis.org <archivesspace_users_group-bounces at lyralists.lyrasis.org> on behalf of Tom Hanstra <hanstra at nd.edu>
Sent: Thursday, May 19, 2022 9:58 AM
To: Archivesspace Users Group <archivesspace_users_group at lyralists.lyrasis.org>
Subject: Re: [Archivesspace_Users_Group] (re)indexing in 2.8.1

Thanks again.

So, I currently break up my logs into four different individual logs with these settings:

## Possible Log levels: debug, info, warn, error, fatal (severe only)
#
AppConfig[:frontend_log] = "/var/log/archivesspace/frontend.log"
AppConfig[:frontend_log_level] = "debug"
#
AppConfig[:backend_log] = "/var/log/archivesspace/backend.log"
AppConfig[:backend_log_level] = "debug"
#
AppConfig[:pui_log] = "/var/log/archivesspace/pui.log"
AppConfig[:pui_log_level] = "debug"
#
AppConfig[:indexer_log] = "/var/log/archivesspace/indexer.log"
AppConfig[:indexer_log_level] = "debug"

In each of the other logs I see both INFO and DEBUG statements, sometimes a lot, other times fewer. But in the indexer.log file, I only see INFO statements, no DEBUG.

But your suggestion above that the line looks like "indexed ..." got me searching the other logs, and I found a couple of lines like this in my backend log:

D, [2022-05-16T14:31:45.734159 #5273] DEBUG -- : Thread-4912: Responded with [200, {"Content-Type"=>"application/json", "Cache-Control"=>"private, must-revalidate, max-age=0", "Content-Length"=>"21295"}, ["{\"page_size\":1,\"first_page\":1,\"last_page\":1,\"this_page\":1,\"offset_first\":1,\"offset_last\":1,\"total_hits\":1,\"results\":[{\"id\":\"/repositories/2/archival_objects/1702566#pui\",\"uri\":\"/repositories/2/archival_objects/1702566\",\"title\":\"Brother Paul Hermit CSC not indexed.\",\"primary_type\":\"archival_object\",\"types\":[\"archival_object\",\"pui\",\"pui_archival... in 73ms

Is this the type of thing I should continue to look for?

Thanks,
Tom

On Wed, May 18, 2022 at 4:12 PM Blake Carver <blake.carver at lyrasis.org<mailto:blake.carver at lyrasis.org>> wrote:
> - Even though the config setting is for DEBUG, the indexer log is only showing INFO. Should there be more than that?

Yep. Something must not be quite right in the config; you should see DEBUG showing up in the logs.

If it's running 1x1 AND on DEBUG, you should see something more useful near the crash and be able to spot the record. I can't remember exactly what it looks like, but it's something like "indexed some/path/with/an/id/in/it/here", and you can see the Resource or Archival Object ID in that line.


________________________________
From: archivesspace_users_group-bounces at lyralists.lyrasis.org<mailto:archivesspace_users_group-bounces at lyralists.lyrasis.org> <archivesspace_users_group-bounces at lyralists.lyrasis.org<mailto:archivesspace_users_group-bounces at lyralists.lyrasis.org>> on behalf of Tom Hanstra <hanstra at nd.edu<mailto:hanstra at nd.edu>>
Sent: Wednesday, May 18, 2022 3:21 PM
To: Archivesspace Users Group <archivesspace_users_group at lyralists.lyrasis.org<mailto:archivesspace_users_group at lyralists.lyrasis.org>>
Subject: Re: [Archivesspace_Users_Group] (re)indexing in 2.8.1

Blake and others,

Overall, we went for a week or so with no re-indexing errors. I'm looking into some database timeouts which occur periodically (AS and our database server are in two different locations), but since the connection gets re-established, I can work to address that separately.

But last night we did run into some errors which we are trying to track down.  A couple of things:

- Even though the config setting is for DEBUG, the indexer log is only showing INFO. Should there be more than that?

- If the log says:

I, [2022-05-17T22:27:11.756683 #5273]  INFO -- : Thread-2890: Staff Indexer [2022-05-17 22:27:11 -0400] ~~~ Indexed 216 of 20898 archival_object records in repository UNDA

immediately followed by an error, how do we determine which record that is in ArchivesSpace? We are not sure how to find the record indicated in the log.

Thanks again,
Tom



On Mon, May 16, 2022 at 5:19 PM Blake Carver <blake.carver at lyrasis.org<mailto:blake.carver at lyrasis.org>> wrote:
That could be it; it looks like there's probably some kind of "funny" character in there that ArchivesSpace doesn't like. Do the "Indexed x of x whatever" numbers start over right there in the log?
You should see what record it was working on just before there.

________________________________
From: archivesspace_users_group-bounces at lyralists.lyrasis.org<mailto:archivesspace_users_group-bounces at lyralists.lyrasis.org> <archivesspace_users_group-bounces at lyralists.lyrasis.org<mailto:archivesspace_users_group-bounces at lyralists.lyrasis.org>> on behalf of Tom Hanstra <hanstra at nd.edu<mailto:hanstra at nd.edu>>
Sent: Monday, May 16, 2022 10:43 AM
To: Archivesspace Users Group <archivesspace_users_group at lyralists.lyrasis.org<mailto:archivesspace_users_group at lyralists.lyrasis.org>>
Subject: Re: [Archivesspace_Users_Group] (re)indexing in 2.8.1

I've been monitoring logs and have run into a couple of instances where I receive this type of error:

E, [2022-05-16T08:57:57.207392 #5273] ERROR -- : Thread-3962: Unhandled exception!
E, [2022-05-16T08:57:57.207810 #5273] ERROR -- :
invalid byte sequence in UTF-8

and then a lot of detail. Are these data errors that should be addressed? I still have the backend log in DEBUG mode, but it is not clear how to figure out which record(s) are being flagged. What am I looking for?

Thanks,
Tom

On Thu, May 12, 2022 at 8:59 AM Blake Carver <blake.carver at lyrasis.org<mailto:blake.carver at lyrasis.org>> wrote:
> Could it be that we added a big record which is now having issues being extracted. Or a corrupted record which is causing such issues?

I don't think so? It could be, but those errors make me think there's something else going on with the DB, or maybe with the network between the application and the DB server. I'm not really sure what the problem is, but it seems more like a hardware/network/server issue than an ArchivesSpace issue. I can't be sure, but those errors don't look like ArchivesSpace troubles to me. They're pretty common errors, so I'd do some searching around to see what you can find to troubleshoot.
________________________________
From: archivesspace_users_group-bounces at lyralists.lyrasis.org<mailto:archivesspace_users_group-bounces at lyralists.lyrasis.org> <archivesspace_users_group-bounces at lyralists.lyrasis.org<mailto:archivesspace_users_group-bounces at lyralists.lyrasis.org>> on behalf of Tom Hanstra <hanstra at nd.edu<mailto:hanstra at nd.edu>>
Sent: Wednesday, May 11, 2022 9:30 AM
To: Archivesspace Users Group <archivesspace_users_group at lyralists.lyrasis.org<mailto:archivesspace_users_group at lyralists.lyrasis.org>>
Subject: Re: [Archivesspace_Users_Group] (re)indexing in 2.8.1

Thanks again for your help.

Late yesterday, both indexes indicated completion so I thought maybe things were going to be OK. Consequently, I did not do much in terms of testing.

This morning, however, the logs again had errors. I found this error in the indexer log:

I, [2022-05-10T21:36:07.181427 #30003]  INFO -- : Thread-2890: Staff Indexer [2022-05-10 21:36:07 -0400] Index round complete
I, [2022-05-10T21:36:37.182006 #30003]  INFO -- : Thread-2890: Staff Indexer [2022-05-10 21:36:37 -0400] Running index round
E, [2022-05-10T21:36:37.283423 #30003] ERROR -- : Thread-2890: uri:classloader:/jsonmodel_client.rb:493:in `all'
/home/app/archivesspace/data/tmp/jetty-0.0.0.0-8091-indexer.war-_aspace-indexer-any-/webapp/WEB-INF/app/lib/periodic_indexer.rb:154:in `run_index_round'
/home/app/archivesspace/data/tmp/jetty-0.0.0.0-8091-indexer.war-_aspace-indexer-any-/webapp/WEB-INF/app/lib/periodic_indexer.rb:283:in `run'
/home/app/archivesspace/data/tmp/jetty-0.0.0.0-8091-indexer.war-_aspace-indexer-any-/webapp/WEB-INF/app/main.rb:32:in `block in main'
E, [2022-05-10T21:36:37.284431 #30003] ERROR -- : Thread-2890: #<RuntimeError: {"error":{"db_error":["Database integrity constraint conflict: Java::JavaSql::SQLNonTransientConnectionException: No operations allowed after connection closed."]}}

and in the backend log there were issues with timeouts retrieving a record:

Java::ComMysqlCjJdbcExceptions::CommunicationsException: The last packet successfully received from the server was 1,759 milliseconds ago. The last packet sent successfully to the server was 28,849,143 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.

Could it be that we added a big record which is now having issues being extracted? Or a corrupted record which is causing such issues?

I've now restarted with 1x1 and DEBUG on, and only staff indexing, and it still thinks indexing is complete. I'll keep things going this way until we hit an error again, and hopefully that will give additional information.

I'll also look into the "autoReconnect=true" piece, since we seem to have a situation where, once this happens, nothing more works until a restart.
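For what it's worth, a minimal sketch of where that parameter would go in config.rb (the host, database name, and credentials below are placeholders; your existing db_url will differ):

AppConfig[:db_url] = "jdbc:mysql://dbhost.example.edu:3306/archivesspace?user=as&password=xxxxx&useUnicode=true&characterEncoding=UTF-8&autoReconnect=true"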

Thanks again for any thoughts on this,
Tom

On Wed, May 11, 2022 at 5:03 AM Andrew Morrison <andrew.morrison at bodleian.ox.ac.uk<mailto:andrew.morrison at bodleian.ox.ac.uk>> wrote:

Indexing can also fail at the commit stage, not related to any one record. That is when ArchivesSpace tells Solr to transfer changes made in memory to storage. It does that at several points in the indexing process, but the longest one is at the end of the PUI indexer's run. If, because you've got a lot of records, or slow storage on your Solr server, it takes longer to respond than the value of AppConfig[:indexer_solr_timeout_seconds], it will start all over again, and potentially go into a loop. The workaround is to increase the timeout.
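For example, in config.rb (the value here is only an illustration; pick something suited to your record counts and Solr storage):

AppConfig[:indexer_solr_timeout_seconds] = 600  # e.g. allow up to 10 minutes for the end-of-run commit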


You might not notice you've got enough records to cause this until you do a full re-index, or someone edits something linked to most or all records (e.g. a repository, or a very widely-used subject), triggering the re-indexing of most of the system's records.


Andrew.



On 10/05/2022 22:06, Blake Carver wrote:
 1x1 would mean setting both records_per_thread and thread_count to 1. Having loglevel on debug and running at 1x1, you'll be able to see exactly which thing is being indexed as it happens, and when it crashes, you'll see what it was working through at the time.
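In config.rb that would look something like this (and if you've configured separate PUI indexer equivalents, set those to 1 as well):

AppConfig[:indexer_records_per_thread] = 1
AppConfig[:indexer_thread_count]       = 1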

PUI will always take longer, and a VERY long time 1x1, but unless you're sure which indexer is crashing, I'd switch them both up.

You can just `grep Indexed archivesspace.out` after it's running and watch those numbers. As long as they're going up, all is well.

It's also possible that, running that slowly, it will finish without crashing. I've seen that happen with LARGE records.
________________________________
From: archivesspace_users_group-bounces at lyralists.lyrasis.org<mailto:archivesspace_users_group-bounces at lyralists.lyrasis.org> <archivesspace_users_group-bounces at lyralists.lyrasis.org><mailto:archivesspace_users_group-bounces at lyralists.lyrasis.org> on behalf of Tom Hanstra <hanstra at nd.edu><mailto:hanstra at nd.edu>
Sent: Tuesday, May 10, 2022 4:15 PM
To: Archivesspace Users Group <archivesspace_users_group at lyralists.lyrasis.org><mailto:archivesspace_users_group at lyralists.lyrasis.org>
Subject: Re: [Archivesspace_Users_Group] (re)indexing in 2.8.1

Thanks, Blake.

Turns out we did add quite a few records recently, so maybe there was something in there that it did not like all that much.

How can you tell which record it is choking on?  Is that your "1x1" suggestion?  Or does the DEBUG option make that more clear?  I have my indexing set to:

AppConfig[:indexer_records_per_thread]      = 25
AppConfig[:indexer_thread_count]            = 2

for both PUI and Staff records. I believe you are suggesting it would most easily be found using 1 and 1? I can see where that could take a long time. But if it is going to choke over and over on the same record, then that may be the best way to address it.

Do you think that, if I just did staff indexing without PUI, it would be identified faster? Or could it pass the staff side but then die on PUI later?

I hope to try some of these ideas after hours today, so if you can confirm that I've got the right idea, that would help.

Tom


On Tue, May 10, 2022 at 2:17 PM Blake Carver <blake.carver at lyrasis.org<mailto:blake.carver at lyrasis.org>> wrote:
> Is this possible?

Short answer, Yes, it's possible your indexer is starting over.

Long answer. This can be tricky to figure out. Something is wrong, the indexer never wants to do that. Sometimes "something" "bad" gets into ArchivesSpace and the indexer will just crash and start over. The problem is the "something" can be anything and the "bad" can be hard to figure out. The more stuff you have in your DB, the harder it is to figure out.

First, I'd make sure this is happening. Your logs should make it obvious. You might see some FATAL errors just before it starts over.  You MIGHT be able to narrow it down from that. That is, what group of records had that error in the logs? Maybe that narrows it down enough. You just got lucky!

I don't think I've ever been so lucky. What I'd do next is set your loglevel to DEBUG and restart. If you're feeling lucky or just impatient or both, leave the indexer speed as is. You'll get more details out of the logs and you should be able to narrow it down better. Ideally, you want to run the indexers at 1x1, which means it could take forrreeevverrrrr to get back around to the crash again. If you're lucky, it'll crash on a record, you'll go look at that record, the problem will be obvious, and there will be much rejoicing. With it running 1x1 you should see exactly what's causing the fail. If it's not crashing on the same record every time.... ugh. That's an even longer answer.



________________________________
From: archivesspace_users_group-bounces at lyralists.lyrasis.org<mailto:archivesspace_users_group-bounces at lyralists.lyrasis.org> <archivesspace_users_group-bounces at lyralists.lyrasis.org<mailto:archivesspace_users_group-bounces at lyralists.lyrasis.org>> on behalf of Tom Hanstra <hanstra at nd.edu<mailto:hanstra at nd.edu>>
Sent: Tuesday, May 10, 2022 10:23 AM
To: Archivesspace Users Group <archivesspace_users_group at lyralists.lyrasis.org<mailto:archivesspace_users_group at lyralists.lyrasis.org>>
Subject: [Archivesspace_Users_Group] (re)indexing in 2.8.1

I don't look at the logs a lot unless there are issues with ArchivesSpace, so maybe this is something normal. But, after a restart due to some complaints about database connectivity, it looks like our ArchivesSpace instance has decided to do a full reindex. The index log sure looks as if it is starting from scratch and running through the indexing of both PUI and Staff indexes.

Is this possible?  Is it something that happens periodically and I just did not notice it? Nothing has changed in my data directory, so I don't see any reason for indexing to occur. Yet that is what the logs show.

If it is doing this for some reason, and knowing that we restart periodically, it seems like we will get into a loop where indexing just keeps happening all the time. Also, it would be helpful to understand what caused this to happen.

Any thoughts or experiences from those who have run this for longer would be appreciated. I'd like to understand if it would be a good idea to clear the data directory and perform a full index over the weekend rather than an unexpected and possibly never ending round in the background.

Thanks,
Tom
--
Tom Hanstra
Sr. Systems Administrator
hanstra at nd.edu<mailto:hanstra at nd.edu>

_______________________________________________
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group at lyralists.lyrasis.org<mailto:Archivesspace_Users_Group at lyralists.lyrasis.org>
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group

