[Archivesspace_Users_Group] RAM for ArchivesSpace server

Seth Shaw seth.shaw at unlv.edu
Wed Sep 30 11:24:09 EDT 2020


All good points. As an addendum to my earlier post, our ArchivesSpace
instance (staff interface only) runs on a dedicated VM with the database on
a separate host, so 4 GB has been more than enough. The number of
simultaneous users hasn't noticeably increased our need for RAM (a backlog
reduction project at least doubled our simultaneous staff users for over a
year).

We began restarting weekly when we started using AS 2.x.
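
For anyone wanting to automate that, here is a minimal sketch of an
/etc/cron.d entry, assuming ArchivesSpace runs as a systemd service named
"archivesspace" (the service name and schedule are illustrative):

    # Restart ArchivesSpace every Sunday at 3 a.m.
    0 3 * * 0 root /usr/bin/systemctl restart archivesspace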

On Wed, Sep 30, 2020 at 7:28 AM Blake Carver <blake.carver at lyrasis.org>
wrote:

> Also, to chime in one more time with details...
>
> If you're running ArchivesSpace on a dedicated machine, that is, a Linux
> machine dedicated to running ArchivesSpace, the total RAM on that machine
> should probably be 6 or 8 gigs. If your ArchivesSpace site has a bunch of
> staff users and the PUI is enabled, things might be happier with 12 gigs.
> There should be at least 3 cores, more if you can afford it. I'm also
> assuming MySQL is running on that same box. A decent sized site will use
> all your cores when it's doing a full reindex.
>
> If you're running ArchivesSpace on some type of VM, and that VM is Linux,
> the same numbers are probably safe. Though I don't have much experience
> running on a VM, my guess is that the VM should have several cores and at
> least 6 gigs of RAM for a well-used site. You can probably get away with
> less if you only have one or two staff members using it.
>
>
> Also, there's the actual RAM that ArchivesSpace itself uses, as set in
> archivesspace.sh and outlined here:
> http://archivesspace.github.io/archivesspace/user/tuning-archivesspace/
> The defaults are usually fine for smaller sites. A decently busy site
> would need double that, and some busy sites could use up to 4 gigs. If it
> needs more than 4 gigs to keep going, something is usually wrong.
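>
> For example, in recent versions the heap ceiling is the -Xmx value set in
> archivesspace.sh (variable names and defaults vary by version, so treat
> this as a sketch rather than exact syntax):
>
>     # roughly double a 1 gig default, i.e. a ~2 gig Java heap
>     ASPACE_JAVA_XMX="-Xmx2048m"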
>
> It's not a bad idea to restart ArchivesSpace occasionally as well; weekly
> is probably fine.
>
>
> ------------------------------
> *From:* archivesspace_users_group-bounces at lyralists.lyrasis.org <
> archivesspace_users_group-bounces at lyralists.lyrasis.org> on behalf of
> Gadsby, Eric T. <egadsby at towson.edu>
> *Sent:* Wednesday, September 30, 2020 9:35 AM
> *To:* Archivesspace Users Group <
> archivesspace_users_group at lyralists.lyrasis.org>
> *Subject:* Re: [Archivesspace_Users_Group] RAM for ArchivesSpace server
>
>
> To chime in on the original question, we had been running on 4 GB and
> found ourselves getting close to running out of RAM.  Upgrading to 8 GB
> seems to be a good sweet spot for us.  Thanks!
>
>
>
>
>
>
>
>
> *Eric T. Gadsby*
>
> *Pronouns: he/him/his*
>
> IT Operations Specialist  |  Albert S. Cook Library
>
> *—*
>
> P: 410-704-3340
>
> SMS: 443-338-3792
> egadsby at towson.edu  |  libraries.towson.edu
>  *—*
>
>
>
>
>
> *From: *<archivesspace_users_group-bounces at lyralists.lyrasis.org> on
> behalf of Blake Carver <blake.carver at lyrasis.org>
> *Reply-To: *Archivesspace Users Group <
> archivesspace_users_group at lyralists.lyrasis.org>
> *Date: *Wednesday, September 30, 2020 at 9:13 AM
> *To: *Archivesspace Users Group <
> archivesspace_users_group at lyralists.lyrasis.org>
> *Subject: *Re: [Archivesspace_Users_Group] RAM for ArchivesSpace server
>
>
>
>
> What version are you running?
> Older versions (roughly two years old or more) benefited from nightly or
> weekly restarts.
>
> There's a chance it's stuck in an indexing loop, so double-check that it
> isn't constantly indexing; that can cause crashes as well.
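>
> A quick way to check is to watch the log for nonstop indexer activity (a
> sketch; logs/archivesspace.out is a common default location, but paths
> vary by install):
>
>     tail -f /opt/archivesspace/logs/archivesspace.out | grep -i index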
> ------------------------------
>
> *From:* archivesspace_users_group-bounces at lyralists.lyrasis.org <
> archivesspace_users_group-bounces at lyralists.lyrasis.org> on behalf of
> Jessika Drmacich <jgd1 at williams.edu>
> *Sent:* Wednesday, September 30, 2020 9:04 AM
> *To:* Archivesspace Users Group <
> archivesspace_users_group at lyralists.lyrasis.org>
> *Subject:* Re: [Archivesspace_Users_Group] RAM for ArchivesSpace server
>
>
>
> Thanks to all who responded. We have an issue where our AS instance slows
> down considerably after about a month of adding data (new collections,
> digital objects, a large number of accessions). After our systems person
> adds more RAM, the system speeds up considerably. I was wondering if we
> should add more RAM every three weeks, but based on your answers it seems
> RAM might not be the source of our speed issue (though it's somehow
> connected?).
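>
> For what it's worth, two standard Linux commands will show whether the
> host is actually under memory pressure when the slowdown hits (nothing
> here is ArchivesSpace-specific):
>
>     free -h          # overall RAM and swap usage on the host
>     top -o %MEM      # running processes sorted by memory use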
>
>
>
> Any ideas on what might slow down record loading and search speed?
>
>
>
> Jessika
>
>
>
>
>
> *Jessika Drmacich *
>
> *Records Manager & Digital Resources Archivist  *
>
> *Williams College Libraries*
>
> *Special Collections*
>
> *413-597-4725 (o)*
>
> *she/her/hers*
>
>
>
> *Please Note: Due to COVID-19, library services have changed. See the
> library’s webpage
> <https://library.williams.edu/2020/03/11/library-services-for-spring-2020/>
> for information on how to access collections and services. I am working
> both from home and my office in special collections and I will be checking
> email regularly. I’m also available for in-person appointments and virtual
> consultations via GoogleMeet.*
>
>
>
>
>
> On Tue, Sep 29, 2020 at 11:38 PM Seth Shaw <seth.shaw at unlv.edu> wrote:
>
> Agreed. We have 5 GB for ArchivesSpace but we rarely come close to using
> it, except for really big jobs like generating PDFs of some of our massive
> finding aids (1k+ pages). We have 4 processors available but, again, with
> a lot of breathing room. That said, we don't use the PUI.
>
>
>
> On Tue, Sep 29, 2020 at 5:23 PM Blake Carver <blake.carver at lyrasis.org>
> wrote:
>
> That's more than enough for any site. Half that is more than enough for
> most sites.
>
> (I'm assuming Linux, not Windows)
>
>
>
> If you're having problems with crashes with that much power behind your
> site, there might be something wrong.
>
>
> ------------------------------
>
> *From:* archivesspace_users_group-bounces at lyralists.lyrasis.org <
> archivesspace_users_group-bounces at lyralists.lyrasis.org> on behalf of
> Jessika Drmacich <jgd1 at williams.edu>
> *Sent:* Tuesday, September 29, 2020 5:35 PM
> *To:* Archivesspace Users Group <
> archivesspace_users_group at lyralists.lyrasis.org>
> *Subject:* [Archivesspace_Users_Group] RAM for ArchivesSpace server
>
>
>
> Hi all!
>
>
>
> Will those of you who maintain your own on-site instance of ArchivesSpace
> share the memory specifications of your servers?
>
>
>
> Ours is:
>
>
>
> 16 GB RAM, 11 GB assigned to AS, 6 cores (processors).
>
>
>
> My very best,
>
>
>
> Jessika
>
>
> _______________________________________________
> Archivesspace_Users_Group mailing list
> Archivesspace_Users_Group at lyralists.lyrasis.org
> http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group
>
>

