<html><head><meta http-equiv="Content-Type" content="text/html; charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;"><div><br></div><div>What I had been doing was running EADConverter locally on a batch of files, saving the JSON</div><div>output if successful, and posting those JSON files to </div><div><a href="http://archivesspace.github.io/archivesspace/doc/file.API.html#post-repositoriesrepoidbatchimports">http://archivesspace.github.io/archivesspace/doc/file.API.html#post-repositoriesrepoidbatchimports</a></div><div><br></div><div>(This two-stage process was also very useful in earlier versions, when the error reporting</div><div> was missing context: ArchivesSpace would tell you what was missing, but it didn’t point</div><div> to where it was missing from. It was possible to inspect the JSON and look for the null or</div><div> missing value.) </div><div><br></div><div>You want to:</div><div><br></div><div><div style="margin: 0px; font-size: 15px; background-color: rgb(189, 238, 237); position: static; z-index: auto;"><span class="Apple-tab-span" style="white-space:pre">	</span>converter = EADConverter.new( eadxml )</div><div style="margin: 0px; font-size: 15px; background-color: rgb(189, 238, 237); position: static; z-index: auto;"><span class="Apple-tab-span" style="white-space:pre">	</span>converter.run</div></div><div>and then do something (move, copy, or send directly to the batch_imports API) with:</div><div><div style="margin: 0px; font-size: 15px; background-color: rgb(189, 238, 237); position: static; z-index: auto;"><span class="Apple-tab-span" style="white-space:pre">	</span>converter.get_output_path</div></div><div><br></div><div>and if you wrap it in a begin/rescue block, you can catch and report the errors. 
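<div>A minimal sketch of that two-stage workflow (convert, then POST). It assumes you are running where the ArchivesSpace backend code (and thus EADConverter) is loaded, and that the backend URL, repository id, and session token are filled in for your installation:</div>

```ruby
require 'net/http'
require 'uri'

# Convert one EAD file and POST the resulting JSON to batch_imports.
# On any conversion or import error, report the failing file and move on,
# so one bad EAD doesn't stop the rest of the batch.
def convert_and_import(eadxml, backend_url, repo_id, session_token)
  converter = EADConverter.new(eadxml)
  converter.run
  json = File.read(converter.get_output_path)

  uri = URI("#{backend_url}/repositories/#{repo_id}/batch_imports")
  req = Net::HTTP::Post.new(uri)
  req['X-ArchivesSpace-Session'] = session_token
  req.body = json
  Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
rescue => e
  # Catch and report the error (converter failures and HTTP failures alike).
  warn "#{eadxml}: #{e.message}"
  nil
end
```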
</div><div><br></div><div><br></div><div>I’ve experimented with a couple of variations on the error catching and processing.</div><div>For example, if you move the JSON output in the ensure clause (begin/rescue/ensure),</div><div>you can save the JSON to inspect even if it’s not complete enough to successfully </div><div>import with /batch_imports, but you might not want to mix “good” and “bad” JSON in</div><div>your output files. </div><div><br></div><div>More recently, I’ve been experimenting with using an alternate EAD importer with </div><div>looser schema validation rules. </div><div><br></div><div>One problem with importing thousands of EAD files by this batch method is that </div><div>we have had problems with “namespace pollution” of the controlled vocabulary lists </div><div>for extents and containers. These values are controlled from the webapp and editor,</div><div>but importing from EAD adds to the values in the database. If you import a few </div><div>EAD files at a time, it’s not difficult to merge and clean up these values, but</div><div>importing several thousand EAD files that aren’t well controlled for those values</div><div>led to an explosion that makes the drop-down lists of those values unusable. 
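<div>The begin/rescue/ensure variation mentioned above might be sketched like this (again assuming the ArchivesSpace backend environment; routing failed output to a separate directory is one illustrative way to avoid mixing “good” and “bad” JSON):</div>

```ruby
require 'fileutils'

# Sketch of begin/rescue/ensure: the converter's JSON output is preserved
# even when conversion fails, so incomplete JSON can still be inspected.
# Failed output goes to its own directory so good and bad JSON stay separate.
def convert_with_salvage(eadxml, good_dir, failed_dir)
  failed = false
  converter = EADConverter.new(eadxml)
  converter.run
rescue => e
  failed = true
  warn "#{eadxml}: #{e.message}"
ensure
  # Move whatever output exists, whether or not the run succeeded.
  out = converter && converter.get_output_path
  FileUtils.mv(out, failed ? failed_dir : good_dir) if out && File.exist?(out)
end
```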
</div><div><br></div><div>See a previous message about this: </div><div><a href="http://lyralists.lyrasis.org/pipermail/archivesspace_users_group/2015-March/001216.html">http://lyralists.lyrasis.org/pipermail/archivesspace_users_group/2015-March/001216.html</a></div><div><br></div><div><br></div><div><br></div><div>— Steve Majewski / UVA Alderman Library </div><div><br></div><br><div><div>On Apr 6, 2015, at 3:08 PM, Dallas Pillen <<a href="mailto:djpillen@umich.edu">djpillen@umich.edu</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><div dir="ltr">Hello all,<div><br></div><div>I was curious if anyone has had any success starting EAD import jobs via the API?</div><div><br></div><div>I was thinking this could be done using POST /repositories/:repo_id/jobs_with_files described here: <a href="http://archivesspace.github.io/archivesspace/doc/file.API.html#post-repositoriesrepoidjobswithfiles">http://archivesspace.github.io/archivesspace/doc/file.API.html#post-repositoriesrepoidjobswithfiles</a></div><div><br></div><div>However, I am not entirely sure how the job and file parameters should be sent in the POST request, and I haven't seen anyone ask this question before or give an example of how it might work. I've tried sending the POST request several different ways and each time I am met with: {"error":{"job":["Parameter required but no value provided"],"files":["Parameter required but no value provided"]}}. </div><div><br></div><div>I suppose it's worth mentioning that the reason I want to do this is that, at some point, we will be importing several thousand EADs into ArchivesSpace. We're doing a lot of preliminary work to make our EADs import successfully, but know there will likely be some that will fail. Right now, the only way to do a batch import of EADs is to do a batch as a single import job. If one EAD in that job has an error, the entire job fails. 
For that reason, I would like to be able to import each EAD as a separate job so that the EADs that will import successfully will do so without being impacted by the EADs with errors. However, starting several thousand individual import jobs would be very tedious, and I'm looking for a way to automate that process. If anyone else has come up with any creative solutions or knows of a better way to do that than the API, I would be very interested to know.</div><div><br></div><div>The end goal would be to have a script that would batch start the import jobs, get the ID for each job, check up on the jobs every so often and, once there are no longer any active jobs, output some information about each of the jobs that failed. I've figured out how to do most of that using the API, but I'm stumped on how to get the whole process started.</div><div><br></div><div>Thanks!</div><div><br></div><div>Dallas<br><div><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div style="font-size:12.8000001907349px"><b>Dallas Pillen<br></b>Project Archivist<b><br></b></div><div style="font-size:12.8000001907349px"><br></div><div style="font-size: 12.8000001907349px;"><img src="https://webapps.lsa.umich.edu/dean/lsa_emails/bentley-sig-em.png" height="40" width="351"><br></div><div style="font-size: 12.8000001907349px;"><font size="1"> <a href="http://bentley.umich.edu/" style="color:rgb(17,85,204)" target="_blank">Bentley Historical Library</a></font></div><div style="font-size: 12.8000001907349px;"><font size="1"> 1150 Beal Avenue</font></div><div style="font-size: 12.8000001907349px;"><font size="1"> Ann Arbor, Michigan 48109-2113</font><span style="font-size:x-small"> </span></div><div style="font-size:12.8000001907349px"><a value="+17347643482" style="font-size:x-small"> </a><a value="+17347643482" style="color:rgb(34,34,34);font-size:x-small">734.647.3559</a></div><div style="font-size: 12.8000001907349px;"><font size="1"> 
<a href="https://twitter.com/umichBentley" style="color:rgb(17,85,204)" target="_blank">Twitter</a> <a href="https://www.facebook.com/bentleyhistoricallibrary" style="color:rgb(17,85,204)" target="_blank">Facebook </a></font></div></div></div></div></div>
</div></div></div>
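<div>[ A hedged sketch of assembling that multipart request with Ruby's Net::HTTP. The “job” and “files[]” parameter names follow the error message above (array parameters are often sent as files[]), and the shape of the job JSON (an import_job with import_type ead_xml and matching filenames) is an assumption to verify against your ArchivesSpace version's API docs: ]</div>

```ruby
require 'net/http'
require 'uri'
require 'json'

# Builds a multipart POST to /repositories/:repo_id/jobs_with_files for a
# single EAD file, so each EAD runs as its own import job and one failure
# doesn't sink the rest. The job JSON shape here is an assumption.
def build_import_request(backend_url, repo_id, session_token, ead_path)
  uri = URI("#{backend_url}/repositories/#{repo_id}/jobs_with_files")
  job = { 'job_type' => 'import_job',
          'job' => { 'import_type' => 'ead_xml',
                     'filenames' => [File.basename(ead_path)] } }
  req = Net::HTTP::Post.new(uri)
  req['X-ArchivesSpace-Session'] = session_token
  req.set_form([['job', JSON.generate(job)],
                ['files[]', File.open(ead_path)]],
               'multipart/form-data')
  req
end

# Sending it, one job per EAD:
#   req = build_import_request('http://localhost:8089', 2, token, 'finding_aid.xml')
#   res = Net::HTTP.start(req.uri.host, req.uri.port) { |http| http.request(req) }
```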
_______________________________________________<br>Archivesspace_Users_Group mailing list<br><a href="mailto:Archivesspace_Users_Group@lyralists.lyrasis.org">Archivesspace_Users_Group@lyralists.lyrasis.org</a><br>http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group<br></blockquote></div><br></body></html>