Category Archives: Meta

To theme or not to theme?


After the November EC/ASECS conference and the Thanksgiving holiday–as well as work on other projects, like my Kinski paper–I'm now back to work on the database. While I have a basic working model, I still need to put it online, and there are several other issues I'm trying to work through before I need more help from people who know more than I do. I am currently adding basic references and personography details, and rethinking the thematic markup. Each time I think I have a thematic schema, something emerges to undermine, complicate, or destabilize it. Any thematic assessment is necessarily interpretive, and I am not sure how elaborate my interpretive additions should be. As Jerome McGann and Dino Buzzetti have aptly noted, "Markup…is essentially ambivalent," and it "should not be thought of as introducing—as being able to introduce—a fixed and stable layer to the text. To approach textuality in this way is to approach it in illusion." This is especially evident when attempting thematic markup, but it is also an issue when deciding what to note in, for instance, a personography.

The questions I pose–or rather, the questions I think students will find meaningful without my overly biasing or directing their engagement with the texts–will seem more or less self-evident depending on how I structure and mark up the materials. Thematic markup has the real potential to take agency away from the student reader, rather than helping that reader place the excerpt in a larger material context, which, despite our efforts at information literacy, still eludes many. Additionally, the full-text search feature is generally sufficient for keyword-based thematic analysis. For the time being, therefore, I have chosen not to incorporate thematic analysis in the XML, but I do want to explore issues of personography, bibliography, allusion, reference, and structure.

Some of the next steps are to determine what can be removed from the header information (that is, what I can put in a separate file or elsewhere, so as not to duplicate it in each file); identify key names/dates/places/references and consider how they may best be marked up; and adapt the XQL to account for those changes that affect display. Then, a big project I hope to have some help tackling is upgrading my server, installing eXist-db (and other necessary packages), and installing a live working copy!

Iteration and calling new functions

This past week I think I've had a couple of breakthroughs in understanding how XQuery works. I crossed the Rubicon when Christian Moser from the eXist forum explained iteration variables to me. I know, of course, what an iteration variable is, but I didn't know exactly how they worked in XQL–in my head, creating a variable $n and not tying it to a TEI element n felt strange. But there's no reason for that–what we call something in XQL is connected only by convention to what we call something in XML. So, there was that, and also the fact–which had not occurred to me–that you can define variables in for expressions as well as let expressions. And the iteration variable can be bound to its position in the iterative cycle. I don't know if this makes any sense, but it helped me tremendously.
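To illustrate both points with a minimal sketch (not tied to my TEI files at all): the variable name is pure convention, and a for clause can expose the position of each pass with an at clause:

```xquery
(: $whatever is an arbitrary name; it need not match any XML element :)
for $whatever at $pos in ("alpha", "beta", "gamma")
let $label := concat($pos, ". ", $whatever)
return $label
```

This returns the sequence "1. alpha", "2. beta", "3. gamma"–each binding of $whatever paired with its position in the cycle.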

For instance, here’s how I made the TEI statement of responsibility <respStmt> display, accounting for more than one editor. Let’s say we have two people collaborating on a document:

<respStmt>
<resp>Transcription and correction</resp>
<name>Elizabeth Ricketts</name>
</respStmt>
<respStmt>
<resp>Correction, editorial commentary, and markup</resp>
<name>Tonya Howe</name>
</respStmt>

We want to render them on separate lines, connecting the <resp> and the <name> with " by " for readability. Here are two things I tried:

{
for $respStmt in $header
let $respCount := count($header//tei:respStmt)
return
if ($respCount > 1) then
<p>{string-join(($titleStmt/tei:respStmt/tei:resp, $titleStmt/tei:respStmt/tei:name), ' by ')}</p>
else
concat($titleStmt/tei:respStmt/tei:resp, ' by', $titleStmt/tei:respStmt/tei:name)
}

and
{
for $resp in $resps
return
<p>{string-join(($titleStmt/tei:respStmt/tei:resp, $titleStmt/tei:respStmt/tei:name), ' by ')}</p>
}

These bits gave me the following results, respectively:

Transcription and correction by Correction, editorial commentary, and markup by Elizabeth Ricketts by Tonya Howe

and
Transcription and correction by Correction, editorial commentary, and markup by Elizabeth Ricketts by Tonya Howe
Transcription and correction by Correction, editorial commentary, and markup by Elizabeth Ricketts by Tonya Howe

Here’s what I ended up with, through the magic of iteration.

{
for $n in $resps
return
<p>{concat($n//$resps/tei:resp, ' by ', $n//$resps/tei:name)}</p>
}

Transcription and correction by Elizabeth Ricketts
Correction, editorial commentary, and markup by Tonya Howe

It was the $n in front of the //$resps/tei:resp that made no sense to me–but you can think of it as the iterative variable and its position in the cycle of respStmts. Pretty cool, huh? I thought so.
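Incidentally, since $n is already bound to a single <respStmt> on each pass, I believe the same result can be had with a plainer child path–a sketch I haven't run against my actual files:

```xquery
{
for $n in $resps
return
<p>{concat($n/tei:resp, ' by ', $n/tei:name)}</p>
}
```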

And then, today, that breakthrough led me to another: I had been struggling to get the source information in <imprint> to display properly, especially since I want there to be multiple sources, including but not requiring a first edition, an online edition, and any other edition used for the project. The online imprint should display a live link. Here's the catch–I'm using <date type="firstEd"> or <date type="accessed"> to identify dates associated with print versus online sources, and <extent type="physical"> or <extent type="onlineLink"> to identify the extent of the sources. This seems most consistent with the TEI P5 guidelines. So, I wanted to test an element (I went with date, but could have gone with extent) to see whether a given source is a web source and, if so, display different information. Here's what I came up with, in tei2html.xql:

<h3><b>Sources:</b></h3>
{
for $n in $imprints
return
<li>
{$n//$imprints/tei:pubPlace}: {$n//$imprints/tei:publisher}. {$n//$imprints/tei:date}.
{
if ($n//$imprints/tei:date/@type = "accessed") then
<a href="{$n//$imprints/tei:extent}">{$n//$imprints/tei:extent}</a>
else
$n//$imprints/tei:extent
}
{$n//$imprints/tei:note}
</li>
}

And it did exactly what I wanted it to do!
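For concreteness, here is roughly the kind of <imprint> that code is matching against–a hypothetical example (including the URL), not one of my actual records:

```xml
<imprint>
  <pubPlace>London</pubPlace>
  <publisher>J. Johnson</publisher>
  <date type="accessed">12 October 2015</date>
  <extent type="onlineLink">http://example.org/hays/appeal</extent>
  <note>Transcribed from the online facsimile.</note>
</imprint>
```

Because the date carries type="accessed", the if test takes the first branch and wraps the extent in a live link.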

Then, from this whole collection of moments, I could fix my image display problem. I had already gotten the <pb facs="image.png"> elements to display inline with the text using a typeswitch in the main XQL display functions page, which called a little function in the same page each time it met a <pb> element. Like this, in tei2html.xql:

declare function tei2:tei2html($nodes as node()*) {
for $node in $nodes
return
typeswitch ($node)
(: other cases elided :)
case element(tei:pb) return
tei2:pageImages($node)
default return ()
};

declare function tei2:pageImages($pb as element(tei:pb)) {
let $facsPage := $pb/@facs
for $pb in "work"
return
<img src="../images/{$facsPage}"/>
};

But this seemed clunky to me in the page display as a whole–in addition to being an inelegant solution generally. I wanted to put the page images in a sidebar, but translating that interdependent function into a standalone function and then calling it with <div data-template="tei2:pageImages"/> kept returning errors. tei2:pageImages can't be called as-is with <div data-template="tei2:pageImages"/> in the sidebar location, for reasons having to do, I think, with what information is available to the node and the work as a whole. There are probably other reasons, too; suffice it to say, it didn't work. Using what I learned here and above, though, I came up with this, and put it not in tei2html.xql but in app.xql:

declare function app:pageImages($node as node(), $model as map(*)) {
for $pb in $model("work")//tei:pb
let $facsPage := $pb/@facs
return
<p><a href="../images/{$facsPage}"><img src="../images/{$facsPage}" width="100%"/></a></p>
};
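For reference, the sidebar hook that calls this through eXist's HTML templating looks something like the following–the wrapper div and class name here are hypothetical; only the data-template attribute matters:

```xml
<div class="sidebar">
  <!-- the templating module resolves data-template to app:pageImages
       and replaces this element with the function's output -->
  <div data-template="app:pageImages"/>
</div>
```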

Success!

[Screenshot: Page images are now contained nicely in the sidebar!]

As that worked like such a charm, I wanted to try moving the source information elsewhere–possibly the sidebar, maybe a footer, who knows. For the moment, I went with the sidebar, since I was on a roll. Here's how I adapted it from above, putting it instead into app.xql, which is then called through templating:

declare function app:sources($node as node(), $model as map(*)) {
for $n in $model("work")//tei:imprint
return
<li>
{$n//tei:pubPlace}: {$n//tei:publisher}. {$n//tei:date}.
{
if ($n//tei:date/@type = "accessed") then
<a href="{$n//tei:extent}">{$n//tei:extent}</a>
else
$n//tei:extent
}
{$n//tei:note}
</li>
};

As you can see, I was also able to streamline some of the code, making it more compact. I’m doing my happy dance right now, though you can’t see it.

Updating the search and display features

I’ve been pretty busy this last week on the database, though I haven’t committed a lot of it to this blog–I have, however, been updating the project on GitHub, which has been a lot easier to use now that I have a project that needs version control in this specific way. Most of my time has been spent taking things out, putting things in, tweaking the xql, reverting to the original, and starting over, in between posting to and reading around on the eXist forum, as I learn how to untangle the samples.

The most important things to note:

1. I have pre-ordered the eXist O'Reilly book by Adam Retter and Erik Siegel–it should ship in November, if I'm lucky!
2. I got the search query results to return and display correctly, and I'm learning how those features work.
3. I put some page images into the Mary Hays file, and got them to display in both browse and query-return modes–that's pretty exciting for me! It's not an elegant solution, and I have a lot of work to do, but there's a brute-force version of it there now.

[Screenshot: Narrow search results for 'character']
[Screenshot: Navigate to matched search results inside the Hays file]
[Screenshot: Sample page images inserted into the Hays file]

Next up, I think, are some pretty big issues–definitely more conceptual in nature. How should I display the page images, for instance? I want to keep the elements inline in the XML, but it may be more elegant to display them in a sidebar or a footer, so as not to interrupt the flow of reading. The page images should be thumbnailed, with full-sized images behind them. The big question here is how to automate this, which is likely also determined by how the images are stored. Another conceptual issue that's arisen has to do with incorporating images as primary objects of interest in the database–though this may be something to consider in a later iteration. Currently, the textual transcription and markup are the primary focus, with the page images functioning as secondary visual resources. But what if I were to incorporate paintings, or photographs of material objects? Even if I don't want to do that here, it may be something others would want, so considering it now is probably a good idea. I'll also need to begin marking up major topics in each XML file, which will also mean making sure the overall structure is solid and that I have a good topic schema. Users should be able to at least see a list of topics marked up as such in each file, though ultimately I'll want to incorporate that as a search feature–and I don't want to have to redo everything when I eventually incorporate the MALLET module. More details in the readme!


Github and more

NiC is now available for collaboration on GitHub! What have I learned in the process? Help is essential, first of all. And probably last of all. Otherwise, I did learn a few important things about the GitHub workflow, which was very confusing to me. For those of you who, like me, are new to the tool and are mostly humanities-type people, I hope this clarifies things a bit. It's very helpful to learn how others set up their workflows, because things can get very confusing, very fast, once you start making changes to the app–and with collaborators, it's just ridiculous to assume you don't need to think about this. You'll see what I mean if you try to go forth willy-nilly.

For starters, a little about my setup. I have oXygen and eXide (through the eXist-DB) installed on my computer, and I work in them both–I should really make a decision about what to use, when, because things can get a little tricky if you don’t refresh while moving from one to the other. I also set up oXygen to work with eXist-DB–here’s a how-to from eXist, and here’s some documentation on native XML database support from oXygen.

I have github running through the GUI on my Windows machine. (I tried, I really did, to use the command line, and it stumped me. Hey, I’m picking my battles!) In eXide, I set up the app to synchronize automatically to the local repository folder, which I’ve called NiC and located inside Documents/Github. You can set this from eXide in Application/Synchronize.

[Screenshot: Synchronizing from eXide]

Now that that’s clear… Basically, I have a process that’s derived from this forum post from @wolfgangm (HT @joewiz):

My workflow usually is: I create a new app in eXide, do some first edits and synchronize the entire app to a directory, which I then put into git. I continue to develop inside the db and synchronize again when I think it’s time for another commit. The advantage is that I can easily delete my app from the db when I messed it up and re-import the xar from the directory on disk (calling “ant” inside the directory will generate a .xar which can be uploaded again via the dashboard).

Now, some of that is Greek to me, but I also learned from @joewiz that oXygen has its own install of “ant” locally, so there’s no need for the command line. Unless you’re really hardcore about it. Or unless you know stuff that we regular folk don’t.  More about “ant” later. This is just to show that hearing about other people’s workflow is useful before you really get into things.

Once I got to a place where I needed more collaboration, I downloaded the application from inside eXide (Application/Download app), which gave me an .xar file. (This is basically like a .zip file, and in fact you can just rename it from .xar to .zip to see what’s inside. You load new packages into your dashboard by adding .xar files, so this is an important bit of information.) Then, I renamed it .zip, uncompressed it, and copied all that stuff into Github/NiC. You could uncompress into that folder, too, but I wasn’t that savvy.

Now, I needed to get all this stuff into the public repository. So, I went to the GitHub GUI on my local machine, made a new repository, and pointed it at the local folder I had just uncompressed that .xar/.zip file into. Once that's done, you can make your first commit–in my case, I called this "Added xar contents." You can see that (basically the first commit–the earlier one is set up automatically [I think] when you create a new repository) here:

[Screenshot: Local git clone folder and the Windows GUI with commits visible]

So, your stuff’s up there for folks to see and make edits to. You can choose to incorporate these edits (commits) or not as you work on your project and push those edits to the cloud through the github GUI by using the handy check boxes that will appear. (I’ll try to remember to put a picture of that here, too.)

Remember: you’ve now got two copies on your local machine, and one copy that is sort of like “swing space” between your machine and the cloud–that’s what’s on github. This means you’ve got three pens/paddocks/corrals for your app, and now you can juggle. Sorry for the mixed metaphors there, but you get my point. Here’s an image of the workflow I’m using:

 

[Image: My workflow]

I just made this up, but I think it shows what I'm doing. Normally, the flow between A and B is pretty seamless, but the app may need to be reinstalled when you've accepted edits to configuration or controller files. To do this (and note, this will work for any app if you have it in .xar form–pretty cool, huh?), open build.xml from the local GitHub synch folder in oXygen and run the transform scenario; this calls oXygen's copy of Apache Ant, which is "a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications." Basically, it's a build tool. When you run that from oXygen, you'll be left with a tidy new .xar file in your GitHub synch folder, which you can then add via the eXist package manager. Neat, huh?

[Screenshot: Building an .xar file from oXygen]

I did all this and re-opened the NiC app–and look, the edits made to the controller.xql and app.xql files for search functionality have taken effect! I now have basic search functionality within the titles, heads, and paragraphs in my xml files.

[Screenshot: Full-text search for "probability"]

It would be weird if this worked fully, though–clicking on the title produces an error, because apparently the app is still calling part of the outline function, which I deleted, as there is no need for an outline with these short .xml files.

[Screenshot: A new error!]

This just gives me a new problem to solve. Onward!

Recovering the DB

And so it begins: this is what greeted me when I started eXist after, apparently, mucking about in expathrepo when I wasn’t supposed to. What follows is basically a screenshot diary of the recovery process–I’m not sure I really want to revisit it, but perhaps I’ll come back and add a narrative later…

[Screenshot: Yuck. But click into the Java Client, which still works, and you can do the repairs.]
[Two further screenshots]
[Screenshot: I think the backup at this point didn't make much difference. Always back stuff up!]
[Screenshot: First attempt at repairing didn't work because the expathrepo directory still existed and therefore couldn't be rebuilt.]
[Screenshot: Renaming expathrepo to badbadbad so it can be rebuilt]
[Screenshot: So many thanks to the twitterverse!]
[Screenshot: The repair is working!]
[Screenshot: Repair is done]
[Screenshot: A new expathrepo directory–now I can delete the corrupted part]
[Screenshot: Aaaand, we're back!]