May 19, 2011
Teacher-scholars unite! I’ve been testing some possible applications of Omeka archives and Zotero as collaborative tools for organizing the development of literary research methodologies classes, and I’d like to take the wonderful opportunity of THATCamp to begin developing the structure and content of a project I see as The Next Step. I’d like your help to discuss, plan, and/or block out a template for a full-class, full-term student project that works toward researching, annotating, and encoding a small number (perhaps just one per term?) of thematically selected texts in our shamefully neglected special collections room. Ideally, this project would therefore include study of the texts themselves, research about their material and digital existences (using the ESTC, Google Books, and something like Eighteenth-Century Book Tracker), a basic practical/theoretical framework for DH, collaborative writing of a useful and accessible overview, and production of an XML version of the text. Each term or year, students and faculty would work together to select, create, and grow the entries according to a broader thematic logic that can expand over time, based on the strengths of the collections. I’d like to use this template as a basis for a grant application that would allow the project to grow and, ultimately, link faculty, students, and resources at area institutions.
I think this would be a viable model for an advanced undergraduate seminar, and it has the benefit of drawing together a variety of practical and theoretical facets of the digital humanities. Some questions to consider: How can we best design the arc of the class? Which specific parts of the project would have as their goal which practical or conceptual outcomes? What are the technological hurdles to 1) be aware of, 2) avoid, or 3) embrace? What should the Omeka site look like and allow, in order to help the project grow over time? How might faculty help students approach the text encoding portion of the project? What are the most useful introductory text-based sources providing a theoretical framework for such a practical project? And what might steps after The Next Step look like?
via Archives, Encoding, and Students, Oh My! | THATCamp CHNM 2011.
via THATCamp CHNM 2009 » Blog Archive.
For the past year or so, I’ve been interested in putting together a small team of like-minded folks to help bring to fruition a data visualization project that could benefit less-prepared college students, teachers in the humanities, and researchers alike. Often, underprepared or at-risk educational populations struggle to connect literary study with the so-called “real world,” leading to a saddening lack of interest in the possibilities of the English language, much less literary study. I’d like to collaborate with someone to develop a web application drawing on WordNet—and particularly the range of semantic similarity extensions built around WordNet—to visually mark up and weight by color the semantic patterns emerging from small uploaded portions of text. This kind of application can not only help students attend more fully to the structures of representation in literature and the larger world around them—through the means of a tool emphatically of the “real world”—but also enable scholars to unearth unexpected connections in larger bodies of text. Like literary texts to many students, the existing semantic similarity tools available through the open source community can seem inaccessible, even foreign, to a lay audience; this project seeks to lay open the language that so many fear, while enabling the critical thinking involved in literary analysis. Ultimately, we hope to extend this application with a collaborative and growing database of user-generated annotations, and perhaps with time, to fold in a historically-conscious dictionary as well. We are seeking an NEH Digital Humanities startup grant to pursue this project fully, and I’d like the opportunity to throw our idea into the ring at THATcamp to explore its problems as well as possibilities, even gathering more collaborators along the way.
Here’s a hand-colored version of something like what I’m thinking; I used WordNet::Similarity to generate the numbers indicating degree of relatedness, and then broke those numbers into a visual weighting system. Implementation hurdles do come out pretty clearly when you see how the numbers are generated, so I’m hoping someone out there will have better insights into the how of it all.
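To make the weighting step concrete, here is a minimal sketch of how relatedness numbers (such as those produced by WordNet::Similarity, which is a Perl module; this sketch is in Python for readability) might be bucketed into color weights. The threshold values and color names are illustrative assumptions of mine, not part of the hand-colored mockup above.

```python
# Sketch: bucket pairwise relatedness scores into a small palette of
# visual weights, so more closely related words are marked more heavily.
# Thresholds and colors are assumed for illustration only.

def color_weight(score):
    """Map a 0.0-1.0 relatedness score to an assumed display color."""
    if score >= 0.75:
        return "dark-red"   # strongly related
    elif score >= 0.50:
        return "red"
    elif score >= 0.25:
        return "pink"
    else:
        return "grey"       # weakly related or unrelated

# Hypothetical scores for word pairs in an uploaded passage
scores = {
    ("bank", "shore"): 0.80,
    ("bank", "money"): 0.55,
    ("bank", "tree"): 0.10,
}
weighted = {pair: color_weight(s) for pair, s in scores.items()}
```

A real implementation would compute the scores with a WordNet-based similarity measure rather than hard-coding them, and would emit the weights as markup (say, CSS classes) over the uploaded text.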
To a related, larger point: I always have the sneaking suspicion that this has been done before. Jodi Schneider mentioned LiveInk, a program that reformats text according to its semantic units, so that readers can more effectively grasp and retain content. This strikes me as similar, as well, to the kinds of issues raised by Douglas Knox: using scale and format to retrieve “structured information.” Do the much-better-informed Campers out there know of an already-existing project like this? I wish the checklist of visual thinking tools that George Brett proposes were already here!