Teaching with Omeka: Presenting the Peries Project

Yesterday the seminar I’ve been teaching at Penn finished their digital project: an online edition of John Leyden’s “Tales of the Peries,” a handwritten manuscript in the archives of the National Library of Scotland. Leyden was a romantic poet and a close collaborator of Walter Scott’s before traveling to Southeast Asia as a functionary of the East India Company. Once there, Leyden established himself as an Orientalist and specialist in Asian languages, and the Tales of the Peries is a product of this fruitful period before his early death in 1811.

As part of a larger class on historical fiction, fantasy, and the influence of empire, my students built an Omeka site that includes digital facsimiles of the manuscript, transcriptions produced with Scripto (an Omeka plugin), and a “readerly edition” that incorporates their research into editorial practices and critical editions and links to supporting materials and entries in the Omeka collection and on the wiki. In addition, they built a host of supporting materials for the site, from critical evaluations of the Tales, its verse, and the influence of Urdu and Arabic literature, to information about Leyden and his involvement with the EIC, and even an animated Flash map that walks the reader through the geography of the tale and details the main transformations of Melech Mahommed, the main character, over the course of the narrative.


The David Livingstone Spectral Imaging Project, I Presume?


Yes, groan. But I spent this morning looking through the beautiful online collection produced by the David Livingstone Spectral Imaging Project. Livingstone’s 1871 field diary, from the months leading up to his ‘discovery’ by Henry Morton Stanley, was written in a berry-based natural ink across the pages of newsprint and has faded to near invisibility. Using spectral imaging (which captures the page at distinct wavelengths and then recombines the images), the team has managed to reveal the journal entries and strip out the original newsprint. The results are simply amazing — it reminds me of looking at Hubble images of distant nebulae. Gorgeous, strange, new. In addition to the extensive documentation and supporting bibliographic and historical materials, the snazzy interface, which allows you to coordinate scrolling across the color and spectral facsimiles (as in the above image), is just stunning.

On the one hand, it’s a case of an extraordinary archival find (Adrian Wisnicki and Anne Martin’s recovery and reassembly of the often uncatalogued portions of the journal across several distinct accessions at the David Livingstone Centre) combined with an ideal technology (the Archimedes Palimpsest team brought their expertise to bear). But when you look at the extensive documentation provided, it’s also a window into the extraordinary challenge of producing collaborative, trans-Atlantic research in the digital humanities.

THATCamp Penn 2012

I spent Wednesday on campus at Penn’s inaugural THATCamp. It was set up by the Penn Library and the Penn Humanities Forum, and showed the promise and possibility of the “unconference” format, particularly when applied to something as tentative and collaborative as the digital humanities.

Amanda French came up from George Mason’s Center for History and New Media, which houses THATCamp. She set precisely the right open, collaborative, free-wheeling tone at the opening session, and it carried through. The thing that struck me most forcefully is that the open format creates environments that are extraordinarily friendly to non-specialists.

Westward, Ho!

It’s been a few months since I’ve updated the site. I spent part of this morning beefing up the security (again), and updating. And I’ve been busy over the last months trying to find a permanent position, sharing some work at MLA, working to wrap up the first year of Integrated Studies at Penn, attending Penn’s inaugural THATCamp, and helping my historical fiction and fantasy seminar produce an Omeka collection. More on all of these heads shortly.

But I wanted to announce that I’ll be heading to Los Angeles to take up an assistant professor position in the English department at USC. I can’t figure out the trumpeting voice I need to communicate how excited/relieved/sapped I feel in joining the faculty there (and rejoining some old friends). It feels like the culmination of decades of work. Sure, it marks another stage, but reaching this place has often seemed not only elusive but impossible.

I have too many people to thank for the care that helped me get here. I’ve been thinking about all of the big and little things people have done for me, taught to me, fixed for me. The workshops, seminars, hours of critique and late night commiserations given by friends and colleagues at Rutgers. The years my committee spent with my work. The chance to come to Penn and soak up a new intellectual climate and make a whole new set of friends and colleagues. These things change the course of your professional life. And now I get to head west to join a campus that’s crackling with energy and innovation. The future is filled with possibility. And in the path-dependent way of all living systems, those possibilities are firmly grounded in the chances and opportunities I’ve been given by others. Happy to be in a place where I can start to do the same for others.

AP Testing and Public Education

Now that I’m back in Philly, I’ve spent a part of this morning luxuriously catching up on my RSS feed over a cup of Stumptown coffee (thanks Kristen and Chad!). I see Stanley Fish has a second column up about DH (neither aggravating nor extraordinarily illuminating), and there was some kind of primary in New Hampshire.

But what caught my eye was a Chronicle op-ed by Michael Mendillo, astronomy prof. at BU, on why we should kill the AP testing program:

Offering credit beyond the accomplishment itself (simply because it was not easy to do) is a terrible lesson to give to students. I believe this notion of value-added has led to another version of getting more bang for the buck: the fantastic pressure put on students to get more than “just a bachelor’s degree” for their four years of tuition. Double-majors, multiple minors, and combined bachelor’s/master’s degree programs are becoming so mandatory for the best students that enrichment courses are simply not an option. Is that really the optimal way to achieve an educated citizenry?

Short answer: yes. After all, why should we offer separate degrees? Shouldn’t the quality of the effort be what matters? You could extend this line of thinking to get rid of the distinction between the B.A. & B.S., combine “magna,” “summa,” and plain-old “cum” laude honors, fold the M.D. into the D.O. — you can see where I’m going. The Ph.D. conferred on me for all that additional work beyond the M.A. didn’t feel like a terrible lesson.

The Perils of Certain WordPress Servers

Right now I’m flying back from the Seattle MLA, where I gave two talks, including the one that I’ve been writing about below. I’ll share some of those materials shortly. But I noticed this morning that the server had crashed, and it took me some time to sort it out. And I realized that, even though I back up the blog, I’d never backed up the server itself using Amazon’s image-making service. Pretty dumb. That’s fixed now, but I’m also going to figure out a way to set up an uptime check for the website so that I get an email if it goes down again. Probably a good thing to work out as I head into this semester, when I’ll be hosting and supporting an Omeka server for my Historical Fiction and Fantasy class. More soon.
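The check I have in mind is simple enough to sketch in a few lines of Python. This is only a sketch of the idea, not my actual setup: the URL and email addresses are placeholders, and it assumes a mail relay is listening on localhost.

```python
# A minimal uptime check: fetch the front page, and if the request fails
# or returns an error status, email an alert. The URL and addresses are
# placeholders, not real configuration.
import smtplib
import urllib.request
import urllib.error
from email.message import EmailMessage

SITE_URL = "http://example.com/"   # placeholder for the blog's front page
ALERT_TO = "me@example.com"        # placeholder alert address

def site_is_up(url, timeout=10):
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def send_alert(to_addr, url):
    """Send a short down notice through a local mail relay."""
    msg = EmailMessage()
    msg["Subject"] = "Site down: " + url
    msg["From"] = "monitor@example.com"   # placeholder sender
    msg["To"] = to_addr
    msg.set_content("The uptime check could not reach " + url + ".")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def check_and_alert():
    if not site_is_up(SITE_URL):
        send_alert(ALERT_TO, SITE_URL)
```

Dropped into cron every few minutes this would cover the basic case, though a monitor running somewhere other than the server itself would be more robust, since a script on the server can’t report the server’s own death.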

(As a side note, it’s pretty remarkable that I can now get into my server and get it up and running at twenty thousand feet. Age of wonder, indeed.)

Speculations on Fashion as Information Technology, or, Bellbottoms as a Series of Tubes

Recently I’ve been enjoying an online discussion of the decline in fashion innovation. As Kurt Andersen observes, the visual innovation of mainstream American dress is far less pronounced when comparing someone from 1991 to someone in 2011 than when comparing 1971 to 1991, or 1951 to 1971. This violates our expectations for the forms of accelerating cultural development we sometimes ascribe to the twenty-first century. Andersen:

Now try to spot the big, obvious, defining differences between 2012 and 1992. Movies and literature and music have never changed less over a 20-year period. Lady Gaga has replaced Madonna, Adele has replaced Mariah Carey—both distinctions without a real difference—and Jay-Z and Wilco are still Jay-Z and Wilco. Except for certain details (no Google searches, no e-mail, no cell phones), ambitious fiction from 20 years ago (Doug Coupland’s Generation X, Neal Stephenson’s Snow Crash, Martin Amis’s Time’s Arrow) is in no way dated, and the sensibility and style of Joan Didion’s books from even 20 years before that seem plausibly circa-2012.

But as Tyler Cowen and Alan Jacobs respond, there’s a big caveat to Andersen’s argument, since it explicitly excludes technology. Cowen notes that we still see innovation in information-dependent technologies:

Today the areas of major breakthrough innovation are writing, computer games, television, photography (less restricted to the last decade exclusively) and the personal stream. Let’s hope TV can keep it up, and architecture counts partially.

I think it’s fair to say that innovation in each of these areas has been driven by the proliferation of new and powerful technologies, from the impact of social media, the internet, and attendant cultural forms on the novel, to the power of new scripting and rendering technologies in architecture.

Yet I think this discussion misses the more basic insight, which is that fashion, as a channel for communication, is itself an information technology.

Gephi Network Visualization of Humphry Clinker

I’m still working on slides for my talk at the MLA on Stevenson and Oliphant, and Victorian reflections on the ’45 (force-directed network and Google map visualizations here and here). I’m also starting to experiment with Gephi, a powerful open-source graph editor. I was blown away by Matthew Jockers’s “Nineteenth-Century Literary Genome” animation, and wanted to know how it was made. Apparently, they produced it one frame at a time as separate PNG files and then assembled them using QuickTime.

I’m still trying to figure out how to produce animations, but I like working in Gephi. It has a feature-rich interface, makes it easy to edit and remove nodes and to perform clustering and various forms of network analysis, and produces sharp images. Here is the location entity network from Humphry Clinker (1771), arranged into eight clusters, with nodes and edges colored by group:

Gephi makes beautiful static images and, as can be seen in the genome video, beautiful animations. On the other hand, unlike the Protovis graphs, finished visualizations are not dynamic or interactive. You can’t output a script-based visualization that the user can play with, or that could be embedded in a web page. Not a problem for a presentation, really, but I like the activity that a Protovis graph can bring to web publishing.
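For what it’s worth, the data behind a graph like this is just a weighted edge list, which Gephi can import directly as a CSV. Here’s a sketch of how one might derive it; the place names and mention sequence below are toy stand-ins rather than the actual Humphry Clinker data, and the co-occurrence window is an arbitrary choice:

```python
# Sketch: count co-mentions of places that occur near one another in the
# ordered sequence of location references, then write a Source,Target,Weight
# CSV that Gephi's spreadsheet importer understands. Toy data throughout.
import csv
from collections import Counter

def cooccurrence_edges(mentions, window=3):
    """Weight an edge between two places each time they are mentioned
    within `window` positions of each other in the sequence."""
    weights = Counter()
    for i in range(len(mentions)):
        for j in range(i + 1, min(i + window, len(mentions))):
            a, b = sorted((mentions[i], mentions[j]))
            if a != b:
                weights[(a, b)] += 1
    return weights

def write_edge_list(weights, path):
    """Write a weighted edge list CSV for Gephi."""
    with open(path, "w", newline="") as f:
        out = csv.writer(f)
        out.writerow(["Source", "Target", "Weight"])
        for (a, b), n in sorted(weights.items()):
            out.writerow([a, b, n])

mentions = ["London", "Bath", "London", "Edinburgh", "Bath", "London"]
edges = cooccurrence_edges(mentions)
write_edge_list(edges, "clinker_edges.csv")
```

The clustering, coloring, and layout then all happen inside Gephi once the CSV is imported.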

I’m also evaluating these various visualization approaches in order to prepare for my historical fiction and fantasy seminar next semester, which will ask the students to help produce an online textual exhibit using Omeka. I’m going to ask them to look at what’s possible and then pitch paratextual visualizations & tools to package with the exhibit.

“Webby” Publishing and Scholarly Digital Literacy

HASTAC 2011 has posted videos of some of their panels, and I was taken with two points, brought up by Dan Cohen and Tara McPherson as part of the panel “The Future of Digital Publishing,” which can be viewed here.

First, I was taken with Cohen’s final suggestion, that humanities scholars are “terrible economists” because our pursuit of print perfection causes an inordinate investment in the final stages of publication (proof-reading, reformatting notes into periodical-specific styles). As he notes, we have learned to look past such fastidiousness in some web formats, and this indicates that, in a “webby” mode, we are able to relax those standards and still take work seriously.

McPherson’s wide-ranging discussion canvassed the new formats and possibilities which digital archives are opening up to us, and she asked humanities scholars not to cede the task of figuring out how to manage massive data sets to the scientific and computer-science communities. I was particularly caught by the description she gave of a question that keeps her up at night: ten or fifteen years from now, how will young scholars make sense of the wild explosion of publication formats and approaches which archives and DH work have opened up?

This, for me, raises a third question and perhaps more challenging problem: how will we cultivate scholarly digital literacy? Part of what reinforces the power and importance of print and text-based publication is the high-level textual literacy that humanists develop. I think about how hard it was to develop the specialized literacy it took for me to understand scholarly publishing formats — this demanded a huge evolution in my reading practices, above and beyond what I would describe as my already high-level textual literacy as an undergraduate. When we present something like a simple chart, or even an object as complex as an active network visualization, much less expose users to archives of new material, we tacitly demand some literacy in those formats. Brief textual descriptions and introductions don’t suffice here. Cohen’s observation regarding the relaxed constraints offered by “webby” publishing standards emphasizes the point: we tolerate spelling errors because the effort otherwise put into exhaustive spell-checking is being invested elsewhere, in the aspects of digital scholarship that entail considerable investments in both acculturation and ongoing labor.

I think there is an expectation that great content and great scholarship will cultivate literacy. The iPad certainly shows (as McPherson notes), that a transformative product can drive technical literacy in a way that seems immediate and unreflective — a transformation so profound that it produces what Thomas Kuhn describes as the “gestalt” experience of a new episteme — and the duck becomes a rabbit. And yet, I worry that new technologies and techniques, especially as they are initially developed, pull in precisely the opposite direction. Certainly, this concern weighed on me as I decided which path to pursue in my own work, and I’ve opted primarily for publication in traditional print formats, and the forms of scholarship that would help me achieve that aim. If you watch to the close of the talk, the audience questions are dominated by the problem of tenure and scholarship-evaluation standards. But this is generally cast in terms of accommodation or, alternatively, forcing traditional scholars to change their practices, rather than acknowledging that digital scholarship demands, in effect, a new, and truly complex, set of literacies.

Google Fusion Tables

I’m still working on my Stevenson and Oliphant talk for the MLA, and I thought I’d try to map some of the location data that I’ve been collecting for that talk. My friend Mitch Fraas, a Bollinger Fellow here at Penn, has been using Google Fusion Tables to look at the geographic distribution of printed books from the records at Van Pelt. Basically, all you have to do is upload the location data as a table to Google Docs and select the visualization that you want. You can embed the visualization directly, or produce a Google Earth view that adds geographic images. I’ve done both below for Smollett’s Humphry Clinker. The interesting thing about such a visualization is that it helps to highlight the different imaginary spaces of geography. On the one hand, there are the physical locations. On the other, you can use tools like network analysis to figure out how closely associated those places are in the world of the book. (Zoom in and click on icons for counts; counts are distinguished by color.)

Locations in Humphry Clinker (1771); Google Fusion Table Map
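The table itself needs very little structure: one row per place, with a mention count. Here’s a sketch of how one might generate it, with a toy mention sequence standing in for the real counts from the novel:

```python
# Sketch: tally mentions per place and write a two-column CSV. Uploaded
# to Google Docs, the Location column gets geocoded when you choose the
# map visualization. The mention list is toy data, not the real counts.
import csv
from collections import Counter

mentions = ["London", "Bath", "London", "Edinburgh", "Bath", "London"]
counts = Counter(mentions)

with open("clinker_locations.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["Location", "Count"])
    for place, n in counts.most_common():
        out.writerow([place, n])
```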

Differences between geomapping and other location-based visualizations can help to demonstrate how literary networks distort the geographic spaces of the novel. For instance, in the force-directed network graph at the bottom of this earlier post, Edinburgh is closer to England, and Scotland closer to France, because these locations are cited in close proximity in the novel.