Browse Month: July 2011

Travel Blog: Finding Joy at the 2011 NEH Vectors-CTS Institute at USC

The Institute for Multimedia Literacy, USC (Richard Neutra Building)
Post written by Elizabeth Cornell. This month Elizabeth Cornell, a doctoral candidate in the Department of English at Fordham University, will be reporting on her summer residency at the NEH-Vectors-CTS Summer Institute at the University of Southern California. She is also the Project Coordinator for the Keywords Collaboratory, a wiki-based space where students and researchers can collaborate on keywords projects inspired by the book, Keywords for American Cultural Studies, edited by Bruce Burgett and Glenn Hendler. This is the first installment of her report. Welcome, Elizabeth!

 

•  •  •

On a recent plane trip to Los Angeles, I read Jennifer Kahn’s New Yorker profile of Jaron Lanier, a pioneer of virtual reality. According to Lanier, digital technology should aim to enhance and deepen “human interaction”:

“One of [Lanier’s] most recent ventures has been to help Microsoft construct a new, joystick-free gaming system, called the Kinect, which uses a computerized camera to match the movements of a player’s body to the avatar in the game—allowing someone to kick a virtual ninja using her actual foot.”

Lanier considers Kinect, “the fastest-selling electronic device of all time, … an example of technology that could ‘expand what it means to think.’” To me, a real human kicking a computerized ninja is a strange, if not depressing, example of digital technology expanding the mind and deepening human interaction. Maybe I’m not fully appreciating the technological expertise behind a joystick-free video game.

Perhaps I react that way because, in the digital humanities, real human interaction and collaboration, along with exciting thinking and knowledge production, are taking place through the use of technology. Some of it is happening at the National Endowment for the Humanities Vectors-CTS Institute at USC. For one month this summer, twenty researchers from various North American institutions are attending the Institute to learn about software platforms designed specifically for use by humanities scholars. The projects this summer are contextualized within the field of American Studies, but the similarities end there. And no joysticks are in sight.

For example, Curtis Marez, an associate professor in ethnic studies at UC San Diego, is developing a project that uses Cesar Chavez’s video library to explore the role that video and photography played in struggles between California corporations and workers of color in the agricultural industries of the San Joaquin Valley from the 1940s to the 1990s.

Nicholas Sammond, an associate professor of cinema studies and English at the University of Toronto, is creating a project that examines the relationship between the industrialization of American commercial animation and blackface minstrelsy. Why, he asks, does Mickey Mouse wear white gloves?

Like many people attending the Institute, both Curtis and Nic are using historically significant archives of film and photography to build on the material in their print-based books on related subjects. To create their projects, they’ll use two different but connected software platforms called HyperCities and Scalar. Unlike a book, which is static, these two platforms allow users to add visual material to the database, as well as detailed commentary, audio, and interactive maps. Visitors can leave comments; in some cases they can add their own archival material and create new links and paths to related web sites and archives.

If Lanier and, for that matter, his boss, Bill Gates, really want to see digital technology in meaningful action, they should visit USC’s NEH Vectors Institute this summer, where people are using technology to expand the way we think and using it to produce knowledge in innovative, significant ways.

I’ll be at the Institute until mid-August, exploring ways that Scalar and HyperCities might make Prof. Glenn Hendler’s Keywords Collaboratory a more dynamic, multimodal space for students and researchers. In my next blog entry, I’ll write more about that.

TravelBlog: The Erotic Life of Data, or LOD-LAM and the Pursuit of Compatible Data

Linked Open Data Summit meeting planning board, Day 1

“Sharing is caring, but linking is loving.” 
— Laura Smart, June 3, 2011

Spoken with the acoustic equivalent of a wink, Caltech metadata librarian Laura Smart’s wrap-up in the closing circle of the LOD-LAM Summit captured the unspoken je ne sais quoi that infused the two-day Linked Open Data in Libraries, Archives and Museums Summit. Organized by Jon Voss and Kris Carpenter Negulescu, with support from the Sloan Foundation, the National Endowment for the Humanities Office of Digital Humanities, and the Internet Archive, the meeting was permeated by a certain juicy enthusiasm for the possible, even as it otherwise revolved around more technical matters, such as the relative merits of XML, JSON, and RDFa. A hankering for limitless understanding and knowledge, akin to the promise of the apple in the Garden of Eden, imbued the meeting with something like desire without an object, something like what Lacanian psychoanalysts might call a feminine jouissance. The idea of everything linked to everything else, of infinite connectivity, has that expansive pre-Oedipal appeal that can make anything less seem petty, vulgar, even obscene.

Just after the June 2–3 LOD-LAM meeting wound down, three major search engine players — Google, Microsoft Bing, and Yahoo — announced that they had agreed to conform to the new standards developed by schema.org. And two weeks later, at a panel on funding opportunities at the DH 2011 conference (notably marked with a psychedelic pink and orange “summer of DH-love” theme), NEH-ODH Director Brett Bobley observed that, “Linked open data is getting really hot.” In the month of June, linked open data was just about everywhere one turned, often accompanied by its delightfully disruptive partners open source code and open access repositories. The possibilities for linked open data and an emerging semantic web were beginning to unfold as realities.

But what does linked open data mean for us, for you and me? As one LOD-LAM Summit participant pointed out, linked open data may sound awesome, but do you really want your banking data to be linked and open? Maybe not so much. How about those medical records that are rapidly going online? Hmmh. Probably not. Linked open data holds within it the possibilities for the machine-reading of vast quantities of data, yielding results that we can’t possibly foresee. The promise of the view of everything, the ultimate digitally-enhanced panopticon, arrives without ethical boundaries. Completely transparent data leaves open the possibility not just of the bliss of connection, but the paranoia of exposure.

So what is the middle ground here, between the eros of linked open data and the terror of naked exposure? Between the boundless, possibly unnavigable sea of open data and the Babel of incompatible coding, proprietary silos and paywall obstructions?

The breakout group convened by Eric Kansa of UC Berkeley’s I-School arrived at a moderate plan. We crafted an agenda for making data linkable rather than necessarily linked. Kansa described the context we’ve been working with: there were 35 different projects created in response to the cultural heritage crisis in Iraq precipitated by the U.S. invasion of that nation, yet none of these projects had interoperable data. This led the Institute of Museum and Library Services (IMLS) to develop a focus on interoperable data for digital collections and content, and prompted Kansa and his colleagues to develop the Open Context project for cultural heritage data.

Our LOD-LAM breakout group developed a measured, incremental proposal. We proposed a set of readability standards that might be applied to all online data to support link-ability. Among the recommendations:

  • (X)HTML and CSS be valid code (devoid of those funky little coding errors that let a page run, but hang it up when moving from platform to platform) 
  • code have a mobile version (via CSS) 
  • code be ADA (Americans with Disabilities Act) compliant 
  • pages have a print version (via CSS) 
  • pages be machine-readable, in XML, RDF, or JSON 
  • all coding use a markup with a metadata standard relevant to the specific field (VRA Core, Darwin Core, and other community vocabularies and schemas) 
  • all objects have a unique permalink 
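To make the checklist concrete, here is a minimal sketch, in Python, of what a “linkable rather than linked” record might look like. Everything here is illustrative rather than drawn from any actual project: the `example.org` permalink, the object identifier, and the helper function are invented, and the `dwc:` keys simply stand in for Darwin Core terms. The point is only that a record with a unique permalink and community-vocabulary metadata stays machine-readable and loveable, whoever comes along later to link it.

```python
import json

def make_linkable_record(object_id, title, vocabulary_terms):
    """Return a JSON-serializable record for a collection object.

    The record carries a unique permalink (so the object can be linked)
    and metadata keyed to a community vocabulary such as Darwin Core
    (so other systems can interpret it), without requiring that any
    links exist yet.
    """
    return {
        # A unique, stable permalink for the object (hypothetical domain).
        "@id": f"https://example.org/objects/{object_id}",
        "title": title,
        # Field-specific metadata using a shared vocabulary's terms.
        "metadata": vocabulary_terms,
    }

# An invented natural-history example using Darwin Core-style keys.
record = make_linkable_record(
    "1842-001",
    "Pressed fern specimen",
    {
        "dwc:scientificName": "Polystichum munitum",
        "dwc:country": "United States",
    },
)

# Serialize to machine-readable JSON, one of the formats the group named.
print(json.dumps(record, indent=2))
```

A record like this satisfies several of the bullet points at once: it is machine-readable JSON, it uses a field-appropriate metadata vocabulary, and it has a unique permalink, all without committing the holding institution to any particular linking partner.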

Our goal was compatibility, or interoperability, among various data rather than immediate linked open data. In the overall erotics of data, if linking is loving, then linkable is loveable. The goal of compatible or interoperable data is to ensure that the data remains linkable/loveable even as fashions in programming, metadata, and platforms change with time. Linkable, loveable: that’s compatible data.

All you need is love.  And data standards.

•   •   •

This is the second of several blog posts chronicling a June of DH Travel. Next up: “On Place and Space,” reporting on site visits to Georgetown University’s Center for New Designs in Learning and Scholarship, and the University of Virginia’s Scholar’s Lab. 

Please note: Opinions expressed in the TravelBlog briefs of the Fordham Digital Humanities do not necessarily reflect the views of my home institution. 

—Micki McGee

