
INKE Research Foundations for Understanding Books and Reading in a Digital Age: Text and Beyond

Conference report on the INKE Birds of a Feather conference at Ritsumeikan University, November 18th, 2011. Others like Neil Fraistat are tweeting it with the #inke hashtag.

Note: These notes are being written live so they will be full of typos and will not be thorough.

Mitsuyuki Inaba: Welcome

Inaba gave a welcome talk in which he discussed the etymology of the name of the university we are at, Ritsumeikan. It literally means "place to establish your destiny" and comes from Mencius, an important Confucian philosopher. He showed how the characters Ritsu-mei change in different contexts, from Mencius to Ritsumeikan documentation. He then shifted to asking what the destiny of our meeting is. He concluded that our destiny is to discuss the diversity of reading - the way the characters we read go in different directions and are carved in different ways.

Masahiro Shimoda: Creating New Research Environments in International Alliance

Shimoda is from the University of Tokyo. He talked about building the Buddhist Research Knowledge Base (RBIB). They have eight projects digitizing important Buddhist works around the world.

The SAT Daizōkyō Text Database ("SAT" comes from the Sanskrit word for "good" or "true") is the database project developing the new environments needed and offering the digitized texts.

He talked about the problems they have faced:

  • There are few mechanisms for evaluating quality of such work
  • Scholars shy away from involvement in large-scale projects
  • The need for funds means that scholars have to engage in fund-raising in various ways
  • The challenges of digitization
  • The challenges of a continuous tradition of interpretation and criticism

Their solution is to involve scholars as the project progresses, sharing everything. They are also coordinating with other projects like the Digital Dictionary of Buddhism and the Collected Works of Korean Buddhism.

Their texts include Chinese, Indic, and Japanese characters. They expect the costs to be about 5 million dollars and the project to involve hundreds of scholars.

This project struck me as typical of large disciplinary projects where there is a large body of evidence that needs to be digitized and a number of related projects that need to be coordinated. It reminds me of what NINES is trying to do for 19th-century studies.

Kozaburo Hachimura: Digital Archiving of Intangible Cultural Properties

Hachimura is at Ritsumeikan and directs the Art Research Centre (ARC), which he introduced first. ARC was established in 1998 and is a centre for collaboration between IT and the humanities. They have a number of projects funded by the Global COE Program that are digitizing Japanese arts and culture. Their first intention was to digitize resources for industrial utilization, but now they are looking more at research, education and preservation.

Hachimura talked a bit about how they think of the digital humanities here: it is the intersection of the humanities and IT fields, and it aims to renew humanities education.

Hachimura is the head of Digital Archives Research in his lab. He talked specifically about his work digitizing intangible heritage like Noh theatre. He showed slides about the motion capture system they have for tracking Noh dance, Kabuki and other forms of dance. He mentioned my blog entry on Theoreti.ca as a place to read more. There are a number of interesting problems when you have rich motion data. How do you find the similarity between two motions? How do you search a database of motions? Can you analyze emotion in motion?
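
The talk did not detail their methods, so here is a minimal sketch of one standard way to compare motion sequences of different lengths: dynamic time warping over per-frame joint-angle vectors. The data shapes and the distance metric are my assumptions, not details from the lab.

```python
# A toy dynamic time warping (DTW) comparison of two motion sequences.
# Assumes each motion is a (frames x joint-angles) array; the lab's real
# data formats and similarity measures were not described in the talk.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Return the DTW alignment cost between two (frames x joints) arrays."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in a
                                 cost[i, j - 1],      # skip a frame in b
                                 cost[i - 1, j - 1])  # align the two frames
    return float(cost[n, m])

# Two made-up motions: 100 and 120 frames of 20 joint angles each.
rng = np.random.default_rng(0)
motion_a = rng.standard_normal((100, 20)).cumsum(axis=0)
motion_b = motion_a[::5].repeat(6, axis=0)[:120]  # a slowed-down variant
print(dtw_distance(motion_a, motion_b))  # small cost despite different tempi
```

Searching a database of motions could then be nearest-neighbour retrieval under this distance, though DTW's quadratic cost usually forces indexing or lower-bounding tricks at scale.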

A related area of research is generating computer animation from the motion capture data. They can recreate traditional stages, costumes, and movement to show what a performance might have looked like or what the view of the performer might have been.

This sort of research through recreation is, I think, going to become more common. When you recreate or simulate a phenomenon, it shows gaps in our knowledge that provoke more research. It also allows counterfactual "what-if" modelling, and it connects us with the larger amateur community interested in re-enacting.

Then Hachimura talked about their work using Labanotation to input and edit Noh motion. They have built a very neat editor that is connected to an animation engine so people can notate a performance and then visualize it.

Lastly, Hachimura talked about their work digitizing the Gion festival in Kyoto. The Gion festival is one of the largest in Japan and involves a parade in July with 32 floats. There are approximately 200,000 spectators for the 5-hour parade. They are using the Virtual Kyoto model developed by Prof. Yano's group. They are modelling the spectators, who are important to the parade, along with the crew and performers. They are creating 3D sonifications. They want to create a virtual experience of riding on a float, with the rolling and vibration - a Gion festival ride. To that end he showed location and accelerometer data of the vibration of a float that they then used to simulate the trip on an earthquake table.
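
As a hedged illustration of the data side of that simulation: the vibration one would replay on a shake table is typically isolated from raw accelerometer samples by band-pass filtering. The sampling rate and band edges below are assumptions for the sketch, not figures from the talk.

```python
# Isolate a float's rolling/vibration band from raw accelerometer data.
# fs and the 0.5-10 Hz band are assumed values, not from the project.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                    # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)  # one minute of synthetic data
accel = (np.sin(2 * np.pi * 2 * t)
         + 0.3 * np.random.default_rng(1).standard_normal(t.size))

# Fourth-order Butterworth band-pass, normalized to the Nyquist frequency.
b, a = butter(4, [0.5 / (fs / 2), 10 / (fs / 2)], btype="band")
vibration = filtfilt(b, a, accel)  # zero-phase filtering preserves timing
```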

Neil Fraistat: Textual Addressability and the Future of Reading

Fraistat raised larger questions that have arisen from different projects he has worked on. He started with two observations from others:

  • McGann argues that we are in a turn from the textual condition to a digital condition in which our cultural heritage will have to be re-edited. Editions are the products and mediators of this sweeping change.
  • Witmore, in "Text: A Massively Addressable Object", argues that a text is a text because it is massively addressable at different levels of scale. Each level of abstraction is addressable, meaning that a position in the text can be queried at different levels. The edition is an important part of that addressability. (A toy sketch of this idea follows the list.)
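
A toy illustration of that addressability, invented for this note (real systems would use stable identifiers rather than Python slicing):

```python
# The same text queried at different levels of scale.
text = "It was the best of times, it was the worst of times."

address = {
    "character": lambda t, i: t[i],
    "word":      lambda t, i: t.split()[i],
    "sentence":  lambda t, i: t.split(".")[i].strip() + ".",
}

print(address["character"](text, 0))  # 'I'
print(address["word"](text, 3))       # 'best'
print(address["sentence"](text, 0))   # the whole sentence
```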

He turned to the complexity of editing The Complete Poetry of Percy Bysshe Shelley. The canon is problematic, so Fraistat and Reiman have tried to recreate how Shelley would have presented poems to his audience. For their print editions they had to make decisions, and the result ended up a silo. A digital edition should allow different organizations and different views, and avoid being a silo.

The edition should not be a final statement, but a field of possibilities. Fraistat would like editions to be massively addressable. They should be interoperable, layered, modular, multimodal, dynamic, scalable, curatable, everted, and sustainable. He borrows "everted" from William Gibson to mean that they are turned out onto the world. By curatable he imagines how we might involve citizen humanists.

He introduced the Shelley-Godwin Archive, which has recently been funded by the NEH. The technical infrastructure is supported by MITH. They imagine persistent addressability of the content. The interface will be separate so that content can be repurposed. He described four layers, each of which should not be dependent on the one below (a sketch of such loose coupling follows the list):

  • Digitization
  • Transcription and Encoding
  • Interface
  • User-generated data
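
A sketch of what that loose coupling might look like as data, keyed by stable identifiers; this is illustrative only, not the Shelley-Godwin Archive's actual schema:

```python
# Each layer references shared IDs instead of embedding the other layers,
# so any one of them can be revised or repurposed independently.
digitization = {"sga:page-12": {"image": "page-12.tiff"}}

transcription = {  # points at the digitization layer by ID only
    "sga:page-12": {"tei": "<line>O wild West Wind, thou breath of Autumn's being</line>"}
}

user_annotations = [  # the user layer references the same stable ID
    {"target": "sga:page-12", "note": "Compare the fair copy reading."}
]

# An interface joins the layers at render time; no layer needs to know
# about the others.
page_id = "sga:page-12"
view = {
    "image": digitization[page_id]["image"],
    "text": transcription[page_id]["tei"],
    "notes": [a["note"] for a in user_annotations if a["target"] == page_id],
}
```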

He described the archive as becoming a commons which can support all sorts of activity.

Birds of a Feather Sessions

The remaining sessions were not paper presentations; we had all received the papers beforehand.

Jon Bath and Craig Harkema: There's More than One Way to Skin a Book: Experimental Interfaces for Reading Illustrated Books

We had a discussion about whether we can actually separate the skin (the interface) from the content. There seems to be a temptation to solve everything by proliferating views, interfaces, and features.

Christian Wittern: Towards an Architecture for Active Reading

Wittern talked about the system he imagines. It has an interesting feature: while it is distributed, each person has a synchronized version of the database locally - a little bit like Dropbox.

As a philosopher I want to say that what is important is the thinking. Complexity can distract from the thinking. We run the danger of thinking that scholarship is an end in itself. When it is an end in itself, then complexity, jargon, and specialization are rewarded.

Hussein Keshani: Reading Visually: Can Art Historical Reading Approaches be Digitized?

Keshani talked about the problems of tracking names and places. He made an interesting point that the texts he uses are actually pictures of texts. They are representations that have been corrupted over time.

Christian Vandendorpe: The Scholarly Book as a Special Case of Wiki

Vandendorpe gave us an overview of the Wikisource project as a scholarly edition environment. The real problem is annotation, which is not supported well. He feels that the Wikipedia community needs to adapt their encyclopedia philosophy to the situation of scholarship and fiction.

Wikisource has a really cool feature that allows one to export a book made up of the pages you want. You can export to various formats, including PDF.
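
Since Wikisource runs on MediaWiki, you can also assemble your own selection of pages programmatically through the standard MediaWiki API; this sketch just fetches and concatenates wikitext, which is not how the actual export feature renders PDFs:

```python
# Fetch the wikitext of chosen Wikisource pages and join them into one
# document, using the ordinary MediaWiki query API.
import requests

API = "https://en.wikisource.org/w/api.php"

def fetch_wikitext(title: str) -> str:
    params = {
        "action": "query", "prop": "revisions", "rvprop": "content",
        "rvslots": "main", "titles": title,
        "format": "json", "formatversion": 2,
    }
    page = requests.get(API, params=params).json()["query"]["pages"][0]
    return page["revisions"][0]["slots"]["main"]["content"]

wanted = ["Ozymandias (Shelley)"]  # any list of page titles you like
book = "\n\n".join(fetch_wikitext(t) for t in wanted)
```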

Dan O'Donnell: "Nor doubted once": Editing Text and Context

O'Donnell presented collaborative research. He started with how images have been edited, showing an example from National Geographic: a lot of people got really upset by the image manipulation that National Geographic did to the pyramids of Egypt. There are all sorts of things we can do with digital editions that our readers may not understand. Our readers need to be educated. One thing we can do is to start to study the editorial history of digital objects like photographs or virtual worlds. We need to develop the forensics and conventions to talk about veracity in digital works.

One of the things we are dealing with is journalism conventions. We believe that the cover of a "news" venue should be "true", in that there should be limitations on the manipulations. In other areas like fashion or art photography we don't expect veracity. We are negotiating the expected veracity in different situations.

To what extent are we now negotiating veracity and authenticity for editions, or is there a degree to which we expect editions to be fictions and therefore not true in some simple sense?

Constance Crompton (and others): The Social Edition in Social Conditions: Editing the Devonshire Manuscript

Crompton's presentation was a project report on behalf of a larger team. They are trying to figure out what a social edition might be. They are presenting different editions using different technologies, including Wikisource, in order to compare and learn about the virtues of each venue.

She also talked about visualizing the network of people involved in the Devonshire Manuscript. You can see a paper on this at Digital Studies, Drawing Networks in the Devonshire Manuscript (BL Add 17492): Toward Visualizing a Writing Community's Shared Apprenticeship, Social Valuation, and Self-Validation.
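
For readers curious what such a visualization involves, here is a hedged sketch of the general technique; the names and weights are placeholders, not data from the project:

```python
# People as nodes, shared manuscript activity as weighted edges.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_edge("Hand A", "Hand B", weight=3)  # e.g. poems copied together
G.add_edge("Hand B", "Hand C", weight=1)
G.add_edge("Hand A", "Hand C", weight=2)

pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True, node_color="lightgrey")
plt.show()
```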

Stan Ruecker (and others): The Beginning, the Middle, and the End: New Tools for the Scholarly Edition

Stan presented a series of prototypes we have been developing for the electronic edition. We had an interesting discussion about interoperability. I made a point about prototyping.

Geoffrey Rockwell (and others): The Face of the Scholarly Corpus and Edition

Tomoji Tabata: Using Random Forests to Spotlight Dickensian Style: Text-mining in Digital Humanities

Tabata started by talking about the drawbacks of keyword-based analysis. He then showed how his Random Forests approach reliably separates Dickens from Collins.
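
In the same spirit, a minimal sketch of Random Forest stylometry; synthetic text with skewed function-word preferences stands in for the Dickens and Collins chunks and features Tabata actually used:

```python
# Classify text chunks by "author" with a Random Forest over word counts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
vocab = ["the", "of", "and", "very", "quite", "upon", "said", "indeed"]

def fake_chunk(bias, n=200):
    """Draw n words with author-specific function-word preferences."""
    return " ".join(rng.choice(vocab, size=n, p=bias))

p_a = [.25, .20, .15, .10, .05, .05, .15, .05]  # one stylistic profile
p_b = [.15, .15, .10, .05, .20, .15, .05, .15]  # another
chunks = ([fake_chunk(p_a) for _ in range(40)]
          + [fake_chunk(p_b) for _ in range(40)])
labels = ["a"] * 40 + ["b"] * 40

X = CountVectorizer().fit_transform(chunks)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())  # near-perfect separation
```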

Kyoko Omori: Analysis of Silent Cinema and Benshi Narration

Omori talked about the Comparative Japanese Film Archive; she has a demo up there now. She is interested in documenting the Benshi, who were the live narrators of silent movies. Different Benshi would have different scripts.

Harvey Quamen: The Limits of Modelling: Database Culture and the Humanities.

Quamen talked about how this project grew out of Willard McCarty's book Humanities Computing, where McCarty argues that what we are doing is modelling. Quamen's argument is that databases are not necessarily models: you can use a database for modelling or not.

Richard Cunningham (and others): Ready, Set, Populate: The Architectures of the Book Online Reference Resources

Cunningham started by commenting on how we now see the book since we have an alternative that makes it strange. He then talked about ArchBook, a curated and reviewed site on the architecture of the book. There was a discussion about the sustainability of such a work.

William R Bowen: Changing Paradigms in Digital Humanities: A Case Study Looking Forward to Iter's 20th Year

William talked about how Iter has evolved over the years and what they are thinking about next. They started as a bibliography and are now hosting Drupal sites for their community and supporting editions. Iter means "path". They thought of Iter as a place you went to get something. Now they are thinking of Iter as a commons. They want to connect the silos and create an environment scholars stay in.

Takaaki Kaneko: Digital Archiving of Printing Blocks and Bibliography Based On It

Kaneko talked about how they are digitizing the printing blocks that were used to print books. They shoot images of the blocks with light from different directions to get a sense of the depth. They now have a database of these blocks that they can link to digitized versions of the books printed from them. This database will help develop a picture of early modern publishing.
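
The talk did not spell out their imaging pipeline, but shooting under lights from different directions is the setup for photometric stereo, which recovers surface orientation from per-pixel brightness; a single-pixel sketch:

```python
# Lambertian photometric stereo: I = L @ (albedo * normal), solved by
# least squares. Real data would be whole images under each light.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],    # three known light directions
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])
I = np.array([0.9, 0.4, 0.6])     # intensities observed at one pixel

g, *_ = np.linalg.lstsq(L, I, rcond=None)
albedo = np.linalg.norm(g)
normal = g / albedo
print(normal)  # the recovered surface orientation at this pixel
```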

Susan Brown (and others): From CRUD to CREAM: imagining a rich scholarly repository interface

Brown talked about starting with an idea of what a repository does and working out to a scholarly interface. She is extending JiTR to model ideas for what a scholarly interface should look like. She also talked about using card sorting as a methodology for figuring out what a community wants.

Jon Saklofske: Fluid Layering: Reimagining Digital Literary Archives through Dynamic, User-Generated Content

Saklofske talked about his project NewRadial, which visualizes the archive/edition and brings in a community of users.

Brent Nelson (and others): A Short History and Demonstration of the Dynamic Table of Contexts

Stan Ruecker demoed the Dynamic Table of Contexts earlier. Nelson talked about how in INKE we move between the textual studies group and the interface design group. The DToC is an example of bringing ideas from the study of the history of textual forms to prototyping. They are trying to undo the ossification of certain interface structures; tables of contents used to be more flexible. He made an interesting distinction between a ToC and an index: a table of contents is a representation of structure, while an index is a representation of related elements.
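
Nelson's distinction is easy to make concrete. In this invented toy, a ToC reproduces the document's structural units in order, while an index inverts the text into element-to-location mappings:

```python
doc = [
    ("chapter", "Of Printing"),
    ("text", "The press changed reading."),
    ("chapter", "Of Reading"),
    ("text", "Reading the press aloud."),
]

toc = [title for kind, title in doc if kind == "chapter"]  # structure

index = {}  # element -> where it occurs
for pos, (kind, content) in enumerate(doc):
    for word in content.lower().strip(".").split():
        index.setdefault(word, []).append(pos)

print(toc)             # ['Of Printing', 'Of Reading']
print(index["press"])  # [1, 3]
```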

Lynne Siemens: "Firing on all cylinders": Progress and Transition in INKE's Year 2

Siemens is studying INKE and how large-scale collaborations work over time. Some of the challenges they face:

  • They are having trouble attracting and keeping IT staff and project managers.
  • INKE went from four groups to two and is now scaling up to three. How do you make major changes in structure?
  • Research assistants are graduating and moving on. We need to bring in more people and find ways to do that.

I would add that I've been thrilled to be in the Interface Design team of INKE though, like any project, there are things like the IP agreement that annoy me.

We had an interesting conversation about failure. I think it is too early to ask about failure in INKE as we are only in the third year. Perhaps the question should be how INKE adapts to the challenges Siemens identified.
