Culture and Computing 2011
These notes are about the 2nd International Culture and Computing conference held at Kyoto University. I was on the Programme Committee for the Digital Humanities Special Track.
Note: these notes were written live so there will be typos and inaccuracies. Please send me corrections.
Day 1, Friday Oct. 21
Visualizing Cultures: Open Culture on the Web: Benefits and Risks
Shigeru Miyagawa from MIT gave the opening keynote about the Open Courseware project of MIT and his Visualizing Cultures project.
The Visualizing Cultures project is a collaboration with John W. Dower to understand history through visual images. They have worked with archives and museums to make images available for people to reuse and weave into essays.
Visualizing Cultures weds images and scholarly commentary in innovative ways to illuminate social and cultural history. Founded in 2002 by MIT Professors John Dower and Shigeru Miyagawa, Visualizing Cultures exploits the unique qualities of the Web as a publishing platform to enable scholars, teachers, and others to: (1) examine large bodies of previously inaccessible images; (2) compose original texts with unlimited numbers of full-color, high-resolution images; and (3) use new technology to explore unprecedented ways of analyzing and presenting images that open windows on modern history.
Visualizing Cultures has positioned itself as a nexus between the institutions that house image collections and the scholars who would like to use them for research purposes. Publishing on MIT's revolutionary OpenCourseWare (making MIT courses freely available on the Web), Visualizing Cultures has worked with many institutions to negotiate online publication of images for educational purposes using a Creative Commons license. (from the project's About page)
He then talked about the Open Courseware project at MIT. He showed examples of student work. OCW is voluntary, but most courses are coming online. The materials can include student work.
He gave an example of a course that welcomes people from outside, the Intro to Computer Science course. At Stanford, an AI course taught by someone at Google has 160,000 students.
There is a huge benefit to MIT from openness. 84% of faculty look at materials from other faculty; they begin learning from each other. The OCW project also enhances the relationship with alumni.
Then he switched to talking about the material on Asia and Japan, and Visualizing Cultures in particular. He talked about the risks. They have put up disturbing and racist images made by Japanese about the Russo-Japanese war that show Japanese beheading others - these images have offended people who took them out of context. For example, within 6 hours an image was circulated out of context in the Chinese community. This led to an attack on the site. They met with the Chinese student community, and the students then sent a message to the larger community. The MIT President also sent out a message affirming the importance of not censoring knowledge.
He talked about Facing East and Facing West - contrasting the views from different cultural perspectives like different views of Perry. He showed how Perry was represented by Americans and by Japanese. He talked about how students discover things when looking at these images.
There were questions about copyright. I thought his answer was useful: solving copyright takes a lot of work, but it is worth it. Once you sort out procedures it becomes routine. The value of sorting it out is that you can then create the sort of rich resources that really teach.
Visualizing and Analyzing Cultural Voices in Computer-Mediated Communication through Social Gaming Simulation
Ayae Kido presented a paper on a role-playing game developed to study attitudes and changes in attitudes around nuclear energy. The social RPG is based on actual societal changes - stakeholders deal with issues around nuclear power plants. The GMT Gaming Simulation Tool supports dialogues over the internet and records what is written.
There are three stakeholder roles - affirmative residents, opposing residents, and mayor (supportive of bringing nuclear power to the area). It also had a role for the mass media that worked as game master.
The simulation proceeded through several phases.
They then used Kachina Cube and correspondence analysis to analyze recorded communications. The project suggests ways for social gaming techniques to be used for studying dialogue around social issues. But how accurate is it? I'm guessing there is a literature in the social sciences about such modelling. In effect they are using games to create models that include human input.
Generative Music Workshop
Tomotaro Kaneko gave a great paper on the way he and colleagues are recreating generative music works to understand them. Generative music is a system or a set of rules which, once set in motion, will create music for you. Music for Airports by Brian Eno would be an example. On the iPhone there are apps like Bloom and RjDj. Generative music is very different from performed music as there isn't really a script or performer.
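Eno famously built Music for Airports from tape loops of different lengths drifting in and out of phase. A minimal sketch of that rule-based idea (the notes, cycle lengths, and step counts are all invented for illustration, not from the paper):

```python
def generative_sequence(loops, steps):
    """Each loop is (cycle_length, note): the note sounds whenever the
    global step count is a multiple of its cycle length.  Because the
    cycle lengths differ, the combination of sounding notes keeps
    shifting, with no score and no performer."""
    events = []
    for t in range(steps):
        sounding = [note for length, note in loops if t % length == 0]
        events.append(sounding)
    return events

# Three loops with mutually prime lengths: the combined pattern only
# repeats after lcm(3, 5, 7) = 105 steps.
loops = [(3, "F"), (5, "Ab"), (7, "C")]
for t, notes in enumerate(generative_sequence(loops, 12)):
    print(t, notes)
```

Once the loop lengths are set, the "composer" does nothing further; the system generates the ever-changing texture on its own.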
I wondered if we could create an app that would let people assign music they have to streets and then mash them. I asked what the difference is between generative music and an instrument? It seemed that generative music works are instruments that respond to the environment, not a performer.
The concept of the Generative Music Workshop - retrospective research through recreation - strikes me as fascinating. They reproduced masterpieces with what they had at hand. They reproduced Pendulum Music by Steve Reich at a Make Tokyo event.
Mobile computing might have a role to play.
Historia: Filling the gap of time and space
Naoko Tosa presented on icons and an AR application that lets users play with historic icons. Place becomes a canvas.
Their app was used at the Venice Biennale where players could add iconography to what they saw on their phones.
Imagining Historic Fashion: Digital Tools for the Examination of Historic Dress
Kathi Martin and Hyeong-Seok Ko presented on a collaboration around presenting historical dress information. Martin started presenting about Drexel and their Drexel Historic Costume Collection.
Traditional displays make it difficult to get close. The Drexel Digital Museum project has developed a web site where you can get close. They have some interesting interface ideas - a scrolling collection that is like a fashion runway. They have zooming that lets you get right up to detail and interactive spots where you can see what is under a jacket, for example. They have rotation of dresses, curator notes and descriptive notes.
While photography has been used by many museums, the challenge is to give a sense of exhibition space. She showed a QTVR (QuickTime Virtual Reality) walkthrough of an exhibition. The QTVR doesn't however show what it is like for fashion on a moving body. It used to be normal to display costumes on live models.
Hyeong-Seok Ko then discussed software they are developing called DC Suite (Digital Clothing Suite). This software can simulate fabric to create 3D models and animations - they can simulate the flex of fabric, the pleating, the transparency. I never thought of it, but someone interested in fashion would want an accurate representation of how clothing falls and moves, and of the stitching details.
DC Suite can also be used to design clothing. The system can show stitching, mesh materials, and handle complicated ribbons, stains, and pleats. It is a suite of tools that can be used in the fashion industry and for historical work. You can model bodies (for example, you can take measurements from a historical dress and create a model of the body of the original person for whom the dress was cut).
This talk opened a whole world of issues for me.
Towards Preserving Indigenous Oral Stories using Tangible Objects
Andrew Smith talked about a prototyping design project that imagined how to develop for the indigenous people of Africa who have oral storytelling traditions. The project tries to augment this storytelling tradition. History and culture are captured through oral stories, dances and music. The target group is the Ba Ntwane(?), the people of Ntwane, a community that also has a rich beading tradition.
How can they preserve stories in beads? How can they connect technology to the beads? They imagined a wooden box platform that would work with beads with RFID tags. When a bead is placed on the platform a story is recorded or played back. The system hasn't been implemented yet, but they evaluated the design.
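Since the system was only a design, here is a hypothetical sketch of the bead-and-box interaction described above; the tag IDs and the record/playback behaviour are my assumptions, not the authors' implementation:

```python
class StoryBox:
    """Minimal sketch of the bead-and-box idea: placing a tagged bead
    on the platform either records a new story for that bead or plays
    back the story already associated with it.  Tag IDs and payloads
    are placeholders -- the real system was a design prototype only."""

    def __init__(self):
        self.stories = {}  # RFID tag id -> recorded story

    def place_bead(self, tag_id, record=None):
        if tag_id not in self.stories:
            if record is None:
                return "no story yet -- hold the bead to record"
            self.stories[tag_id] = record   # record a new story
            return "recorded"
        return self.stories[tag_id]         # play back the existing story

box = StoryBox()
box.place_bead("bead-01", record="Origin story of the Ntwane beadwork")
print(box.place_bead("bead-01"))  # plays back the recorded story
```

The mapping from bead to story is the whole interface: the digital complexity stays hidden inside the box, which is exactly what raises the ethical questions discussed below.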
One of their approaches is to match simple tech with complex tech. They hide the complexity of the digital system inside the box and bead system. This raises ethical questions about hiding and showing technology. Will the users understand the implications of recording their stories in this system (which then puts the stories in the cloud)?
Stories are more than oral utterance. There are also gestures and the system doesn't gather those. It also fixes the stories, which may be a problem.
Towards a Dialogic Archive: Canadian Copyright Law, Digital Archives and Fair Dealing
David Meurer, from York U., gave a paper about using the Fair Dealing provisions in Canadian law. He gave examples of difficult copyright cases, like Christian Bök reading at a festival from his published and unpublished poetry. Does the festival have rights? Do the publishers? Does Bök?
The rights may be complex, but how many people would really get worked up about them? These festivals have lots of archival value, but little resale value. He is developing Drupal plug-ins that make it easy to track rights and to document rights information. They use "fair dealing exceptions in Canadian copyright law as a set of guidelines", which struck me as an original approach.
Friday evening we had a lovely banquet in a traditional
Day 2, Oct. 22nd
Cultural Data Sculpting: Immersive Visualization
Sarah Kenderdine gave a keynote where she talked about projects at ALiVE (Applied Laboratory for Interactive Visualization and Embodiment) at the City University of Hong Kong, which is led by Jeffrey Shaw. They are working with large-screen displays for immersive cultural heritage installations or expanded cinema. They have a number of different surround displays suitable for museums or exhibitions like trade shows. They then work with artists to develop interactive or passive cultural installations.
The demand for creative interaction by a participatory generation has challenged the passive museum format. ALiVE has developed various paradigms for more engaged exhibits. T_Visionarium works with video archives to let people explore large datasets of video clips in a surround environment.
She then talked about new cultural media work they have developed about the Hampi site in India. This has traveled the world and will now return to India. These use the same surround visual spaces but have created new media works that draw on the heritage. These works explore new media modalities. She made a comment about not wanting to use the iPad as input in these spaces as it isolates people if they all have their own interface.
One thing that is important to Kenderdine is full-scale exhibits so people can get a sense of the real scale of cultural heritage.
She also talked about "situated theatre" projects where they have created original works for these immersive spaces that are not necessarily based on cultural heritage.
I asked her about cycloramas and similar panoramic technologies of the Victorian age. She answered that there is a connection both in her research and in the popular imagination. She was very interesting on the body and how the cyclorama may have been superseded by the roller-coaster (not cinema), which provided a more visceral physical experience.
A Serious Game and Artificial Agents to Support Intercultural Participatory Management of Protected Areas
This paper was about the SimParc application, which uses a game model to help stakeholders negotiate issues around ecological parks in Brazil. There are two main strategies for managing eco-parks. One is technocratic (scientists recommend to others) and the other is participatory (where proposals come from experts AND social actors).
The serious game they developed is a role-playing game, but it has artificial agents to provide advice to players. The learning case was the National Tijuca Park. The roles include park manager, tourist industry representative, and representative of communities. The game cycle has some preliminary steps and then players can make usage proposals that go through learning stages (presentation, decision, outcomes). New proposals can be made to counter earlier ones and to negotiate things.
One question I have about these role-playing games is why bother with the computer? We have had such games for a while, and often they get prototyped on paper anyway. His answer was that (a) the computer can simulate the world quickly, and (b) one can record what is happening.
He talked about the use of rhetorical markers and artificial agents. The park manager is such an agent. This allows players to understand how the decisions are made by the manager.
These interpretative tools raise questions both for the users and developers. To make a game work you typically have to simplify which then may create issues about the validation of process or value of simulation.
The Use of Labanotation for Choreographing a Noh-play
I came in at the end of a presentation about a neat interface that lets people use Labanotation to describe Noh dance. This can be used in training and in preserving performances. They have an XML format for storing the dance sequences.
Phylogenetic Approach for Estimating Noh Archetypes
Yoshimi Iwata presented an approach that she admitted didn't work. It is good to hear admissions of failure. She started by describing the history of Noh.
Then she moved to the purpose of her research. She first collected data, then built a database with which to do preliminary analysis. This preliminary analysis will then lead to full analysis. She has applied phylogenetic techniques to the data.
She tried a number of different text mining methods on her texts including NCD. She seemed to get interesting clusters, but wasn't able to get the authorship information she seemed to want.
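If NCD here is Normalized Compression Distance, the usual expansion in compression-based text clustering, it can be sketched with any off-the-shelf compressor. The sample strings below are invented; the idea is that two texts sharing structure compress better together than apart:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: values near 0 mean the two
    texts share a lot of structure, values near 1 mean they don't.
    NCD(x,y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = ("shite enters slowly along the hashigakari " * 20).encode()
b = ("shite enters slowly along the hashigakari " * 18 +
     "waki takes his seat by the pillar " * 2).encode()
c = ("a completely unrelated modern news article text " * 20).encode()
print(ncd(a, b))  # smaller: the two Noh-like texts overlap heavily
print(ncd(a, c))  # larger: little shared structure
```

A matrix of pairwise NCD values is what a phylogenetic clustering algorithm would then take as input, which fits the clusters she reported even if authorship didn't fall out.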
Multi-Device Delivery of Research Results: Case Study of Ningbo Project
The Ningbo project is bringing together electronic texts and making them available through multiple devices, including smartphones. They can mash their texts together with Google Maps to provide an interface for "common users". The original project brought together 400 texts around the town of Ningbo in China. They have now built a TEI reader for smartphones that can use GPS to find documents by location (if you are in Ningbo).
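The location lookup could work along these lines; the haversine distance is the standard great-circle formula, but the document entries, coordinates, and radius are invented for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def documents_near(position, documents, radius_km=1.0):
    """Return titles of texts whose recorded location lies within
    radius_km of the reader's GPS position, nearest first."""
    lat, lon = position
    hits = [(haversine_km(lat, lon, d["lat"], d["lon"]), d["title"])
            for d in documents]
    return [title for dist, title in sorted(hits) if dist <= radius_km]

# Hypothetical entries -- coordinates and titles are illustrative only.
docs = [
    {"title": "Tianyi Pavilion gazetteer", "lat": 29.8683, "lon": 121.5440},
    {"title": "Moon Lake temple record",   "lat": 29.8700, "lon": 121.5500},
    {"title": "Hangzhou chronicle",        "lat": 30.2741, "lon": 120.1551},
]
print(documents_near((29.8690, 121.5450), docs))
```

The phone's GPS fix becomes the query, so standing in Ningbo surfaces the Ningbo texts and filters out everything else.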
Term Extraction from Japanese Ancient Writings Using Probability of Character N-grams
Fuminori Kimura presented a paper that dealt with the issue of tokenizing Japanese. There are no word delimiters to use for extracting terms and words. Therefore they extract character N-grams using the probability of each gram. They estimate the probability of a character string (N-gram) showing up; if the N-gram occurs more often than expected, it becomes a candidate term. They then evaluated the technique and the mistakes it makes.
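My notes don't capture the paper's exact probability model, so this sketch uses the generic version of the idea: a character bigram becomes a term candidate when it occurs more often than its characters' independent probabilities would predict (a pointwise-mutual-information style test):

```python
from collections import Counter
from math import log

def extract_terms(text, n=2, min_count=2):
    """Sketch of character n-gram term extraction: an n-gram is a term
    candidate when its observed probability exceeds the product of its
    characters' individual probabilities, i.e. it occurs more often
    than chance.  (The paper's actual model may differ.)"""
    chars = Counter(text)
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total_c = sum(chars.values())
    total_g = sum(grams.values())
    candidates = {}
    for g, count in grams.items():
        if count < min_count:
            continue
        p_gram = count / total_g
        p_indep = 1.0
        for ch in g:
            p_indep *= chars[ch] / total_c
        if p_gram > p_indep:                       # more often than chance
            candidates[g] = log(p_gram / p_indep)  # PMI-style score
    return sorted(candidates, key=candidates.get, reverse=True)

text = "源氏物語は物語の中の物語である。源氏は光源氏のことである。"
print(extract_terms(text)[:5])  # cohesive pairs like 物語 and 源氏 rank high
```

No dictionary or word delimiters are needed, which is the point: candidate terms emerge purely from the character statistics.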
Visual Recommendations from Japanese Historical Diary
Alejandro Toledo talked about different ways of visualizing data that they have developed for a historical diary. One involved a stacked graph visualization of names as mentioned in the diary. They implemented some interaction controls so that they could control the minimum and maximum frequencies; or filter by name fragments. Neat idea for visualizing names. The visualization can be overwhelming so they have implemented a recommendation system.
Realizing bilingual and parallel access to Ukiyo-e databases in the world
Biligsaikhan Batjargal presented a project to make Japanese databases available to non-Japanese speakers. Ukiyo-e is the wood-block print art of the Edo period. It is of interest to many who can't use Japanese, so they are developing a federated search system that can also translate queries. They use dictionaries as one way of assisting English users.
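Dictionary-based query translation can be sketched like this; the mini English-Japanese dictionary and the OR-expansion strategy are illustrative assumptions on my part, not the project's actual pipeline:

```python
def translate_query(query, dictionary):
    """Sketch of dictionary-based query translation for federated
    search: each English word with a dictionary entry is replaced by
    all of its translations (OR-ed together, to cover synonyms);
    unknown words pass through unchanged."""
    terms = []
    for word in query.lower().split():
        translations = dictionary.get(word)
        if translations:
            terms.append("(" + " OR ".join(translations) + ")")
        else:
            terms.append(word)
    return " ".join(terms)

# Tiny illustrative English -> Japanese dictionary.
en_ja = {
    "actor": ["役者", "俳優"],
    "kabuki": ["歌舞伎"],
    "print": ["版画"],
}
print(translate_query("kabuki actor print", en_ja))
# → (歌舞伎) (役者 OR 俳優) (版画)
```

The translated query can then be fanned out to each Japanese database in the federation, with results merged for the English-speaking user.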
A Digital Archive of the Fashion, Dress and Behavior from Meiji to Early Showa Periods (1868-1945) in Japan
Haruko Takahashi presented on a fashion database from a period when both traditional and Western dress were being worn. There are few databases; an exception is the Minpaku Costume Database. They are drawing on illustrations from serial novels in 20 Japanese newspapers. They are interested in the acculturation of fashion, dress and behaviour from 1868 to 1945. Subjects include dress reform, opinions on the body and the kimono. She showed an interesting timeline tool that had multiple streams. It was also a vertical timeline, which may be how the Japanese visualize time.
Stroly: Historic and Illustrated Maps
Harry Vermeulen presented about Stroly. They have an iOS app that lets one use historic maps on smartphones with GPS. He showed examples of bird's-eye maps of different places in Japan, which can't easily be fitted to a flat map. They can add pins with links to the maps. They are selling their authoring system, but they have an open system for people working in the tsunami area.
They have a web-based editor where you can upload maps that you then fit to points in the world using Google maps. That creates the mapping between maps. Then you can add metadata - titles, keywords, landmarks and then add things like pictures.
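Fitting an uploaded map to points in the world can be sketched, in its very simplest form, as a linear fit per axis through two control points. Real historic and bird's-eye maps need the non-linear warping Stroly presumably applies, and all coordinates here are invented:

```python
def fit_axis(p1, p2):
    """Linear map value = a*(pixel - x1) + v1 through two control
    points given as (pixel, value) pairs."""
    (x1, v1), (x2, v2) = p1, p2
    a = (v2 - v1) / (x2 - x1)
    return lambda x: a * (x - x1) + v1

def georeference(cp1, cp2):
    """Build a pixel -> (lat, lon) function from two control points,
    each given as ((px, py), (lat, lon)).  This axis-aligned linear
    fit is only the simplest sketch of the idea."""
    (pix1, geo1), (pix2, geo2) = cp1, cp2
    lon_of = fit_axis((pix1[0], geo1[1]), (pix2[0], geo2[1]))  # x -> lon
    lat_of = fit_axis((pix1[1], geo1[0]), (pix2[1], geo2[0]))  # y -> lat
    return lambda px, py: (lat_of(py), lon_of(px))

# Hypothetical control points on a scanned map (pixels -> lat/lon).
to_geo = georeference(((100, 100), (35.02, 135.74)),
                      ((900, 1300), (34.98, 135.78)))
print(to_geo(500, 700))  # midpoint between the two control points
```

With the mapping established, a GPS position can be inverted back onto the scanned map, which is what lets the blue dot move across a historic map.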
They then did an evaluation, trying the application in Kyoto taxis.
They didn't present on this, but they also have a cool app called CamCat that captures images with GPS and tilt data.
Study Support and Integration of Cultural Information Resources with Linked Data
Tetsuro Kamura made the case for linked data.
Constructing Situated Learning Platform for Japanese Language and Culture in 3D Metaverse
Michiru Tamai presented a project I had heard about from Dr. Inaba. They are using an interactive metaverse to publish their results. They have a Japanese garden, shrine, Noh stage and virtual museum in Second Life. They use motion capture to create moving avatars for the virtual spaces.
They did an analysis of what subjects in Japanese arts and culture foreign students want to learn. They found, among other things, that students want to understand a Shinto shrine, so they created a model in Second Life. They work with the concept of "situated learning", a model based on interaction between old-timers and newcomers in a community of practice. The idea is to use the metaverse to show people how to do ablutions, offerings, praying, and drawing sacred lots. A Japanese participant (old-timer) and a foreign student (newcomer) interact in front of the computer about the space. They don't meet in the space (that might come later); they use the space to discuss and re-enact a visit to the shrine.
In interviews they did afterwards, they found that people felt they learned more and that it was easier to teach using the virtual space. Then they did a second experiment where the foreign students had to explain to the Japanese participants. These types of interactions force people to learn about their own culture.
They concluded that "the metaverse environment can be a mediational means for situated learning of traditional Japanese culture." They want to try other cultural subjects.
Diggable Data, Scalable Reading, and New Humanities Scholarship
Seth Denbo started by pointing out how the digital humanities in the West is about text, while most of the papers at this conference are about media. Text is just not as important in Japan. Denbo talked about technologies we have forgotten, like microfilm (remember the Memex). Microfilm was going to put a "man of letters in every town" (see Binkley, New Tools for Men of Letters).
Seth then turned to the issue of large-scale analysis. He mentioned Moretti and how he draws on large-scale historical approaches. Abstraction removes individual authors and works to look at trends.
He then mentioned the Republic of Letters project and Culturomics. He showed how the Bamboo project that he is leading at Maryland grows out of these. The Republic of Letters is only metadata; Culturomics doesn't show you the local and seems to be based on measurement, as if culture could be quantified.
He talked about the new Bamboo, which sounds a lot like the MONK project. It also has features similar to CWRC, where they want to bring together curation and analysis. It would be nice to be able to drill down and see the full text.
At the end the organizers summarized the conference and asked us what we thought. I liked the mixture of media arts, digital humanities, and language technology research. Too often the media arts are kept separate.
|Page last modified on October 24, 2011, at 02:08 AM - Powered by PmWiki|