SDH/SEMI 2011 Conference

These are my notes on the SDH/SEMI 2011 conference in Fredericton, New Brunswick. These notes are being written as I listen to the talks so they are raw, full of typos, and incomplete. See the #sdhsemi2011 tweets too.

Jon Saklofske: Digitizing Disneyland

Jon's talk was a joint talk with the Canadian Game Studies Association. Jon started by talking about Kubla Khan (the poem by Coleridge). The poem is unfinished - the dream forgotten. He connected this to how often we don't finish games.

He then argued that Disney has decades of experience in virtual world creation, and that we should learn from it. We can use the half-century of imagineering to conceptualize virtual worlds. We can also learn from their failures. Jon wants to go beyond a simple parallelism.

He then covered some basic terms like "setting." He talked about how Disneyland reversed the simulation to embody the virtual in the real. Other related terms are "virtual reality", "virtual world", and "hyper-reality". He quotes Cline to the effect that VR is an interface through multiple sensory modalities. Can we describe Disneyland as VR? Disneyland seems real, but you are supposed to know that it is constructed.

When talking about virtual worlds he talked about presence. Presence gives us a sense that we are "social actors". Does Disneyland give us that sense of presence? Not really. We can't really affect the land.

Is Disneyland a hyper-reality? Eco defined hyper-reality as an "authentic fake."

VR, virtual world, and hyper-reality are branches from Disneyland, not roots. Our sense of what games and virtual worlds could be has been formed by Disneyland, amusement parks, and theme parks. How can we trace the evolution of theme parks to the virtual worlds of games? Theme parks project other-worldliness, they affect "showing", they control perspective and gaze, they affect "doing" by structuring new modes of being, and they affect "knowing".

As important as Disneyland and other theme parks are, is Jon right that the aesthetics of these parks were the primary influence on virtual worlds? What else influenced the worlds of games? LOTR? Three important worlds in games that Disney doesn't do are the world of war, the aesthetics of the Lord of the Rings fantasy world, and the worlds of sports.

Jon then talked about gamification of Disneyland and the Disneyfication of Disney Universe.

Jon ends with a non-Disney property, the Wizard of Oz. Dorothy unveils the wizard behind the curtain. She returns home to appreciate Kansas differently. Disney is making a prequel to the Wizard. The Wizard of Oz could chronicle how by traveling through the virtual we return with a new appreciation of the real.

But what do we forget of the dream?

Serious Games Research

I introduced a panel on serious games research. I argued that games research involves creative practices (creating games) as part of research. To do this we have to ask how games are designed, how they are assessed, and how new physical games/toys can now be developed.

Shannon Lucky: Industry/University Collaborations

Shannon talked about the content analysis that we have done on the interviews we did with game designers in Vancouver, Toronto and Montreal. See the PDF Computer Games and Canada's Digital Economy for the full report. Shannon talked about opportunities and barriers to university/industry collaboration that came out in the interviews. Some of the top barriers are:

  • Intellectual property barriers - industry finds our approach to IP doesn't work for them.
  • Time-line differences - we have a different pace in the university. We can't drop everything for a project.
  • Lack of knowledge in the university - industry designers don't think we know enough about the way games are really designed.
  • Differences in culture - in general there is just too great a difference in our cultures.

Some of the opportunities include:

  • Access to talented students
  • Access to domain specialists
  • Game design archive
  • Internships

Joyce Yu: Post-Secondary Programmes in Gaming

Joyce talked about her survey of game design programmes across Canada. The humanities and social sciences are just beginning to weave in courses related to game design. College diplomas are more common. Some skills that industry folk told us they are looking for are:

  • Team skills
  • Interdisciplinary breadth
  • Understanding of game development cycle
  • Project management

There are three major ways that game design gets taught:

  • Courses like Compute 250 at the University of Alberta - these courses are part of existing programmes
  • Interdisciplinary programmes like the Masters of Digital Media
  • Technical college certificates and diplomas

Michael Burden: CatHETR

Michael introduced a first-person game built with the Unity game engine. The game puts you in the position of a resident following a doctor around. It introduces ethical and privacy issues like whether to share patient information in front of others.

Michael then talked about an assessment done with health and medical students. Participants sometimes had trouble with the controls. There were interesting differences between how participants responded to the situations. Participants generally found the length of the game (5 minutes) just about right. Games don't need to be long to introduce ideas. This type of game seems to work well for introducing situations for discussion.

Calen Henry: Campus Mysteries

Calen talked about augmented reality games research. He started with a game that we have developed for Fort Edmonton Park called Bygone Pursuits. He then introduced the ARG platform fAR-Play that was developed by our colleagues in computing science. He talked about the features and showed the editing environment.

Calen then talked about the first game we ran, Campus Mysteries. This game had participants chasing a ghost around the U of Alberta campus. It was played by summer camp participants and assessed. A paper version of the game was faster and easier than the ARG version, but participants found using the smartphone more fun.

Garry Wong: AXCase

Garry talked about the AXCase project. This builds on the Monome and the Arduinome projects. The Monome is a neat MIDI music controller/toy that has been open sourced. The Arduinome is a way of building your own Monome using the open specifications. Many people who build Arduinomes use tupperware and other enclosures to hold their button interface. The AXCase group designed a case whose parts can be cut by Ponoko. Like the other projects they open sourced their Ponoko designs for others.

Some of the applications of the device include using it as a simple tactile game platform or using it as a music controller as the Monome is.

Garry ended by asking about the place of making in the humanities. How could fabrication be part of our research practice? Garry quoted Bill Turkel on fabrication in the humanities.

Roundtable on Digital Literacy

Richard Cunningham presented Phase 2 of his SSHRC funded project Developing Digital Literacy. The project is about cognition and literacy. He talked about "working memory". He started with a history of views about the brain and neuroplasticity. "We see with our brains, not our eyes."

Richard talked about "cognitive overload", something we think of as a common experience when first encountering new media. Avoiding cognitive overload requires a text that offers the right choices at the right time in the right proportion.

Richard then turned to object development. The reader is evolving, the media we read are changing, and the content is changing. Some, like Carr, are arguing that new media are making us stupid (or at least that they are changing our capacity to attend to extended narratives and arguments). Neuroplasticity suggests that our brains change to handle changing environments.

Sonja Valley presented on a study that looked at how people understand multimedia information. Research has found that people learn better when presented with multiple media information (words and images and so on.) Likewise interactivity is supposed to enhance learning, but may not. Complex interactive materials may lead to overload. The human working memory has a limited capacity so too much media can overload the memory.

To run the study they developed a matrix of 6 different combinations of media and interactivity. Hundreds of students get different interfaces with more or less control, and they get different tasks. They are using a tool (Morae usability software) that lets them record all interactions and facial expressions in a synchronized fashion. The take-home of the experiment is that the animation (without interactivity) worked best.
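The 6-cell design described above can be sketched as a crossing of media type with interactivity. The specific level names below are assumptions, since the talk did not enumerate them; only the count of 6 combinations comes from the study.

```python
# Hypothetical sketch of a media-by-interactivity condition matrix.
# The level names are assumptions; the talk only says there were
# 6 combinations of media and interactivity.
import itertools
import random

MEDIA = ["text", "text+images", "animation"]   # assumed 3 media levels
INTERACTIVITY = ["static", "interactive"]      # assumed 2 levels

CONDITIONS = list(itertools.product(MEDIA, INTERACTIVITY))  # 3 x 2 = 6 cells

def assign_conditions(participants, seed=0):
    """Shuffle participants, then deal them round-robin into the 6 cells."""
    rng = random.Random(seed)
    order = list(participants)
    rng.shuffle(order)
    return {p: CONDITIONS[i % len(CONDITIONS)] for i, p in enumerate(order)}
```

With a participant count that is a multiple of six, round-robin dealing gives every cell the same number of students.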

Ann Wilkings talked about using the results to develop protocols and best practices for developing new media materials. The idea is to avoid overload. The studies are being done in the Acadia Digital Culture Observatory, which she described. They have an observation room so they can watch participants.

She then moved on to the task of developing a new media version of the Arte of Navigation. The work includes paper models that the reader has to assemble. They used the instructions on how to assemble a nocturnal and how to use it for another study. An interactive version of the Arte of Navigation will be used for a second round of tests.

Corpus Interfaces

I was part of another panel on Corpus Interfaces.

Stan Ruecker: Introduction

Stan introduced the INKE project and the papers from the Interface Design group.

Geoffrey Rockwell: Consulting the Corpus

I presented on a typology we developed to understand interface features across print and electronic forms. Some of the types of interface features we can compare include:

  • Entries and Exits
  • Physical Organization
  • Paging Features
  • Record Design
  • Type Features
  • Cultural Practices

Brent Nelson: The Table of Contents

Brent talked in detail about the TOC. He argued that the TOC has 4 functions:

  1. Summation
  2. Location
  3. Visualization
  4. Conceptualization

He argued that TOCs often bleed out into other parts of the text. He also talked about how changes in design can lead to changes in how we conceptualize a work. A TOC can give people a view of the whole that could change their sense of the whole and its parts. He showed different types of TOCs including tabs that people can add to a bible to mark the parts.

He went into detail on Francis Bacon's The Advancement and Proficiency of Learning. The first representation of the whole comes 21 pages in and is a mapping of the whole project, of which only the first volume was ever written. There are 100 pages of preparatory material. Much of this consists of different navigation aids for the whole (and eventually unfinished) project. There is a TOC and then an elaboration and summary of the argument. Then there is a general argument of the first (published) part, which takes the form of an outline without page numbers. And then there are even longer presentations of the arguments of the first part. Bacon drills down in this extensive and elaborate preparatory section.

That's not all. He then has a hierarchical chart of the parts showing his mental map. Finally there are a number of indexes.

Brent concluded with a TOC from Wired that uses visualization with visual puzzle pieces for each section. The size of each piece is indicative of the extent of each section. Then there is a second table that cross references conceptualizations against the articles. Paradoxically Wired Online isn't as interesting. Wired is using web interface features like sized tiles in print for an audience that is comfortable with visualizations.

Daniel Sondheim: Corpora from the Page to the Screen

Daniel compared print and electronic corpora by looking at three common types of corpora:

  • Linguistic
  • Literary
  • Artefactual

Daniel compared print and electronic examples of each of these. Of particular interest were his comparisons of corpora of artifacts (inscriptions, coins, Greek vases, and so on). He compared a print and an electronic coin corpus. In the electronic version you get multiple views on the same material.

Daniel concluded by pointing out how electronic corpora introduced automated search, replacing narrative design. There is a shift from a narrative to a random-access database. The narrative is added back to the database, as in cases of tours through a database. This raises questions about the place of narrative in consultation.

Mihaela Ilovan: Diachronic View of Digital Collections

Mihaela talked about the evolution of interfaces for particular collections. She argued for studying the evolution of interfaces over time. She showed an animation of the different front pages of Perseus from the CD-ROM interface to the present. She also showed an animation of Gutenberg's front pages. We chose those two because they are projects that have been around for a while.

One problem we encountered studying the evolution of interfaces is that there is little published by the designers as to what they were thinking. It is also hard to find snapshots of what a web site looked like 10 years ago, despite the Wayback Machine.

Mihaela talked about how one needs to think about the technology of the time. The interface often changes as technologies change and offer more (or fewer) affordances. Changing screen resolutions and Internet bandwidth made it possible for designers to do different things.

Another important consideration is the user. She quoted Jeff Raskin to the effect that, as far as the customer is concerned, the interface is the product. Perseus is an example of a corpus whose anticipated user has changed from students of classics to a broader user.

The third factor that influences the way an interface is designed is the discourse of the publishers/authors. How do the creators represent themselves, and how does that influence the interface? Perseus goes from presenting itself as being for students of the classical world to presenting itself as a general purpose digital library.

Mihaela concluded by talking about how we are trying to show that factors like technology, users, and discourse affect interface.

Stan Ruecker: Corpora Interface Prototypes

Stan then showed a bunch of interface prototypes that try to show what we could do. He showed the TextTiles browser. Then he showed the Structured Surfaces that we are building over JiTR. Finally he showed the Dynamic Table of Contents where the reader can use the XML to change the table of contents.
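The idea behind a Dynamic Table of Contents, deriving the TOC from the XML structure so the reader can re-shape it, can be sketched roughly as follows. This is my illustration, not the prototype's code, and the element names (div, head) are assumptions loosely modeled on TEI conventions:

```python
# Hypothetical sketch: derive table-of-contents entries from XML structure.
# Element names <div>/<head> are assumptions, not the prototype's schema.
import xml.etree.ElementTree as ET

SAMPLE = """<text>
  <div><head>Chapter 1</head>
    <div><head>Section 1.1</head></div>
  </div>
  <div><head>Chapter 2</head></div>
</text>"""

def toc(element, depth=0):
    """Return (depth, title) pairs for every <div> that has a <head>."""
    entries = []
    for div in element.findall("div"):
        head = div.find("head")
        if head is not None:
            entries.append((depth, head.text))
        entries.extend(toc(div, depth + 1))  # recurse into nested divisions
    return entries
```

Because the entries are computed from the markup rather than stored, a reader interface could filter or re-sort them on the fly, which is the sense in which such a TOC is "dynamic".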

I can't help wondering what an interface that broke the rules would look like. Suppose the navigation was in the center of the screen and the content on the outside, or suppose that the interface changed with every page. This reminds me of http://dontclick.it, which demonstrates an interface that doesn't use clicking.

Day 2 of SDH-SEMI

Megan Sellmer: Crowdsourcing Ukrainian Folklore

Megan talked about a project at the University of Alberta that has built a crowdsourcing application so that volunteers can transcribe and translate audio passages of Ukrainian folklore.

The project is trying to get good transcripts and translations of over 200 hours in the Ukrainian Folklore Sound Recordings project.

Megan discussed the development of the Oxford English Dictionary as an early example of crowdsourcing, though they didn't use the internet back then. Megan mentioned other projects that are using crowdsourcing.

Megan then talked about a content analysis that she ran on crowdsourcing sites to see what made them successful. She looked at about 10 sites and looked at how users log in, how many clicks it takes to start working, motivation, and what they are applying crowdsourcing to.

Megan then showed what the crowdsourcing site we built looks like. We worked hard to make it easy to use, as our anticipated volunteers are often elderly Ukrainian Canadians. Our primary user persona was Elena, an elderly woman. Of course, it turned out that our major volunteers are younger. I wonder if we should change the design to fit the new users. Megan showed the flow from personas to wireframes to functioning site.

The project coined the word "groupsourcing" as we expected that there would be fewer participants because the tasks are more time-consuming and take specialized skills (knowledge of Ukrainian).

Megan ended by talking about motivation. There are two types of motivation: intrinsic (they are motivated by the project itself), and extrinsic (external rewards like fame or money.)

We had an interesting discussion about whether crowdsourcing is worth it. Does the amount of translation done compensate for the programming that went into it? We need to audit the project at a later point to see if crowdsourcing is cost-effective. We agreed that, independent of the work done by volunteers, there is value in the community engagement.

Elizabeth Milewicz: Sounding History: Using Digital Technology to Gather Public Insights into Liberated Africans’ Origins

Liz talked about the African Origins Portal project that has multi-sourced historical sources about the slave trade. The big goal of the project is to get an idea of who the people were who were enslaved in Africa and ended up in the Americas. The data shows how many left which ports, but we don't know where in Africa they came from, what languages they spoke, or what their heritage was. African-Americans want to know where they came from. The data they have is from the Court of Mixed Commissions registers of Africans liberated from slaving vessels during the 19th century. The British patrolled the Atlantic, liberated slave ships, and took people to the Commission. The Commission then recorded data which in some cases is quite detailed on marks that could help identify place of origin. Names can also be used to link to people and places today.

They are also doing research on the ground in Africa asking people about names and naming traditions. They are now trying "citizen science" or "citizen history" to get interpretation of records. They have 2,472 names from modern-day Nigeria. There are 527 languages in Nigeria. Nobody on the team knows that many languages. They need to go broad and involve a broader group. They also have a spelling problem. They have developed a fuzzy search that accounts for different spellings of names. They adapted the Levenshtein Distance algorithm by changing the values of substitutions to account for likely substitutions like QU for KW. They are also crowdsourcing the name matching given the likely names generated by their fuzzy algorithm. There is intrinsic motivation for people to comment on their names.
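The adaptation described, lowering the cost of likely substitutions such as QU for KW, can be sketched as a weighted Levenshtein distance. This is my reconstruction of the general technique, not the project's code; the cost values and the substitution pairs below are illustrative assumptions.

```python
# Reconstruction (not the project's code) of a weighted Levenshtein
# distance in which likely substitutions are cheap. Cost values and
# pairs are illustrative assumptions; only QU/KW comes from the talk.

CHEAP_CHARS = {("c", "k"), ("k", "c")}          # assumed single-letter pairs
CHEAP_DIGRAPHS = {("qu", "kw"), ("kw", "qu")}   # e.g. QU for KW

def weighted_levenshtein(a: str, b: str) -> float:
    m, n = len(a), len(b)
    # dp[i][j] = cheapest cost to turn a[:i] into b[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = float(i)
    for j in range(n + 1):
        dp[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                sub = 0.0
            elif (a[i - 1], b[j - 1]) in CHEAP_CHARS:
                sub = 0.3            # phonetically likely substitution
            else:
                sub = 1.0
            dp[i][j] = min(dp[i - 1][j] + 1.0,      # delete from a
                           dp[i][j - 1] + 1.0,      # insert into a
                           dp[i - 1][j - 1] + sub)  # substitute
            # Treat a likely digraph swap (e.g. QU -> KW) as one cheap edit.
            if i >= 2 and j >= 2 and (a[i - 2:i], b[j - 2:j]) in CHEAP_DIGRAPHS:
                dp[i][j] = min(dp[i][j], dp[i - 2][j - 2] + 0.3)
    return dp[m][n]
```

Names falling within some distance threshold of a recorded spelling would then be offered as candidate matches, which is what the crowdsourced name matching builds on.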

Their ideal user is someone who is a member of the African-American diaspora. They don't want youth, but would prefer their parents who might know more about traditional naming. This raises the same questions the Ukrainian Folklore project has of involving people who are not as computer literate. They force people to log in and participants can only contribute once per record. They then review contributors statistically to see what they are contributing. They have a neat profile page that helps participants confirm contributions.

The programming of this project cost a significant amount which raises the cost-effectiveness question again. Their fuzzy search system might be shareable to help others.

They have added all sorts of social features to encourage people in.

In the conversation afterward we talked about gamifying the portal so people could get points for records matched that were also matched by others. We discussed how they could reflect back contributions in the summary imputations that go back into the record. A fascinating issue that came up is how people can get credit for participation. SUDA Online provides a way of describing a contribution in your CV.

Jon Bath: The Architecture of Architectures of the Book

Jon talked about the INKE subproject ArchBook or Architectures of the Book. This site is documenting the complexity of historical textual forms. This is an output of the Textual Studies team of INKE. ArchBook is online and open-access, but it is also blind reviewed. Being online, it can provide rich multimedia exemplars.

Jon talked about the visual design of the ArchBook. He talked about typographic design traditions and the golden ratio. Print design ideas about the design of the page don't translate well to web pages that can be scaled and so on. See http://inke.ischool.utoronto.ca/archbook/ and change the window size for how his page adapts to the browser.

Traditionally there are two gateways into a collection: tables of contents and indexes. A frontispiece can also serve as a way in. He is trying to create an equivalent for ArchBook.

If I heard Jon right he is encoding articles for the ArchBook in XHTML. It isn't clear why he isn't using TEI-XML.

Ashley Moroz: Viral Analytics

Ashley talked about the eVoyeur embeddable text analysis panels. These can be used in standard content management tools like WordPress and Drupal. See http://hermeneuti.ca/voyeur/embed for more on eVoyeur.

Ashley showed screencasts of using eVoyeur and then discussed the different panels you can get from eVoyeur.

Then Ashley talked about a series of usability interviews that she did to figure out how editors/authors of online information (bloggers, online journal editors, and so on) might use it. Editors wanted the analytical panels to be able to process more than just the page they are on. They wanted the analytics to link to other articles. One of the problems identified was the size of the panel. eVoyeur can scale to any size, but it is the size of the space allocated to tools that is the issue.

eVoyeur is documented on http://hermeneuti.ca/voyeur/embed and code is available as open source.

Brent Nelson: Motivating the Development of New Knowledge Environments: The Illustrative case of the Modern Study Bible

Brent's paper argues that to develop new knowledge environments we need to understand existing ones like the study bible. He started by asking what a knowledge environment is: a book, a desk, a study, implements and tools and an interpretative community. Media scholars like McLuhan argue that the tools we use affect the thinking we come to. A knowledge environment is not neutral. It is also the case that the thinking of a community affects the design of a tool.

New knowledge environments are usually motivated by power, commerce/consumption, and need. He talked about a process by which a knowledge community articulates a need that leads to development and then adoption. He looked at the study bible as an example of development within a knowledge environment. The Thomson chain-linking system links a passage to related passages. Thomson created themes, and then each relevant passage links to the next passage in the theme and back to the first. This is an affordance that anticipates what happens in the digital realm. Brent then showed modern digital analogues to the chain-linking like http://biblos.com. Biblos is a good example of a community-designed set of tools for the study of the bible.

We had a discussion about how communities form and articulate their needs. Brent talked about the importance of an individual agent of change.

Silvia Russell: @

Silvia started by saying that her paper would be entirely different. The paper (as "@" should suggest) is about spatial metaphors and their application to the internet.

She started by talking about metaphors and how they might be apt to the web. Metaphors that are apt model something else.

Space and distance don't seem to be relevant to the internet, which would seem to have no distance. Where is YouTube in relation to Amazon? She then shifted to depth. It is possible to talk about depth - moving deeper into a web site - drilling down. Where spatial metaphors might ring true is the idea that we experience web sites in sequences (of clicks) or paths. Space can be considered as a sequence of moves in a path, such that things are closer or further along the path.
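The idea of distance as path length can be made concrete: if we model pages as a link graph, "closeness" is the number of clicks on the shortest path between them. A minimal sketch, with an invented toy graph (the sites and links are illustrative, not real link data):

```python
# Toy illustration: web "distance" as shortest click-path length.
# The graph below is invented for the example.
from collections import deque

LINKS = {
    "home":     ["youtube", "amazon"],
    "youtube":  ["video", "home"],
    "video":    [],
    "amazon":   ["product", "home"],
    "product":  ["checkout"],
    "checkout": [],
}

def click_distance(graph, start, goal):
    """Breadth-first search: number of clicks on the shortest path."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        page, dist = queue.popleft()
        if page == goal:
            return dist
        for nxt in graph.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable: no path, hence no "distance"
```

Note that this distance is asymmetric (links go one way) and undefined between unlinked pages, which is one way the spatial metaphor diverges from physical space.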

The implication of using spatial language is that it gives us a way to talk about experiences of parts of the web. Ideas of public space and private or friendly space can help us understand how the web is experienced. Spaces can have boundaries and can express ownership. We have primary spaces, like our Facebook pages, that we control.

Cyberspace is not just a metaphor, but expresses a model of what we experience. Where does the model break down? Does it dribble over into how we think about physical space?

Peter Organisciak: Pico Safari

Peter gave a paper about the Pico Safari project that he worked on when he was at the U of Alberta. He joked about the Picos, imaginary characters that can be seen with ARG software on a smartphone.

Peter backtracked and talked about Augmented/Alternate Reality Games and how they are now something that we can affordably create. Alternate reality overlays meaning on physical reality, augmenting it.

Peter talked about tools for creating AARG like Layar.

Constance Crompton: Coding Beyond Categories: Self-identification in Lesbian and Gay Liberation In Canada: A Selected Annotated Chronology, 1964-1975

Constance talked about a project that is digitizing a monograph (Lesbian and Gay Liberation in Canada) which is an annotated chronology about a community whose terms were shifting in the period chronicled. This project is being developed by the CLGA (Canadian Lesbian and Gay Archives) which runs a community archive. Their motto is "Keeping our stories alive." They are collecting representations mostly about Canada. Given the censorship of such materials archiving is important.

The CLGA is volunteer run. It is independent and has not been swallowed by institutions like universities or government. It is a radical archive with few restrictions on access. The problem is that the archive is hard to use, which is why they want to digitize the monograph: it provides a good guide to the collection. The digital edition is being done in TEI-XML. They hope that encoding will make it possible to see connections between people, places, and events. It sounds like they are developing a prosopography. The challenge is adapting the TEI.

She then talked about self-identification and how to tag how people self-identified. The idea is that information could be pulled out using the tagging. The problem is that the TEI and other formalizations enforce categorizations that are hard to apply to such a topic. Gender studies has taught us that identity is not static. Can contextual markup represent more flexible models of identity over time? Identity can also be location specific.

There was an interesting discussion about categorization, tagging and interpretation. You are never showing reality when you encode a document - it is always an interpretation.

Day 3

Vika Zafrin: ESTHR and Digital Cultural Landscapes

Vika presented about automated subject tagging. She mentioned the MeSH indexer that is used in health. ESTHR is an experiment to build a similar subject tagger for humanities literature. ESTHR stands for Evolutionary Subject Tagging in the Humanities.

Vika talked about how research can be hindered by contemporary classification. She gave the example of Fichte who falls between philosophy and religious studies and therefore is hard to classify. There is a self-perpetuating cycle of obscurity. ESTHR is being designed to overcome this problem of works that fall between categories.

Vika walked us through a number of the problems related to categorization:

  1. Limitations of current cataloguing practices that are rooted in the physical book as the fundamental unit. Cataloguing tends to focus on placing the book on a shelf.
  2. Knowledge organization is power. Which categories do you assign? Who designs the ontology?
  3. Subject headings are hard. Even librarians don't understand them. They change over time and for different communities.
  4. We might want to have different classification systems for different communities.
  5. People see knowledge that is enabled by what they know.

Their solutions should enable:

  • Different contexts
  • Serendipity
  • Interdisciplinarity
  • Faceted refocusing (which is different from faceted drilling down)
  • Iterative classification - facilitate self-reflection on classification
  • Flexibility - a guided social tagging approach
  • Weighted system that facilitates
  • Layering

Any classification system will have interstices, but a layered system lets us see the gaps better.

We talked about the politics of classification.

Kirsta Stapelfeldt: The Islandora Digital Humanities Ecosystem

Kirsta presented on the Islandora open source project which provides a digital asset management system built on Fedora.

They are supporting administrative, learning, and research work. Islandora was designed for research but is being used for other types of collections. She gave an example, Island Lives, that allows for community folksonomic tagging. They also have a simple web-based XML tagging environment.

They have an automated workflow that allows you to put TIFFs in and get a digital book out. It does OCR and simple markup and so on.

They have tools for geolocation data so documents can be overlaid on maps or maps on a timeline. Herb samples can be mapped based on the towns they were gathered in. The georeferencing allows them to support smartphones and show what data (pictures, documents) are tied to wherever you are.

Islandora is an ecosystem - it leverages other open source systems (Fedora and Drupal). They add a bunch of objects on top of these. Using Fedora they have lots of objects and RDF. They have data streams for handling information. They use Drupal to create the user interface. From Drupal's perspective, Islandora is just a set of modules.

There is a lot of collaboration around Islandora. They are building a community that contributes. The code is open source, but there is a company that sells services. Discoverygarden.ca is an experiment moving Islandora into the cloud.

Matt Bouchard and Harvey Quamen: The Watson Archive Workflow

Matt and Harvey presented about the EMiC Watson Archive project. Wilfred and Sheila Watson were two of the foremost Canadian modernists. Their archives are at the Universities of Alberta and Toronto. The project, led by Paul Hjartarson, is initially editing their letters. These can help us understand the Watsons and their communities.

They are now developing a workflow.

The issue of silos came up. Why is this project reinventing the wheel? How can they share their knowledge? I tend to think that silos have their purposes. They provide sandboxes where we can be protected from the infinite number of things we should read or people we should communicate with. When you face an infinite amount of reading and communicating, you have no chance to think and work things out for yourself. Sometimes the path is more important than the end, and silos can provide places for recapitulating paths.

Chad Gaffield, who attended this session, asked us all what we thought was different about digital work. He expanded on his question by adding that he gets asked how the digital is changing the humanities and he wants our input on what the "elevator speech" might be. Some of the answers were:

  • Scale: The digital has made possible research on a different scale of evidence. We have collections of thousands of digitized books that can be searched, for example.
  • Formalized Methods: The digital allows us to formalize research methods and implement them on computers. Concording was one of the first research tasks that was automated, but we can now imagine new methods. It is also the case that the act of formalizing methods for implementation teaches us about the limits of methods and triggers discussion of what can be formalized.
  • Careers: Integrating digital humanities training into the humanities has given students a broader range of career opportunities. Students with significant training in digital methods can contribute a unique combination of critical thinking and technical experience to the projects they choose.
  • Interdisciplinarity: The digital humanities brings together different disciplines in order to complete projects. Digital humanists typically work together with librarians, information scientists, interface designers, and computer scientists. This is in addition to the breadth of humanities disciplines that meet in the commons of the digital humanities.
  • Creative and Communicative Practice: The digital humanities is often distinguished by the creation of digital scholarly works. It thus combines the traditional excellence of the humanities in critical approaches with practice based research around creating communicative objects.
  • Playful: The digital humanities is increasingly looking at games and fabrication as forms of digital practice. These can be the site for playful research that both engages play as a subject but also recognizes playful practices in serious research.
  • Community Engagement: The web allows us to break down barriers to public engagement in scholarship. It allows us to share research resources of interest to people directly with them. Crowdsourcing projects can bring the interested public into collaborations that generate new research. Such public engagement allows us to make clear how the humanities is really about what matters to people - their histories, stories, and culture.

I'm sure I missed some of the contributions to this discussion; this is my reconstruction of the interventions.

Margaret Conrad: Saving the Digital Humanities one Web Site at a Time

Marg Conrad was awarded the SDH/SEMI Award for Outstanding Achievement, Computing in the Arts and Humanities so she gave a keynote. She focused on the role of the digital humanities in rescuing the humanities from their many sins. She had much to say about change and how it affects us.

Success of revolutions is predicated on bringing life-affirming values to a period of chaos. Change is not necessarily good. History offers many examples of horrible change.

History offers an abundance of evidence that social justice like web sites is never done. We never achieve our ideals, but we have to try. Social justice is like a bath. If you don't take a regular a bath you will eventually bath. She mentioned Martha Nussbaum's argument that the arts are essential to a just society. The liberal arts are under siege - Conrad documented the crisis in the humanities where business assessment exercises are being applied to funding us. Digital humanities is a fragile new field, but in the end we will all have to be digital humanists. Digital humanities is how we will survive this crisis, but we should not attend only to the technical. We need to attend to the human. We can use the digital to transcend the commodification of the humanities and reach those that need us.

Marg's work was to bring Atlantic got into the story of Canada. In the 18th century Atlantic narratives are important and their is excellent documentary evidence. She was also concerned that women were not left out of the digital humanities. This is what she tried to do with the digital. She talked about how women's participation in ICT has actually dropped in the first world. She discussed quotas. She doesn't endorse them, but she encourages balance. We need to make balance a priority. She was guided by a 60/40 principle of not worrying about gender (either way) unless one fell under 40%.

She also talked about the importance of skills like collaborative work and technical skills in student preparation. We need to see what we can do about getting it woven into the liberal arts if the humanities is to survive.

At the same time we want to make sure that centuries of the humanities not be forgotten in the digital turn. She is part of a project looking at how ordinary folk are interested in history. History matters a lot to ordinary folk. It is in our DNA and in our laws. Everyone believes things based on interpretations of the past. Take scrapbooking - it is a popular hobby that is how many people negotiate and share their histories. How are we interacting with this community as it goes digital?

She talked models for weaving digital humanities into the curriculum. She now recommends the distributed model with courses across programs. One person can make a difference, especially in small universities. It is now time to put shoulder to the wheel to get us over the next hump. What we are doing puts stress on many of the structures we have. She quotes Willard McCarty that the danger is not cuts to funding, but industrialization. We don't want to become the purveyors of digital literacy. Willard called for courage at the DHO meeting this year.

SDH/SEMI can play a role in the transformation of the humanities, but to help we need to:

  • We need to develop a working definition of digital humanities and conduct an audit
  • We need to document the resources available - how to digital humanists get the job done
  • We need to have a running tally of projects
  • We need to marshal the arguments for the relevance of the digital humanities (and the humanities)
  • We need to do a better job trying to change attitudes towards the humanities - to do this we may need a more coordinated response

As humans we make history, but not always in the conditions of our own making. There are cracks in which we can operate. Lets get on with making history.

It was lovely listening to Conrad talk patiently and wisely about things.

SDH/SEMI Annual General Meeting

At the end of the conference we have our AGM. Ray Siemens was a our Federation representative. Next year it will be at Waterloo and the year after at Victoria.

Navigate

PmWiki

edit SideBar

Page last modified on June 03, 2011, at 04:38 PM - Powered by PmWiki

^