
University of Virginia New Horizons Conference

University of Virginia, Monday May 19th - 22nd, 2008. See the web site here.

Note: this is now finished!

I was at the University of Virginia at IATH during 2001-2 (my sabbatical), so this was a return for me. Driving in from the airport I was struck by how temperate, green and lush Charlottesville is (compared to Ontario). The locals don't seem to appreciate their weather - I had regular talks about whether Virginians deserved such nice weather given how they complain. Personally I think Ontario and Virginia should have a state/province swap so the Virginians can come to appreciate their weather and natural beauty.

Generally I was impressed by the quality of this conference. It is mostly for UVA folk by UVA folk, showcasing the best digital work in teaching and research across a university that has been a pioneer. I was struck by how many stakeholders there are at UVA for digital media who are collaborating and supporting projects. All sorts of units from IATH to the Library are supporting leading projects.

Bethany and colleagues deserve credit for a smooth and well organized small conference. They don't get credit for the weather.

Monday, May 19th

The opening keynote was given by Dan Cohen of George Mason and Zotero. He talked about the genesis, stresses of success, and future of Zotero.

One of the key take home messages for me was the importance of openness and standards. They have received a lot of support from the community by having open APIs. Some other points he made:

  • Zotero grew out of lessons from previous projects like Syllabus finder, Web Scrapbook and Scribe.
  • Not having a server-side component (yet) is reassuring to academics - it will work even if they are offline.
  • Having a server-side component (where you store data about/by people) adds all sorts of security issues, especially for a successful project with so many users.

Success breeds problems for academic projects. Users want support. You find you have lots of active stakeholders (like OpenOffice.org, who care if you change things). Most importantly, you need a long-term support model for when the grant runs out. Dan gave us a quick tour through the models they are thinking about.

Perhaps the most intriguing point he made was that we should think of Zotero as a Trojan horse for humanities computing. Zotero does something very useful that a large group of people beyond the humanities need. It has an API so other projects can work with it. Let's take Dan up on this!

Tuesday May 20th

In the morning there was a great tour of IATH projects. With Bernie Frischer in charge there seems (to me) to have been a move towards visual media projects in the broad sense of GIS and 3D rather than text and image projects. Rome Reborn is the best example - it is an amazing 3D model of Rome at about 310 AD based on a scan of the Plastico. The virtual Rome has four levels:

  1. The high-end 3D model with some 6000-7000 buildings that can only be accessed at a special workstation.
  2. A connected Google layer of information so that one can get information about buildings (and links to research) layered over Google Maps. (I note that they are not moving low-rez models to Google Maps.)
  3. An online version, streamed through a commercial solution.
  4. A peer-reviewed 3D model publication (and review process) where models can be reviewed by experts through walk-throughs in the model.

The publication layer sounds intriguing. I'm going to corner Bernie and get more info.

Worthy Martin showed a number of projects including The World of Dante led by Deborah Parker. Later we heard music performed for the site. There are apparently lots of references to music in Dante.

At lunch I attended a discussion about the proposed Digital Humanities Centre led by David Germano. They are trying to create a very different digital humanities organization that is ground up rather than top down. It is meant to complement IATH by supporting breadth and forming a users' group rather than providing leadership through high-end projects.

David Germano presented on the Tibetan Himalayan Digital Library, the old site that is being turned into a *Tibetan Himalayan Library*. The old site was host to hundreds of academic projects. David talked about the problem of the heroic model of what it is to be a humanist, and about a new model that blows up the old pyramid of knowledge. He wants something that allows the *distributed production and dissemination of knowledge*. He is involving all sorts of stakeholders, from photographers to development workers and others who make the "dark matter" of knowledge. They are trying to build a participatory network into Tibet where locals can document their region and have that knowledge stored on the site. The site also tries to foreground the projects, not the THDL. In effect they are becoming a publisher rather than a collection of projects.

The final session was an Arts and Technology Performance Event. It was introduced by Judith Shatin of the Virginia Center for Computer Music. First Jason K. Johnson talked about the Robotic Ecologies lab and his students showed a very cool virtual instrument (or building). Then Lydia Moyer talked about her digital prints and video art. Paul Walker talked about music and Dante and what we know about the music of the time that Dante referenced. His group then sang some of the songs they recorded for The World of Dante.


Wednesday May 21st

William C. MacDonald presented an innovative language teaching approach, "Using Internet-Short Texts in Foreign Language Teaching," that involves having students rewrite authentic texts off the web.

Norm Oliver from the UVA Health System talked about "Race, Poverty, and Prostate Cancer: A Spatial and Multilevel Analysis." He uses GIS to identify patterns of health disparities, in particular the incidence of prostate cancer among African-Americans. He described "the whirling vortex of GIS":

"The question you want to answer" leads to "The data you need to answer that question", which leads to "The data you can get", which leads to "The question you can answer" (which becomes "The question you want to answer" again.)

Is this a hermeneutical circle? I suspect this is true across all sorts of disciplines, including the humanities, where we don't usually talk about data. Oliver also mentioned Tobler's First Law of Geography, which seems like a truth for other things too.

"Everything is related to everything else, but near things are more related than distant things."

So what is "near" in a text?

Silvia Blemker of the School of Engineering gave a presentation on podcasting in the classroom. She started with some theory about how students (digital natives) learn. I can't help wondering if the digital native/digital immigrant juxtaposition isn't getting overused (but that is because I have given the same prequels.) She presented three multimedia design principles (from Mayer's "Cognitive Load Theory"):

  • We have two channels (auditory and visual) - so we should support both with multimedia
  • We have limited capacity - so we need to repeat stuff and use different modalities
  • We use active processing - so we want to let students construct

She also gave six principles of instructional design: Multimedia, Contiguity, Modality, Personalization, Signaling, and Coherence.

Then she showed examples, including a really cool lecture podcast with slides and links to outside resources that can show up in iTunes (or a web site). It looked like something you could do after class. They use ProfCast, though others use Camtasia.

She talked about student-generated content, which promotes active learning but places responsibilities on students. Some neat examples were:

  • Students using Flickr to upload scans of engineering sketches with Flickr annotations. This is a good example of using ready-at-hand tools like Flickr.
  • Students using iPhones to record video of a solution being written and talked about on a white board. Neat low-tech solution.
  • Mashups in wikis
  • Students creating podcasts with GarageBand and iMovie that are short movies about social aspects.
  • "eGuru" (see eGuru for Solid Mechanics) which is a combination of blog and wiki within which all sorts of multimedia can be embedded.

She talked about mining the course information (using tags and Google Analytics) to look at a program and how things are being linked or talked about. The tag cloud for the eGuru for Solid Mechanics, for example, had "external link", "fbd", and "homework" as the major tags. What does that mean?
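The tag-cloud side of this mining is easy to sketch. Here is a minimal example of turning a list of tags into the frequency counts that would drive a cloud; the tag list is invented for illustration (the real eGuru tags would come from the blog/wiki itself, and "fbd" presumably stands for free-body diagram).

```python
from collections import Counter

# Hypothetical tags harvested from course posts; in the real case these
# would be scraped or exported from the eGuru blog/wiki.
tags = ["homework", "fbd", "external link", "fbd", "homework",
        "homework", "external link", "fbd", "exam review"]

# Count how often each tag occurs - the relative counts are what set
# the font sizes in a tag cloud.
counts = Counter(tags)

for tag, n in counts.most_common(3):
    print(tag, n)
```

From counts like these you can ask exactly the question above: if "homework" and "fbd" dominate, what does that say about how students are using the site?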

They now have a large project, Higher Ed 2.0, looking at transforming engineering education along these participatory/social lines. They are studying the effects and mining data.

Glen Bull and Bill Ferster presented on PrimaryAccess. They quoted some interesting statistics about teenage blogging from Pew. PrimaryAccess offers "frictionless access to digital images and materials that enable them to construct compelling personal narratives." The system allows real-time compositing of video for teaching and learning to facilitate inquiry.

Thursday May 22nd

Bob DuCharme gave a talk on semantic web technologies: RDF, RDFa, N3 and OWL. He showed how Digg has implemented RDFa so you can grab metadata. He showed a very nice mashup he was building, Blog Big Picture, that gets the RDF from Calais and builds a picture of all the people and places talked about. He gathers blog entries, sends them to OpenCalais, and then extracts the triples (subject, predicate, object) into a table that allows him to show all the "people".
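To make the triple-to-table step concrete, here is a minimal sketch in Python. It assumes the entity-extraction service (OpenCalais, in Bob's mashup) has already returned its results as (subject, predicate, object) tuples; the triples and the "mentionsPerson"/"mentionsPlace" predicates below are invented for illustration, not the actual OpenCalais vocabulary.

```python
# Invented triples standing in for what an extraction service might
# return for a couple of blog posts.
triples = [
    ("http://example.org/post/1", "mentionsPerson", "Dan Cohen"),
    ("http://example.org/post/1", "mentionsPlace",  "Charlottesville"),
    ("http://example.org/post/2", "mentionsPerson", "Bernie Frischer"),
]

# Group objects by predicate, so the table has one row per predicate
# and we can pull out, say, all the people mentioned.
table = {}
for subject, predicate, obj in triples:
    table.setdefault(predicate, []).append(obj)

print(table["mentionsPerson"])  # → ['Dan Cohen', 'Bernie Frischer']
```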

Other things he talked about were FOAF (Friend of a Friend) and SPARQL.

Bob then talked about the Semantic Web and connecting it to the "linked data movement". The promise is to have all this semantic data and then be able to extract new knowledge from it. For example, if you know that X is the spouse of Y, and if you know that "spouse" is a symmetric relationship, then a computer can figure out that Y is also the spouse of X. The Semantic Web is not about the meaning of words, it is about linking data and extracting new inferences. In this context he talked about OWL (Web Ontology Language) and Protégé, a free open source editor.
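The spouse example is easy to act out in a few lines. This toy version uses plain Python tuples rather than a real OWL reasoner, and the names and the "spouseOf" predicate are invented, but the rule it applies is exactly the symmetric-property inference described above.

```python
# Known facts, as (subject, predicate, object) tuples.
triples = {("X", "spouseOf", "Y")}

# Predicates we have declared symmetric (in OWL this would be an
# owl:SymmetricProperty declaration on the property).
symmetric_predicates = {"spouseOf"}

# For every triple whose predicate is symmetric, add the reversed
# triple - the inference a reasoner would draw automatically.
inferred = set(triples)
for s, p, o in triples:
    if p in symmetric_predicates:
        inferred.add((o, p, s))

print(("Y", "spouseOf", "X") in inferred)  # True
```

A real reasoner does this (and much more, like transitivity and class inference) over millions of triples, which is where the "new knowledge" comes from.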

Bob talked about the issue of "ontologies for the sake of ontologies". There seem to be a lot of people creating ontologies for others to use for new data. Bob felt that metadata should be for existing data to add intelligence. He mentioned Jim Hendler's "two schools of semantic web". One "feeds lots of ontology and minimal data in reasoners", and the other "adds a little ontology to an existing set of data."

Bob pointed to a W3C wiki claim: "LinkedData is to spreadsheets and databases what the Web of hypertext documents is to word processor files." He showed DBpedia, a project that extracts data from Wikipedia pages and other collections.

Scholarship in the age of mass digitization

I was on a closing lunch panel discussion on scholarship in the age of mass digitization. With me were Dan Clancy of the Google Books project and Linda Frueh of the Internet Archive's Open Library project. James Hilton, Vice President and CIO, University of Virginia, was the facilitator. I spoke about the TAPoR project and about the life-cycle of research and how it related to tools and texts. My point was that in different phases in the research process we need different types of tools and, more importantly, we need tools that help make the transition from one phase of research to the next. I was glad to see that the Open Library is really open to TAPoR analysis - they reveal a link to a text file, something Google Books doesn't yet do. Dan Clancy talked about how they are trying to figure out how to provide an API to Google Books that has the power and simplicity of Google Maps. They don't want to enable only scholarly mashups - they want to enable all sorts of mashups.

I have begun to write up my ideas about tools across the research cycle here.




Page last modified on May 27, 2008, at 07:58 AM