Short Guide To Evaluation Of Digital Work

Back to MLA Digital Work Home

Questions, a check list, and how to find an expert

This short guide gathers in one place a collection of questions evaluators can ask about a project, a check list of what to look for in a project, and some ideas about how to find experts. It assumes that evaluators who are assessing digital work for promotion and tenure are:

This is an annotated expansion of Evaluating Digital Work (PDF), which was prepared as a one-page checklist for a presentation to the ADE/ADFL in 2007 (see the blog entry about the event).

Questions | Check List | Finding an Expert

Questions

Some questions to ask about a digital work that is being evaluated:
The most basic question to ask of digital work is whether it is accessible to its audience, be it students (in the case of pedagogical innovation) or users (in the case of a research resource). A work that is hidden and not made available is typically not ready in some fashion. It is normal for digital work to be put up in "beta" or untested form, just as it is normal for digital work to be dynamically updated (as in successive versions of a software tool). Evaluators should ask for the history of a work's online publication and whether it has been made available to the intended audience so that informal commentary might be available.
Digital work is hard to review once it is done and published online, as our peer review mechanisms are typically connected to publication decisions. For this reason competitive funding decisions, like the allocation of a grant, should be considered an alternative form of review. While what is reviewed is not the finished work so much as the project and the track record of the principal investigators, a history of getting grants is a good indication that the candidate is submitting her research potential for review where there is real competition. Candidates preparing for tenure should be encouraged to apply for funding where appropriate.
Given the absence of peer review mechanisms for many types of digital work, candidates should be encouraged to plan for expert consultations, especially when applying for funding. It is common in electronic text projects to bring in consultants to review encoding schemes and technical infrastructure. Such expert consultations should be budgeted into projects to ensure that projects get outside help, but they can also serve as formal, though formative, opinions on the excellence of the work. Evaluators should ask candidates to set up consultations that can help contextualize the work and improve it.
Certain types of online work can be submitted to reputable peer-reviewed online venues. Online journals exist with review mechanisms comparable to print journals, and there are new forms of peer-reviewed venues, like Vectors, that accept submissions of new media work. There are concerns about the longevity of these venues, so candidates should also be encouraged to deposit their work in digital repositories run by libraries.
The best way to tell whether a candidate has been submitting their work for regular review is their record of peer-reviewed conference presentations and invited presentations. Candidates should be encouraged to present their work locally (at departmental or university symposia), nationally (at national society meetings), and internationally (at conferences outside the country organized by international bodies). This is how experts typically share innovative work in a timely fashion, and most conferences will review and accept papers about work in progress where there are interesting research results. Local symposia (what university doesn't have some sort of local series?) are also a good way for evaluators to see how the candidate presents her work to her peers.
It should, however, be recognized that many candidates don't have the funding to travel to international conferences, and we should all, in this time of restraint, be judicious in our air travel. For that reason candidates should seek out local or regional opportunities to present their work wherever possible.
There are peer-reviewed journals that will accept papers reporting the new knowledge gained from digital projects, whether pedagogical scholarship or new media work. Further, there are venues for making project reports available online so that interested parties can read about the academic context of a project. Such reports show a willingness to present the results and context of a project to the community for comment. They also give evaluators something to read in order to understand the significance of a project.
The web is about connections, and that is what Google ranks when it presents a ranked list of search results. An online project that is hidden is one that users are not trying. One indication of how a digital work participates in the conversation of the humanities is how it links to other projects and how, in turn, it is described and linked to by others. With the advent of blogging it should be possible to find bloggers who have commented on a project and linked to it. While blog entries are not typically careful reviews, they are a sign of interest in the professional community.
A scholarly pedagogical project is one that claims to have advanced our knowledge of how to teach or learn. Such claims can be tested, and there is a wealth of evaluation techniques, including dialogical ones, that are recognizably in the traditions of humanities interpretation. Further, most universities have teaching and learning units that can be asked to help advise on (or even run) assessments of pedagogical innovations, from student surveys to focus groups. While these assessments are typically formative (designed to help improve rather than critically review), the simple existence of an assessment plan is a sign that the candidate is serious about asking whether their digital pedagogical innovation really adds to our knowledge. Where assessments haven't taken place, evaluators can, in consultation with the candidate, develop an assessment plan that will return useful evidence for the stakeholders. Evaluators should not look only for enthusiastic and positive results; even negative results (as in "this doesn't help students learn X") are an advance in knowledge. A well-designed assessment plan that results in new knowledge that is accessible and really helps others is scholarship, whether or not the pedagogical innovation is demonstrated to have the intended effect.
That said, there are forms of pedagogical innovation, especially the development of tools used by instructors to create learning objects, that cannot be assessed in terms of learning objectives but rather in terms of their usability by the instructor community in meeting their learning objectives. In these cases the assessment plan would more closely resemble usability design and testing. Have the developers worked closely with the target audience to develop something they can use easily in their teaching?
Digital scholarly projects should deposit their data for archiving when they are finished. Few projects do this, because a) well-managed repositories are just emerging, and b) many projects, even moribund ones, dream of the next phase; nonetheless, we can expect projects to plan for deposit when they think they are finished. The reason for following guidelines for scholarly encoding or digitization is so that the editorial and multimedia work can be reused by other projects, but without the work being documented and deposited we risk losing a generation of such work. Further, digital scholars should be encouraged to deposit their work so they can move on to new projects, as one of the dangers of digital work is being buried in the maintenance of previous projects.
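To make the idea of deposit concrete, here is a minimal sketch of packaging a finished project's files for repository deposit using the BagIt packaging convention and the third-party bagit-python library. This is an illustration, not a tool this guide prescribes; the directory name and metadata values below are hypothetical.

# A minimal sketch of packaging project data for repository deposit using
# the BagIt convention. Assumes the third-party "bagit" library
# (bagit-python) is installed; the directory and metadata are hypothetical.
import bagit

# Turn the project directory into a "bag": files are moved into data/,
# checksums are computed, and bag-info.txt records the metadata below.
bag = bagit.make_bag(
    "hamlet-edition-files",
    {
        "Source-Organization": "Example University Library",
        "Contact-Name": "Jane Scholar",
        "External-Description": "TEI-encoded edition with image masters",
    },
)

# Verify the bag before handing it to the repository: recomputes checksums
# and confirms the manifest matches the files on disk.
print(bag.is_valid())

The point of such packaging is that the deposit carries its own documentation and integrity checks, so a repository (or a future project) can confirm that nothing has been lost or corrupted.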
Best Practices in Digital Work (Check List)

Here is a short list of what to check for in digital work:
A scholarly work that represents humanities evidence in a digital form is the result of a series of decisions, the first of which is the choice of what to represent. For example, a digital representation of a manuscript is first a choice of what manuscript to digitize and then what contextual materials to digitize. These decisions are similar to those any editor or translator makes when choosing what to represent in a new edition or translation. A content expert should be able to ask about the choices made and discuss these with a candidate.
Once choices are made about the content, a digital scholar has to make choices about how the materials are digitized and to what digital format. There are guidelines, best practices, and standards for the digitization of materials to ensure their long-term access, like the Text Encoding Initiative guidelines or the Getty Data Standards and Guidelines. These are rarely easy to apply to particular evidence, so evaluators should look for a discussion of which guidelines were adopted, how they were adapted, and why they were chosen. The absence of such a discussion can be a sign that the candidate does not know the practices of the field and therefore has not made scholarly choices.
In many cases the materials may be digitized to an archival standard but made available online at a lower resolution to facilitate access. Again, the candidate can be expected to explain such implementation decisions.
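As an illustration of this split between archival masters and access copies, here is a minimal sketch in Python that generates lower-resolution JPEG access copies from archival TIFF masters. It assumes the Pillow imaging library is installed; the directory names and target size are hypothetical, not drawn from any particular project.

# A minimal sketch of the archival-master / access-copy split: high-resolution
# TIFF masters are kept for preservation while smaller JPEGs are generated
# for the web. Assumes the Pillow library; names and sizes are hypothetical.
from pathlib import Path
from PIL import Image

MASTERS = Path("masters")        # archival TIFF scans (kept for preservation)
ACCESS = Path("web/images")      # derivatives served to readers
ACCESS.mkdir(parents=True, exist_ok=True)

for tiff in sorted(MASTERS.glob("*.tif")):
    img = Image.open(tiff)
    img.thumbnail((1200, 1200))  # shrink in place, preserving aspect ratio
    out = ACCESS / (tiff.stem + ".jpg")
    img.convert("RGB").save(out, "JPEG", quality=85)
    print(f"{tiff.name} -> {out}")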
As mentioned in the previous point, there are guidelines for encoding scholarly electronic texts from drama to prose. The TEI is a consortium that maintains and updates extensive encoding guidelines that are really documentation of the collective wisdom of expert panels in computing and the target genre. For this reason candidates encoding electronic texts should know about these guidelines and have reasons for not following them if they choose others. The point is that evaluators should check that candidates know the literature about the scholarly decisions they are making, especially the decisions about how to encode their digital representations. These decisions are a form of editorial interpretation that we can expect to be informed, though we should not enforce blind adherence to standards. What matters is that the candidate can provide a scholarly explanation for their decisions, one informed by the traditions of digital scholarship the work participates in.
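For evaluators unfamiliar with TEI markup, here is a minimal sketch of what such encoding looks like and how its structure can be read by a program. The tiny verse fragment is invented for illustration and is not drawn from any actual project; only the Python standard library is used.

# A minimal sketch of TEI-style encoding and how its structure can be
# consumed programmatically. The fragment is invented for illustration;
# only the Python standard library is used.
import xml.etree.ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

fragment = """
<lg xmlns="http://www.tei-c.org/ns/1.0" type="stanza">
  <l n="1">Because I could not stop for Death,</l>
  <l n="2">He kindly stopped for me;</l>
</lg>
"""

root = ET.fromstring(fragment)
# Because the markup records structure (line groups, numbered lines) rather
# than appearance, other projects can re-purpose the same data.
for line in root.findall("tei:l", TEI_NS):
    print(line.get("n"), line.text)

The design point is that the encoding captures the editorial interpretation (this is a stanza; these are numbered verse lines) in an open format that any tool can process, rather than burying it in presentation.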
Generally speaking, projects should choose open and well-documented standards (as opposed to proprietary formats like the WordPerfect file format) if they want their materials to be useful to scholars in the future. Electronic scholarly resources that use proprietary formats doom their work to be inaccessible to scholars once that format is superseded. Exceptions to this are projects exploring interactivity, which often calls for an authoring environment like Flash that can facilitate innovative interfaces. Such projects will typically keep the materials in open standard formats and use Flash to provide the interactive interface.
One of the promises of digital work is that it can provide rich supplements of commentary, multimedia enhancement, and annotation to give readers appropriate historical, literary, and philosophical context. An electronic edition can have high-resolution manuscript pages or video of associated performances. A digital work can have multiple interfaces for different audiences, from students to researchers. Evaluators should ask how the potential of the medium has been exploited. Has the work taken advantage of the multimedia possibilities? If an evaluator can imagine a useful enrichment, they should ask the candidate whether they considered adding such materials.
Enrichment can take many forms and can raise interesting copyright problems. Often video of dramatic performances is not available because of copyright considerations. Museums and archives can ask for prohibitive license fees for reproduction rights, which is why evaluators shouldn't expect it to be easy to enrich a project with resources; but again, a scholarly project can be expected to have made informed decisions as to what resources it can include. Where projects have negotiated rights, evaluators should recognize the decisions and the work of such negotiations.
In some cases enrichment can take the form of significant new scholarship organized as interpretative commentary or essay trajectories through the material. Some projects, like NINES, actually provide tools for digital exhibit curation so that scholars can create and share new annotated itineraries through the materials mounted by others. Such interpretative curation is itself scholarly work that can be evaluated as a form of exhibit or essay. The point is that annotation and interpretation take place in the sphere of digital scholarship in ways that are different from the print world, where interpretation often takes the form of an article or a further book. Evaluators should ask about the depth of annotation and the logic of such apparatus.
In addition to evaluating the decisions made about the representation, encoding, and enrichment of evidence, evaluators can ask about the technical design of digital projects. There are better and worse ways to implement a project so that it can be maintained over time by different programmers. A scholarly resource should be designed and documented in a way that allows it to be maintained easily over the life of the project. While a professional programmer with experience in digital humanities projects can advise evaluators about technical design, there are some simple questions any evaluator can ask, like "How can new materials be added?", "Is there documentation for the technical setup that would let another programmer fix a bug?", and "Were open source tools used that are common for such projects?"
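To give a sense of what maintainability can look like at the code level, here is a hypothetical sketch of the kind of documented "add new materials" entry point the first question above is probing for. The catalogue layout, file names, and function are all invented for illustration.

# A hypothetical, documented entry point for adding new materials to a
# small textual collection: adding an item is a single recorded operation
# rather than undocumented hand-editing of the site's pages.
import json
from pathlib import Path

CATALOGUE = Path("site/catalogue.json")  # master list the interface reads

def add_item(title, tei_file, images):
    """Register a new encoded text and its page images in the catalogue.

    The TEI file and images must already be in place; this only updates
    the catalogue, which the site templates use to build index pages.
    """
    items = json.loads(CATALOGUE.read_text(encoding="utf-8")) if CATALOGUE.exists() else []
    items.append({"title": title, "tei": tei_file, "images": images})
    CATALOGUE.parent.mkdir(parents=True, exist_ok=True)
    CATALOGUE.write_text(json.dumps(items, indent=2), encoding="utf-8")

add_item("Sonnet 18", "texts/sonnet18.xml", ["img/sonnet18-p1.jpg"])

A project documented at this level lets a new programmer answer "How do I add a text?" without reverse-engineering the whole system.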
It should be noted that pedagogical works are often built differently from scholarly resources, but evaluators can still ask how they were developed and whether they were designed to be easily adapted and maintained.
The first generations of digital scholarly works were typically developed by teams of content experts and programmers (often students). These projects rarely considered interface design until the evidence was assembled, digitized, encoded, and mounted for access. Interface was treated as window dressing for serious projects that might be considered successful even if the only users were the content experts themselves. Now best practices in web development suggest that needs analysis, user modeling, interface design, and usability testing should be woven into large-scale development projects. Evaluators should therefore ask about anticipated users and how the developers imagined their work being used. Did the development team conduct design experiments? Do they know who their users are and how their work will be used? Were usability experts brought in to consult, or did the team think about interface design systematically on its own? The advantage to a candidate of engaging in design early on is that it can result in publishable results documenting the thinking behind a project even when it may be years before all the materials are gathered.
It should be noted that interface design is difficult to do when developing innovative works for which there isn't an existing, self-identified, expert audience. Scholarly projects are often digitizing evidence for unanticipated research uses and should, for that reason, try to keep the data in formats that can be reused whatever the initial interface. There is a tension in scholarly digital work between a) building things to survive and be used (even if only with expertise) by future researchers, and b) developing works that are immediately accessible to scholars without computing skills. It is rare that a project has the funding both to digitize to scholarly standards and to develop engaging interfaces that novices find easy. Evaluators should therefore look for plans for long-term testing and iterative improvement, facilitated by a flexible information architecture that can be adapted over time. A project presented by someone coming up for tenure might have either a well-documented and encoded digital collection of texts or a well-documented interface design process, but probably not both. Evaluators should encourage digital work that has a trajectory including both scholarly digital content and interface design, but not expect such a trajectory to be complete if the scope is ambitious. Evaluation is, after all, often a matter of assessing scholarly promise, so evaluators should ask about the promise of ambitious projects and look for signs that there are real opportunities for further development.
Finally, it should be said that interface design is itself a form of digital rhetorical work that should be encouraged. Design has its own practices of asking questions and imagining potential that can be followed and innovated on. Design, while its own discipline, is something we all have to do when developing digital works. Unlike books, where the graphic and typographic design is left to some poorly paid freelancer hired by the publisher after the book is written, in digital work it is all design, from the first choices of what to digitize. This is especially the case with work that experiments with form, where the candidate is experimenting with novel designs for information. In the humanities, digital work has forced us to engage with other disciplines, from software engineering and informatics to interface design, as we ask questions about what can be represented. It is a sign of good practice when humanists work collaboratively with others who have design expertise, not a sign that they didn't do anything. Evaluators should expect candidates presenting digital work to have reflected on the engineering and design, even if they didn't do it themselves, and should welcome the chance to have a colleague unfold the challenges of the medium.
The nature of the organization mounting a web resource is one sign of the background of a digital project. Some organizations, like the Stoa Consortium, will "mirror" an innovative project, which typically involves some sort of review and the dedication of resources. Evaluators can ask about the nature of the organization that hosts a project, as the act of hosting or mirroring (providing a second "mirror" site on another server) is often a recognition of the worth of the project. While universities do not typically review the materials they host for faculty, a reliable university host server is one indication that the server at least will be maintained over time, an important concern in digital work as commercial hosts come and go.
A simple sign that a project was designed to advance scholarly knowledge is that it has been demonstrated to peers, whether through local, national, or international venues. A candidate who doesn't demonstrate their work and get feedback is one who is not sharing knowledge and therefore not advancing our collective knowledge. Obviously some works are harder to demonstrate than others, particularly interactive installations that need significant hardware and logistical support. That said, just as university artists are evaluated on the public performances or shows of their work, so can a digital media artist be asked to document their computer installations or performances. Evaluators can ask about the significance of the venue of an installation just as they would ask about an art exhibit.
As mentioned above, certain projects can be expected to be connected online to other projects. Learning materials can be connected to larger course systems; hypermedia works can link to (reference) other works; and tools should have documentation and tutorials. Evaluators can ask how a work participates in the larger discourse of the field, whether by linking or by being subsumed. Do other projects in the same area know about and reference this project? Does it show up on lists of such works? For example, there are lists of tools on the net; does a particular tool show up on a well-maintained list?
A basic set of questions to ask about pedagogical scholarship is whether the learning innovation has actually been used and whether it has been used in real teaching and learning circumstances. As mentioned above, for pedagogical digital work evaluators should also ask if the use has been assessed and what the results were. For more see also Demonstrating the Scholarship of Pedagogy.
How to Find an Expert

Places to start to find an expert who can help with the evaluation:
See also Demonstrating the Scholarship of Pedagogy

Back to MLA Digital Work Home