CHCI 2018

These are my notes on the Consortium of Humanities Centers and Institutes (CHCI) 2018 conference. The hashtag for the conversation on Twitter was #CHCI2018.

Note: these are written live and therefore tend to be spotty and full of unfinished thoughts.

Day 1: Public Humanities

Practicing the Public Humanities

Steven Kidd and Daniel Fisher from the National Humanities Alliance opened the day. They talked about some of their initiatives.

The NEH for All project is a "digital clearinghouse that profiles influential National Endowment for the Humanities projects across the nation" and articulates how the Endowment benefits a wide range of communities. They have a nice database of public humanities projects at NEH for All. It will be followed by a Humanities for All project (note that the web site is under construction). I don't really understand why the two are different databases, but it is apparently a matter of audience.

They also described an initiative that gathers data to make the case for humanities undergraduate study. It offers a toolkit.

They are cultivating groups of advocates, both at the grassroots and among leaders. The most compelling argument for the humanities is the research in the humanities.

They presented a list of types of activities that can be useful:

  • Outreach - activities that reach out to organizations and people outside the university
  • Engaged Research - research into public engagement
  • Engaged Teaching - opportunities like our Community Service Learning
  • Engaged Public Programming - not a lecture, but more a dialogue that inspires thought
  • Infrastructure of Engagement - the programs that create the next generation of scholars who are engaged

There was a great question about the appearance of partisanship in humanities research. How do we avoid looking like our research is partisan?

How do we help faculty do public engagement? How do we help them do things that are consistent with their ethics? The Imagining America web site is a good place to start. Museums have a lot of experience listening to communities and we can learn from them.

"Get the need first" - listen and learn from the community before bringing in the scholars to lecture them!

I asked about international contacts. Alas, the NHA haven't had the opportunity. They are building initiatives and hope to connect.

There was some discussion around the definition of the humanities. I personally don't think the "humanities" is meaningful outside the academy. It is an administrative term. That said, one person spoke well about the importance of the individual and particular in the humanities. One thing that defines us in a way that people outside the academy might understand is that we value the interpretation of individuals. We care about one person's story or one event. We don't just abstract to general laws or rules or policies. We are idiographic as opposed to nomothetic.

Humanities Festivals

My colleague Christopher Lupke moderated a panel on humanities festivals. He talked about Kari Winter's idea of public humanities as gardening.

Jonathan Elmer talked about the Liberal Arts Management Program and a course he teaches for them on the ideas industry. What is the long history behind public humanities like literary festivals? He feels that it is partly about face-to-face meeting. Presence. People want the living word. We have to think about the oral and charisma. The history runs through the Lyceum movement, Chautauqua, festivals, and newer activities like TED talks. The Lyceum movement was about self-help, self-learning of the sort that Jordan Peterson is now offering. Nowadays business visionaries like Steve Jobs or ex-politicians like Obama are the equivalent. They circle the country giving live lectures with charisma. They used to have "courses" with a theme.

Kari Winter talked about gardening and Buffalo. She talked about the humanities festival they hold and Humanities to the Rescue, with the idea that the humanities are the cure rather than the problem. The Buffalo Humanities Festival is held in a public space like the Albright-Knox. They have a theme with panels, a keynote, and a VIP event. They involve high school students. Some of the challenges include lack of resources. They try to avoid being just another university obligation of good citizenship. They try to partner with lots of organizations.

David Shumway talked about the festival they hold in Pittsburgh (https://www.cmu.edu/dietrich/humanities-center/humanities-festival/index.html). The festival has depended on partnerships, which make it not just a CMU thing. Their festival serves as advertising to people who might not otherwise come out. People might come for one event and stay for others. They have live interviews and questions from the audience.

Some of the ideas that came out of the discussion:

  • One of the features of festivals is building a local community
  • People who pay are more likely to show up
  • How can one "archive" or publish? There are publishers like Publishing Without Walls, Manifold Project and others that offer open publishing
  • A cool model is https://dwellinotherfutures.com/
  • Some universities can piggyback on existing festivals like the Edinburgh Fringe. We have a poetry festival in Edmonton that is popular
  • Involve lots of stakeholders including local government, other universities and community organizations
  • Beware of contempt for the locality from colleagues who think of themselves as international (cosmopolitan)
  • What is the language of the festival? How do we help colleagues learn to talk to publics?
  • How to involve other ages?
  • Develop an escape room with ideas

Roundtable on Graduate Education and Community Engagement

Dan Kubis started by talking about the public humanities fellowship programme that they developed at Pittsburgh. This is for grad students who work on translating the humanities to publics. These are summer internships organized with particular community organizations. See http://www.humcenter.pitt.edu/public-humanities-fellows

Jennifer Gunn talked about the Humanities Without Walls summer workshop for pre-doctoral students. They learn from administrators, government people, publishers, foundations, archivists, digital humanists and so on. They are housed together. They learn about social media, working on resumes, learning to value their own expertise. They talk to people running prison humanities programmes and public history projects. They visit museums and other places outside the uni.

Greg Donofrio talked about a Masters in Heritage Studies and Public History that sounded terrific.

Kathryn Temple from Georgetown talked about developing a plan for a PhD in English that would aim to prepare students for diverse careers that included a required residency in an outside organization. They got a grant with the MLA and have a web site on Connected Academics. A week from now they will launch a non-credit graduate certificate.

We talked about resources like those from the Humanities Action Lab.

Whither the Public Humanities Network?

The last session looked at where the Public Humanities Network would go. We first got a short history. The main idea proposed was that we need a practical network or web site.

Day 2: CHCI Conference

CHCI Business Meeting

In the morning there were business meetings. CHCI has a number of networks including one on Public Humanities (that I attended yesterday) and one on Medical Humanities.

CHCI is going to start a podcast produced with public radio. They have grants from Mellon to develop Global Humanities Institutes. These institutes create opportunities for collaborative research that culminates in a summer (or winter) institute.

Next year the meeting is in Dublin at the Trinity Long Room Hub and the theme is Cultural Interventions.

CHCI Mellon-funded Ongoing Projects

In the afternoon we heard about CHCI's efforts to internationalize, starting with James Chandler. The challenge is to internationalize when the idea of a center or institute is a North American genre. The humanities center in North America has a particular history having to do with the disciplines in North America. Centers grew out of a need for ... Ironically, the sciences globalize more easily. The globalization of the sciences is often imperializing. The internationalization of the humanities is transformative and therefore not as likely to be imperialistic.

Jane Ohlmeyer from Trinity (Dublin) talked about the institute focused on the "Crises of Democracy." They are going to make their materials open. They plan to create an online "Dubrovnik Syllabus" which is based on the Charlottesville Syllabus model. They are planning summer schools in Brazil that will involve different ranks and students.

Andrés Claró from Chile talked about the "Challenges of Translation" institute. They are thinking of translation as prismatic. One of the challenges they have to deal with is the dominance of English as the academic koine.

We then had presentations about the dynamic projects across Africa that are part of the Africa Humanities Institutes. This was introduced by Premesh Lalu. One of the many interesting projects presented was Les Ateliers de la Pensée - a Workshop of Ideas - in Dakar, Senegal, which starts with the idea that the future is Africa.

Welcome and Introduction

The President of CHCI, Sara Guyer, welcomed us and talked about the fact that it is the 30th anniversary of CHCI. The theme of this conference is Humanities Informatics, which Sara called a Humanities Problem. Then Debjani Ganguly of the Institute of the Humanities and Global Cultures at the U of Virginia welcomed us. She connected the Charlottesville march of last summer to the theme of this conference. The human is enmeshed with informatics. "We think through Google."

Ian Baucom, Dean of Arts & Sciences at UVA, then welcomed us with "a very few words." He talked about how the theme is an agenda-setting topic. Informatics is vital to the humanities. The theme is timely and impacts the form of the university. We no longer have just close interdisciplinarity between near disciplines; now we have distant collaborations and we use different media. What is driving the most recent round of interdisciplinary intersections? Could it be "grand challenges" like climate change? We know that this is simplistic - what we are seeing is wicked complexity in questions like: Is new media technology making democracy impossible? How is our understanding of the human changing in the age of big data, AI, and genetics?

Srinivas Aravamudan Memorial Lecture: Achille Mbembe: Algorithmic Reason and Planetary Humanities

Mbembe talked about Srinivas and his use of virtuality and tropicalization. His lecture then revisited Heidegger's "The Question Concerning Technology." Heidegger was first concerned with technology and its essence as instrument and as anthropology. The second concern was technology as a way of thinking or of revealing. Technology as an event of truth. What kind of thinking? One that may constrain our freedom to technology.

We need to set ourselves free - truth should set us free. We cannot understand technology through technology. We can't understand it if we put up with it or avoid it. Freedom is the open - a space where we aren't confined to pushing on blindly with technology, or to pushing against it. Heidegger talks about technology in terms like disclosing. We are in a tradition of metaphysics that assumes a division between the human and nature (and the human uses technology). The tradition has anxieties - the anxiety about the relations between people and things, and that people are not things and should not be treated as such. The tradition is haunted by questions about how much of human activity and the human body can be replaced. We are now obsessed with policing the border between human and machine. This anxiety prevents us from understanding technology freely.

In African thought the boundary between human and thing was never really there. African thought saw agency in both. Such thought was dismissed as "animism". What was dismissed was the idea of co-agency - shared agency.

There is also a nostalgia for animism - nostalgia for a time when our relationship with nature was direct. Now technology is autonomous. The nostalgia leads us to fear being enslaved by machines.

In countless works the making of tools is seen as paradigmatically human. Marx laments the loss of an intimate relationship between worker and tool.

An increasing number of people are now embedded in technostructures through which we intervene on a planetary scale. We are having an effect on the whole planet through our technostructures. It thus makes sense to reframe the question of technology in the context of earth-system analysis. We should critique technology through a critique of our ways of acting and sensing through technology. Technologies are getting tied into more and more complex systems. We release complex new molecules into the ecosystem that make areas uninhabitable. Technologies are working towards general intelligence and self-replication. We may soon no longer be the creators. Technology and the apocalyptic are merging - the end of the history of humans.

The predominance of statistical forms of knowledge allows data to support new forms of governance. Algorithms and big data bring a greater belief in statistical modes of existence. Algorithms inspired by the natural world - genetic algorithms that can evolve - are embedded in the future. The translation and mirroring of natural processes is believed to be good. We also believe everything is computable. But life is chaotic - an open and non-linear system.

Concerns about the singularity are linked to fears about extinction - our extinction. But we should think from a planetary perspective.

He concluded with some remarks on algorithmic reason, building on Matteo Pasquinelli. Algorithmic reason and governance are about pattern detection and anomaly detection. The DNA of the algorithm is the search for the enemy. (Really?) This puts algorithmic reason in the realm of warfare and detection of the enemy, including the internal enemy - a whole class of populations that are superfluous, that can't be exploited because they can't be entered into a class of exploitation. What do you do with people you can't use? That is the challenge of AI.

We turn everything into data that then can be manipulated by algorithms that recognize patterns. That is the new governance. That is power.

What do we do with the bias that is at the ground of algorithms? We address it by imagining different modes of seeing and measuring.

How do we counter a planetary mode of governance? We have to imagine new ways of inhabiting and sharing the earth.

I found myself struck by Mbembe's interpretation of the computer as essentially pattern/anomaly recognition. This seems a common move in the humanities: to equate a phenomenon like computing with some interesting feature and then overinterpret that feature. While such moves give us a new perspective or theoretical lens with which to understand the phenomenon, such interpretations tend to simplify the phenomenon and play "gotcha" with other disciplines, as if we in the humanities really know about the dark side of everyone else.

Day 3: CHCI Conference

Lydia Liu: The Psychic Life of Digital Media

Liu is the author of The Freudian Robot. She began her talk with two encounters. The first was between Heidegger and Heisenberg in 1953. Heidegger's "The Question Concerning Technology" was in part a response to Heisenberg (even more so his "The Age of the World Picture").

The second was between Weizenbaum and Horkheimer who was critical of technoculture. Weizenbaum read Horkheimer and reflected on reason.

While scientists like Weizenbaum became interested in technoskepticism, the critical theorists never paid attention to the scientists. Scientists were treating the mind as emotional and irrational as they modeled AI, but the critical theorists weren't paying attention.

We need to understand the problem we face with human-machine simulacra. Liu is trying to ... The Freudian robot is her term for the entanglement of humans and machines. She talked about the place of psychoanalysis in the study of digital media. She talked about three thinkers who didn't treat the mind as rational:

  • Sigmund Freud, "The 'Uncanny'" - Freud was fascinated by automata. He was refuting Jentsch's view of the uncanny. For Jentsch the uncanny was the undecidability of the artificial (i.e. one can't tell if a doll is alive or not). For Freud the uncanny was connected to repression.
  • Masahiro Mori, the Uncanny Valley - Mori put forward the hypothesis that our sympathy for a robot increases as it becomes more lifelike, until it gets close to looking real, at which point we lose sympathy because of how uncanny it is. His theory became important for understanding how audiences respond to simulated actors. (A toy sketch of the curve follows this list.)
  • Marvin Minsky, the Emotion Machine - declared himself a new Freudian. Minsky drew on Freud's ideas of the unconscious and tried to combine them with Piaget's ideas. AI practitioners were creating philosophies of AI, often to attract investors. Minsky had a conception of a robot that was based on Freud. He talked about mental correctors, suppressors, censors and taboos. He was a scientist who rejected the idea that the mind is rational and a logic machine. He wanted to simulate all of the cognitive machinery of the mind, including all the irrational aspects.
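Mori's hypothesis is, at bottom, a claim about the shape of a curve, so it can be sketched. Here is a toy rendering in Python; the formula and numbers are mine, chosen only to reproduce the qualitative shape Mori described (he proposed the curve conceptually, without measurements):

```python
# Stylized uncanny valley: affinity rises with human-likeness, collapses
# near "almost human," then recovers. The dip function is invented for
# illustration; Mori published no data behind his curve.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 500)
affinity = likeness - 1.6 * np.exp(-((likeness - 0.8) ** 2) / 0.005)

plt.plot(likeness, affinity)
plt.axvspan(0.75, 0.85, alpha=0.2, label="uncanny valley")
plt.xlabel("human likeness")
plt.ylabel("affinity (sympathy)")
plt.title("Stylized uncanny valley (after Mori)")
plt.legend()
plt.show()
```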

One has to make things - construct robots - to try to understand what it is to be human.

Liu talked about an encounter between Clarke (the sci-fi author) and Shannon, where Clarke noticed Shannon's "Ultimate Machine" - a machine whose only function was to turn itself off.

During questions she talked about cybernetics and the idea that the mind (or brain) is a machine. The computer is a symbol processing machine that works with discrete symbols.

We may think that the mind-as-machine idea is a metaphor, but the scientists are inspired by trying to model the mind. We learn as much about our ideas about the mind in such modelling as we do about the potential for AI.

Is there an ethical dimension to the modelling?

Art, Desire and Techno-Entanglements

We then had a session on art and technology.

Renate Ferro: From the Pleasure Principle to the Technological Drive

Ferro shared three projects. She is a conceptual artist working in new and old media. She creates hybridities that mobilize humour and irony.

  • Fort Da: a video installation that evolved over installations.
  • Private Secrets Public Lies is an interactive that needs Safari and Java.
  • Remote Sensing is a project just starting

Matthew Burtner, Sonic Physiographies of a Time-Stretched Glacier

Burtner, who composed the music for the dance performance we saw the night before, played some examples of his ecoacoustics. He is from Alaska and works with the sounds of ice and glaciers. Burtner talked about listening to the glaciers and glaciers listening to us.

Ecoacoustics is an entanglement of data from nature and from humans. They use sonification - turning data into sound - along with field recording and live interaction with natural materials.
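Sonification at its simplest maps data values onto pitch. A minimal sketch of that mapping in Python (my own generic illustration, not Burtner's actual method; the data and parameters are invented):

```python
# Map a data series to pitches and write the result as a WAV file.
import wave
import numpy as np

data = [2.1, 3.4, 2.8, 5.0, 4.2, 6.1, 5.5, 7.0]  # any measurements
rate = 44100
lo, hi = 220.0, 880.0  # map the data range onto 220-880 Hz (A3 to A5)

d = np.array(data)
freqs = lo + (d - d.min()) / (d.max() - d.min()) * (hi - lo)

tones = []
for f in freqs:
    t = np.linspace(0, 0.4, int(rate * 0.4), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * f * t)
    tone *= np.hanning(tone.size)  # fade in/out to avoid clicks
    tones.append(tone)

signal = np.concatenate(tones)
with wave.open("sonification.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit samples
    w.setframerate(rate)
    w.writeframes((signal * 32767).astype(np.int16).tobytes())
```

Burtner's time-stretching works in the opposite direction - slowing recorded glacier sound rather than synthesizing tones - but the basic move of translating between data and audible parameters is the same.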

He showed us how he created a work of ecoacoustics about a glacier. He actually slows the sound of the glacier to listen to its resonance. He combines the processed sound with someone playing the vibraphone (?). He has software that puts the human performer into an interactive relation with the slowed sound of the glacier. He was playing with time and the glacial.

Respondent: Anjali Prabhu

Prabhu brought Heidegger back in to reflect on the two artists. The essence of technology (according to Heidegger) is nothing technological, as the two artists show.

Experimental Humanities: Humanities Labs

After lunch we had a workshop on experimental humanities labs. It was moderated by Eric Hayot.

James Evans (Knowledge Lab, University of Chicago): Algorithmic Abduction: Robots for Alien Reading

Evans started by showing the images you get when you Google "algorithmic abduction" - all robots kidnapping women. He argues that all sciences are humanities: the sciences behave like the humanities. Funding the sciences is funding problem sets, but these problem sets seem to have to do with the dynamics of fields. They are interested in collective intelligence. As teams get bigger, they are less likely to surprise.

Human-machine collective intelligence: IA (intelligence amplification) vs. AI. How do cyborgs work?

He talked about surprise and how surprise is important, but often hidden because people want to build on what their audience knows.

We should break down the false human/machine dichotomy - it is always an ensemble. But the idea of AI dominates even though the most successful artificial systems are cyborgs (collective intelligences) - the KWIC is a cyborg.
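Since the KWIC (keyword-in-context) concordance came up as the paradigmatic cyborg, a minimal sketch of one may help: the machine does the exhaustive aligning, the human does the reading across the lines. This is a generic illustration, not any particular lab's tool:

```python
# Minimal keyword-in-context (KWIC) display.
def kwic(text, keyword, width=4):
    """Print every occurrence of keyword with `width` words of context."""
    words = text.split()
    for i, w in enumerate(words):
        if w.lower().strip(".,;:!?") == keyword.lower():
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            print(f"{left:>35} [{w}] {right}")

kwic("To be or not to be, that is the question.", "be")
```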

He talked about how ensembles rule. Cyborgs will outperform AIs which calls into question the singularity.

He talked about what our objectives are - we in the digital humanities keep trying to get machines to do what we do. Instead we need to architect machines to surprise us. How to train a bad robot rather than a bad graduate student?

If we expect robots to match us then they will be boring. Instead we want them to surprise us.

Nicole Coleman (Humanities + Design Lab): ‘Data is a Medium’ and other Discoveries from the Laboratory

Coleman talked about our relationship to data. It is uncomfortable to fit rich evidence into discrete data. At their lab they want to create tools that let people model information rather than running models on data. They want to build for humanistic method, drawing on Johanna Drucker.

She concluded with remarks on our compulsion to objectivity and the separation of aesthetics and accuracy. She talked about the challenge of working with designers who want to solve a problem. Likewise, AI is seen as a solution, which doesn't help when we are exploring new ideas in the humanities.

Jack Chen and Camilla Fojas: Humanities Informatics: Lab as Imagined Community

Chen started by talking about the social aspects of a lab. Labs are a way of interacting across departments. They are embedded in various systems; his lab gets funding from a strategic fund, for example. Their lab is focused on information. It is hard to break into digital humanities, so the lab was a way of building a network of people interested in similar things. They wanted to have a humanist long historical scope. "Too much to know" as a trope goes way back in China.

The lab has four working groups: Human and Machine Intelligence, Network-Corpus, Smart Environments, and Surveillance and Infrastructure.

Fojas talked about what a lab does. It abducts something and brings it into an artificial space (the lab). The lab is an assemblage of people, materials, instruments and so on. Their lab is on surveillance and they have been talking a lot about failure. She talked about the film "The Feeling of Being Watched", which is about FBI surveillance of Muslims in Chicago. How do you foil (fail) data capture? They are trying to go beyond just research to see how they can bring new knowledge into the university through things like courses.

Chen closed on images of the cyborg. He mentioned Clark and Chalmers's "The Extended Mind," Katherine Hayles, and Norbert Wiener.

Somebody accused the speakers of all being charming. There was a question about data as rhetoric. I was interested in a question about the lab as an epistemic function. Why do we gather in labs?

  • To develop agreement
  • To get funds together
  • For training and socialization (of grad students)
  • To bring together the variety of skills needed for a project
  • To work socially

Wendy Hui Kyong Chun: Critical Data Studies, or How to Desegregate Networks

Chun is taking up a Canada 150 chair at Simon Fraser University. She wanted to think through the possibilities and limitations of critical data studies. How does bias persist through blind algorithms? She told us that she planned to make four arguments:

  1. Being blind to race promotes racism.
  2. To redress this we need to expose the default assumptions and axioms that ground algorithms.
  3. We need to read the racist results against the grain, and lastly
  4. We need to create new algorithms. Her new research lab will try such experiments.

She talked about critical data studies. It responds to the hype of big data. The Economist talked about data as the new oil. Data, for some, changes how we make arguments - from causation to correlation. What we have are proxies - I could call them surrogates.

She talked about the future and past. Knowledge was once understanding of the past, but now is prediction of the future. She talked about the article "The Spread of Obesity in a Large Social Network over 32 Years" that seemed to suggest that obesity was a virus that spread.

Given that any correlation can be shown to be real, what's true? She talked about discriminating algorithms that made it possible for people to advertise to "Jew haters." She mentioned O'Neil's Weapons of Math Destruction. The problem is that things like race are often in data through proxies. Even if race isn't tracked, you find it in other categories. An example is the Chicago police heat maps.
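Chun's point about proxies is easy to demonstrate on synthetic data. In this toy sketch (entirely invented numbers and variable names), a classifier that never sees the protected attribute still recovers it from correlated fields like neighbourhood and income:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                 # the protected attribute
# Simulated segregation: neighbourhood matches group 85% of the time.
neighbourhood = (group + (rng.random(n) < 0.15)) % 2
income = rng.normal(50 + 15 * group, 10, n)   # an income gap as a second proxy

X = np.column_stack([neighbourhood, income])  # note: group itself is excluded
X_tr, X_te, y_tr, y_te = train_test_split(X, group, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
print("protected attribute recovered from proxies:", clf.score(X_te, y_te))
# Accuracy lands well above chance, so dropping the attribute hides nothing.
```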

Data analytics systems are sold as something that would not have individual bias. Alas they have community bias. And they hide bias. We need to address the fundamental assumptions baked into data science.

Cyber-genic moments. She showed a bit of the Cambridge Analytica video. They used proxy information. She talked about how we don't know if Cambridge Analytica really worked.

The goal was to get people to take the red pill and go down a path that would transform people. The goal was to place people in clusters (ghettoes) that reinforced ideas that were wanted by advertisers. Authenticity is a deviation from the norm. CA wanted to figure out the deviations and leverage them.

She then talked about homophily and the article "Birds of a Feather: Homophily in Social Networks". What is so insidious about the models is the assumption of clusters/neighborhoods. In effect, discrimination leads to assumptions about neighborhoods, which then lead to discrimination. The ideas of homophily that influence social network analysis are based on discrimination.
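Homophily is usually quantified by comparing the observed share of within-group ties to what random mixing would predict. A toy computation (my own six-node example, not from the paper):

```python
import networkx as nx

G = nx.Graph()
group = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),  # within group 0
                  ("d", "e"), ("e", "f"),              # within group 1
                  ("c", "d")])                         # one cross-group tie

same = sum(group[u] == group[v] for u, v in G.edges())
print(f"within-group share of ties: {same / G.number_of_edges():.2f}")
# Among the 15 possible ties, only 6 (0.4) are within-group, yet 5 of the
# 6 actual ties are, so this toy network is strongly homophilous.
```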

She then looked at the links between eugenics and the statistical techniques being deployed. The Pearson correlation coefficient and Galton's linear regression were developed for eugenics. Chun made a number of connections between the development of various algorithms and statistical practices and eugenics.
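For reference, the statistics in question, both developed within the Galton-Pearson biometric program: Pearson's product-moment correlation and the regression line built from it.

```latex
% Pearson's correlation for paired samples (x_1, y_1), ..., (x_n, y_n):
r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}

% The regression line predicts a new y entirely from past data:
\hat{y} = \bar{y} + r\,\frac{s_y}{s_x}\,(x - \bar{x})
```

The second formula makes Chun's point about prediction visible: the prediction for any new x is entirely determined by the means, spreads, and correlation of the data already collected.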

The methods were designed to predict the future based on an unchanging past. The assumption is that the future can be calculated but not really changed. If the past is racist then the predicted future will be too.

What can we do? The proliferation of echo chambers and discrimination is not inevitable. Let's treat them like climate change, so we can imagine a different future, not the predicted future. What if we used these models as calls for action?

Can we change habits and culture, the way Bannon and the right are trying to do? Perhaps we want data made free to all. Can we develop public rights for data?

She is trying to reach into the deep history of the neighbour which is at the heart of these algorithms.

Last Day

Epistemic Accelerations and Algorithmic Cultures

Jennifer Rhee: Cultures of Dehumanization: Drone Warfare, Drone Art, and the Limits of Algorithmic Identification

Rhee started by talking about the history of AI funding. It has gone from mostly defense funding to commercial funding. Militarized technologies carry various assumptions about labour. She argues that AI and its history in cybernetics are racialized. Galison talks about the cybernetic other. There was the German enemy other and the Japanese enemy other, who was racialized. Wiener's wartime work that led to cybernetics is entangled with racialized ideas of the Japanese.

Drone warfare shares a lot with predictive policing. Drones dehumanize the other by collapsing differences into codes. Any military-aged male is killable. This leads to very different calculations of civilian casualties from drone strikes.

She then talked about drone art that challenges our ideas. One work was #Notabugsplat which puts large pictures in Pakistan that can be seen from drones. See https://notabugsplat.com/

She talked about a film that raised questions about the colonialist logic of drone violence. What if what is happening over there were happening over here? Such a question itself raises questions.

Chad Wellmon: On the Opacity of Knowledge: Machine-Human Learning

Wellmon started by talking about "The Unreasonable Effectiveness of Data," a paper by Google authors. Naive machine learning techniques based on large data sets now seem baked into the infrastructure. Machine learning produces an opaque knowledge that is hard to hold accountable.

What do we mean by transparency and accountability? Should they be our ideals? There is a basic notion that legitimate knowledge can be held by a human knower. The ideals, norms, and assumptions of knowledge are being challenged. Wellmon is working on a project on Googling Before Google: https://chadwellmon.com/2016/07/31/googling-before-google-a-brief-history-of-search/

Wellmon goes back to the Meno and the difference between knowledge and right opinion. What makes something knowledge? There is a basic notion that it is something in the human - which leads to the capacity to explain. What then do we make of deep learning, where you get knowledge that can't be accounted for? And yet scholars have often depended on extensions to know. By 1800 knowledge became a relationship between ideas - consistency supported by infrastructure became important in the German research university. Knowledge was more than human. Print and experimental data became objective - outside of human heads. People like Kant worried about depending on books to think for them. The journal, the scholarly society, the lab became the infrastructure of knowledge. Knowers now needed to be able to use the objective knowledge and its infrastructure. Knowers needed to learn how to search.

Knowledge in this system became opaque for many. Knowledge wasn't fully transparent or personal. Big data obviates the need for the traditional apparatus of knowledge, like a theory. He described a project on the visibility of knowledge in the 19th century. They came up against limits like the epistemic challenge of prediction. If all evidence is for prediction, then what happens to interpretation?

The second limit is the practice. It is a lot harder to train a machine than it appears. The scholar has to formalize their knowledge. They have had to decide what is a footnote and what isn't, and what features define one. Attacks on machine learning are often naive when they say that machine learning just encodes the bias of the trainers. The practice of training forces us to think about what we assumed we knew. He went further and said that new knowledge has often evolved from existing expert knowledge - that's what we do when we develop archives, editions, translations - we develop materials upon which others can build. We encode our learning into both traditional processes and training data. Training data can be prepared well or poorly.

Wellmon talked about how hard it is to reconstruct the conditions of judgement in both a machine and a human. Machine learning confounds notions of knowledge as justified true belief, since the justification in machine learning is opaque. Wellmon argued that this is also true of traditional humanities knowledge, where scholarly practices hide all sorts of tacit knowledge.

Human knowledge has always been entangled with technology. We need to be careful about assuming that what is happening now is epistemologically different.

My sense is that the line between knowledge-that and knowledge-how is being elided by machine learning, which takes lots of facts and turns them into tacit knowledge. There are also questions about the reproducibility of knowledge.

In response to a question, Wellmon asked, "What are the grounds of the distinction we would like to be able to make between the humanities and other disciplines?" He feels that the distinction in the US was historically moral not epistemological - that there may not be such a difference. We need to face up to our philosophical anthropologies that undergird the humanities.

Someone else asked about our notion of dehumanization. Are we really dehumanizing things and animals? Or are we hyperhumanizing? We need to ask what we think it is to be human.

Michael Witmore: What Should Humanists Think About in the Age of the Algorithm?

Witmore is a pioneer in the computational analysis of Shakespeare. We should check out his talk "Shakespeare from the Waist Down," which is online.

Witmore began with a Newsweek story on how Google's AI could predict the next sentence of dead authors. Why did Google choose Shakespeare? Because he is viewed as the culmination of English writing and the humanities. When Ray Kurzweil wants to show that AI can emulate the humanities he picks the purported best example.

Google's AI is not trying to convince itself that it is as good as Shakespeare. It is trying to convince an expert humanist. One of the truths is that you need a ground truth - someone in the humanities to say that the AI is as good as Shakespeare.

He then turned to his projects, and specifically "and, if, but". You have to start with features - you have to choose these. He then showed a visualization of the if/and/but space. These three features seem to sort genre. Plays with more "and" are more "history plays." He is using a feature as a proxy for an interpretative category: "and" is a proxy for genre.
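The feature counting behind such a visualization is simple enough to sketch. A minimal version in Python (the toy "plays" here are placeholders; Witmore works with full play texts):

```python
import re
from collections import Counter

FEATURES = ["and", "if", "but"]

def feature_vector(text):
    """Relative frequency (per 1000 words) of each chosen function word."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {f: 1000 * counts[f] / len(words) for f in FEATURES}

plays = {
    "history-ish": "And then the king rode north and the army followed and won",
    "comedy-ish": "If you will, but if not, then but say so, if it please you",
}
for title, text in plays.items():
    print(title, feature_vector(text))
```

Each text then becomes a point in a three-dimensional if/and/but space, which is what the visualization plots.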

He joked about what an anthropologist would say about humanists at a conference: we trade in examples. The examples are powerful and complex tokens for ideas. He then went from the 3D visualization to an example from Othello.

He then went on to word embeddings. This is a technique that Google developed that made a difference in translation. Witmore has used it on an early modern corpus. He is taking advantage of the bias of a text (in this case Shakespeare). The very problem of analytics in other situations becomes a virtue for studying the textual record. He showed how word embedding could be used. He then asked if this was new knowledge and answered yes.
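For a rough sense of the technique, here is a hedged sketch using gensim's word2vec implementation (one of the embedding methods Google released). A real run would need a corpus of millions of words; this tiny corpus will only produce noisy neighbours:

```python
from gensim.models import Word2Vec

# Toy corpus of pre-tokenized sentences; an early modern corpus would
# also require normalizing spelling variants before training.
sentences = [
    ["love", "looks", "not", "with", "the", "eyes"],
    ["the", "course", "of", "true", "love", "never", "did", "run", "smooth"],
    ["love", "is", "a", "smoke", "made", "with", "the", "fume", "of", "sighs"],
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

# Words used in similar contexts end up with nearby vectors.
print(model.wv.most_similar("love", topn=3))
```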

He then switched to talking about translation and Google Translate. They developed vector embeddings in individual languages and then aligned the maps between languages. He talked about how the "to be or not to be" speech is important and then showed different English->German->English translations over time. From a translation point of view, what is Google Translate getting wrong? The areas they often get wrong are probably the culturally interesting areas. Where translation fails, there history and culture lie.

He then returned to the Newsweek example and talked about how the work was actually interesting, but we want to be able to see into the model developed by training. Again, the challenge of opacity. He argued that what we want is a mirror on what we don't know and can't see about our style. What does "Shakespearian" mean? He described a large-scale Turing test where humanists may get asked to verify the accuracy of algorithmic results. But that isn't really enough for humanists.

What should we think about in the age of the algorithm?

  • Labels and descriptions - we should think about the complex judgements of labeling, summarizing, and description. The humanities are a sustained conversation about things we cannot really say. Our judgements are also ...
  • Algorithms offer the quantitative redescription of categories that humanists care about and judge. We should look at trying to model complex humanist judgements. He argued that translation is something we should think about.
  • Examples are something we use a lot and they bear a lot of tacit knowledge. There is a high level of tacit knowledge (bias) in the humanities. The tacitness is interesting to Witmore.

Our to do list:

  1. We should become problem factories (i.e. run humanities centers)
  2. Train A.I. on a humanities curriculum (i.e. teach the humanities)

This might mean that:

  1. Exemplarity becomes a shared problem space
  2. Re-description becomes part of our critical process
  3. The line between translation and analysis will become harder to draw and many analytical tasks can be reframed as translation tasks

He seemed to be suggesting that we begin to think about how to train a robot. We should get to work on a canon for robot education. I don't think he believes we will get there, but we should try.

I couldn't help worrying that this could become an argument for humanities connoisseurship - i.e. that we privilege the tacit knowledge that comes from our particular training. But are we sure it is the right training? Our training has a history of elitism and the enforcement of elitism. It goes back to Plato's story of writing and the idea that there should be a "legislator" and that we are that guardian.

Information Wars, Impossible Democracies

Amanda Anderson introduced a panel on the crisis of democracy. Despite the longstanding critique of democracy in the humanities, there has been practical support for democracy. Now we need to look more closely at the democracy we took for granted.

Jonathan Albright: Truth Shaping: Understanding the Role of ‘Suggestions’ in Reliable Information Surfacing and Exposure

Albright works on journalism and is the author of Pew's recent project on The Future of Free Speech, Trolls, Anonymity and Fake News Online. A lot of his work is about developing tools. He looks at cross-platform hyperlinks.

Albright started by talking about YouTube, which is above Facebook in penetration. It is a huge time sink - it is a big part of the move from video to online video. He then talked about preemptive mediation: how queries are autocompleted, and how that shapes how facts are considered. This happens on YouTube and Google. What does this mean for truth, when algo-toxic information gets in the way before citizens can formulate ideas and fact-check? He talked about how the "crisis actor" idea was able to de-center the shooting discussion. Using the YouTube API, which is relatively open, he was able to show the network of conspiracy theories.

The attention economy is at the center of this. They have gotten really good at keeping our attention. (All of his graphs are available on Medium.)

Network analysis has a role to play in showing people the environment they are experiencing.

He talked about how platforms like YouTube respond when bad things come up. They try to avoid being seen as media companies. They don't talk about us as citizens. They frame the management of content in terms of free speech and not knowing what they will get. They are protecting their economic values, not citizens. What does this rise in populism mean for freedom of speech as a societal value? Populism may not be a silent majority; it is being enabled and provoked by people.

We should consider these as media companies and monitor them. We need to look at some of the alt platforms like Discord or Gab.

Siva Vaidhyanathan: Antisocial Media: How Facebook Disconnects Us and Undermines Democracy

Vaidhyanathan wrote "The Googlization of Everything". He started with how there is almost nothing we can do about the problems of social media. All the evidence points to our powerlessness. The political, cultural, technological, and other types of solutions probably won't work.

The two poles of social media are Facebook on the one hand and Google/YouTube on the other. (How about Twitter?) He talked about the La Liga app for following Spanish soccer. The app changed under the EU's new GDPR: it turns on the microphone to listen for the background noise of a soccer game that may be pirated, and the GDPR forced them to reveal what they were doing.

All this stuff amounts to tools of surveillance. Zuckerberg wants us to believe that all this surveillance is good for us: it leads to better choices and transparency; we will be more authentic. He then argued that the panopticon doesn't explain what is happening. The panopticon was an instrument of control where the instruments were visible. The new system is more efficient and it doesn't show us the surveillance. The new system is a cryptopticon - a set of discrete, overlapping systems of surveillance. The defaults are that you aren't necessarily watched. They don't want us to know, unlike the panopticon. This is like the state, which also doesn't want us to know. We are being binned (categorized) and then pre-emptively caught.

He then started talking specifically about Facebook. They are too big to govern. Their algorithmic system generates emotion - from puppies to hate. The systems are designed to undermine rational debate and careful thought.

The advertising systems are now so well calibrated that they can pick out groups of 20 people. You can exclude all sorts of people - Jews or Lutherans, etc. Zuckerberg talks about connecting us for free and the flow of ideas - he believes this will improve our lives - this is why Facebook won't question itself.

Facebook scrambles content - it mixes stuff across time and social context - they choose who and what we see. And they manage our political engagements. What's worse, we lack contextual integrity - we can't manage our own social relations, something we learn young - we are thus addicted to Facebook.

There are 2.2 billion people on Facebook. Our studies often focus on the US, but the world is the future. To become dominant in world markets Facebook has hugged dictators - the dictators figure they can then use the advertising system to nudge voters - Duterte in the Philippines, for example. The same systems are used to cheaply spread rumors about the opposition. In other words, Facebook is helping the world's dictators.

The third part of the authoritarian playbook is harassment - swarms of threats that make the opposition anxious and afraid. Finally, there are all the forms of surveillance - Facebook and WhatsApp groups are easy to penetrate and use to monitor people. Dictators are in a permanent campaign - like Trump - and they use Facebook to permanently manage their countries.

Now Facebook has made it easy for people in poor countries to have Facebook data - they drive people into the Facebook ecosystem - Myanmar is an example.

The problem with Facebook is Facebook - Cambridge Analytica is not an anomaly.

Discussion

The academy is doing a great job, but outside the academy no one is paying attention. Vaidhyanathan went on to talk about the problem of democracy as being not an issue of fake news but an unwillingness to sensibly debate issues. We can't even agree on enough to have a debate and try to solve things together. We can't think collectively. He feels the rest of the world needs to stop and think (following Arendt).

I can't help feeling that everything has been gamified. No one cares about the truth; it is about winning. Traditional forms of authority, including the university, have been undermined (perhaps for the right reasons) by a system that amplifies certain ...

There was a good question on bricolage. Are media consumers smart and messing with what they get, or just cows to be advertised to?

Panel on #Charlottesville August 11 & 12

Michael Bérubé chaired a final session that focused on the events in Charlottesville last summer. He talked about coming to UVA 35 years ago from New York. He found Charlottesville a lot more conservative. It is now an anomaly - a blue dot in an angry red area. The white supremacists have discovered college towns. He talked about the change in name of the park that brought the right to C'ville. The assault was about much more than a name change and a statue. It was about ideas - the ideas of the Enlightenment. These ideas were founding ideas for the nation and were betrayed. Freedom of the mind means freedom of the body. Thus the ideas of the Enlightenment were betrayed and are under attack.

Symbolic violence is being addressed symbolically while structural violence isn't addressed.

Louis Nelson, School of Architecture and Vice Provost for Academic Outreach, University of Virginia. Nelson's position is about how the university reaches out. The events of August have shaped the position and its activities. They are destabilized and are working through what it means to be a university and to be a scholar. But for many, August 11-12 was not new. For many, the University is still not listening. It's partly about the agency and complicity of the University.

  • Who has voice and agency?
  • How is the University complicit?

Nelson's research is about the built environment and how architecture frames and influences things in ways not captured in the documentary record. Despite this work he hadn't thought about C'ville, so now a group has been looking at the local environment and looking for the evidence of slavery on campus. Only in the last 10 years has there been a conversation about inequity on campus. This continuous press has been inadequate to deal with the continuing preference for the University's legacy.

The marches on C'ville were known to be coming. The University planned a dialogue on race and inequity. There was a robust framework of events, which was closed down when the Governor of Virginia declared a state of emergency. Many were upset that the University avoided the events downtown in favour of an intellectual event. There are moments when academics need to be citizens first. When do we stand with our neighbours?

Confronting a University complicit with racism: the University was the largest landowner and employer after 1865. What did it do then? What is it doing now? There is now flash funding to support initiatives that lead.

What is the right response if you think August 11-12 was a watershed moment? We have a direct responsibility to use our positions towards issues of social responsibility.

Deborah McDowell, Director, Carter Woodson Institute of African and African American Studies, University of Virginia. McDowell talked about the illusion of progress. She talked about how difficult it is to engage in critique in institutional contexts. She talked about how her remarks are intentionally tepid.

Her reaction to the events was visceral. She understood them through a context of intergenerational racial terror. She mentioned a history of violence. She placed the events in historical perspective. Her reaction at the time of the Unite the Right event was to call on academic activities - to teach history and perspective. She reaffirmed the moral and intellectual value of knowledge and tolerance.

Is it enough to know?

She showed a story map online at http://illusion.woodson.as.virginia.edu/ . The events of August 11-12 were not just about the statues, but one still needs to address the statues. She talked about the unveiling of the statue of Lee, with Klan marches, festivities, and cross burnings. UVA was one of the leading centers of eugenics and provided research used to sterilize people.

The Illusion of Progress document has been successful, but what next?

The University response has been generous, but it is difficult to have certain conversations within the institution. Has UVA healed?

What can be said, and what can be claimed, under the framework of reconciliation? Is reconciliation a transformative fiction or a fiction of transformation? Is anger and frustration a legitimate response? Can we view reconciliation as a problem? How can knowledge be produced that will disrupt practices, institutions and knowledges? More knowledge is not what is needed; disruptive knowledge is. Does the expression of conflict with institutional structures constitute a betrayal? Knowledge and intellectual activity are not enough - they don't transform by themselves.

Discussion

There was a question about student responses. Students have confronted the institution. They wanted to talk about this in class.

Are the questions of race national? Could they be framed globally? Is humanism a failed project? The UVA approach has been to focus on the local.

There was discussion about despair. What if the humanities simply aren't adequate? What if we are left feeling helpless? What if reconciliation just can't happen, or shouldn't happen, or doesn't follow truth? Why should faculty of colour and students of colour have to take on all the emotional labour when these things happen?

And that was the end!
