
AI Ethics And Society

These notes are on the AI, Ethics and Society conference organized by the Kule Institute for Advanced Study. They will have all sorts of gaps and mistakes.

Day 1: Wednesday, May 8th

The conference was opened by Reuben Quinn who reminded us of the importance of the land.

Randy Goebel: Explain Yourself

Randy started the conference talking about what AI is. It is the science of what is involved in gathering and using data to be intelligent. We are not building artificial humans. We know how to do that. What we want to do is to create new types of intelligences. Deep learning has drawn attention to AI, but it is not the only approach.

He talked about Ted Hewitt's article in the Globe and Mail, where he talked about the dangers of "economizing on thinking itself" and asked, "How much of the black-box working do we need to understand to trust it?"

He talked about the cost of inefficiency in legal decision making and the Jordan case which has led to a whole mess of cases that have gone on too long being thrown out. Randy's lab has been developing AIs that interpret legal cases and summarize cases. They are trying to create AIs that will help speed up the process of law. The problem they are facing is that lawyers don't want to rely on AIs. He talked about how Google Translate has not solved the language problem.

Randy talked about how explanation is debugging. Deep learning is not debuggable.

Without progress in language processing we won't be able to trust the systems we are using to speed up interpretation (information extraction, summarization). This is an ethical issue.

Explanation is essential. We can't progress if we can't get explanations. He talked about the importance of creating explanations and showed a visual explanation. He gave an example of a rule-based AI - expert systems from the 1980s. The problem back then was capturing expertise - it had to be done by experts and was time consuming. Machine learning was meant to build models by machines rather than by experts. They replaced the human-developed model with data-driven models, but those models can be hard to debug.
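A minimal sketch of the contrast Randy drew, with everything here (the rule, the income/debt features, the toy data, and the choice of a random forest) invented for illustration rather than taken from the talk: a hand-written expert-system rule can be read and debugged directly, while a model learned from data has to be probed through its behaviour.

    # Hedged sketch: hand-written rule vs. learned model (toy data, invented names).
    from sklearn.ensemble import RandomForestClassifier

    def expert_rule(income, debt):
        # 1980s expert-system style: the "model" is an explicit, inspectable rule.
        return income > 50000 and debt / income < 0.4

    # Data-driven alternative: the "rule" now lives inside fifty learned trees.
    X = [[60000, 10000], [30000, 20000], [80000, 60000], [45000, 5000]]
    y = [1, 0, 0, 1]  # toy labels standing in for expert judgements
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    print(expert_rule(55000, 15000))        # True, and we can see exactly why
    print(model.predict([[55000, 15000]]))  # a prediction, but no single rule to read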

He quoted a paper from McCarty to the effect that one way to debug is to tell the AI that it is wrong or we can provide more labelled data.

For Randy we need to be able to solve the language and representation problems so that providing more training data is not our only way of debugging.

We need to make sure people are knowledgeable about AI to be able to understand when and how to debug the systems. We need to spend more time on the explanatory methods because those that we have don't scale. We need AIs that work better as instructors.

Day 2: Thursday, May 9th

Rafael Capurro: The Age of Artificial Intelligences

What does AI mean in a broad historical perspective? AI is the spirit of our time that conditions but does not determine our action. He told a story based on his own life, as he was involved in this history.

The question evolved from "Whether machines can think" to "Whether it was possible for machines to show intelligence."

The ability of the artificer to produce a simulacrum of the human has fascinated people. The automaton has evolved from the Golem to what we have now. The term Artificial Intelligence was first used at a Dartmouth workshop in 1956. Capurro talked about cybernetics and ideas about automata. He quoted Marx and Wiener. The automatic machine is slave labour which will produce labour problems.

Joseph Weizenbaum published his book 10 years after Eliza and it was an example of self-criticism. Capurro talked about the number of philosophers, like Dreyfus, who were inspired by phenomenology to critique the claims of artificial intelligence. Information ethics back then was new. It was seen as a subset of engineering ethics.

He heard Anscombe give a paper on "Man and Essences" that provided a proof of God based on language. Anscombe asked him about angels and Capurro went back to think about angels as intelligences that are separated from matter. Angels are intelligences that don't need data? The etymological root of angel is angelos or messenger. In a secular age, the idea of creating artificial intelligences is that of creating angels. Capurro went on to describe how creating angels is, for Pascal, also the creating of beasts.

In the 1990s there was a discussion about strong and weak AI. The strong dualistic thesis came under attack. The matter matters - the physical medium of human intelligence became important. The classic articulation of AI as symbol processing was also questioned as it became clear that humans did not process symbols.

AI in the 1990s became an issue of distributed intelligence. AI became a matter of connecting human intelligence into the human network. Information ethics took off. Robotics became important. Who and how should responsibility be assigned? Does the possibility that the body can be enhanced change human responsibility?

He talked about a report arguing that much research is human-centred and that it should look at other types of intelligence, like animal intelligences.

Community of values - "We shall not lay hand on the body". The individual should be respected and their data is in some ways part of their body. Habeas data extends habeas corpus. We have to look at the issues around the digitization of the human and how to protect it.

Building robots becomes a social and moral issue. We need critical reflection on this and intercultural reflection. How do robots become part of the shared Japanese space and mind?

Now we have the Internet of Things (IoT). IoT is in the process of evolving from an idea to becoming a thing. The insatiable desire for data by industry is behind IoT. Likewise humans are designing their lives to be always online (as data.) Ideas are powerful. Capurro talked about how tools carry ideas - in designing tools we design ways of being. (Winograd and Flores) The way modernity conceives objects external to us is changing.

AI is about ontology - what it means to be - to the being of beings.

Artificial intelligences - in the plural can demystify AI.

The project of enlightenment - the digital is part of this, but we are all beginning to have concerns. The task of taming this is in its early stages. It is universities, research organizations, standards organizations, and governments who need to step up to this. He gave the example of the IEEE Ethically Aligned Design report.

Jason Lewis: Indigenous Protocol and AI Working Group

Lewis started by reminding us that the settlers who settled this land thought of themselves as ethical. Lewis sees the discussion about AI ethics as a form of bug hunt - just making better what we have. He believes instead that we have some fundamental category errors. The assumption is that we just have to deal with bias. Algorithmic bias is the new redlining - it is the logical expression of a culture that ensures that a certain set of ideals comes out on top.

The people that Jason works with want to start from a different point. The IPAI (the Indigenous Protocol and AI working group) is a group that is excited about the potential, but worried about the assumptions of the culture of AI.

How can we conceptualize AI systems based in indigenous values? They use the term indigenous protocol to emphasize the idea of procedure that does things the right way. The worry is AIs based on the same values that fostered genocides and climate change - the same biases of creators with less accountability.

They have written essays like Making Kin with the Machines. They create artwork that could establish healthy relations with non-human intelligences. They imagine new forms of AI systems.

Their long term plan includes:

  1. Use the workshop to grow the conversation with their own communities.
  2. Participate in the larger discussion about AI - the conversations need indigenous people. People say they are creating conversations and want brown people involved, but there are few of them. The conversations need them because indigenous people think about things better.
  3. They are creating systems like Hawaiian programming languages and operating systems

Lewis has been told that computing has no culture. Computing folk are trained to separate culture from computing; yet we are making AIs based on white ideas about intelligence. Lewis wants to build systems that come from indigenous perspectives.

He pointed us to http://www.indigenous-ai.net/ .

Kim Tallbear: Decolonizing Science & Technology

Tallbear has worked mostly with genome science and talked about how colonial science can be. There are ideas of hierarchies of man and race even today. Race got reconfigured as population. This is science, not pseudoscience. She gave examples of Canadian research questions around nutrition and indigenous bodies. Science, when it engages indigenous peoples, treats them as objects. She then gave examples of collaborations that are much healthier. She then talked about training indigenous people as scientists. She talked about good governance and the importance of understanding histories. It is naive to think the tech is just a neutral tool.

She then read from a statement that came from Indigenous Studies to SETI. She advised SETI and talked about some of the recommendations that she thought they needed to consider:

  • There should be protocols of care around encounters
  • There should be clear intention of benevolence and thought given to encounters
  • SETI researchers should consider their positionality
  • Cultures are not intelligent and there is a history of misunderstanding other cultures - intelligence is contextual
  • Contact between cultures is based on fantasies and anxieties
  • What does more or less advanced mean? Be careful of using ourselves as the standard

They had one actionable item - a mission statement of intent with ethics, goals and best practices.

Responsibility to respect life that doesn't want contact and that contact is not

Jonathan Cohn: "Sunspring" and the Wilful Incoherence of Algorithms and Digital Culture

Cohn started by talking about his recent book The Burden of Choice that talks about recommendation engines. They don't really deal with biases and inequalities. They seem to reinforce them. He asked why the engines are so bad. The companies say that they just reflect ourselves back to us.

Cohn is now looking at algorithmically generated art. He wants to make the case for studying AI art. He looked specifically at Sunspring. The artists fed lots of sci-fi scripts to an AI and then generated a new script which they then had acted out.

Sunspring is intriguingly weird, but why talk about it? Cohn talked about different types of readings, but all these readings assume that there is a real reading. What does one do with a text generated by an AI that may not have a meaning or reading at all? How should one interpret incoherent texts?

Ethically the only thing we can't do is not interpret. Many theorists talk about the encounter with the other. Wouldn't encounters with incomprehensibility therefore be an ethical experience with otherness? These encounters make us aware of our limitations. Otherness may be creativity.

The film shows the script at the beginning. Cohn compared the script to how the human film-makers/actors interpreted the script. Characters read phrases like "I don't know" with knowing looks - the "I don't know" is one place where the script really knows.

Cohn talked about scripts and how they are read. Are robo-scripts like Sunspring a mirror to our culture? If so, how so? So many systems use our data to mirror ourselves in commodified forms. Sunspring is a different type of mirror. We read Sunspring trying to prove it mirrors us - looking for something we recognize. Sunspring really challenges us.

What does it mean to recognize the otherness of AI as valuable instead of trying to recognize ourselves in it?

Jill Clayton: Assessing Privacy and Ethics in Big Data Projects: A Regulator's Perspective

Clayton is Alberta's Privacy Commissioner. She talked about how important the issues around AI are to privacy commissioners around the world. She talked about how she had given a talk to tech startups, most of whom had ideas that involved personal information. She asked if we should use technology just because we can. Privacy used to be seen as a barrier. Now things have changed and privacy is being seen as a driver that can create opportunities. She feels that 2018 was a watershed moment in the privacy community. Cambridge Analytica being in the news has made privacy and data something everyone is aware of. It has brought the value of information to the public's attention.

She talked about the recent Uber breach where millions had their data hacked. The ways that information is safeguarded has consequences. Some companies have failed. Some are talking about breaking up companies like Facebook. Regulation is being talked about. The issues are not limited to the private sector. StatsCan got into trouble with their plan to gather banking information. Smart cities are a topic of discussion and we are seeing public backlash.

GDPR is another reason 2018 was a watershed. GDPR requires businesses to be accountable for how they manage our information. There are penalties and enforcement. Breaches have to be followed up by contacting those affected. Clayton talked about how other jurisdictions are measuring themselves against the GDPR. So ... how do we move forward now that we have the attention of businesses? Ethics is not mentioned at all in the legislation.

  1. We need to ask how our regulations measure up. Fairness is rarely mentioned. She talked about how data holders have to submit privacy impact assessments in cases that involve health information.
  2. She talked about a document her office created on [[https://www.priv.gc.ca/media/2102/gl_acc_201204_e.pdf| Getting Accountability Right with a Privacy Management Program]]. They are consulting on ethical assessment tools for businesses.

She talked about a resolution from privacy commissioners. Commissioners are discussing how universal the ideas about ethics are. The European Commission has issued guidelines on trustworthy AI. There is an encouragement of interdisciplinary discussion.

In sum, it is important to go beyond just privacy to think about ethical considerations. We need strong regulations as we can't rely on big businesses. We should keep our minds open to different viewpoints.

She talked about being surprised how many breaches there are in Alberta. So much snooping.

Jason Fung: Artificial Intelligence and Public Policy Development

Fung is a legal officer in the Government of Alberta. He joked that his job is often to be the naysayer in the room. He was originally going to talk about six types of public policies that are affected by AI: research, expertise, labour impacts, data infrastructure, data-driven policy and governance. He focused on the last two.

He gave some examples of how governments are using AIs on things ranging from managing immigration to modelling disasters. Applications will probably bubble up without being seen as AIs. He talked about what might make us trust such systems. He talked about a Michigan system (MiDAS) for finding fraud. Those falsely accused of fraud were hit with massive penalties despite a high error rate.

Fung also talked about a New Yorker article on New York City's flawed attempts to make algorithms accountable.

The Canadian Federal Government has a recent directive to ensure that automated decision-making systems are implemented well. He talked about this ambitious directive. It distinguishes different levels of impact, with different forms of assessment depending on the impact: Level 1 requires no assessment, while the higher levels require significant assessment. Humans don't need to be in the loop for lower levels (because the impact is not considered significant).
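A minimal sketch of how such a tiered scheme might be encoded; the level names and requirement strings below are placeholders I invented, not the directive's actual text. Only the broad shape follows the notes: no assessment at level 1, more at higher levels, and no human in the loop at low impact.

    # Hedged sketch of a tiered impact-assessment scheme (invented details).
    REQUIREMENTS = {
        1: {"assessment": "none", "human_in_loop": False},
        2: {"assessment": "basic", "human_in_loop": False},
        3: {"assessment": "significant", "human_in_loop": True},
        4: {"assessment": "significant", "human_in_loop": True},
    }

    def oversight_for(impact_level):
        """Return the (assumed) oversight requirements for a given impact level."""
        return REQUIREMENTS[impact_level]

    print(oversight_for(1))  # {'assessment': 'none', 'human_in_loop': False}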

He closed by talking about an AI regulation bill that has been proposed to get at the larger companies.

He talked about what it means to get peer review when we don't have standards. The standards need to evolve.

There was a discussion about whether AIs could or should be certified. There was a discussion about whether an AI is really making a decision or whether that is an anthropomorphizing of the machines. It is really humans making decisions differently.

Victor Silva: The Influence of Social Networks and Algorithms in Political Campaigning

Silva is interested in how algorithms affect elections. He talked about the Mueller report and the Russian attempts to influence the US election. The Russians created bots to do this. Regarding Tay, he asked whether it was the bot that was bad or the humans that interacted with it.

Silva then talked about how one can try to influence using social media. One spams large numbers of people, which is cheap. He then gave as an example the Brazilian election. How did Bolsonaro win? Cambridge Analytica opened a branch in Brazil. Large numbers of tweets and posts were posted by bots, creating the impression of trending ideas. They created a cyborg (human) network of people too. In reaction, people talked about Bolsonaro under other names (he who should not be named). This created a situation where Bolsonaro was only being talked about explicitly by his backers.

Critics of Bolsonaro went to the street and backers then tried to hijack the street protests. Twitter and Facebook created "war rooms" to try to control the situation. Backers moved to other apps like WhatsApp.

We have extensive studies that show how armies of bots have been set up by countries.

To finish he talked about challenges ahead. He thinks we need to ask what it means to be an e-citizen. Are bots e-citizens? How can we make social media more robust?

Silva's slides are available at: https://webdocs.cs.ualberta.ca/~vsilva/

Bettina Berendt: AI for the Common Good?! Pitfalls, Challenges, and Ethics Pen-Testing

Berendt started with some positioning. She is part of a project called VeriLearn that is trying to do software verification of machine learning systems. She also talked about the wave of racism in Rome.

We all want to be good - don't we! Of course AIs are good for someone. Good is not enough so now people talk about the common good. The Asilomar AI Principles talk about shared values. But what is the common good? Is it the greatest good for the most people? Is it deontological? Is it procedural? Is it distributive or timed to facilities? The problem with utilitarian approaches is how to calculate the greatest good.

She talked about an example. There is lots of data science on drugs. She asked "what is the drug problem?" Who gets to formulate the problem? What problem are we trying to solve with data science? What role does knowledge play? AI folk think knowledge is important. (So do academics.) There is an ambivalent relationship to data at the heart of AI. Machine learning has a conservative bias - is it discrimination? Is the algorithm the problem?

Can we solve these problems? Berendt is proposing PEN testing (penetration testing) for ethics. The idea is to pay someone to break assumptions and to show that a system is not as good as we think it is. She argued that it should also be playful.

Because people said this sort of testing is already happening she looked at papers to see if it really was. She found that no one really looked at what the good was - they didn't discuss what knowledge was and they didn't really look at different perspectives.

We can also PEN test ethics discussions. She used it to test the Moral Machine Experiment. She got us to test the Experiment by asking questions of it. She pointed out that even with a simulated ethical experiment one can provoke a testing discussion.

She closed by mentioning the book Terror.

Jaco Du Toit: Ethical Best Practices for Industry and Government Developing Responsible AI Services

Du Toit talked about the UNESCO perspective. He talked about African priorities when it comes to AI. There was nothing about jobs and ethics. Infrastructure was important, as were skills. UNESCO consults with communities to understand how problems come up for different people. We in the international community should pay attention to the issues other regions have. UNESCO identified some actions:

  • Enhance technical infrastructure
  • Increase AI-related human capacity
  • Support broader stakeholder engagement
  • Facilitate ongoing knowledge exchange
  • Maintain strong ethical human rights guardrails

UNESCO has a range of instruments around ethics and emerging technologies, like the Universal Declaration on the Human Genome and Human Rights.

He talked about the convening power of UNESCO to talk about issues. They provide a platform for international policy discussions. They support member states and advise the Director-General. They work in close cooperation and provide input to UNESCO's WSIS actions.

He talked about the principles for development of AI from their preliminary study of AI.

  • Human rights
  • Inclusiveness
  • Flourishing
  • Autonomy
  • Explainability
  • Transparency
  • Awareness and literacy
  • Responsibility
  • Accountability
  • Democracy
  • Good governance
  • Sustainability

There are lots of lists, but theirs has the value of having been developed in an international context. They also developed ideas for best practices. They have the ROAM indicators that can help map the ecosystem in which development takes place and how to test it.

He talked about the importance of interdisciplinary discussion around artificial intelligence.

Ivor Cribben: Statistical Strategies for Improving Machine Learning and Artificial Intelligence

Cribben's message is that we need to understand what is happening in these models. He talked about the understanding of models in general - we don't all need to understand the models, but we want experts to understand them.

Cribben does work on fMRI data. They take brain images as people do tasks. Areas of the brain get activated or connected during tasks. He talked about how they can then try to build models of things like schizophrenia.

He talked about the advantages of certain models over others. High dimensional models can be hard to interpret. Ideally you get both prediction and information, but often you don't get both.

He also distinguished algorithmic models from statistical models. The algorithmic models create a model from lots of data, but the model may be a black box. The statistical approach tries to develop a model that captures the phenomenon and which we understand.

Statisticians believe the statistical models are more interpretable. We can test the parameters. Alas, often the models are poor emulations of nature. Often algorithmic models are more accurate.
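A minimal sketch of the trade-off on synthetic data I made up (Cribben did not show this code): a statistical model, here a linear regression, exposes parameters we can inspect and test, while an algorithmic model, here a random forest, may fit well but offers no simple parameters to read.

    # Hedged sketch: interpretable statistical model vs. black-box algorithmic model.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                      # synthetic predictors
    y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

    stat_model = LinearRegression().fit(X, y)
    print("coefficients we can inspect and test:", stat_model.coef_)

    algo_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print("forest R^2 on the same data:", algo_model.score(X, y))  # good fit, opaque model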

In summary he thinks we should be flexible. Study the data first. Find a model that gives a good solution. Use the computational power.

Howard Nye: Technological Displacement and the Duty to Increase Real Incomes: From Left to Right

Nye started with what he thinks is the most pressing issue - the technological displacement of workers and the consequent stagnation and decline in their real wages. He feels there are three common misunderstandings:

  • There is an assumption that the displacement is in the future
  • It is assumed that it will only happen with more advanced systems

Many economists argue that tech displacement has been the chief cause of stagnant wages over the last 50 years. Our wages peaked in Canada in 1977. Our real wages are declining.

Strangely, unemployment has also been low and it hasn't led to rising wages. Automation has been responsible for 80% of the decrease in manufacturing jobs, not outsourcing.

There are winners and there are losers - the losers are the blue collar workers. This is called the Great Displacement. Many displaced workers get depressed and drop out of the workforce.

The effects of the Great Displacement have been devastating in many ways. Life expectancy is declining. Despair.

It is estimated that 83% of jobs that pay less than $20 an hour will be automated to some extent. A certain number of professionals may also have their work automated, but these groups have the power to mitigate the effects.

Solutions include:

  • Universal basic income
  • Increased public investment in education
  • Increased minimum wages
  • More employee control
  • Invest in "Green New Deal"

His talk then shifted to the ethical issue of whether we have a responsibility to counteract displacement. He feels that both right and left would support such duties.

He started with Nozick's philosophical libertarianism which argues against interference in others. He showed how there are problems with pure libertarianism. Once you don't oppose the infringements needed for even a minimal state, how can you resist other forms of infringement? There may even be duties of beneficence. This leads to his principle of Enforceability of Easy Important Beneficence.

Libertarianism allows for massive inequalities of opportunity. People on the left feel that it is OK for the state to intervene to benefit those with less opportunity. The core moral idea: "All social values are to be distributed equally unless an unequal distribution of these values is to everyone's advantage." This is Rawls.

He then argued that each of these positions (right to left) should be comfortable with investing in ways to mitigate displacement. The hardest case to make is convincing libertarians. He talked about a moral imperative to make some restitution for force and fraud. Almost all property, if traced back, leads to injustice. Further, AI research was funded by the state and paid for by the workers that are now being affected. The final argument was about a green new deal. The transition to green energy and agriculture is needed to prevent our inflicting more deaths on people. In Alberta we need to transition to new industries that don't harm people, and funding a transition could deal with the Great Displacement.

Soraj Hongladarom: Online Reputation, Big Data, and Privacy: A Perspective from Asia

Hongladarom talked about different views in the East about privacy. In the West privacy is respected as a mechanism against the power of the state. It is a kind of shield between individual and the collective state. Individuals need privacy to protect themselves. In the East it is respected more as a function of power of the individuals themselves. Powerful people have privacy while normal people do not.

Hongladarom talked about how conceptions are changing. In the West the conception is that the individual comes first. In China the state comes first. Individuals are expected to contribute to the continuity of the state. In Thailand there is tension between these two positions. There is tension both in and outside the academy.

Hongladarom talked about 5G. Huawei has provided a large grant to his university to study potential uses of 5G. He has to use his part of the grant to look at how 5G can be used to improve the teaching of introductory philosophy.

Finally he focused on China and Thailand. He gave some examples of Chinese uses of AI including one where there are cameras in classes to see if students are paying attention and infer whether the teacher is good. They are monitoring ethnic groups. They have developed a social credit system. And they have encouraged the use of mobile payment apps that then can gather lots of data.

There are apparently attempts in China to question the ethics of these systems.

One can do comparative philosophy. Trust is a crucial concept that is different across cultures. Trust for authorities can be very different from country to country and that changes how people think about privacy.

Trust is not a given; it has to be earned.

Day 3: Friday, May 10th

Robbie Stamp: A Pragmatic Ethics Approach to AI. And a Little Bit of Marvin the Paranoid Android

Stamp was a collaborator with Douglas Adams of The Hitchhiker's Guide to the Galaxy fame. He started with Marvin the Paranoid Android from the Guide. Stamp is interested in how to advise organizations to manage the ethical integration of AI.

Douglas Adams used to tell a story about a puddle waking up and thinking the hole it is in was made for it. We tend to think the world circles us. We also have no sense of deep time and how little time we have been here. We can imagine other minds, but we bring our biases to the imagination.

He then distinguished between accountable for and responsible for. An AI can be responsible for a decision, but not accountable. Accountable means that they can be punished if something goes wrong. If an AI doesn't feel pain then it can't be accountable because it can't be punished.

He then talked about story design. You only get a story if the heroine gets some sort of push back - some sort of challenge or risk or controversy. Stamp's company Bioss is looking at how putting an AI into the story changes things. Accountability is important in the changes his company is interested in. How do accountable bodies like boards or directors have to deal with AI?

He talked about oracles like Pythia (the Delphic Oracle). They give advice, but you make the decision. There is a gap between the oracle and the accountable person. This gap can be dangerous to the accountable. The advice has a family resemblance to advice from a human. But the AI's advice is not like human advice and one shouldn't confuse them.

The next question is whether an AI manages people. What authority does an AI have? Does it slip into authoritarianism?

He then asked about agency. What permissions does the AI have to commit resources without humans in the loop? How is that agency reviewed? The form of ethics that he has found most useful is consequentialism. Stamp has found Dewey quite useful. His key concerns are:

  • The nature of advice
  • The nature of authority
  • The nature of agency

Can we program an AI to be ethical if we can't punish it? Have we seen ethical code ... really?

It is governance and review that can be ethical.

He then talked about abdication - we need to be very careful not to design systems that abdicate decision making to AIs. How could AIs put humans in a situation where they are tempted to abdicate control? Think of the self-driving car in Tempe. The human wasn't really ready to take over. What skills will people lose if AIs take over decisions?

He talked about a Wittgensteinian "family resemblance" network of concepts. At the heart of any ethical approach to AI are the questions of advice, authority, and agency.

Finally, he argued that AI can reveal to us patterns and ideas that we can't see. It can enhance us. We need to separate intelligence and consciousness.

He came back to Marvin the paranoid android. Would we ever want a paranoid android? Would it be ethical to create AIs with meaningful pain receptors such that they could feel pain? He thinks it would be ethically monstrous to create AIs that can feel pain. And ... if we don't then we are left with us humans being accountable.

I find this an interesting paradox. I wonder if an AI has to have pain to be accountable? Why not just build in pleasure?

We had some great questions. Do we really need pain to have ethical decision making? Can we not have forms of goals? I asked about distributed cognition - do we not have a history of creating distributed decision making bodies (like limited liability "corp"orations) that don't really have accountability the way a human does?

Stamp talked about the consequentialist approach - that a pragmatic approach is needed to pay attention and look for unintended consequences.

No one wants AIs to be biased. The bias emerges further down the road in unintended consequences as systems are integrated. How then do we maintain attention to accountability?

Samira El Atia and Donald Ipperciel: Higher Education at a Computational Crossroad: Ethics and Privacy in Using Learning Analytics and Educational Data Mining

El Atia started by talking about the dramatic changes and how they will affect higher education. Not long ago everyone was talking about MOOCs and how they would change education. How could all the prophets of MOOCs talk about changing education without any background in education?

El Atia looks at how people in different disciplines do evaluation of students. She talked about the ethics of gathering syllabi from colleagues. Are these open documents or does one need consent?

She then talked about the pyramid of knowledge. Think about how much data can be gathered by a university. The data is useless if it isn't turned into knowledge. The data has to be warehoused and then analyzed. She mentioned how there are ethical issues just in building the databases of students. Then one needs to look at the ethics of data mining.

She showed a chart of how data mining of educational data can help change education which then changes the data that is mined. Learning analytics should be circular. The analytics should change how we teach which should then change the data. Ethics often kicks in when the data and results go outside the classroom. Students and administrators are more interested in the uses of the data than researchers.

Then Donald Ipperciel talked about privacy. Things have changed so much that they need to revisit the ethics of student data. We are logging lots of data on the Learning Management Systems. Do we really have the consent of students? Can they choose not to use the LMS?

He talked about the Canadian Standards Association Model Code for the Protection of Personal Information.

Students now have to give consent to be able to participate in many learning situations. When we click on Agree to use some service we have signed a contract that hands over our privacy for a service. Privacy becomes a currency. There are, however, limits to the contract law. We have rights to know what is being collected and some rights to control it.

Then Ipperciel talked about privacy as a political question. Many corporations may not be ethical, but they are legal.

Privacy is also a cultural issue. The younger generation has a different approach to privacy. Youth don't seem to worry about privacy as much, or they worry about it differently. Youth see benefit in the transaction of privacy for services.

Cathy Adams: Toward an Ethics of Technology for Educators

Adams started by talking about hearing Kate Hayles and how she joked about how her IQ drops when she leaves her office. Adams talked about how vulnerable she and we are when our machines begin to have trouble. Our intelligence is co-extensive with our artifacts. When things break down the breakdown reveals much to us. The hidden technology becomes visible. The tools lose their readiness to hand and become manifest.

Her talk focuses on schooling. She wants us to think about enacted cognition and the classroom. The classroom extends the cognitive capacities of children. Students learn to use the technologies of the classroom. Adams has to teach teachers to use appropriate technologies in the classroom. Now there are all the computers and educational apps that teachers have to choose between. Teachers have to think about the ethics of these choices.

Adams has developed an ethical framework - technoethics for teachers:

  • Impact Ethics - "Technology is just a tool"
  • Disclosive Ethics - Technology is socially constructed

She then shifted to Heidegger and the shift from the Cartesian thinker to the embedded, Dasein, thinker. To be conditioned is to learn to speak together. We are already in a whispering dialogue with the world. We are embodied and in a rapport with the world that is often intermediated by digital interlocutors.

Furthermore, the cyborgs, or aggregations of people and technologies, can be passed down to others. They can be taught. In school a child is conditioned into extended cognitive systems. The child learns to incorporate these systems. A child in the loop can extend her ways of knowing. Again, we have to understand how teachers have to care about bringing different technologies into the classroom. They curate an ecosystem of cognitive extenders. They have to constantly think about what school is for and how to manage it for the betterment of students.

She talked about an article on AI Extenders.

She then talked about how extenders can also diminish skills. A calculator helps extend calculation but diminishes certain math skills that it replaces. She studies the breakdowns to show us the diminishment. If you lose your calculator can you do calculations in other ways?

Interpassivity is when we let things do things on our behalf.

There was a question about the ethics of all the online toys. There is an issue of the IRIE on this, see http://www.i-r-i-e.net/current_issue.htm

Toni Samek: Altered Academic State: Artificial Intelligence and Higher Education

Samek started by talking about the School of Library and Information Studies and how most of their students are women and online. They don't get to meet many students and many of them don't have the same access. She is contributing as an academic administrator who studies expressive freedom and has to deal with the challenges of expression between students and instructors. She has to deal with the realities of funding.

She talked about and showed examples of the media coverage of AI which runs from caution to issues like ethics washing and labour issues. We see articles on how ethics has to keep up with economics.

She talked about the ethical issues facing professionals. Academics do go missing. They get denied entry to countries. They choose, like Hinton, to not let their research be used by the military. We in education have a responsibility to teach ethics in ICT programs.

She talked about ethics washing and the lack of diversity in the AI field.

Then she talked about the Higher Education Professional Literature. She listed many of the AI projects in higher education. She then focused on the CAUT policies that are relevant. How can we protect scholars at risk using our information systems? How do we think about our email as data?

She reflected on what would happen if she was challenged about assigning people to teach online. 75% of their students are online - SLIS can't afford to have faculty not teach online. She quoted CAUT guidance on various issues.

She is also looking at all the university policies in terms of AI. There are a lot of them. How will higher education be influenced by AI?

She suggested that as an administrator she can't just write what she wants about academic freedom, but has to manage academics and students in ways that may infringe on their freedom. The unit doesn't have the capacity to do everything it should do so they are trying to leverage other forms of intelligence. The pace has changed.

Jennie Shin (Okan Bulut and Mark Gierl): Development Practices of Trusted AI Systems Among Canadian Data Scientists

AI systems have become an increasingly key element of decision making systems in our society. There is a certain skepticism because systems can't be trusted. Trust depends on things like accuracy, security, fairness, and explainability. She talked about the Kaggle survey of data scientists. Shin's study is to find ways to make systems more trusted. They used the Kaggle dataset and ran a cluster analysis.
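The talk did not give the details of the analysis; as a minimal sketch of what a cluster analysis over survey-style responses could look like (the data, the number of clusters, and the choice of k-means are all my assumptions, not theirs):

    # Hedged sketch of a cluster analysis over survey-style data (invented data).
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(500, 8)).astype(float)  # fake Likert-style answers

    scaled = StandardScaler().fit_transform(responses)           # put answers on one scale
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
    print(np.bincount(clusters))                                  # respondents per cluster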

They looked at Fairness, Interpretability and Reproducibility. They categorized the answers under these headings and compared the types of answers. They concluded that to have more trust we need:

  • More reproducible projects
  • For this researchers need more time

Ali Shiri: Exploring Artificial Intelligence and Ethics: A Metadata Analytics Approach

Shiri started by comparing Google's and Microsoft's AI ethical principles. He then described his method, which was to gather metadata records of publications about artificial intelligence and ethics. He used Scopus as it is widely used and it covers peer-reviewed publications across disciplines. He got 1508 entries and downloaded the data into an Excel spreadsheet. He then used OpenRefine, IBM Watson Analytics, AutoMap, and Voyant.

Then he presented his findings. Canada is the 6th greatest contributor. What was strange is that arts and humanities account for only 9% while CS accounts for 29.9%.

He did a topic analysis showing visualizations of key co-occurring terms. "Information ethics" and "data protection" are two key concepts. Lots of the terms have "information" and "data" in them.
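A minimal sketch of counting keyword co-occurrence across metadata records; the example records below are invented, and Shiri worked with tools like OpenRefine and Voyant rather than a script like this.

    # Hedged sketch: counting co-occurring keywords across metadata records.
    from itertools import combinations
    from collections import Counter

    records = [  # invented stand-ins for Scopus keyword fields
        ["artificial intelligence", "information ethics", "data protection"],
        ["artificial intelligence", "data protection", "machine learning"],
        ["information ethics", "data protection", "privacy"],
    ]

    pairs = Counter()
    for keywords in records:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1          # one count per record the pair appears in

    print(pairs.most_common(3))          # most frequent keyword pairs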

Medical ethics and bioethics are two areas gaining traction. He showed facets of key terms.

It is a complex and emerging field. We need to talk about data and information. Conceptualization of AI ethics needs to build on computer ethics and information ethics.

Dongwoo Kim & Kai Valdez Bettcher: AI Ethics in East Asia and Developmental Statism: Implications for Canadian AI

Valdez Bettcher and Kim work for the Asia Pacific Foundation of Canada. The Foundation does work on Canada's relations with Asia.

They talked about the importance of national policies. Everyone thinks they are ethical. Our sense of our ethics is partly set by the culture and national policies. Being ethical is in a context. They have visited and interviewed people through Asia.

China's Next Generation AI Development Plan (2017) is a guide to how China thinks about AI. They want to be world-leading by 2030. Japan has formed a strategic council on AI under the cabinet. The government is working with companies to provide a basis for AI to be integrated into the economy. Korea has the idea of I-Korea to promote industrial innovation and to solve social problems through emerging technologies. I-Korea will address 12 areas from national defence to medicine.

What underpins the three models is the 20th century growth model of a strong bureaucracy collaborating with industry. All three countries have changed quickly from pre-industrial to modern cultures. It is a developmental state model. The state plays a much bigger role than in Canada. The state wants to promote industrial development - this is an ethical choice. The state can also tell companies what to do. The state is an adjudicator of ethics. Economic development is woven into these cultures. Finally, with a directive state one has a weaker civil society.

How does Canada compete with such a scenario? There is a discussion of AI as if it were a race and there will be winners and losers. In Canada we are trying to figure out how to compete without having a developmental state model. Canada has its own advantages, and Canadians don't appreciate the prioritization of development and state intervention.

Canada lacks a tradition of state regulation of ethics.

"Data is the new oil" - If Canada can't have a balanced discussion about oil, why would we have a discussion about data? We do have a stronger civil society that may lead to arguments, but it may mean more robust results. Canada has a pluralism of citizenry - a diversity that can be an advantage.

They are developing a white paper on Canada's Asia Strategy for AI. They talked about formal diplomacy, trade and investment, and people-to-people engagement.

The investment side needs attention. There are concerns about East Asian investment in AI in Canada. Investment becomes a policy issue. We seem to have a technological iron curtain.

We know about trade in traditional commodities, but we don't know much about data flows.

There is a real opportunity for Canada. Canada is looked to by many countries.
