philosophi.ca
AI 4 IA 2020: AI for Information Access

These notes are on the AI for Information Access (AI 4 IA) conference, which was streamed on YouTube. The conference was organized to commemorate the International Day for Universal Access to Information. Note: these notes were written live and therefore will have typos, and I will have missed important points. Please email me with corrections. I'm proud that the Kule Institute for Advanced Study and the AI for Society Signature Area, both at the U of Alberta, facilitated this conference.

Opening

Cordell Green introduced and chaired the opening talks.

Sadia Sanchez Vegas (UNESCO Cluster Office for the Caribbean, Kingston, Jamaica)

Dr. Sanchez Vegas gave the opening talk for the conference. She talked about the vital importance of information and of information technologies like artificial intelligence in this time of Covid-19. It is important that AI be guided by our values. AI could pose threats to human life. It raises questions about the ownership and use of our data. UNESCO advocates for a humanistic approach with accountability for the workings of algorithms. She talked about the introduction of AI in a small island country like Jamaica. Tourism is important to many smaller countries, and AI could contribute to tourism and to reducing pollution. Or it may disrupt the employment market. She talked about a Caribbean AI initiative. For all these reasons, the issues around AI are highly important to small island states.

Hubert Gijzen (UNESCO Multisectoral Regional Office for Southern Africa, Harare, Zimbabwe)

Gijzen started by reminding us that today is the International Day for Universal Access to Information, which is why this conference was held. He then reminded us that information access is a fundamental right. This is especially true in times of Covid, when accurate information is important. Misinformation spreads even faster than the virus. Here is where the theme of the conference comes in: AI could help to fact-check on a large scale.
He then read the official remarks of the Director-General of UNESCO, which I paraphrase: In a world where Covid-19 has caused chaos, information access is more important than ever. UNESCO partners with the rest of the UN family to acknowledge the value of information. The online events will underline the importance of universal guarantees of the right to information. Governments must commit to the common good. They need to build resilient environments. UNESCO firmly believes that access to information should be recognized in law.

Dorothy Gordon (Chair of the UNESCO Information for All Program)

Gordon talked about the priorities of IFAP and how they apply to the current situation. She talked about how the average citizen became aware of the Cambridge Analytica scandal, but now, as Zuboff puts it, we are in the age of surveillance capitalism. Private human experience can be taken without our knowledge and then analyzed using big data and AI techniques in order to predict and modify our behaviour. This has been called techno-colonialism. We have come a long way since thinking that access to information was simply a matter of building infrastructure "and they will come." Algorithms have been weaponized. Content matters. Language matters. Curation matters. And yet there is a sense of helplessness as we watch the speed of innovation/disruption while our institutions struggle to keep up. Technology acts as a dimension of power, allowing some people to extract more than others. With the pandemic we have had to take our lives online. It was not just governments that went online, but also criminal elements. It took the pandemic for us to realize that global resilience is dependent on accurate information. User-generated content, which used to be seen as a sign of greater participation, is now seen as a threat. The pandemic has made people more aware of the importance of equitable access to information and information technology. We have an infodemic adding to the pandemic.
She talked about the digital inclusivity paradox, which tells us that the vulnerable will be made more vulnerable. The digital divide will get worse. No country is exempt. She concluded that we need a fresh approach that is human-centric. We must act to develop evidence-based policies and systems.

Jandhyala Prabhakar Rao (Director, India Centre of Excellence in Information Ethics)

Professor Rao started by talking about a conference on Information for All in Covid Times. They came up with a Hyderabad Declaration, which will soon be released. He then started his presentation by talking about access to information in India. There are traditional means like the fine arts, festivals, and texts. There are now technological means where we get access through technology. Information, knowledge, and development are linked. He talked about how societies have always been knowledge societies using technologies. He then discussed how artificial intelligence is important to this infodemic. AI could be used to counter misinformation on the web in this time of crisis. He talked about multi-dimensional data mining. He talked about the importance of translation and multiculturalism. But the ethical use of these new technologies depends on there being a democratic society. The technology should contribute to the democratization of information and the strengthening of democratic structures.

Session 1: Artificial Intelligence for Information Accessibility

Edson Prestes: AI - A domain that matters to us all

Prestes talked about some of the problems of AI. One problem is that we tend to trust AI, thinking it is neutral. Instead we need to understand what AI is, which also means understanding human intelligence and how little we agree about intelligence. He talked about different types of intelligence, from linguistic to creative. Likewise, artificial intelligence is not like any form of human intelligence. It isn't more/less of our intelligence. UNESCO is developing a Recommendation on the Ethics of AI.
The working group started in 2018 and has developed a draft Recommendation which is being discussed. He talked about the document (the Recommendation). The goal is to guide the actions of states and individuals, and to promote human dignity and gender equality. They define AI systems as those that can process information in ways that resemble human ways. In the document they defined values and principles. He talked about how member states need to ensure that the governance of AI is ethical. They should assess the impact of AI on their citizens and environment. They should support improvements in equality and not exacerbate digital divides. They should promote education around AI and AI ethics. Special attention should be paid to economies that are labour-intensive and therefore likely to be affected by AI. Now that the Recommendation has been submitted, the member states will need to discuss and debate the document for possible adoption. Alas, I don't think the USA is part of UNESCO any longer.

Kate Kallot: Scaling AI Beyond Borders

Kallot started by talking about the opportunities of AI. AI is driven by three factors: data, hardware, and algorithms. Most of the innovations have been coming out of a small number of regions. She talked about how AI is being developed and discussed in different regions. In Europe, for example, there is a lot of discussion around privacy. But what about Africa? In Africa there are a number of grassroots initiatives. She talked about emerging areas of innovation in Africa and emerging communities like Data Science Africa and Alliance AI. With Covid there are new challenges with the closing of universities and data scientists unemployed. She talked about a neat example of a challenge around predicting flooding in Malawi, where they tried to connect problems with data scientists. She gave examples of some of the innovative solutions emerging. She concluded that if we want ethical AI we need better collaboration with the private sector.
Wendell Wallach: Access, Education and Inclusive Governance

Wallach started by pointing out that there are three things that are needed:
Without all three, citizens will fall under the digital poverty line. He gave an example of a startup that is trying to help farmers predict conditions and insure them. He talked about issues with access and how important education is. We need both the skills to build digital infrastructure and the broad digital literacy to know how to use technology appropriately. We need digital cadres of experts to challenge misuse and to imagine appropriate uses. We have challenges from weaponization to surveillance to technological unemployment. New technologies have generally created more jobs than they eliminated. Many skeptics think AI will be different in that it will automate more jobs than it creates. The reason is that it is in the interest of capital to automate as many jobs as possible, because then all the profits go to capital. Finally he turned to the need for inclusive governance. The speed of technological development outpaces the capacities of governance systems. We need new, agile governance. Legislators need to rely on experts much more. A second problem we encounter is that some entities have become much more powerful. Technology leaders can dictate things without input from those affected. This is true of national and international governance. We need multi-stakeholder discussions that have adequate representation from small nations and first peoples. We need representative representation. Again, build a cadre and select individuals who can represent collective needs. He ended by talking about principles. The principles tend to be only about AI. There is an opportunity to do more. We need a new social contract so that all of these technologies are deployed for the good of all.
Discussion

There was then a great, but short, discussion around issues like digital imperialism and the tension between

Session 2: AI as an enabler in accessibility and accessible education

Coetzee Bester: The essential relation between information ethics and artificial intelligence

Bester started with a question: "can we trust AI or is it already corrupted by human beings?" He talked about how human information ethics and AI ethics need to converge. If machines can do what people do, then these two types of ethics will need to address the same topics. We shouldn't let there be two different sciences. Algorithms play a crucial role in what information people get access to and how the data is analyzed. AI is therefore likely to have a significant impact. What then are the prejudices of the developers of the algorithms? How have developers framed AI in ways that influence its use? Algorithms are not objective. They are codified views and roadmaps to objectives. We have to understand the objectives toward which the algorithms are directed. We can say then that information is filtered by the objectives of designers. Algo-ethics then calls for transparency so that we can unpack the bias of any algorithm. Only then can we trust any AI. Algorithms filter reality by filtering information, access to information, and analysis.
Isabella Henriques and Pedro Hartung: Children's Rights by Design in AI Development

Henriques started by telling us about the Alana Institute, which works on children's rights in Brazil. They deal with the barriers that children face in developing their knowledge and independence. She talked about the barriers that Brazilian children face. Many access the internet only through cell phones. Some don't have access at all. There needs to be more digital literacy in the schools, especially so that citizens understand the potential of AI. A recent report discussed how many companies are collecting data from children. She talked about the recent problem of the algorithm predicting British children's performance on tests. We need child-centric ethics. Hartung then deepened the discussion. He discussed the need for an ecosystem model, not just problems and services. He talked about how often all the blame is put on parents, or on caregivers, or on schools, or on their peers. An ecosystem approach looks at how families, governments, companies, and schools work together. States and companies already violate children's rights. He talked about a UNICEF discussion paper on Children's Rights by Design in AI Development. The document, Policy Guidance on AI for Children, has a number of recommendations.

Mike Shebanek: AI for Accessibility at Scale

Mike Shebanek is Head of Accessibility at Facebook. Shebanek started by talking about Facebook's mission and the need for accessibility. One in ten users use the zoom feature on their screens. Facebook has a dedicated team looking at accessibility. They are excited to now be using AI. They are investing, for example, in automatic alt text. They use computer vision and object recognition to recognize what's in an image when there is no alt text. This makes photos accessible to people who have vision issues.
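As a toy illustration of the automatic alt-text idea (this is not Facebook's actual system), object recognition typically yields labels with confidence scores, and a hypothetical helper might compose the confident ones into a hedged description:

```python
def compose_alt_text(detections, threshold=0.8):
    """Turn (label, confidence) pairs from an object-recognition model
    into a short alt-text string, keeping only confident labels."""
    labels = [label for label, conf in detections if conf >= threshold]
    if not labels:
        return "Image: no description available"
    # Hedge the wording ("may contain"), since recognition is probabilistic.
    return "Image may contain: " + ", ".join(labels)

# The detections below are invented for illustration.
print(compose_alt_text([("dog", 0.95), ("tree", 0.91), ("car", 0.40)]))
```

A screen reader would then speak that string in place of the missing alt text; the threshold and phrasing here are assumptions for the sketch.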
In conjunction with automatic recognition, Facebook has implemented face recognition to name people in photos when permission has been given. AI is the only way to meet the challenge of making the images uploaded by 2 billion people accessible. A third example is captioning of video. They have introduced tools that let people caption video, even in live streams. Without captioning it is hard for people who are hearing impaired to follow a video. Without AI they can't possibly provide automatic captioning in real time. These services are being provided free and are now being applied to other systems like Instagram. He then turned to how innovations like this can be continued. Facebook has founded Teach Access, a not-for-profit initiative to make sure that computer scientists and others are taught awareness of accessibility. They want future technologies to be born accessible.

Discussion

People asked about how we can avoid the dominance of the global North in the development of AI culture. There can be a problem when algorithms developed in the North are imposed on other communities, especially communities that may not have been considered. We need to be aware of how algorithms can be misapplied. Another question was about how to protect children from content. Part of the issue is the model of children's exploitation: gathering their data and making money off them. We need to hold companies and governments accountable. We need to go beyond literacy that puts the onus on children, and hold developers responsible. Then there was a question about how to bring accessibility into teaching. Part of it is to teach the professors. There was a question about how much companies collaborate with competitors for the public good.
In the area of accessibility

Session 3: AI and Digital Technologies: Implications for Human Rights, Justice and Inclusive Development

Fatima Roumate: Artificial Intelligence, Ethics and International Human Rights Law

Roumate started by talking about how the pandemic is accelerating the use of AI while leaving human rights to states. She talked about human rights, including the right to life. The development of AI weapons challenges this right. She then talked about freedom of expression when social media manipulate views. Our right to privacy is challenged by AI techniques like face recognition. The WHO says that free access to scientific information is important, but this can slide into surveillance. The right to work is challenged by automation. International law can't keep up. She then turned to soft law, or ethics. There are various organizations, from the G20 to the African Union, developing guidelines. But there are differences between the different sets of principles, and none of them are binding. Covid-19 is accelerating the development of new forms of slavery and inequality. States are prioritizing health over other rights. Human rights are seen as added value. We need a dialogue that bridges the many divides, including the divide between engineers and other types of expertise.

Joan Barata: AI and the promotion of access in times of crisis

Barata talked about how states have suspended rights of access to information. States are forcing their press to report only information from the state. How can individuals or health providers meet the challenge of a pandemic when states are restricting information access and circulating propaganda? It is vital that there be a free press and free access to information. This is why public authorities should provide special protections for freedom of and access to information. He talked about how access-to-information requests have been affected by the disruption of civil servants having to go home.
Access-to-information offices shouldn't be closed down. He talked about how governments should also tell us what they don't know. He asked how AI can be used to promote good and reliable information. The focus used to be on using AI to identify and limit bad information. Now can we figure out how to identify and promote vital information?

Jill Clayton: Ethical Tech Development in the Context of Information Accessibility

Commissioner Clayton started by talking about her role protecting freedom of information and how her office works. In 2018 a conference of privacy commissioners released a declaration on ethics and data protection in AI.
She then switched to talking about algorithmic transparency. She would like to see governments commit to transparency when they use algorithms to make decisions about people. Governments are moving towards using more automated decision systems. These can harm vulnerable groups. We need regulation to cover such systems. The GDPR in Europe deals with this. The Province of Quebec is discussing a law that would regulate automated decisions. She believes that Canadian laws are falling behind. She then discussed the importance of synthetic data as opposed to de-identification. Anonymization of data doesn't always work. Instead, synthetic data might do better. This is especially important for health data, where people are particularly concerned about their privacy. This doesn't get rid of the need for ethics review, especially when it comes to the original datasets from which synthetic data is developed. Legislative frameworks are required for AI to be legal, fair, and ethical.

Isabella Ferrari: Algorithmic Justice: Risks and Perspectives

Ferrari was the final speaker. She talked about the move to online courts. Technology is changing how justice is practiced. AI is being used by judges. There is such a backlog of court cases in many countries that we have to consider the use of AI in judging cases. There are two issues. The first issue is opacity. The second issue is that of implicit bias producing unfair outcomes. She gave the example of the use of COMPAS to assess recidivism. The opacity is a result of the high dimensionality of the data and the complexity of the algorithms. There is opacity around the use of the algorithms and around the training of the algorithms. She is not arguing that we shouldn't use AI, because humans have the same problems. Judges are opaque too. Perhaps the shift to hybrid decision making might be a chance to increase transparency. It is time to start discussing when and to what degree AI is used in justice.
Session 4: Mis/Dis/Mal-information and AI as both a tool and a remedy

Lazarus Dokora: Remarks on the critical economy of information

Dokora talked about the primacy of information in development. Information is not just important to individuals or states, but also to communities. He asked about NGOs, which are often important in Africa. Most of the images taken in Africa come from Western photographers; is there no way to get African voices? He also talked about how one can get situations where the people who own the media also "own" the politicians. He talked about how the whole world is undergoing a migratory experience as the pandemic imposes a new lifestyle on us. This process will disempower many of us and our states. Countries at the receiving end of aid have to put up with information made about them. People have to change how they are represented in order to get aid. He talked about epistemicide and how information can be used against a community. He talked about the negative publicity received by Zimbabwe.

Anthony Clayton: The Role of AI in Regulating Abuses in the Media

Clayton talked about how we are on the brink of a digital revolution. We have growing problems like cybercrime, where criminals have shown lots of innovation. If we can't solve these then the new digital world
He talked about the Islamic State as a new generation of terrorist organization that effectively used social media. He talked about the Christchurch massacre, which was streamed online. These create copycat, self-recruited terrorists. There is too much new material being uploaded that promotes violence. We need AI to screen the volume of materials being uploaded for material that promotes violence. But there are problems with AI for policing:
He talked about
Clayton believes that we will need a hybrid solution, including more regulation to protect democracy while also permitting freedom of expression.

Diogo Cortiz: The Technical Challenges of AI Ethics

Cortiz talked about the problem of regulating hate speech. In Europe companies depend on civil society to help. He went on to describe a project to develop tools to recognize hate speech, as there is too much posted for current models. He is part of a project to see if they can recognize misinformation or hate speech in Portuguese. They collected lots of examples to use to train a machine. They had a problem of privacy and a problem of representativity. They seem to be getting good results.

Discussion

There was a discussion about how one can recognize hate speech in the face of rapid change in street language, tone, and so on. Also, what is acceptable changes over time.

Session 5: Health rights and access to information - a reflection on the Covid-19 crisis

Nidhi Hegde: Privacy and health access in the digital transition due to the Covid-19 crisis

Hegde talked about public health and private information during a pandemic. Access to information can be very important during a pandemic, but there can be serious privacy issues. How can we share information? One idea is k-anonymity. Latanya Sweeney found that a high percentage of people could be identified with just three bits of information (date of birth, zip code, and gender). K-anonymity is achieved by generalization and suppression. Generalization might involve creating a much bigger cohort for a piece of information, like sharing decade rather than year of birth. Suppression is suppressing columns of information. But does this work during a pandemic, when information is changing? In a pandemic there might be group identification, e.g. the identification of a Hutterite community as having an outbreak. Another solution is to create synthetic databases, but this is a young technology.
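The generalization-and-suppression idea behind k-anonymity can be sketched in a few lines. The quasi-identifiers, generalization rules, and records below are invented for illustration:

```python
from collections import Counter

def generalize(record):
    """Generalize quasi-identifiers: birth year becomes a decade,
    and the postal code is suppressed down to its first character."""
    return (record["birth_year"] // 10 * 10,  # e.g. 1987 -> 1980
            record["postal"][:1],             # keep only the regional prefix
            record["gender"])

def is_k_anonymous(records, k):
    """A dataset is k-anonymous if every combination of (generalized)
    quasi-identifiers is shared by at least k records."""
    counts = Counter(generalize(r) for r in records)
    return all(c >= k for c in counts.values())

people = [
    {"birth_year": 1987, "postal": "T6G", "gender": "F"},
    {"birth_year": 1983, "postal": "T5K", "gender": "F"},
    {"birth_year": 1942, "postal": "V6B", "gender": "M"},
]
print(is_k_anonymous(people, 2))  # the lone 1940s record violates k=2
```

As Hegde noted, generalization trades precision for privacy: the coarser the cohorts, the easier it is to reach a given k, but the less useful the released data becomes.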
We don't know whether such databases will work in healthcare. Sensitive attributes might not be translated properly in the synthesis. Hegde then switched to access to healthcare in a pandemic. We have seen a transition from physical appointments to virtual care. What are the compromises of virtual care? Are there transcripts or recordings taken by one of the participants? Are those recordings stored? Are they known to all? Also, videoconferencing takes place in someone's home, and others in the home might not want to be seen. We need guidelines and standards when it comes to the use of AI in virtual care. She concluded that we need to think about these issues, as there will probably be similar pandemics and needs in the future.

Wendel Abel: Promoting Health Rights of Women in Jamaica

Abel spoke about the experience of Jamaica, which failed to meet two of its Millennium Development Goals: it wasn't able to reduce maternal mortality or infant mortality rates. What is the relationship of this to human rights? Maternal and infant mortality is solvable, but connected to other issues. There were organizations involved in promoting health rights. The Partnership for the Promotion of Patient Rights worked to bring different organizations together and to make people aware of their health rights. They created a Civil Society Collaborative Forum to bring different civil society organizations together. Information is a key issue in health. Patients have a right to information and to informed consent. Healthcare systems that are stressed can lead to shorter visits with less information and loss of consent. Patients might not be aware of complaint systems, so they prepared materials. He then talked about how they got involved in reviewing policy and working with leaders like the public defender. He talked about how the programme can be sustained now that the funding is over.
Session 6: Youth Researchers' Perspectives on AI4IA

We closed with short presentations by two MA in Communications and Technology students at the University of Alberta. I was chairing this, so I didn't take any notes.

Closing

Rachel Fischer and Cordell Green closed the conference by thanking the many who supported us behind the scenes. Clare Peters, Grant Wang, and Casey Germain at the U of Alberta deserve mention.
Page last modified on September 28, 2020, at 03:42 PM