Algorithms and artificial intelligence are augmenting and replacing human decision-making in Canada’s immigration and refugee system, with alarming implications for the fundamental human rights of those subjected to these technologies, says a report released today by the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy.
The 88-page report, titled “Bots at the Gate: A Human Rights Analysis of Automated Decision Making in Canada’s Immigration and Refugee System” (PDF), details how the federal government’s use of these tools threatens to create a laboratory for high-risk experiments. These initiatives may place highly vulnerable individuals at risk of being subjected to unjust and unlawful processes in a way that threatens to violate Canada’s domestic and international human rights obligations, implicating decisions at multiple levels of the system.
“Our legal system has many ways to address the frailties of human decision making,” said Dr. Lisa Austin, professor at the University of Toronto’s Faculty of Law and an advisor on this report. “What this research reveals is the urgent need to create a framework for transparency and accountability to address bias and error in relation to forms of automated decision making. The old processes will not work in this new context and the consequences of getting it wrong are serious.”
The ramifications of using automated decision-making in the sphere of immigration and refugee law and policy are far-reaching. Marginalized and under-resourced communities, such as residents without citizenship status, often have access to less robust human rights protections and less legal expertise with which to defend those rights. The report notes that adopting these automated decision-making systems without first ensuring responsible best practices and building in human rights principles at the outset may exacerbate pre-existing disparities and lead to rights violations, including unjust deportation.
Since at least 2014, Canada has been introducing automated decision-making experiments into its immigration mechanisms, most notably to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. Recent announcements signal an expansion of these technologies into a variety of immigration decisions that are normally made by a human immigration official. These decisions span a spectrum of complexity, from whether an application is complete, to whether a marriage is genuine, to whether someone should be designated as a “risk.”
The report provides a critical interdisciplinary analysis of public statements, records, policies, and drafts by relevant departments within the Government of Canada, including Immigration, Refugees and Citizenship Canada and the Treasury Board of Canada Secretariat. It also provides a comparative analysis of similar initiatives in other jurisdictions, such as Australia and the United Kingdom. In February, the IHRP and the Citizen Lab submitted 27 separate Access to Information Requests and are still awaiting responses from the federal government.
The federal government has invested heavily in positioning Canada as a leader in artificial intelligence, and the report acknowledges that there are many benefits to be gained from such technologies. However, without proper oversight, automated decisions can rely on discriminatory and stereotypical markers—such as appearance, religion, or travel patterns—as erroneous or misleading proxies for more relevant data, thus entrenching bias in a seemingly “neutral” tool, says the report. The nuanced and complex nature of many refugee and immigration claims may be lost on automated decision-makers, leading to serious breaches of internationally and domestically protected human rights, such as the right to privacy, the right to due process, and the right to be free from discrimination.
“We have often seen that when governments deploy new technology intended for systemic use, lack of thoughtful safeguards or understanding of potential impacts can quickly spiral into harmful consequences,” said Prof. Ron Deibert, Director of the Citizen Lab. “The Canadian government should not be test-driving autonomous decision-making systems on some of our most vulnerable, and certainly not without first putting in place publicly reviewed algorithmic impact assessments and a human rights-centered framework for the use of these tools in such high-stakes contexts.”
The report recommends that Ottawa establish an independent, arm’s-length body with the power and expertise to engage in comprehensive oversight and review of all uses of automated decision systems by the federal government; publish all current and future uses of AI by the government; and create a task force that brings key government stakeholders together with academia and civil society to better understand the current and prospective impacts of automated decision system technologies on human rights and the public interest more broadly.
The report notes that Canada presently has a unique opportunity to become a global leader in the development and use of AI that protects and promotes human rights principles, setting an example for other countries experimenting with similar tools and systems.
Media
- "Ottawa’s use of AI for immigration a ‘high-risk laboratory’: report," The Globe and Mail, Sept. 26, 2018
- "Researchers raise alarm over use of artificial intelligence in immigration and refugee decision-making," Toronto Star, Sept. 26, 2018
- "Federal use of A.I. in visa applications could breach human rights, report says," Victoria Times-Colonist/Canadian Press, Sept. 26, 2018
- "Artificial intelligence at border could infringe on human rights: report," iPolitics, Sept. 26, 2018
- "Artificial intelligence used in immigration systems raises human rights concerns, report says," The Lawyer's Daily, Sept. 26, 2018
- "U of T’s Citizen Lab, international human rights program explore dangers of using AI in Canada’s immigration system," U of T News, Sept. 26, 2018
- Op-ed, "Ottawa’s use of AI in immigration system has profound implications for human rights," by Petra Molnar and Ronald Deibert, The Globe and Mail, Sept. 26, 2018
- "What happens when artificial intelligence comes to Ottawa," Maclean's, Sept. 26, 2018
- "Experts warn of the creep of AI in Canada’s immigration system," Vice, Sept. 26, 2018
- "Canada’s use of artificial intelligence in immigration could lead to break of human rights: study," Global Television, Sept. 26, 2018 (includes video clips from press conference)
- "Report finds Canada's use of AI in immigration could lead to break in human rights" (audio), interview with Petra Molnar, Radio 640 Toronto, Sept. 26, 2018
- "Report says use of AI could be violating human rights," Canadian Lawyer, Sept. 26, 2018
- "Un rapport met en garde contre l’utilisation de l’IA en immigration," Le Devoir/Canadian Press, Sept. 26, 2018
- "Des experts s'inquiètent des dangers de l'IA en immigration," Journal de Montréal, Sept. 26, 2018
- "Des robots au poste-frontière," La Presse, Sept. 27, 2018
- Op-ed, "What if an Algorithm Decided Whether You Could Stay in Canada or Not?" by Petra Molnar and Samer Muscati, Refugees Deeply, Sept. 27, 2018
- Video: "Petra Molnar: Bots at the Gate: A Human Rights Analysis of Automated Decision Making in Canada’s Immigration and Refugee System," University of Toronto Centre for Ethics event, Sept. 28, 2018
- "Inteligência artificial na imigração cresce, mas com críticas," OiCanadá, Oct. 4, 2018
- Epoch Times
- "Using AI in Immigration Decisions Could Jeopardize Human Rights" by Petra Molnar, CIGI online, Oct. 11, 2018
- "Petra Molnar: Automated Decision-Making and Immigration — The Human Rights Impact (Ep. 158)," WashingTech Tech Policy Podcast, Oct. 16, 2018
- "Governments’ use of AI in immigration and refugee system needs oversight," by Petra Molnar, Policy Options, Oct. 16, 2018
- "How artificial intelligence could change Canada's immigration and refugee system," CBC Radio Sunday Edition, Nov. 16, 2018