Federal Use Of A.I. In Visa Applications Could Breach Human Rights

Priyadarshinee N

A new report warns about the federal government's interest in using artificial intelligence to screen and process immigrant files, saying it could lead to discrimination as well as breaches of privacy and human rights.

The research, conducted by the University of Toronto's Citizen Lab, outlines the impacts of automated decision-making on immigration applications and how errors and assumptions built into the technology could have "life-and-death ramifications" for immigrants and refugees.

The authors of the report issued a list of seven recommendations calling for greater transparency, public reporting and oversight of the government's use of artificial intelligence and predictive analytics to automate certain activities involving immigrant and visitor applications.

"We know that the government is experimenting with the use of these technologies ... but it's clear that without appropriate safeguards and oversight mechanisms, using A.I. in immigration and refugee determinations is very risky because the impact on people's lives is quite real," said Petra Molnar, one of the authors of the report.

"A.I. is not neutral. It's kind of like a recipe and if your recipe is biased, the decision that the algorithm will make is also biased and difficult to challenge."
