The UN called Wednesday for a moratorium on artificial intelligence systems like facial recognition technology that threaten human rights until “guardrails” are in place against violations. UN High Commissioner for Human Rights Michelle Bachelet warned that “AI technologies can have negative, even catastrophic effects if they are used without sufficient regard to how they affect people’s human rights.”
She called for assessments of how great a risk various AI technologies pose to rights such as privacy, freedom of movement and freedom of expression.
She said countries should ban or heavily regulate the ones that pose the greatest threats.
But while such assessments are under way, she said that "states should place moratoriums on the use of potentially high-risk technology."
Presenting a fresh report on the issue, she pointed to the use of profiling and automated decision-making technologies.
She acknowledged that “the power of AI to serve people is undeniable.”
“But so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility,” she said.
“Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”
The report, which was called for by the UN Human Rights Council, looked at how countries and businesses have often hastily implemented AI technologies without properly evaluating how they work and what impact they will have.
The report found that AI systems are used to determine who has access to public services and who gets recruited for jobs, and that they affect what information people see and can share online, Bachelet said.
Faulty AI tools have led to people being unfairly denied social security benefits, while innocent people have been arrested due to flawed facial recognition.
"The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real," Bachelet said.
The report highlighted how AI systems rely on large data sets, with information about people collected, shared, merged and analysed in often opaque ways.
The data sets themselves can be faulty, discriminatory or out of date, and thus contribute to rights violations, it warned.
For instance, they can erroneously flag an individual as a likely terrorist.
The report raised particular concern about the increasing use of AI by law enforcement, including as forecasting tools.
When AI systems and algorithms are trained on biased historical data, their profiling predictions will reflect that bias, for instance by recommending increased police deployments to communities already identified, rightly or wrongly, as high-crime zones.
Remote real-time facial recognition is also increasingly deployed by authorities across the globe, the report said, potentially allowing the unlimited tracking of individuals.
Such “remote biometric recognition technologies” should not be used in public spaces until authorities prove they comply with privacy and data protection standards and do not have significant accuracy or discriminatory issues, it said.
"We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact," Bachelet said.