Artificial intelligence risks to privacy demand urgent action – Bachelet


GENEVA (15 September 2021) – UN High Commissioner for Human Rights Michelle Bachelet on Wednesday stressed the urgent need for a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are put in place. She also called for AI applications that cannot be used in compliance with international human rights law to be banned.

“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said.


As part of its work* on technology and human rights, the UN Human Rights Office has today published a report that analyses how AI – including profiling, automated decision-making and other machine-learning technologies – affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.

“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” the High Commissioner said.

The report looks at how States and businesses alike have often rushed to incorporate AI applications, failing to carry out due diligence. There have already been numerous cases of people being treated unjustly because of AI, such as being denied social security benefits because of faulty AI tools or arrested because of flawed facial recognition.

The report details how AI systems rely on large data sets, with information about individuals collected, shared, merged and analysed in multiple and often opaque ways. The data used to inform and guide AI systems can be faulty, discriminatory, out of date or irrelevant. Long-term storage of data also poses particular risks, as data could in the future be exploited in as yet unknown ways. 

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face,” Bachelet said.

The inferences, predictions and monitoring performed by AI tools, including seeking insights into patterns of human behaviour, also raise serious questions. The biased datasets relied on by AI systems can lead to discriminatory decisions, and these risks are most acute for already marginalized groups. 

“The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real. This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks,” Bachelet said.


There also needs to be much greater transparency by companies and States in how they are developing and using AI.

“The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society,” the report says.

“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact. The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us,” Bachelet stressed.

ENDS

Read the full report here

See also: High Commissioner’s statement to the Council of Europe on 14 September 2021 on the implications of the Pegasus spyware

*Visit the OHCHR page on the Right to Privacy in the Digital Age: http://www.ohchr.org/EN/Issues/DigitalAge/Pages/DigitalAgeIndex.aspx

For more information and media requests, please contact:
Rupert Colville + 41 22 917 9767 / rupert.colville@un.org or
Ravina Shamdasani + 41 22 917 9169 / ravina.shamdasani@un.org or
Liz Throssell + 41 22 917 9296 / elizabeth.throssell@un.org or
Marta Hurtado + 41 22 917 9466 / marta.hurtadogomez@un.org

