
Counter-Terrorism High-level Conference

Statement by Ilze Brands Kehris, UN Assistant Secretary-General for Human Rights

29 June 2021

My sincere thanks go to the UN Office of Counter-Terrorism (UNOCT) for convening this High-level Conference. OHCHR is honored to co-host this side event with Japan, the European Union, UNOCT/UNCCT, UNICRI, and CTED.

We live in a digital age in which States, the private sector, and even individuals increasingly have enormous power at their fingertips. The emergence of technologies that can harvest, synthesize, and analyze massive volumes of data offers huge potential to address the pressing problems of our times.

We have already seen significant benefits from the use of artificial intelligence (AI) technologies in vaccine development, disaster recovery, and humanitarian assistance, and AI holds similar potential in other areas such as climate change.

AI also has the potential to aid the fight against terrorism, greatly amplifying the power of existing tools in the counter-terrorism toolbox. AI applications are already deployed, or being explored, to support border controls, predict violence, conduct surveillance, and address terrorist content online.

Yet, with great power comes great responsibility. This power must be harnessed in ways that protect against harmful impacts on rights and freedoms, whether these result from overreach or arise inadvertently.

Too often we have seen that, in the context of countering terrorism, human rights are sidelined. Looking back on the almost twenty years since 9/11, the lesson is clear: such an approach is only counter-productive.

Artificial intelligence poses particular challenges because its technological and analytical power can exponentially amplify human rights concerns that already exist in relation to counter-terrorism, heightening the risks of discrimination and bias and the infringement of an array of rights.

As we know, the terrorist label is often applied far too loosely and broadly, targeting dissidents and human rights defenders alongside individuals and groups actually involved in terrorist violence. In a context where online civic space is already increasingly restricted, AI applications that enable the surveillance and censorship of such voices are a growing concern.

Automated content moderation tools are known to have significant accuracy problems, often blocking as “terrorist” content that was in fact valuable reporting on serious human rights violations, while overbroad application of the terrorism label can restrict free speech online and oppress, rather than protect.

Moreover, an overreliance on massive data collection and analysis threatens hard-won protections for the right to privacy, potentially enabling mass surveillance. It is also worrying that a range of AI tools resting on scientifically shaky grounds, such as emotion recognition, are being deployed at borders, in prisons, and in other contexts.[1]

To assist governments, the private sector, and other stakeholders, OHCHR has been engaging in a set of activities designed to provide guidance on these thorny issues: in particular, how to ensure human rights-compliant applications of AI, including in counter-terrorism, and where to draw red lines.

This work includes research and analysis exploring the challenges the right to privacy faces in the digital age, including the impact of new technologies on peaceful protests and assemblies. OHCHR has also published a series of papers aimed at assisting the private sector, which plays a key role in developing and applying AI technologies in counter-terrorism, in implementing its responsibilities under the UN Guiding Principles on Business and Human Rights.

As my colleague Alex Moorehead will explain later, OHCHR is also engaged, in partnership with UNICRI and UNOCT, in producing practical guidance on the human rights aspects of the use of AI in counter-terrorism, to be published later this year. We hope this will form the backbone of a larger joint UN effort to operationalize the guidance, and we have included this in the UN’s counter-terrorism multi-year appeal launched today.

A growing number of governments—as well as international and regional bodies—are grappling with how to regulate the use of AI (as Mr. Voronkov reminded us in his introduction), including in the area of counter-terrorism, and I look forward to hearing from the EU on its initiatives in this area today.

We welcome these efforts. As these initiatives proliferate, it is crucial that human rights protections be thoroughly integrated: for example, by ensuring effective monitoring and oversight, transparency, human rights impact assessments and due diligence, and proper regulation of the procurement and use of AI technologies by public sector bodies. Doing this now will allow us to harness the power of AI for good before the technology has spread and developed to such an extent that regulating it becomes excessively difficult.

[1] https://www.theguardian.com/global-development/2021/mar/03/china-positive-energy-emotion-surveillance-recognition-tech