Europe To Ban AI That Is A Threat To The Safety And Rights Of People

The European Union looks set to ban some of the most concerning forms of Artificial Intelligence (AI), such as the ‘social credit’ surveillance system used in China, according to draft AI regulations published by the bloc.

The proposed regulations, which will be reviewed by elected representatives before passing into law, will also bring some comfort to those outraged by instances of bias and discrimination generated by AI.

These include hiring algorithms found to systematically downgrade women’s professional profiles and flawed facial recognition technology that has led police to wrongfully arrest black people in the United States. Such AI applications are regarded by the EU as high-risk and will be subject to tight regulations, with hefty fines for infringement.

This is the latest step in the European discussion of how to balance the risks and benefits of AI. The aim appears to be to protect citizens’ fundamental rights while maintaining competitive innovation to rival the AI industries in China and the US.

The regulations will cover EU citizens and companies doing business in the EU and are likely to have far-reaching consequences, as was the case when the EU introduced the General Data Protection Regulation (GDPR) in 2018. The proposals are also likely to inform and influence the United Kingdom, which is currently developing its own strategic approach to this area.

Strong new laws

Most strikingly, the draft legislation would outlaw some forms of AI that human rights groups see as most invasive and unethical. That includes a broad range of AI that could manipulate our behaviour or exploit our mental vulnerabilities – as when machine-learning algorithms are used to target us with political messaging online.

Likewise, AI-based indiscriminate surveillance and social scoring systems will not be permitted. Versions of this technology are currently used in China, where citizens in public spaces are tracked and evaluated to produce a trustworthiness ‘score’ that determines whether they can access services such as public transport.

The EU also looks set to take a cautious approach to a number of AI applications identified as high-risk. Among these are large-scale facial recognition systems, which are considered easy to deploy using existing surveillance cameras and will require special permission from EU regulators to roll out.

(With inputs from agencies)