Euractiv
Luca Bertuzzi
April 1, 2023


The European Parliament’s co-rapporteurs proposed compromise amendments to the list of high-risk AI applications, banned uses and concept definitions.

EU lawmakers Brando Benifei and Dragoș Tudorache are striving to close the negotiations on the Artificial Intelligence Act in the coming days. The Act is the world’s first attempt to regulate AI based on its potential to cause harm.

Among the pending issues the two lawmakers are trying to close are the list of AI uses that pose significant risks, the prohibited practices and the definitions of the key concepts used in the draft law, according to documents presented in Brussels by the European Commission.

Areas of high risk

The AI Act’s Annex III lists critical areas with specific use cases.

On Monday (6 February), the co-rapporteurs extended the notion of biometric identification and categorisation to biometric-based systems like Lensa, an app that can generate avatars based on a person’s face.

As the co-rapporteurs want live biometric identification in publicly accessible spaces to be banned altogether, the high-risk use case has been limited to ex-post identification. For privately-accessible spaces, both live and ex-post identification have been added to the list.

Moreover, the use cases include remote biometric categorisation in publicly-accessible spaces and emotion recognition systems.

The co-rapporteurs also narrowed the critical infrastructure category: AI systems managing the supply of water, gas, heating, energy and electricity would only qualify as high-risk if their failure is highly likely to lead to an imminent threat to such supply.

In the field of employment, the high-risk category was expanded to include algorithms that make or assist decisions related to the initiation, establishment, implementation or termination of an employment relation, notably for allocating personalised tasks or monitoring compliance with workplace rules.

In the educational sector, the wording has been amended to include systems that allocate personalised learning tasks based on the students’ personal data.

A new risk area was added for AI systems meant to be used by vulnerable groups, notably systems that may seriously affect a child’s personal development. This vague wording might result in covering social media’s recommender systems.

Lawmakers expanded the wording in the law enforcement, migration and border control management areas to prevent the high-risk classification from being circumvented by using a contractor.

Under the EU legislators’ amendments, any AI application that could influence people’s voting decisions at local, national or European polls is considered at risk, together with any system that supports democratic processes such as counting votes.

A residual category was introduced to cover generative AI systems like ChatGPT and Stable Diffusion. Any AI-generated text that might be mistaken for human-generated is considered at risk unless it undergoes human review and a person or organisation is legally liable for it.

Prohibited practices

Additional bans were added to the AI rulebook as part of the changes proposed by EU lawmakers.

According to another compromise document seen by CNC staff, subliminal techniques used by AI models that operate beyond a person’s consciousness are to be banned, except where their use is approved for therapeutic purposes and with the explicit consent of the individuals exposed to them.

AI-driven applications are also prohibited if they are intended to be used for manipulation or designed to exploit a person’s vulnerabilities, such as mental health or economic situation, to materially distort his or her behaviour in a way that can cause significant physical or psychological harm.

The co-rapporteurs are proposing to expand the ban on social scoring to cover not only individuals but also groups, based on inferred personal characteristics that could lead to preferential treatment.

The ban on AI-powered predictive policing models was maintained.

More definitions have been added concerning data, profiling, deep fakes, biometric identification and categorisation, subliminal techniques and sensitive data, bringing more clarity to these concepts and aligning them to the EU’s General Data Protection Regulation.
