
What are EU AI regulations and how do they impact generative AI models? Explained

The recently enacted legislation sets out requirements for AI applications based on their potential risks and impact. Its objective is to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI.

Written by: Om Gupta | New Delhi | Published on: March 14, 2024 15:49 IST
Image Source: Freepik | EU AI regulations

Nearly three years after the rules were first drafted, the European Parliament has finally approved legislation to regulate artificial intelligence. Following the political agreement reached in December, the legislation passed with 523 votes in favour, 46 against and 49 abstentions.

The newly passed legislation defines obligations for AI applications based on potential risks and impacts. It also aims to "protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field," as per the EU. 

When will the act become law? 

The legislation is expected to come into force before the next EU parliamentary elections in early June. Most of its provisions take effect 24 months after the AI Act becomes law, but the bans on prohibited applications apply just six months after.

What is not allowed? 

The act prohibits certain practices that the EU believes could harm citizens' rights. Biometric categorisation systems that operate on the basis of sensitive characteristics will be outlawed, as will the untargeted scraping of facial images from CCTV footage or the internet to build facial recognition databases.

The act also prohibits certain applications of artificial intelligence deemed harmful to individuals or society. These include social scoring, emotion recognition in schools and workplaces, and AI that manipulates human behaviour or exploits vulnerabilities. Predictive policing based solely on profiling an individual or assessing their characteristics will also be banned. While the use of biometric identification systems in law enforcement will be largely prohibited, exceptions will be made with prior authorisation, such as in cases of finding missing persons or preventing terrorist attacks.

High-risk applications of AI

Applications that are considered high-risk, such as the use of AI in law enforcement and healthcare, must adhere to certain conditions. These applications must not discriminate and must comply with privacy regulations. Moreover, developers must demonstrate that their systems are transparent, safe, and explainable to users. 

Conversely, for AI systems that the EU categorises as low-risk, such as spam filters, developers are still required to inform users that they are interacting with AI-generated content.

The law also requires clear labelling of AI-generated media, including deepfakes, and compliance with copyright law.

Who is covered under this law? 

According to the rules, AI models trained using a total computing power of more than 10^25 floating-point operations (FLOPs), considered the most powerful generative models, are deemed to pose systemic risks. OpenAI's GPT-4 and Google DeepMind's Gemini are believed to fall into this category.

Providers of such models are required to assess and mitigate risks, report serious incidents, provide details of their system's energy consumption, ensure they meet cybersecurity standards, and conduct state-of-the-art tests and model evaluations.
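For readers curious what the 10^25 FLOP threshold means in practice, the sketch below applies a commonly cited rule of thumb (training compute roughly equals 6 × parameters × training tokens) to a hypothetical model. The heuristic and the example figures are illustrative assumptions, not numbers taken from the Act or from any provider.

```python
# Rough, illustrative estimate of training compute against the EU AI Act's
# 10^25 FLOP systemic-risk threshold. The "6 * parameters * tokens" rule of
# thumb and the model size below are assumptions for illustration only.

THRESHOLD_FLOPS = 1e25  # above this, a general-purpose model is presumed to pose systemic risk


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute using the ~6 * N * D heuristic."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute exceeds the Act's 10^25 FLOP threshold."""
    return estimated_training_flops(parameters, training_tokens) > THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical model: 1 trillion parameters trained on 10 trillion tokens.
    flops = estimated_training_flops(1e12, 10e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")        # ~6.00e+25
    print("Presumed systemic risk:", presumed_systemic_risk(1e12, 10e12))  # True
```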

What if a company does not comply with the Act?

The AI Act enforces strict penalties for companies operating within the EU. Providers of non-compliant models will face fines of up to €35 million or up to seven percent of their global annual turnover, whichever is higher.
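As a quick illustration of the "whichever is higher" rule, the sketch below works out the maximum possible fine for a hypothetical company. The turnover figure is invented for the example; the €35 million floor and the seven percent share come from the Act.

```python
# Minimal sketch of the "whichever is higher" penalty cap described above.
# The turnover figure used in the example is hypothetical.

FIXED_CAP_EUR = 35_000_000   # fixed upper bound in euros
TURNOVER_SHARE = 0.07        # 7% of global annual turnover


def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and 7% of global annual turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)


if __name__ == "__main__":
    # Hypothetical company with €2 billion in global annual turnover:
    # 7% of turnover (€140 million) exceeds the €35 million floor, so it applies.
    print(f"Maximum fine: €{max_fine_eur(2_000_000_000):,.0f}")  # €140,000,000
```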

How will the Act be enforced? 

To ensure compliance with the law, every member country will establish its own national AI oversight body. The European Commission, meanwhile, will set up an AI Office responsible for developing evaluation methods and monitoring the risks associated with general-purpose models. Providers of models found to pose systemic risks will be asked to work with the office to draw up codes of practice.


 


