Over a year after the release of the highly anticipated draft Artificial Intelligence (AI) Act (the “AI Act”), the European Parliament, the last of the three core EU institutions to do so, has approved its draft of the Act. The next step is for EU lawmakers and member states to negotiate the final details of the bill. The draft AI Act represents the most ambitious attempt to regulate AI technologies globally to date, setting out a cross-sectoral regulatory approach to the use of AI systems across the EU.
To comply with the draft AI Act in its current form, any company that develops or deploys AI systems must be prepared to conduct a risk analysis of each of its AI applications. Any AI practice classified as ‘unacceptable’ will be prohibited by the draft AI Act, and ‘high-risk’ systems will need to comply with certain mandatory requirements or face penalties. The draft AI Act’s underlying aim is to regulate the use of AI systems that may impinge on the fundamental rights of natural persons. In the meantime, Big Tech firms such as Microsoft and Google are pressing for further clarity on the meaning of ‘high-risk use’.
Recent amendments and updates to the Act
The approval follows the European Parliament’s committees’ approval of the compromise amendments to the draft AI Act (found here). Since we reported on this topic in our previous blog, some of the amendments to the draft AI Act include:
- amending the definition for AI to be technology neutral so that it covers the ever-growing developments of AI systems and to align it with the definition agreed by the Organisation for Economic Co-operation and Development;
- amending the list of prohibited AI practices to include bans on intrusive and discriminatory uses of AI systems such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorisation;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location or past criminal behaviour);
- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy);
- expanding the classification of ‘high-risk’ areas to include:
- harm to people’s health, safety, fundamental rights or the environment;
- AI systems used to influence voters in political campaigns, and recommender systems used by social media platforms (those with more than 45 million users under the Digital Services Act);
- adding the additional requirement that the systems must pose a ‘significant risk’ to qualify as ‘high risk’;
- adding the obligation on those deploying a ‘high-risk’ system in the EU to carry out a fundamental rights impact assessment including a consultation with the competent authority and relevant stakeholders;
- including obligations on providers of foundation models to ensure a robust protection of fundamental rights, health, safety, the environment, democracy and the rule of law. Such providers would need to assess and mitigate risks, comply with design, information and environmental requirements and register in the EU database;
- including obligations for generative foundation AI models, such as ChatGPT (which use large language models to generate text, art, music or other content), to comply with additional, more stringent transparency requirements, such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training;
- strengthening national authorities’ competences and establishing an EU AI Office, a new EU body which would be tasked with the harmonised supervision and application of the AI Act, providing guidance and coordinating joint cross-border investigations; and
- in order to support innovation, largely exempting research activities and the development of free and open-source AI components from compliance with the Act.
The next step is for the three separate drafts (of the European Parliament, the European Commission and the Council) to be merged into a final text, with the aim of the AI Act being voted into force by the end of 2023.
Find our previous blog on the AI Act here.
Find out more about the developments in the UK’s approach to AI here.
The material contained in this article is only for general review of the topics covered and does not constitute any legal advice. No legal or business decision should be based on its content.
This article is written in English language. Preiskel & Co LLP is not responsible for any translation of all or part of its content into any language.