An Expert Interview with Marco Imperiale on the EU's impending AI Act

Posted: 4th August 2023 10:54
Marco Imperiale is the head of innovation at LCA, a leading Italian law firm, and a visiting researcher at Harvard Law School. He has extensive experience in legal design, legal tech, and the interplay between copyright law and the entertainment industry. Whenever he finds time, he also works as a teaching fellow (CopyrightX course) and as a mindfulness trainer. Marco is passionate about innovation in its broadest sense, and he is the co-author of the first Italian book on legal design, published by Giuffré Francis Lefebvre.
 
Could you shed light on the significance of the EU AI Act draft?
 
The AI Act undeniably stands as potentially groundbreaking legislation, primarily for its societal ramifications and secondarily for its economic and political impacts. Echoing the global influence of the GDPR, the Act projects the European Union's distinctive approach to AI, setting a benchmark for other countries and distinguishing its stance from those of the U.S. and China.
 
The ubiquity of generative AI – exemplified by the international success of ChatGPT – alongside influential voices such as those of Elon Musk and Sam Altman, has escalated the urgency for swift legislative action to address AI's risks and opportunities.
 
Could you walk us through the legislative process?
 
In April 2021, the Commission proposed a harmonised, risk-based legal framework for AI applications. As the legislative process evolved, the Council of the EU and the Parliament drafted their own positions, which started to diverge from the Commission's original proposal. Consequently, the three legislative bodies now face the task of reaching consensus on the AI Act's integral aspects. The upcoming “trilogue” meetings between the three bodies promise intense debate to align their views on the final text. Realistically, the text is expected to be adopted before the next European elections in June 2024, meaning the earliest the Act could apply would be 2025.
 
What distinguishes the Commission's proposal from the Council's draft?
 
Though broadly similar, the two diverge in certain respects, primarily the scope of the Act, the lists of prohibited and high-risk AI systems, and the measures supporting innovation.
 
In your view, what are the crucial components of the legislation?
 
Given the sheer length of the Act, it's challenging to cover everything. However, I would argue that the key elements include the risk-based approach, the proposed sanctions, the prohibited practices, and the obligations for providers of foundation models.
 
Could you elaborate more on these points?
 
The AI Act divides AI systems into five categories based on risk level: prohibited, high-risk, low-risk, minimal-risk, and general-purpose AI systems. As the risk level escalates, so do the corresponding measures, ranging from transparency obligations for the less risky systems up to outright prohibition of the riskiest. High-risk AI systems, specifically, are either products (or safety components of products) subject to third-party conformity assessment, or AI systems intended for uses identified in the AI Act.
 
Prohibited practices include the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement, predictive policing systems, biometric categorisation systems, emotion recognition systems in specific contexts, and the creation of facial recognition databases through the indiscriminate scraping of biometric data.
 
The Act empowers member states to lay down rules on penalties, including administrative fines, for infringements of the AI Act. Fines for the most severe breaches can reach up to €30 million or six per cent of a company's total worldwide annual turnover. For noncompliance with the obligations for high-risk and general-purpose AI systems, as well as the transparency obligations for low- or minimal-risk AI systems, fines can reach up to €20 million or four per cent of total worldwide annual turnover. Supplying incorrect, incomplete, or misleading information can lead to fines of up to €10 million or two per cent of total worldwide annual turnover.
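To make those ceilings concrete, here is a minimal sketch, assuming the “whichever is higher” formulation the draft texts apply to companies; the turnover figure below is purely illustrative.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    # Assumption: for companies, the applicable ceiling is the higher of the
    # fixed amount and the percentage of total worldwide annual turnover.
    return max(fixed_cap_eur, pct * turnover_eur)

# Most severe breaches: up to EUR 30m or 6% of worldwide annual turnover.
# For an illustrative company with EUR 2bn turnover:
print(fine_ceiling(2e9, 30e6, 0.06))  # 120000000.0, i.e. a ceiling of EUR 120m

For a smaller company, the fixed €30 million amount would exceed six per cent of turnover and would set the ceiling instead.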
 
Providers of foundation models – AI systems designed to perform a broad range of tasks – are required to demonstrate risk identification and mitigation, incorporate datasets subject to appropriate data governance measures, achieve certain performance levels, prepare extensive technical documentation, establish a quality management system, and register the foundation model in an EU database. For generative AI systems, providers must additionally disclose that content is AI-generated, ensure safeguards against the generation of illegal content, and publish a summary of the training data used.
 
And finally, could we delve into your personal perception of the AI Act draft?
 
Indeed. Crafting legislation for such intricate and sensitive matters is a formidable endeavour. Although regulators frequently invoke a ‘human-centric’ approach, I would have preferred an attempt to apply legal design thinking methodology. The exponential pace of innovation is not aligned with conventional, slow legislative processes marked by extended dialogues and multiple drafts. The world is evolving rapidly, and a regulation expected to take effect only from 2025 or later seems inefficient. The introduction of regulatory sandboxes is a welcome initiative, but an insufficient one. We need a transformation in our approach to law-making – an ambitious venture, but one worth pursuing.
 
Marco Imperiale has considerable experience in legal tech, legal design, and the interplay between copyright and entertainment. Beyond his role as Head of Innovation, he provides legal advice in fields united by a common innovation factor, among them sustainability, third-party funding, and new technologies.

Marco can be contacted on +39 02778875284 or by email at marco.imperiale@lcalex.it
