Pivotal Moment for EU AI Regulation: Who Will Lead the GPAI Code of Practice?

Published on 26/08/2024 – 15:38 GMT+2 by Kai Zenner, Head of Office, MEP Axel Voss; Cornelia Kutterer, Managing Director, Considerati.

Within the next three weeks, the AI Office is likely to appoint a group of external experts who will shape the implementation of a key component of the EU AI Act: the chairs and vice-chairs of the Code of Practice for General-Purpose AI (GPAI) models.

“The (vice) chairs’ expertise and vision will be crucial in guiding GPAI rules for the future and ensuring that the European way to trustworthiness in the AI ecosystem will endure.”

Recent developments in generative AI, epitomised by popular applications such as OpenAI’s ChatGPT, have caused economic disruption and created political challenges during the AI Act trilogue negotiations. Member states such as France, Germany, and Italy worried that regulating the foundational layer of the AI stack might stifle EU start-ups like Mistral or Aleph Alpha. The European Parliament, by contrast, concerned about market concentration and potential fundamental rights violations, advocated a comprehensive legal framework for generative AI, described in the final law as GPAI models.

In response to these contrasting views, EU co-legislators chose a co-regulatory approach, defining the obligations of GPAI model providers through codes and technical standards. Commissioner Thierry Breton notably proposed this strategy, drawing from the 2022 Code of Practice on Disinformation.

While codes of practice can provide flexibility for the fast-evolving AI landscape, critics argue they may lead companies to commit only to minimum standards.

However, the AI Office has lessons to draw from past experiences, such as the review of the original 2018 EU Code of Practice on Disinformation, which led to increased accountability through civil society involvement and the appointment of independent academics to lead discussions.

Technically Feasible and Innovation-Friendly

The AI Office plans to draw on this experience in developing the GPAI Code of Practice, proposing a robust governance system in which the code is drafted through four Working Groups. Multiple stakeholders will be invited to contribute, particularly via public consultations and plenary sessions. Notably, GPAI providers will play an enhanced role in the drafting process and will be invited to additional workshops.

Although the code will be voluntary, the AI Office should prioritize appointing chairs with strong technical and governance expertise in GPAI models to ensure a high-quality outcome.

The appointed individuals will hold significant influence in drafting texts and chairing working groups. An additional coordinating chair could help balance ambitious regulatory rules with the need for technically feasible and innovation-friendly obligations.

A Choice of Paramount Importance

The selection process is intricate, as the field of AI safety is still developing and characterized by trial and error. The AI Office must balance diverse professional backgrounds and interests while adhering to EU standards for country and gender diversity. Given the global nature of AI, numerous esteemed international experts have expressed interest in these roles, adding to the complexity of ensuring strong EU representation.

Ultimately, the selection of (vice) chairs for the GPAI Code is crucial. Their leadership will shape the co-regulatory exercise as it addresses complex socio-technical challenges, including sensitive policies such as intellectual property rights and content moderation.

In conclusion, the upcoming appointments will play a pivotal role in determining the effectiveness and legitimacy of the GPAI Code and its alignment with EU values.

About the Authors:
Kai Zenner is the Head of Office and Digital Policy Adviser for MEP Axel Voss (Germany, EPP) and played a key role in the AI Act negotiations at the technical level. Cornelia Kutterer is Managing Director of Considerati, an adviser to SaferAI, and an AI researcher at Université Grenoble Alpes.
