Insights from IAPP AI Governance Global 2024


In early June, Euranova's Chief Technology Officer, Sabri Skhiri, attended the IAPP AI Governance Global 2024 conference in Brussels, a seminal event designed to facilitate dialogue and collaboration among business leaders, technology professionals, and legal experts involved in AI development and implementation. The conference provided an unparalleled opportunity for participants to step out of their operational silos and engage with the multifaceted realities of AI regulation and data protection.
Dive into the key takeaways with Sabri:


Before starting, let me mention that, as always, I will not go into technical details in this article. Feel free to consult my longer blog post on our research website, where I go deeper into the keynotes, insights from the panel discussions, and my favourite talks.

Bridging Knowledge Gaps in AI and Law
The IAPP conference delved deep into the complexities of data protection and its relationship with the AI Act, a critical legislative framework within the EU. This provided a much-needed chance to bridge the knowledge gap and gain a deeper appreciation of the legal landscape surrounding data and AI.

This is also an important market opportunity. During lunch, a data and AI law professor from the University of Hong Kong raised a thought-provoking point. She noted the significant (and unusual) presence of software vendors at the conference and humorously questioned whether the AI Act might be primarily intended to boost the EU's AI and data market economy. Looking at the market projections for the next three years, she may not be completely wrong.

Standardising AI Practices 
A recurrent theme at the conference was the call for standardisation in AI practices to ensure efficiency and regulatory compliance. Many speakers justified the current state of AI practices by calling it a "nascent technology." However, I respectfully disagree. AI has been around for over two decades, and data scientists have the ability to evaluate and profile its capabilities. What's truly nascent is the widespread adoption of AI. Today, anyone with the ability to call an API can integrate AI – that's the novelty. This is precisely why standardised AI practices within organisations are essential.  While principles like privacy by design, compliance, transparency, and safety are crucial, we need to normalise the development and use of AI. Standardisation will not only ensure compliance with regulations but, more importantly, boost productivity and efficiency in AI development and deployment.

Navigating the Murky Waters of the AI Act
The conference acknowledged a significant experience gap in handling the AI Act, compared to the 25 years of established data protection law. While data privacy regulations represent familiar territory for legal professionals, the AI Act presents a new and complex challenge. This lack of experience directly impacts the application of the AI Act, particularly in situations where legal interpretations are unclear. 

Take, for example, the ongoing debate around deployer vs. provider classification for models like Large Language Models (LLMs) used with Retrieval Augmented Generation (RAG) or fine-tuning. Over the next few years, keeping pace with regulatory guidance on the AI Act will be crucial for effectively navigating these grey areas.
Finally, while the GDPR protects individual data rights, the AI Act sets standards for how AI systems are built: conformity. This is exemplified by the principle of transparency, which appears in both the GDPR and the AI Act but is implemented in very different ways.

AI Governance: A Familiar Yet Evolving Landscape
AI governance was another focal point of the conference, drawing parallels with data governance. Both disciplines aim to secure decision-making: data governance focuses on data-driven decisions and compliance, while AI governance focuses on securing decisions based on model outputs and services, also laying the foundation for compliance. Several sessions delved into the specific elements of AI governance, emerging best practices, and how to integrate them with existing governance models. The main takeaway: don't treat AI governance as a valueless obligation, but as a boost to productivity and efficiency in AI.

The conversation expanded to address not just the critical need for governance of high-risk AI models but also its relevance for lower-risk ones. A thought-provoking question arose: can we ensure that the principle of purpose limitation is maintained when applying fine-tuning or Retrieval Augmented Generation (RAG) techniques to general-purpose AI models? This scenario underscores the importance of organised and operationalised AI governance across all types of models. This involves documenting the purpose of each AI model, its operational context, associated risks, legal basis, accuracy, robustness, and potential impact on individual rights.
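One way to operationalise this kind of documentation is to keep a structured governance record per model. The following Python sketch is purely illustrative; the field names and the naive purpose-limitation check are assumptions, not a standard, and a real compatibility assessment would require legal review.

```python
from dataclasses import dataclass, field

@dataclass
class ModelGovernanceRecord:
    """Illustrative per-model governance metadata (hypothetical schema).

    Adapt the fields to your organisation's AI governance framework
    and the AI Act obligations that actually apply to the model.
    """
    name: str
    purpose: str                  # documented, specific purpose of the model
    operational_context: str      # where and how the model is deployed
    legal_basis: str              # e.g. contract, consent, legitimate interest
    risk_level: str               # e.g. "minimal", "limited", "high"
    risks: list = field(default_factory=list)
    accuracy_notes: str = ""      # measured accuracy and evaluation setup
    robustness_notes: str = ""    # stress tests, adversarial checks
    rights_impact: str = ""       # potential impact on individual rights

    def purpose_compatible(self, proposed_use: str) -> bool:
        # Deliberately strict placeholder check: anything other than the
        # documented purpose is flagged for human/legal review.
        return proposed_use.strip().lower() == self.purpose.strip().lower()
```

For instance, a RAG deployment documented for "customer support answers" would flag a proposed reuse for marketing profiling as incompatible, prompting a fresh assessment rather than silent repurposing.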

Transparency and Risk Management: Beyond Compliance Software
While the conference featured software vendors offering compliance solutions, the key takeaway was that simply purchasing a tool isn't enough. Organisations still require a comprehensive approach. This includes:

  • Integrating Legal Expertise: Close collaboration with internal legal departments is crucial.
  • Customised Impact Assessments: Organisations need to conduct tailored impact assessments specific to their needs.
  • Defining Minimum AI Governance: Establishing minimum standards for responsible AI governance within the organisation.
  • Model Lifecycle Management: Integrating AI model lifecycle management practices.
  • Role & Responsibility Definition: Clearly defining roles and responsibilities for AI development and deployment.
  • Process and Workflow Development: Building and implementing processes for data and AI governance.

This emphasises (again) a key point: a foundational level of AI governance is vital, even if organisations aren't deploying high-risk models. Raising market awareness about this need is crucial. 

The IAPP AI Governance Global 2024 conference in Brussels highlighted the critical need for a unified approach to AI governance that transcends compliance and leverages it as a driver for business efficiency and innovation. At Euranova, we are committed to bridging the gap between technological advancements and regulatory requirements, empowering organisations to harness the full potential of AI while adhering to the highest standards of governance and compliance.

Euranova: AI Governance and Compliance
As expert consultants in AI, data infrastructure, data governance, and legal compliance with a deep understanding of the evolving AI landscape, Euranova is uniquely positioned to help organisations navigate the complexities of the AI Act and implement robust AI governance frameworks. Should your organisation need guidance to operationalise effective data and AI governance practices, we are available to help. 
