Europe’s Imperative to Shape a Safe AI Future

Max Reddel | November 28, 2023

In the rapidly evolving landscape of technology, the European Union stands at a crossroads with the Artificial Intelligence Act (AIA). A key decision looms: whether or not to exempt foundation models from this regulatory framework. This choice is not a matter of European competitiveness; it is a foundation stone for responsibly governing our technology-infused future.

Understanding Foundation Models

Foundation models are the underlying technology of advanced AI systems: they already underpin a broad spectrum of applications and are continuously evolving. These versatile ‘master tools’ of AI are trained on extensive data to interpret language, images, and more. Their development is accelerating faster than anticipated, often outpacing the full understanding of their creators, and they are rapidly approaching a level of sophistication that could soon pose a serious risk of misuse and perhaps even loss of control. With this technology set to displace millions of jobs and reshape our societies, a growing chorus of AI developers and politicians is calling for responsible and safe governance frameworks, and asking political leaders to prepare for the moment these systems start to outsmart humans. The AI Safety Summit, at which the United Kingdom recently gathered world leaders at Bletchley Park, discussed the next iteration of foundation models as we know them: so-called frontier models that could pose serious risks.

A recent last-minute proposal by France, Germany, and Italy in the ongoing negotiations of the EU AI Act suggests an approach that focuses on AI applications and exempts foundation models, the most powerful and advanced form of AI out there. Under such a framework, derived AI applications would be subject to government regulations, primarily impacting everyday AI usage and smaller businesses. Conversely, foundation models, which are developed by a handful of large tech companies worldwide, would be allowed to self-regulate. This could create a paradoxical situation where the very entities capable of creating the most impactful and potentially risky AI advancements are the least regulated. Does that sound right?

This is perplexing, especially considering that European companies like Mistral AI and Aleph Alpha, which enjoy substantial international investment, are still struggling to compete with their predominantly American counterparts. Such a disparity in regulatory oversight could not only influence the dynamics of AI innovation but also further skew the competitive landscape in favor of those facing fewer regulatory constraints.

The recent upheaval at OpenAI, involving the dismissal and subsequent rehiring of key figures, highlights a critical aspect of the global AI race: the fragility of governance structures in major AI entities. This incident, marked by internal power struggles and rapid changes in leadership, underscores the potential volatility in the AI sector. Such instability can exacerbate the race dynamics toward developing highly capable AI systems, as organizations may prioritize rapid development over thorough safety and ethical considerations to maintain a competitive edge or respond to internal pressures. This situation serves as a cautionary example of how governance and leadership disputes within major AI entities can inadvertently fuel a global race towards more powerful AI, heightening the need for comprehensive regulatory frameworks like the AIA.

Strengthening European AI

Including foundation models in the AIA, in particular through the hotly disputed Article 28b, offers Europe a chance to strengthen its position in the global AI market. This step is supported by industry leaders and over 45,000 digital SMEs, who recognize the importance of these models for European innovation.

Article 28b introduces a tiered regulatory approach, with larger models facing more scrutiny. This approach doesn’t stifle innovation but promotes responsible, safe, and values-aligned AI development. It levels the playing field for European innovators against larger global competitors.

A development-focused regulatory approach simplifies compliance for secondary vendors and smaller enterprises, boosting innovation and diversity in the European AI ecosystem. Regulating foundation models sends a clear message to international markets about Europe’s commitment to responsible AI development.

The Necessity of Regulation

Because of their far-reaching impact and foundational nature, these models are set to shape societal norms, knowledge work, and potentially democracy itself. This calls for a regulatory approach that addresses the foundational elements rather than just the applications built upon them. Just as a building’s design dictates its functionality and safety, the core algorithms and data of these models dictate how AI applications function and interact with the world.

Regulating at the application level is insufficient; it’s like inspecting individual rooms while ignoring the building’s overall design. If the foundation model – the blueprint – is flawed, application-level regulation cannot fully prevent potential harm.

Setting a Global Standard

Beyond European borders, the focus on foundation models also aligns with a growing global consensus on tightening AI regulations at their foundational level. By regulating foundation models, the EU can set a global standard for digital ethics and governance, positioning itself as a trailblazer in the digital age.

Appropriately regulating foundation models ensures that our digital spaces are designed to be safe, inclusive, and reflective of shared values. It aligns a largely digital future with European principles of democracy and human rights.

A Call to Action

Retaining the regulation of foundation models in the AIA is pivotal: it is a step towards a future where AI is not only technologically sophisticated but also kept in check. The world is on the cusp of a technological revolution. The AIA has the potential to lead the way for a global approach to technology and society, embedding the fundamental principles and values that should guide us into the future.
