Advanced AI: Technical State of Play
Daan Juijn | June 2024

Executive Summary

Advanced AI systems are evolving rapidly, revolutionizing industries and societies with capabilities like sophisticated language processing, text-to-speech conversion, and lifelike imaging. This surge is primarily fueled by exponential increases in the computing power used to train advanced AI systems: the largest AI models today are trained using billions of times more mathematical operations than state-of-the-art systems required in 2010.

Over the next five years, compute, alongside algorithmic advancements, is expected to continue driving exponential progress in advanced AI. With AI companies increasing their compute budgets by more than fourfold annually, breakthrough AI systems, such as highly proficient autonomous agents, could soon emerge. These systems have the potential to rejuvenate European economies but also pose significant new risks, such as widespread cyber-attacks or large-scale accidents resulting from poor understanding of the systems’ inner workings.
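To make the cumulative effect of that growth rate concrete, here is a minimal sketch. The starting value of roughly 10^25 FLOP for a current frontier training run is an illustrative assumption, not a figure from this report; treat the outputs as order-of-magnitude only.

```python
# Illustrative projection of training compute under a sustained ~4x annual
# increase (the growth rate cited above). The starting point is a rough,
# hypothetical estimate for a 2024 frontier training run.
def project_compute(start_flop: float, annual_factor: float, years: int) -> float:
    """Project total training compute after `years` of exponential growth."""
    return start_flop * annual_factor ** years

start = 1e25  # assumed current frontier training run, in FLOP
for year in (1, 3, 5):
    print(f"+{year}y: ~{project_compute(start, 4.0, year):.0e} FLOP")
```

After five years of sustained fourfold growth, compute budgets would be roughly 4^5, or about a thousand times, larger, which is why compute trends feature so prominently in the recommendations below.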

In response to these rapid developments, the EU Parliament recently approved the landmark AI Act in an effort to govern (advanced) AI systems. However, in its current form, the AI Act may be insufficient to curb the risks of models that will be released in the near future, possibly as soon as next year. Given the pace of improvement in AI capabilities, there is a critical need for more robust measures that leverage compute for effective AI governance. The following five recommendations suggest avenues to future-proof EU efforts.

Key policy recommendations

  1. Strategic allocation of EU compute resources. The European High Performance Computing (EuroHPC) Joint Undertaking should shift its focus away from inadequate endeavors aimed at training competitive foundation models from scratch. Instead, EuroHPC should double down on the other pillars of its AI Factories program by:
    1. Enhancing the understanding, safety, and control of advanced AI models through compute-intensive research. This kind of research can also help spur a thriving European AI insurance sector.
    2. Developing large but specialist AI systems that can help tackle societal problems in fields such as medicine, energy, or climate science, and which are not expected to be taken up by the leading advanced AI companies.
  2. Extension of the AI Act’s GPAI regulation. The European Commission should prepare the addition of a third tier to the AI Act’s GPAI regulation that addresses the severe systemic risks posed by the next generations of GPAI models. This extension would include:
    1. An appropriate additional compute threshold above which GPAI models carry the presumption of severe systemic risk.
    2. Requirements that mitigate the formation of dangerous capabilities during training and prevent pre-deployment proliferation of inherently hazardous model weights.
  3. Compute-based enforcement scaling and prioritization. The AI Office should scale and prioritize enforcement efforts in alignment with compute trends. More specifically, the AI Office should:
    1. Strengthen its resolve to prioritize evaluation of the GPAI models with the largest compute budgets when personnel capacity is (temporarily) limited.
    2. Conduct or commission detailed capacity requirement projections based on compute trends, and hire/seek collaborations in line with those projections. 
  4. Establishment of an EU AI foresight unit. The European Commission should create a dedicated AI foresight unit within the AI Office to better deliver on the Office’s task to keep track of the evolution of AI markets and technologies. Studying (effective) compute trends would enable this unit to:
    1. Discern several quantitative scenarios of future training compute budgets and inference capacities.
    2. Work together with academia and civil society to map out what types of capabilities and accompanying risks might arise in the coming years for each of these scenarios.
  5. Implementation of a multilateral compute oversight system. The EU should start international dialogues to implement a multilateral compute oversight system that builds on the monitoring requirements of the EU AI Act and the recent US Executive Order 14110. This oversight system could start out as a bilateral agreement between the EU and the US and could afterwards be extended to other G7 countries. Monitoring requirements would:
    1. Focus on the location of large AI clusters (theoretical maximum of >10^20 FLOP/s and >100 Gbit/s networking) and planned or ongoing very large training runs (>10^26 FLOP).
    2. Apply within each individual jurisdiction, with participating governments committing to sharing decision-relevant high-level information with each other.
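The monitoring thresholds in recommendation 5 can be expressed as a simple eligibility check. The sketch below is illustrative only: the function names and the combined cluster test are assumptions for exposition, not part of the proposed oversight system.

```python
# Hypothetical sketch of the reporting thresholds from recommendation 5.
# The thresholds mirror the figures in the text; everything else (names,
# structure) is illustrative.
LARGE_CLUSTER_FLOPS = 1e20      # theoretical peak performance, FLOP/s
LARGE_CLUSTER_NET_GBITS = 100   # networking capacity, Gbit/s
LARGE_RUN_FLOP = 1e26           # total training compute, FLOP

def cluster_is_reportable(peak_flops: float, network_gbits: float) -> bool:
    """A cluster meets the bar only if BOTH criteria are exceeded."""
    return peak_flops > LARGE_CLUSTER_FLOPS and network_gbits > LARGE_CLUSTER_NET_GBITS

def training_run_is_reportable(total_flop: float) -> bool:
    """A planned or ongoing run meets the bar above 10^26 FLOP."""
    return total_flop > LARGE_RUN_FLOP
```

Under this reading, a cluster below either criterion (say, high peak compute but slow interconnect) would fall outside the monitoring requirement, which keeps the regime focused on facilities capable of frontier-scale training.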


“When it comes to developing the next generation of AI systems, all the leading companies are betting on compute to push them over the edge, so the race is on to secure as much of it as they can,” said Daan Juijn, Emerging Technology Foresight Analyst at ICFG. “For the next five years, compute is expected to drive exponential progress in AI – and that’s why focusing governance efforts on this hardware is so critical.”

“Even though the EU’s AI Act is a major step, we cannot afford to rest on our laurels – regulation alone won’t make the EU competitive,” said Max Reddel, ICFG’s Advanced Artificial Intelligence Program Lead. “The eight EuroHPC supercomputers house some 32,000 specialised AI chips. Microsoft is targeting 1.8 million AI chips by the end of 2024. That’s roughly a 100x difference in AI compute resources.”

The International Center for Future Generations (ICFG), headquartered in Brussels, is a newly established, independent think tank committed to advancing public policy concerning emerging technologies such as advanced AI, biotechnology, neurotechnology, quantum computing, and climate interventions.

This brief is part of a broader series of State of Play reports on advanced AI scheduled for release throughout 2024. ‘Advanced AI’ refers to highly capable foundation models (such as GPT-4, Claude Opus and Gemini Ultra) or systems built on top of foundation models that can possess capabilities sufficient to pose serious risks to public safety. This particular document zeroes in on the role that computational power (or ‘compute’) plays in advanced AI developments. More specifically, it collects the most policy-relevant facts and trends on compute in AI and tries to present these in a non-technical manner. Additionally, it provides five specific policy recommendations to assist EU policymakers in utilizing compute to foster responsible AI governance.
