Enhancing Global AI Governance through Compute Resource Management
Eva Behrens, David Janků, Bengüsu Özcan, Max Reddel | May 2024
This piece was written in response to an invitation by the Istituto Affari Internazionali on the occasion of the Italian presidency of the G7. It was originally published here.
In October 2023, the G7 countries published the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, in which they suggest the implementation of measures to “identify, evaluate and mitigate risks across the AI lifecycle”. As artificial intelligence (AI) technology becomes increasingly powerful and integrated into society, safety measures and protocols that ensure these systems function safely and predictably become increasingly important. Many current AI governance efforts, such as the UK’s AI Safety Institute, chiefly address the deployment and post-deployment stages of the AI lifecycle through model evaluations, leaving out the pre-development and development phases. Even within their scope, these safeguards are not enough: current AI safety evaluation regimes lack rigorous methodologies to predict and mitigate risks, because persistent gaps in experts’ understanding of the inner workings of AI models make it difficult to generalise experimental results.
Nonetheless, ever more powerful AI systems are released every few months, while AI experts and developers sound the alarm about advanced AI posing extreme risks on a global scale. Their concerns range from the large-scale deployment of lethal autonomous weapons, or malicious actors destabilising governments through advanced AI-driven misinformation campaigns, to smarter-than-human, out-of-control AI accidentally causing human extinction. Considering these high stakes, we need transparent, robust governance mechanisms that address the pre-development stage of the AI lifecycle to safeguard against the development of high-risk advanced AI systems.
Developing advanced AI systems requires three key components: algorithms, massive high-quality datasets and access to compute resources (powerful microchips). Of the three, managing access to and use of compute resources is the most convenient lever for pre-development AI governance. Empirical data suggests that an AI model’s size and capabilities scale with the compute used to train it, with the most powerful models requiring tens of millions of dollars’ worth of cutting-edge microchips to train. Conveniently, compute is also detectable and quantifiable, it is produced via a highly concentrated supply chain, and access to it can be granted or restricted physically.
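To illustrate why compute is so quantifiable, consider the widely used rule of thumb that training a dense AI model takes roughly six floating-point operations (FLOPs) per model parameter per training token. The sketch below applies this heuristic; the model size and dataset figures are illustrative assumptions, not statistics about any real system.

```python
# A minimal sketch of estimating training compute from public model
# characteristics, using the common "6ND" rule of thumb:
# FLOPs ≈ 6 × parameters × training tokens. All figures are hypothetical.

def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs for a dense model."""
    return 6 * n_parameters * n_tokens

# A hypothetical frontier-scale model: 1 trillion parameters trained
# on 10 trillion tokens.
flops = estimate_training_flops(1e12, 1e13)
print(f"Estimated training compute: {flops:.1e} FLOPs")  # 6.0e+25 FLOPs
```

Because training compute at this scale can only come from large, identifiable clusters of specialised chips, such estimates give regulators an observable proxy for model capability that algorithms and datasets do not offer.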
Therefore, we propose that the G7 countries back an international institution tasked with designing international standards for the responsible management of compute resources. Such standards would help expand AI governance to cover the entire AI lifecycle, as appropriate for such a high-risk technology, and as proposed in the 2023 Hiroshima Guiding Principles.
The International Compute Governance Consortium
We suggest that the G7 countries support the establishment of an International Compute Governance Consortium (ICGC) tasked with developing standards for the responsible use and distribution of compute resources in AI research and development.
To design well-informed standards, this new institution would initially focus on gathering information on present compute ownership and use by the public and private sectors within the jurisdictions of its member states, tracking compute use and assessing its impact. By collecting such data, the ICGC would create transparency about who controls compute and who has access to it, fostering accountability and informed policymaking. This process would also lay the groundwork for a potential future multilateral organisation that manages access to compute, ensuring that malicious actors and those following unsafe practices cannot obtain enough compute to cause significant damage.
While the G7 countries would support and aid with the founding of the ICGC, it would be an international institution open to all countries and co-governed by its member states. The internationalisation of compute governance is necessary because advanced AI systems pose extreme risks on a global scale, making AI safety a global challenge. Internationally cohesive action will therefore be impossible without the participation of major non-G7 stakeholders such as China.
However, national AI governance interests and priorities vary. France, for example, plans to invest massively in domestic AI innovation to unlock economic growth and to become a global leader in AI. Other countries, like the US and China, are introducing laws and policies requiring developers to disclose information about the training of their advanced AI models to mitigate risks. The challenge in building an international compute governance framework will be finding a solution that respects these diverging national interests while remaining effective.
Creating Transparency: The Global Compute Registry
To fulfil its mission, the ICGC would create a Global Compute Registry, which would track the ownership and use of compute resources. Any entity possessing large-scale computing clusters located or operating in the member states would be required to report such possessions, including each cluster’s location and compute capacity. Changes of possession should also be reported, especially if part of a cluster is transferred to a non-member state. This idea is not without precedent: the 2023 US Executive Order on AI already introduced reporting requirements on location and total capacity for owners of large compute clusters in the US.
Furthermore, owners would be required to report provisions of access to these clusters to any domestic or foreign entity, including the type of use (for instance, training specific or general AI models, with foreseen use cases and risks) and verification of the user’s identity. This approach mirrors Know Your Customer policies in the financial sector, which require companies to verify the identity of their clients to prevent illegal activities. The Global Compute Registry should also publish an annual report presenting data on the amount of compute resources globally available and projecting its growth.
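As a concrete illustration of what such registry entries might contain, the sketch below models one cluster report together with its access grants. The RegistryEntry and AccessGrant structures and all field names are our illustrative assumptions; the proposal itself does not prescribe a data format.

```python
# A minimal sketch of a Global Compute Registry record, assuming a
# simple reporting schema. Structure and field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AccessGrant:
    """A reported provision of cluster access, mirroring KYC practice."""
    user_identity: str      # verified identity of the accessing entity
    user_jurisdiction: str  # domestic or foreign, and which state
    intended_use: str       # e.g. "training a general-purpose AI model"
    declared_risks: str     # foreseen use cases and risks

@dataclass
class RegistryEntry:
    """One large-scale computing cluster reported to the registry."""
    owner: str
    location: str           # member state where the cluster is located
    capacity_flop_s: float  # total computing capacity in FLOP/s
    access_grants: list[AccessGrant] = field(default_factory=list)

# Example report for a hypothetical cluster owner and user.
entry = RegistryEntry(
    owner="ExampleCloud Inc.",
    location="Member state A",
    capacity_flop_s=1e19,
    access_grants=[AccessGrant(
        user_identity="Example AI Lab (identity verified)",
        user_jurisdiction="foreign",
        intended_use="training a general-purpose AI model",
        declared_risks="dual-use capabilities; misuse for disinformation",
    )],
)
```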
Such reporting would allow the ICGC to record and make transparent large concentrations of computing power, and to gather information about their use for the development of safety-first compute use standards. The vast majority of AI models and their applications would remain untouched by such standards, since they neither require high compute concentrations for training nor pose extreme risks.
Evaluating the Impact: The Compute Resource Impact Assessment
A second important function of the ICGC would be assessing the impacts of compute use by establishing a Compute Resource Impact Assessment protocol. This protocol would evaluate the economic, environmental and societal impacts and risks of compute resource allocation, providing crucial context for the ICGC’s development of compute use standards.
The protocol would define compute thresholds above which training a powerful AI system would be considered high-risk, as the EU did in its AI Act, and readjust them regularly. Skewed compute distribution can limit beneficial, low-risk AI innovation and research in under-resourced regions, exacerbating disparities in economic growth, education and employment. The protocol would therefore also assess the societal impact of compute distribution, access and use by analysing the allocation of compute resources across sectors, populations and geographical regions. Finally, under this protocol, the ICGC would examine environmental effects, such as the carbon footprint of compute clusters and chip manufacturing.
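A threshold of this kind can be checked mechanically once training compute is estimated, for instance with the 6ND heuristic sketched earlier. The sketch below uses the EU AI Act’s presumption of systemic risk above 10^25 FLOPs of training compute as its default; any ICGC-specific threshold would be set and readjusted by the protocol itself.

```python
# A minimal sketch of a compute-threshold check. The EU AI Act presumes
# systemic risk for general-purpose models trained with more than 1e25
# FLOPs; an ICGC protocol might define and readjust its own thresholds.

EU_AI_ACT_SYSTEMIC_RISK_FLOPS = 1e25

def is_high_risk_training_run(
    estimated_flops: float,
    threshold: float = EU_AI_ACT_SYSTEMIC_RISK_FLOPS,
) -> bool:
    """Classify a planned training run against a compute threshold."""
    return estimated_flops >= threshold

# The hypothetical 6.0e25 FLOPs run estimated earlier would qualify:
print(is_high_risk_training_run(6.0e25))  # True
```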
Integrating Existing Compute Governance Frameworks
The proposed International Compute Governance Consortium would not only help expand AI governance to cover the entire AI lifecycle, as proposed in the 2023 Hiroshima Guiding Principles. It would also build upon and integrate other existing efforts to establish information disclosure regimes for high-risk AI systems, such as those in the US and the EU, as well as international cooperation efforts like the Bletchley Declaration. The 2023 US Executive Order on the safe development and use of AI requires owners of large-scale computing clusters to report their holdings and any provisions of access to foreign entities. Similarly, the EU AI Act considers models trained on compute resources above a given threshold to potentially pose systemic risk; developers of such models must notify the Commission and comply with several safety precautions. Furthermore, the ICGC would complement the OECD’s AI Principles on Robustness, Security and Safety, promoting international cooperation on AI governance.
To increase participation in the ICGC, the G7 countries could cooperate with other international forums, such as the AI Safety Summit series started by the UK in 2023 and the G20. The G20 could be a fitting partner; the G20 AI Principles state that “AI systems should be robust, secure and safe throughout their entire lifecycle”, which matches the G7’s Hiroshima Principles, and its membership includes some major countries not represented in the G7, such as China and India.
By supporting ongoing efforts of international cooperation on AI governance, the ICGC would enhance existing national AI governance frameworks, standardising compute data collection and impact assessments. As such, it would increase the transparency of compute use and lay the groundwork for the design of international compute resource management standards that ensure the development of safe, beneficial AI.