The Case for a Global AI Observatory (GAIO), 2023


The Artificial Intelligence & Equality Initiative (AIEI) is an impact-oriented community of practice seeking to understand how AI impacts equality for better or worse. AIEI works to empower ethics in AI so that it is deployed in a just, responsible, and inclusive manner.

The authors of the following GAIO proposal are Professor Sir Geoff Mulgan, UCL; Professor Thomas Malone, MIT; Divya Siddharth and Saffron Huang, the Collective Intelligence Project, Oxford University; Joshua Tan, Executive Director, the Metagovernance Project; and Lewis Hammond, Cooperative AI Foundation.

Here we suggest a plausible and complementary step that the world could agree on now as a necessary condition for more serious regulation of AI in the future. (The proposal draws on work with colleagues at UCL, MIT, Oxford, the Collective Intelligence Project, Metagov, and the Cooperative AI Foundation, as well as on previous proposals.)

A Global AI Observatory (GAIO) would provide the facts and analysis needed to support decision-making. It would synthesize the science and evidence required to underpin a diversity of governance responses, and it would address the great paradox of the field: one founded on data, yet in which so little is known about what is happening in AI and what might lie ahead. Currently no institution exists to advise the world by assessing and analyzing both the risks and the opportunities, and much of the most important work is kept deliberately secret. GAIO would fill this gap.

The world already has a model in the Intergovernmental Panel on Climate Change (IPCC). Established in 1988 by the United Nations with member countries from around the world, the IPCC provides governments with scientific information they can use to develop climate policies.

A comparable body for AI would provide a reliable basis of data, models, and interpretation to guide policy and broader decision-making. A GAIO would differ from the IPCC in some respects, needing to work far faster and in more iterative ways. But ideally it would, like the IPCC, work closely with governments, providing them with the guidance they need to design laws and regulations.

At present, numerous bodies collect valuable AI-related metrics. Nation-states track developments within their borders; private enterprises gather relevant industry data; and organizations like the OECD's AI Policy Observatory focus on national AI policies and trends. There have also been attempts to map options for the governance of more advanced AI, such as this from governance.ai. While these initiatives are a crucial beginning, there remains a gulf between how scientists think about these issues and how the public, governments, and politicians do. Moreover, much about AI remains opaque, often deliberately. Yet it is impossible to sensibly regulate what governments don't understand.