Tags
article, report, podcast
Date
Aug 29, 2023 2:03 PM
The Artificial Intelligence & Equality Initiative (AIEI) is an impact-oriented community of practice seeking to understand how AI impacts equality for better or worse. AIEI works to empower ethics in AI so that it is deployed in a just, responsible, and inclusive manner.
Podcast: "Achieving an International Framework for the Governance of AI," published on July 7, 2023.

The rapid deployment of generative artificial intelligence has created an inflection point for the responsible use of technology and presented society with a critical question: How can we promote the benefits of innovative technologies while simultaneously addressing potential disruptions and ensuring public safety and security? We argue that this balancing act is achievable, but not without a robust framework in place for the international governance of AI. The international community must act now to establish potent governance mechanisms that provide effective ethical and legal oversight of AI. Drawing on ideas and concepts discussed in two June 2023 multidisciplinary expert workshops organized by Carnegie Council's AI and Equality Initiative and IEEE SA, and hosted by UNESCO in Paris and ITU in Geneva, a formal framework for the international governance of AI was developed. The framework proposes five symbiotic components, three of which we mention here. First is the need to immediately create a Global AI Observatory (GAIO) to facilitate communication, cooperation, and a degree of coordination among the many initiatives entering the AI governance space. A GAIO should perform an array of tasks, from producing annual reports to maintaining critical registries. Second is the development of a normative governance capability with limited enforcement authority that legitimizes global oversight of AI, understanding this may take more time to develop. Finally, a variety of tools, some technical, will need to be developed to ensure the safety and transparency of AI applications and to facilitate the ability of states and stakeholders to govern them. Our aim is for these recommendations to have an immediate effect.
Therefore, we have purposefully directed attention to practical ways to implement a governance framework that builds on existing resources to promote understanding, collaboration, and implementation. Such a framework would enable the constructive use of AI and related technologies, while helping to prevent immature uses or misuses that cause societal disruption or threaten public safety and international stability. And now, the framework.

Overview. Promoting the benefits of innovative technologies requires addressing potential societal disruptions and ensuring public safety and security. The rapid deployment of generative artificial intelligence applications underscores the urgency of establishing robust governance mechanisms for effective ethical and legal oversight. This concept note proposes the immediate creation of a Global AI Observatory, supported by cooperative consultative mechanisms, to identify and disseminate best practices, standards, and tools for the comprehensive international governance of AI systems.

Purpose. This initiative directs attention to practical ways to put into place a governance framework that builds on existing resources and can have an immediate effect. Such a framework would enable the constructive use of AI and related technologies while helping to prevent immature uses or misuses that cause societal disruption or pose threats to public safety and international stability.

From principles to practice. Numerous codes of conduct and lists of principles for the responsible use of AI already exist; those issued by UNESCO and the OECD/G20 are the two most widely endorsed. In recent years, various institutions have been working to turn these principles into practice through domain-specific standards. A few states and regions have made proposals, and some have even enacted constraints, upon specific uses of AI.
For example, the European Commission released a comprehensive legal framework, the EU AI Act, aiming to ensure safe, transparent, traceable, non-discriminatory, and environmentally sound AI systems overseen by humans. The Beijing Artificial Intelligence Principles were followed by new regulations placed upon corporations and applications by the Cyberspace Administration of China. Various initiatives at the federal and state levels in the United States further emphasize the need for a legislative framework. The UN Secretary-General also recently proposed a high-level panel to consider IAEA (International Atomic Energy Agency)-like oversight of AI.

Proposed framework. Governance of AI is difficult because it impacts nearly every facet of modern life. Challenges range from interoperability to ensuring that applications contribute to, and do not undermine, the realization of the SDGs. These challenges change throughout the lifecycle of a system and as technologies evolve. A global governance framework must build upon the work of respected existing institutions and new initiatives fulfilling key tasks such as monitoring, verification, and enforcement of compliance. Only a truly agile and flexible approach to governance can provide continuous oversight for evolving technologies that have broad applications, with differing timelines for realization and deployment, and a plethora of standards and practices with differing purposes. Mindful of political divergences around issues of technology policy and governance, it will take time to create a new global body. Nevertheless, specific functions can and should be attended to immediately. For example, a global observatory for AI can be managed by an existing, neutral intermediary capable of working in a distributed manner with other non-profit technical bodies and agencies qualified in matters related to AI research and its impact on society.
To establish an effective international AI governance structure, five symbiotic components are necessary. Number one, a neutral technical organization to sort through which legal frameworks, best practices, and standards have risen to the highest level of global acceptance. Ongoing reassessments will be necessary as the technologies and regulatory paradigms evolve. Number two, a Global AI Observatory (GAIO) tasked with standardized reporting, at both general and domain-specific levels, on the characteristics, functions, and features of AI and related systems released and deployed. These efforts will enable assessment of AI systems' compliance with existing standards that have been agreed upon. Reports should be updated in as close to real time as possible to facilitate the coordination of early responses before significant harm has been done. The observatories that already exist, such as that at the OECD, do not represent all countries and stakeholders, nor do they provide oversight, enable sufficient depth of analysis, or fulfill all the tasks proposed below. GAIO would orchestrate global debates and cooperation by convening experts and other relevant and inclusive stakeholders as needed. GAIO would publish an annual report on the state of AI that analyzes key issues, patterns, standardization efforts, and investments that have arisen during the previous year, and the choices governments, elected leaders, and international organizations need to consider. This would involve strategic foresight and scenarios focused primarily on technologies likely to go live in the succeeding two to three years. These reports will encourage the broadest possible agreement on the purposes and application norms of AI platforms and specific systems. GAIO would develop and continuously update four registries.
Taken together, a registry of adverse incidents and a registry of new, emerging, and, where possible, anticipated applications will help governments and international regulators attend to potential harms before the deployment of new systems. The third registry will track the history of AI systems, including information on testing, verification, updates, and the experiences of states that have deployed them. This will help the many countries that lack the resources to evaluate such systems. A fourth registry will maintain a global repository for data, code, and model provenance. Number three, a normative governance capability with limited enforcement powers to promote compliance with global standards for the ethical and responsible use of AI and related technologies. This could involve creating a technology passport system to ease assessments across jurisdictions and regulatory landscapes. Support from existing international actors such as the UN would provide legitimacy and a mandate for this capability. It could be developed within the UN ecosystem through collaboration between the ITU, UNESCO, and OHCHR, supported by global technical organizations such as IEEE. Number four, a conformity assessment and process certification toolbox to promote responsible behavior and assist with confidence-building measures and transparency efforts. Such assessments should not be performed by the companies that develop AI systems or the tools used to assess those systems. Number five, ongoing development of technological tools, "regulation in a box," whether embedded in software, hardware, or both, is necessary for transparency, accountability, validation, audit and safety protocols, and to address issues related to the preservation of human, social, and political rights in all digital goods, each of which is a critical element of confidence-building measures.
Developed with other actors in the digital space, these tools should be continuously audited for erroneous activity and adapted by the scientific and technical community. They must be accessible to all parties at no cost. Assistance from the corporate community in providing and developing tools, and input on technical feasibility, is essential, as will be their suggestions regarding norms. However, regulatory capture by those with the most to gain financially is unacceptable. Corporations should play no final role in setting the norms, in their enforcement, or in deciding to whom the tools should be made available. We are fully aware that this skeleton framework raises countless questions as to how such governance mechanisms would be implemented and managed, how their neutrality and trustworthiness can be established and maintained, or how political and technical disagreements will be decided and potentially harmful consequences remediated. However, it is offered to stimulate deeper reflection on what we have learned from promoting and governing existing technologies, what is needed, and the next steps forward.

Emerging and converging technologies. This framework has significant potential applications beyond the AI space. If effective, many of the components proposed could serve as models for the governance of as-yet-unanticipated fields of scientific discovery and technological innovation. While generative AI is making it urgent to put international governance in place, many other existing, emerging, and anticipated fields of scientific discovery and technological innovation will require oversight. These fields are amplifying each other's development and converging in ways difficult to predict.
This proposal, developed by Carnegie Council for Ethics and International Affairs in collaboration with IEEE, draws on ideas and concepts discussed in two June 2023 multidisciplinary expert workshops organized by Carnegie Council's AI and Equality Initiative and IEEE SA, and hosted by UNESCO in Paris and ITU in Geneva. Participation in these workshops, however, does not imply endorsement of this framework or any specific ideas within it.