Alongside the rapid development of artificial intelligence, we’ve seen a proliferation of AI “principles,” or guidelines for how AI should be built and used. Governments, companies, advocacy groups, and multi-stakeholder initiatives have all advanced perspectives. This project emerged from our curiosity about these principles. Were they wildly divergent, or was there enough commonality to suggest the emergence of sectoral norms? Some were framed as ethical in nature; others drew from human rights law. How did that framing affect their content? We wanted a way to look at the principles documents side by side, to assess them individually and identify important trends, so we built one.
Our current dataset includes 32 such principles documents. We collected up to 80 data points about each one, including the actor behind the document, the date of publication, the intended audience, and the geographical scope, as well as detailed data on the principles themselves. A wide variety of actors is represented, from individual tech companies’ guidelines for their own implementation of AI technology, to statements by multi-stakeholder coalitions, to publications from national governments that incorporate ethical principles as part of an overall AI strategy. We expected to find some key themes, and indeed we uncovered eight: accountability, fairness and non-discrimination, human control of technology, privacy, professional responsibility, promotion of human values, safety and security, and transparency and explainability. Many of the documents address all of these themes; all hit at least a few. We also collected data on whether and how the principles documents referenced human rights, which just under half did.
It is our hope that the Principled Artificial Intelligence project will be a starting point for further scholarship and advocacy on this topic. To that end, we have created a data visualization that summarizes our findings. We are excited to share this visualization in draft form and invite you to provide feedback and ask questions by filling out this form.
This summer, we will publish the final data visualization along with the dataset itself and a white paper detailing our assumptions, methodology, and key findings. If you would like to be notified when the white paper is published, sign up here. You can also visit ai-hr.cyber.harvard.edu to view publications related to the Principled Artificial Intelligence Project and the Berkman Klein Center’s previous project Artificial Intelligence and Human Rights: Opportunities & Risks.
For more information, feel free to get in touch with Jessica Fjeld at jfjeld@law.harvard.edu.