See what guidance different countries and regions recommend.
News stories from the last few years show that AI can be discriminatory; the most widely known examples have occurred in banking and financial services.2 However, other sectors are not immune. For example, an online retailer disbanded an internal team after controversy over the algorithms it used in its hiring process. The algorithms used to vet potential employees were reported to be biased because they were trained mostly on resumes from men, meaning they could identify more male candidates than female candidates and perpetuate a gender bias.3 Similarly, in the public sector, a study revealed that UK police officers questioned whether using algorithms to predict future crime could result in bias and discrimination.4
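To make the hiring example concrete, here is a minimal sketch (with made-up data; the function names, field values and 0.8 threshold are illustrative assumptions, not details of any system cited above) of how such a gender disparity could be quantified by comparing selection rates under the common four-fifths rule.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Return the share of candidates selected within each group.

    `decisions` is a list of (group, selected) pairs, e.g. ("female", True).
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def flags_disparate_impact(decisions, threshold=0.8):
    """Flag a disparity when any group's selection rate falls below
    `threshold` times the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return any(rate < threshold * highest for rate in rates.values())

# Example: a screening model that advances 60% of male and 30% of female candidates.
sample = ([("male", True)] * 60 + [("male", False)] * 40
          + [("female", True)] * 30 + [("female", False)] * 70)
print(selection_rates(sample))         # {'male': 0.6, 'female': 0.3}
print(flags_disparate_impact(sample))  # True, because 0.3 < 0.8 * 0.6
```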
To reduce the risk of harms such as AI bias and discrimination, many countries and regions have adopted guidance and regulations on how to govern AI.
US: SR 11-7
SR 11-7 is the US regulatory guidance on model risk management, intended to ensure effective and strong model governance in banking. It directs bank officials to apply company-wide model risk management initiatives and to maintain an inventory of models implemented for use, under development for implementation, or recently retired. Leaders of these institutions must also prove that their models achieve the business purpose they were designed for, and that they remain up to date and have not drifted. Model development and validation documentation must enable anyone unfamiliar with a model to understand the model’s operations, limitations and key assumptions.5
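As a rough illustration of the inventory requirement (a sketch only; the record fields, status labels and example values below are assumptions, not part of SR 11-7 itself), a bank’s model register might capture each model’s lifecycle status, business purpose, validation history, and documented assumptions and limitations:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List

class ModelStatus(Enum):
    # Lifecycle categories mirroring the inventory described in the guidance
    IN_USE = "implemented for use"
    IN_DEVELOPMENT = "under development for implementation"
    RETIRED = "recently retired"

@dataclass
class ModelRecord:
    """One entry in a hypothetical company-wide model inventory."""
    name: str
    status: ModelStatus
    business_purpose: str              # what the model is intended to achieve
    owner: str                         # accountable team or individual
    last_validated: date               # most recent independent validation
    key_assumptions: List[str] = field(default_factory=list)
    known_limitations: List[str] = field(default_factory=list)

# Example (hypothetical) inventory entry
credit_model = ModelRecord(
    name="retail-credit-scoring-v3",
    status=ModelStatus.IN_USE,
    business_purpose="Estimate default risk for consumer loan applications",
    owner="Credit Risk Analytics",
    last_validated=date(2020, 3, 1),
    key_assumptions=["Applicant income is self-reported and unverified"],
    known_limitations=["Not validated for small-business lending"],
)
print(credit_model.status.value)  # "implemented for use"
```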
Canada: Directive on Automated Decision-Making
Canada’s Directive on Automated Decision-Making describes how that country’s government uses AI to guide decisions in several departments. The directive uses a scoring system to assess the level of human intervention, peer review, monitoring and contingency planning needed for an AI tool built to serve citizens. Organizations creating AI solutions with a high score must conduct two independent peer reviews, offer public notice in plain language, develop a human intervention failsafe, and establish recurring training courses for the system.6 Because the directive governs the Canadian government’s own use of AI, it doesn’t directly affect companies the way SR 11-7 does in the US.
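As a loose sketch of how such tiered obligations might be expressed in practice (the level numbers and requirement wording below are illustrative assumptions, not the directive’s official text):

```python
# Illustrative mapping from an assessed impact score to required safeguards,
# loosely modeled on the directive's tiered approach. Level numbers and
# wording are assumptions for illustration, not the official requirements.
SAFEGUARDS_BY_IMPACT_LEVEL = {
    1: ["plain-language notice that a decision is automated"],
    2: ["plain-language notice that a decision is automated",
        "one independent peer review"],
    3: ["plain-language notice that a decision is automated",
        "independent peer review",
        "human intervention failsafe"],
    4: ["plain-language public notice",
        "two independent peer reviews",
        "human intervention failsafe",
        "recurring training courses for the system"],
}

def required_safeguards(impact_level: int) -> list:
    """Return the safeguards a hypothetical citizen-facing AI tool would need."""
    if impact_level not in SAFEGUARDS_BY_IMPACT_LEVEL:
        raise ValueError("impact_level must be between 1 and 4")
    return SAFEGUARDS_BY_IMPACT_LEVEL[impact_level]

# A system that scores at the highest impact level triggers the strictest set.
for safeguard in required_safeguards(4):
    print("-", safeguard)
```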
Europe’s evolving AI regulations
In 2019, the European Commission’s incoming president said she planned to introduce new legislation governing AI.7 The proposed legislation would require high-risk AI systems to be transparent, traceable and under human control, and authorities would check AI systems to make sure the data sets used to train them were unbiased. The commission also wanted to launch a debate throughout the European Union (EU) about when and whether to use facial recognition and other biometric identification.8
AI governance guidelines in the Asia-Pacific region
In the Asia-Pacific region, countries have released several principles and guidelines for governing AI. In 2019, Singapore’s government released a framework with guidelines for addressing AI ethics issues in the private sector. India’s AI strategy framework recommends setting up a center to study how to address issues related to AI ethics, privacy and more. China, Japan, South Korea, Australia and New Zealand are also exploring guidelines for AI governance.9
2 Anthony Macciola, “Bad, biased, and unethical uses of AI.” The Enterprisers Project, 29 Aug. 2019. Accessed 14 June 2020.
3 “Amazon scraps secret AI recruiting tool that showed bias against women.” Reuters, 9 Oct. 2018. Accessed 14 June 2020.
4 Alexander Babuta and Marion Oswald, “Data Analytics and Algorithmic Bias in Policing.” Royal United Services Institute for Defence and Security Studies, 2019. Accessed 14 June 2020.
5 “SR 11-7: Guidance on Model Risk Management.” Board of Governors of the Federal Reserve System, Washington, D.C., Division of Banking Supervision and Regulation, 4 April 2011. Accessed 15 June 2020.
6 “Canada's New Federal Directive Makes Ethical AI a National Issue.” Digital, 8 March 2019. Accessed 15 June 2020.
7 Oscar Williams, “New European Commission president pledges GDPR-style AI legislation.” NS Tech, 28 Nov. 2019. Accessed 15 June 2020.