AI is probably one of the most awe-inspiring technologies we’ve seen since the first internet browser. By 2026, it is believed that 75% of large enterprises will infuse AI into various processes to enhance asset efficiency, streamline supply chains, and augment product development.
However, despite its immense potential, AI has a serious flaw baked into its design: it makes inferences by reproducing patterns learned from its training data rather than understanding the logic behind those patterns or creating something truly novel. Essentially, it regurgitates the information it encountered during training.
The significant risks and dangers of AI have been well documented, many of which concern accuracy, accountability, transparency, fairness, privacy, and security. To mitigate these risks and harness AI’s full potential, organizations must establish AI governance—a process that ensures AI risks are identified, assessed, measured, and managed effectively and consistently.
BEST PRACTICES FOR AI GOVERNANCE
Earlier this year, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF), a set of guidelines organizations can adopt to help manage AI-related risks. Founded in 1901, NIST is a non-regulatory agency of the US Department of Commerce. If your organization is in the business of “designing, developing, deploying, evaluating, or acquiring AI systems,” NIST recommends the following best practices for AI governance:
1. CREATE TRANSPARENT DOCUMENTATION, POLICIES, AND PROCEDURES
Having clear and transparent documentation (policies, procedures, best practices) helps set the tone in the organization and communicates the vision, mission, goals, and priorities to all stakeholders. Employees cannot be expected to meet the needs, expectations, and mandates of the organization if they lack an understanding of AI risks and the characteristics of trustworthy AI. The policy document should list items such as key contacts, roles, and responsibilities within the organization; legal and regulatory requirements concerning AI; the frequency of periodic reviews of AI systems; procedures to onboard, decommission, or phase out AI systems; and processes for tracking, responding to, and recovering from incidents and errors.
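To make this concrete, here is a minimal, purely illustrative sketch (not part of the NIST guidance) of how those policy elements might be tracked for each AI system; the AISystemRecord class, its fields, and the example values are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry mirroring the policy elements listed above."""
    name: str
    owner: str                                # key contact accountable for the system
    roles: dict = field(default_factory=dict)            # role -> responsible person or team
    legal_requirements: list = field(default_factory=list)  # applicable laws and regulations
    review_frequency_days: int = 90           # how often the system is re-evaluated
    last_review: date = field(default_factory=date.today)
    lifecycle_stage: str = "onboarding"       # onboarding | production | decommissioned
    incident_process: str = "see incident-response runbook"

    def review_due(self, today: Optional[date] = None) -> bool:
        """True when the periodic review window has elapsed."""
        today = today or date.today()
        return today >= self.last_review + timedelta(days=self.review_frequency_days)

# Example: flag any system whose periodic review is overdue.
inventory = [
    AISystemRecord(name="support-chatbot", owner="jane.doe@example.com",
                   last_review=date(2023, 1, 15)),
]
print("Reviews due:", [r.name for r in inventory if r.review_due()])
```

Even a simple record like this gives reviewers a single place to check ownership, review cadence, and lifecycle status.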
AI systems are usually socio-technical in nature, which means they are influenced by societal factors relating to how a system is used, who operates it, and the social context in which it is deployed. Therefore, identifying AI risks and biases and evaluating controls, performance, and AI's impact requires human judgment, oversight, and a diverse governance team.
2. EMPOWER EMPLOYEES THROUGH TRAINING
Personnel and third-party partners should receive regular training so that teams and individuals stay aware of evolving risks and the organization's expectations. Accountability structures and lines of communication must be put in place so that employees can perform their duties to the best of their abilities and within the scope of existing policies, procedures, and agreements. Ultimately, though, AI governance is the responsibility of executive teams; they must own the risks associated with the development and deployment of AI systems.
3. COMMIT TO A CULTURE THAT CONSIDERS AND COMMUNICATES AI RISK
Culture is the conduit that can change behaviors, norms, and attitudes, but behaviors and beliefs cannot be altered overnight. Organizations must deploy consistent, repeatable processes and conduct regular training that promotes critical thinking and reminds employees of how a security-first organization behaves. Furthermore, organizational teams must communicate risks and their impact more broadly. This promotes information sharing among employees, which in turn helps build a transparent and collaborative culture.
4. INTEGRATE FEEDBACK MECHANISMS INTO SYSTEM DESIGN AND IMPLEMENTATION
Users often have varying experiences, both positive and negative, with AI technologies. Keeping a record of this feedback is crucial as it helps to analyze and identify any potential risks specific to the context while also assessing the trustworthiness of the AI system. Over time, AI developers can leverage this feedback to gain a better understanding of the risks involved, effectively communicate them to the organization, and incorporate the insights back into the system design to enhance AI decision-making processes.
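As a purely illustrative sketch of what such a feedback mechanism could look like in practice (the FeedbackRecord class, field names, and file path below are assumptions, not part of any framework), feedback can be captured as structured records appended alongside each AI interaction so reviewers have context to analyze later.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    """One user-reported observation about an AI system's output."""
    system: str      # which AI system produced the output
    rating: int      # e.g. 1 (harmful or wrong) to 5 (helpful and correct)
    comment: str     # free-text description from the user
    context: str     # deployment context, e.g. "customer support chat"
    timestamp: str = ""

def record_feedback(rec: FeedbackRecord, path: str = "ai_feedback.jsonl") -> None:
    """Append feedback as a JSON line so risk reviewers can analyze it later."""
    rec.timestamp = rec.timestamp or datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(rec)) + "\n")

# Example: a user flags a questionable chatbot recommendation.
record_feedback(FeedbackRecord(
    system="support-chatbot",
    rating=2,
    comment="Suggested a refund policy we do not offer.",
    context="customer support chat",
))
```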
5. KEEP TABS ON THIRD PARTIES
While third parties can accelerate R&D processes, they also accelerate risk. Security risks can emerge from third-party data, software, or hardware, and the organization developing the AI system may not be transparent about its processes, methodologies, and algorithms. Third-party involvement can complicate the measurement and management of AI risk, and the way customers integrate AI into their products can open the door to more vulnerabilities. It's advisable to have a documented approach that addresses the AI risks third parties present (security, privacy, infringement of intellectual property, etc.) and contingency plans in place for handling third-party incidents and failures.
Organizations can establish a framework for AI governance where AI risks are identified, assessed, measured, and managed. Following best practices for AI governance such as creating transparent documentation, empowering employees through training, committing to a culture of risk management, integrating feedback mechanisms, and keeping tabs on third parties will help organizations lay a foundation for ethical and responsible AI.
Stu Sjouwerman is the Founder and CEO of KnowBe4 Inc., the world’s largest Security Awareness Training and Simulated Phishing platform.