Artificial intelligence (AI) is a part of everyday life. Daily schedules are organized using voice assistants. Streaming services offer recommendations for movies to watch. AI helps manufacturers create new and better products. It can also be used to predict and prevent the spread of bushfires.
In a world where AI is everywhere, it is crucial to have corresponding governance and compliance processes in place. These processes help ensure transparency, address personal and data privacy considerations, and foster a commitment to ethical AI.
The discourse surrounding AI ethics and governance has advanced in recent years, and governments and international organizations have begun issuing principles, frameworks and recommendations accordingly:
- Singapore issued the Model AI Governance Framework, a sector-, technology-, scale-, business model- and algorithm-agnostic framework that converts relevant ethical principles into practices that can be implemented throughout the AI deployment process, enabling organizations to operationalize those principles.
- The Australian government released the AI Ethics Framework that guides organizations and governments in responsibly designing, developing and implementing AI.
- The European Commission proposed what would be the first legal framework for AI, which addresses the risks of AI and aims to give AI developers, deployers and users a clear understanding of the requirements for specific uses of AI.
- The University of Turku (Finland), in coordination with a team of academic and industry partners, formed a consortium and created the Artificial Intelligence Governance and Auditing (AIGA) Framework, which illustrates a detailed and comprehensive AI governance life cycle that supports the responsible use of AI.
These AI governance frameworks and principles share several common elements, and an organization can use these shared components to inform its own AI governance strategy.
Internal Governance Structures and Measures
First and foremost, effective AI governance requires that internal organizational structures, roles and responsibilities, performance measures and accountability be defined for the outcomes of AI systems. It is important to understand how AI technologies can inspire innovation within the organization and maximize productivity and return on investment (ROI). It is equally important to consider the ethical aspects of AI technology (e.g., privacy): as advancements in AI continue, computers may eventually be able to program themselves and absorb ever-larger quantities of new information. At the same time, the ever-increasing use of AI has enabled organizations to collect data on individuals, including their shopping patterns, social media posts, location information and more. Because of the highly sensitive nature of this information, organizations must define clear roles and responsibilities pertaining to AI, encompassing all staff, from senior management to middle management and developers. Personnel should be made aware of the ethical and governance considerations of AI and be provided with the resources and guidance needed to discharge their duties pertaining to AI governance and ethics.
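To make this concrete, a minimal sketch follows (my own illustration; none of the frameworks above prescribe tooling, and the role titles, duties and system names are assumptions). It records roles and accountabilities in a machine-readable registry and includes a simple completeness check that every deployed AI system has an accountable owner:

```python
# Hypothetical sketch: a minimal registry of AI governance roles.
# Role titles, duties and system names are illustrative assumptions,
# not prescribed by any of the frameworks discussed above.
from dataclasses import dataclass

@dataclass
class AIRole:
    title: str                   # e.g., "Senior Management"
    responsibilities: list[str]  # duties this role must discharge
    accountable_for: list[str]   # AI systems or outcomes this role owns

governance_structure = [
    AIRole(
        title="Senior Management",
        responsibilities=["approve AI policy", "set risk appetite"],
        accountable_for=["overall AI outcomes"],
    ),
    AIRole(
        title="Model Developer",
        responsibilities=["document training data", "test for bias"],
        accountable_for=["recommendation engine"],
    ),
]

def unowned_systems(deployed: list[str]) -> list[str]:
    """Flag deployed AI systems that no defined role is accountable for."""
    owned = {s for role in governance_structure for s in role.accountable_for}
    return [s for s in deployed if s not in owned]

print(unowned_systems(["recommendation engine", "support chatbot"]))
# -> ['support chatbot']: an accountability gap to raise with governance
```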
Human Involvement
Organizations must take a risk-based approach to ensure that humans have a say in AI-augmented decision-making. A critical aspect of any AI system is that it benefits individuals, society and the environment. AI systems should respect the dignity, privacy, diversity and autonomy of individuals. Systems should be inclusive and accessible and should not involve any unfair discrimination against individuals, communities or groups.
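One way such a risk-based approach might be implemented is sketched below; the decision categories and confidence threshold are assumptions of mine, not requirements drawn from any of the frameworks above:

```python
# Hypothetical sketch of risk-based human oversight: AI decisions that
# are high impact, or made with low confidence, are routed to a human
# reviewer rather than executed automatically. The categories and the
# 0.90 threshold are illustrative assumptions.

HIGH_IMPACT = {"loan_denial", "medical_triage", "hiring_rejection"}

def route_decision(decision_type: str, model_confidence: float) -> str:
    if decision_type in HIGH_IMPACT or model_confidence < 0.90:
        return "escalate_to_human_review"
    return "auto_execute"

print(route_decision("loan_denial", 0.99))         # escalate_to_human_review
print(route_decision("playlist_suggestion", 0.95)) # auto_execute
```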
Operations Management
Any mature AI governance framework should address the operational management of AI. Organizations must ensure that AI systems respect and uphold privacy rights and data protection so that data remains secure. Enterprises should also be able to identify potential security vulnerabilities and implement resilience measures proportionate to the magnitude of potential risk, so that systems can better withstand adversarial attacks. AI systems should be monitored and tested to ensure that they continue to meet requirements without compromising the ethics and governance of the organization.
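As one minimal sketch of what such ongoing monitoring might look like (the accuracy metric, baseline and tolerance below are assumed values, not figures from any framework), an operations team could compare a deployed model's live accuracy against its pre-deployment baseline and alert when the gap exceeds a tolerance:

```python
# Hypothetical sketch: monitor a deployed model's accuracy against the
# baseline measured during pre-deployment testing and alert on
# degradation. Baseline, tolerance and metric are assumptions.

BASELINE_ACCURACY = 0.92  # measured during validation (assumed)
TOLERANCE = 0.05          # acceptable degradation before alerting (assumed)

def check_model_health(live_accuracy: float) -> None:
    drift = BASELINE_ACCURACY - live_accuracy
    if drift > TOLERANCE:
        # In practice this would notify the owning team and open a ticket.
        print(f"ALERT: accuracy is {drift:.2%} below baseline")
    else:
        print(f"OK: accuracy within tolerance at {live_accuracy:.2%}")

check_model_health(0.91)  # OK: accuracy within tolerance at 91.00%
check_model_health(0.84)  # ALERT: accuracy is 8.00% below baseline
```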
Stakeholder Interaction and Communication
It is important for any AI governance framework to address transparency in communication with all relevant parties (including but not limited to employees and stakeholders) so that they can understand when they are being significantly impacted by AI. There should be a timely process to allow people to challenge the use or outcomes of the AI system. Organizations should develop a policy for governing AI and communicate with customers and stakeholders about how AI works, its expected outcomes and benefits, and how it is used to make decisions that impact customers.
Conclusion
As AI technologies evolve, ethical and governance issues will evolve in tandem. Enterprises can reap considerable benefits by committing to ethical AI. Such a commitment can enhance trust in the product or brand, drive consumer loyalty and help prevent negative experiences with AI-enabled services. At the same time, governments need to engage with the community and clarify how AI is developed and used. Organizations must ensure that their AI governance models and frameworks align and comply with international standards, such as International Organization for Standardization (ISO) standards and guidelines, best practices, and applicable laws and regulations. They should also engage competent auditors and assessors periodically to assess the effectiveness of the governance and management of AI systems.
Hafiz Sheikh Adnan Ahmed
Is a futurist and technology/information security leader with more than 17 years of experience in the areas of information and communications technology (ICT) governance, cybersecurity, resilience, data privacy and protection, risk management, enterprise excellence and innovation, and digital and strategic transformation. He is a strategic thinker, writer, certified trainer, global mentor and advisor with proven leadership and organizational skills in empowering high-performing technology teams. He is a certified data protection officer and earned Chief Information Security Officer (CISO) of the Year awards in 2021 and 2022, granted by GCC Security Symposium Middle East and Cyber Sentinels Middle East, respectively. Ahmed is a public speaker and conducts regular training, workshops, and webinars on the latest trends and technologies in the fields of digital transformation, information and cybersecurity, and data privacy. He volunteers at the global level of ISACA® in different working groups and forums. He can be contacted through email at hafiz.ahmed@azaanbiservices.com and LinkedIn at https://ae.linkedin.com/in/adnanahmed16.
Have you ever realized – suddenly and in the middle of a conversation – that you’re on a totally different wavelength from the person you’re talking to? As an example, I was once involved in a conversation about “AV” where I realized about 20 minutes in that I had been talking about “anti-virus” while the other person had meant “assessment and verification.” Whoops.
I recently observed something similar happening at an industry conference that caused me to spend some time rethinking a few of my assumptions. This was a “mixed audience” event that had practitioners from a variety of disciplines. I wound up sharing a lunch table with two other attendees: one a technology auditor for a large financial services firm and the other a technical cybersecurity practitioner from a software company.
At some point in the conversation, while discussing one of the talks from earlier in the day, the auditor made the point that there were “first and second line” impacts for a particular suggestion the technical security person had made (it had to do with authentication, if I recall correctly). Now, if you’re familiar with the three lines of defense model or if you work in the technology audit field, you probably know right away exactly what this person meant (i.e., that the control requires both operation and oversight). As the eavesdropper in this particular exchange, I thought the observation made perfect sense and was insightful. But the security practitioner was at a complete loss; it was clear he had never heard of the three lines. It was also apparent he didn’t feel comfortable saying as much and admitting his ignorance.
From this point on, the conversation devolved into something much more adversarial and unproductive. In fact, it was precisely this change in the tenor of the conversation that made it stick with me. What started as two passionate folks from different-but-related disciplines sharing ideas collaboratively ended with miscommunication and drama with one participant (I assume) feeling prickly about a point going over his head and the other (again assuming) feeling frustrated that a point he tried to make wasn’t fully understood.
This in turn got me thinking about the three lines concept generally and how (or whether) it’s the best way to express certain ideas when engaging with a professionally diverse audience: specifically, why this model has been so valuable, but also how it can be misunderstood when used in contexts where it is unfamiliar.
What are the three lines of defense?
Before we can get into the nuance, we should recap what the three lines model is in the first place. For those unfamiliar, the “three lines” concept refers to a 2013 position paper from the Institute of Internal Auditors (The IIA) entitled “The Three Lines of Defense in Effective Risk Management and Control.” This paper argues that there are (as the name would imply) three lines of defense in organizational risk management:
- First line, operational management – the first line of defense comprises the business and functional areas that own and manage risk. This includes establishing controls as well as operating those controls day to day.
- Second line, risk management and compliance – the second line refers to functions that oversee and monitor risk. For example, this could include a risk management oversight committee, the compliance office or any other function responsible for oversight.
- Third line, internal audit – the third line of defense refers to those who provide independent assurance and validation that things are operating correctly.
Further documentation, such as The IIA’s follow-on expansion in the 2020 paper “The IIA’s Three Lines Model: An update of the Three Lines of Defense,” goes beyond this original concept to establish principles of the three lines, key roles and business functions involved in the three lines, etc. – in essence, building upon the original intent of the 2013 paper and adding additional depth and context.
The concept has become well accepted in the audit community, and I’d argue for good reason. First of all, it does an excellent job of articulating exactly why independent assurance is so valuable: if a control fails to address a risk and the monitoring of that control doesn’t flag the issue, it is the assurance function that can draw attention to the failure for remediation.
Beyond that, it highlights how establishing and directly overseeing a control implementation really is a different exercise than the ongoing monitoring and systematic review of that control. Looking at these two things as entirely different tasks can be advantageous precisely because those in a hurry will sometimes overlook one (management, monitoring and oversight) to invest more heavily in the other (getting it operational).
Tailoring for the audience
As useful as the three lines model is, though, it’s important that we take our audience into account in bringing it forward. What really derailed the conversation in the earlier anecdote was not the reference to the three lines on its own. Instead, it was citing the model while failing to understand the audience to whom that person was speaking. The seed of doubt found purchase in the unwillingness of the other party in the conversation to admit ignorance. These two factors together shut down what could have otherwise continued to be a collaborative discussion.
In my opinion, there are three points that are important to keep in mind as we socialize the three lines concept with other disciplines. First and most importantly, not everyone will be familiar with it. It may go without saying, but the concept is so entrenched that a lot of us just use it without realizing that other disciplines may never have encountered it before. This is true even of professional disciplines in overlapping areas like compliance, cybersecurity and privacy. Many of these practitioners haven’t heard of it, let alone developed a deep understanding of its implications.
This in turn means that, to the extent that we intend to draw upon the three lines of defense as a support or discussion point, we need to be ready to explain it, and we need to explain it in a way that doesn’t make others feel like their role in the risk management equation is diminished. In the anecdote above, if the person who cited the three lines of defense had been alert to the other party’s lack of understanding, he might have been able to course correct by explaining right away.
Second, it’s important to keep in mind that there is a bit of a timing nuance in the three lines model. Specifically, while the three lines of defense can be thought of as overlapping within the broader context of risk management, at the micro level, when thinking about a specific risk, only a single line might be operative at a given point in time. For example, if you have a control to prevent cross-site scripting on websites (say, a web application firewall, or WAF), the operation of that control (first line) is what prevents that particular threat, while second- and third-line evaluations come later: the second line as we monitor and observe the performance of the control, and the third line when it comes time to validate its operation.
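To make that timing nuance concrete, here is a deliberately naive sketch (my own illustration; a real WAF is far more sophisticated than a token blocklist). The filter is the first-line control operating at request time; the log entries it writes are what the second line monitors on an ongoing basis and what a third-line audit later samples to validate operation:

```python
# Deliberately simplified sketch of a first-line control: a naive
# reflected-XSS input filter. Illustrative only; real WAFs use far
# more robust detection than this token blocklist.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("waf-sketch")

SUSPICIOUS_TOKENS = ("<script", "javascript:", "onerror=")

def first_line_filter(request_param: str) -> bool:
    """Return True if the request may proceed (the control operating now)."""
    if any(tok in request_param.lower() for tok in SUSPICIOUS_TOKENS):
        # These log entries are the artifact the second line monitors
        # and the third line later samples to validate operation.
        log.warning("blocked suspicious input: %r", request_param)
        return False
    return True

print(first_line_filter("hello world"))                    # True
print(first_line_filter("<script>alert('xss')</script>"))  # False (and logged)
```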
Lastly, the fact that the three lines can be logically thought of as independently filling a niche in the broader risk management function doesn’t mean that there needs to be (or even should be) limits on interactions between auditors and other functional areas. Some of the most effective collaborations I’ve been involved with have been where audit and other teams (e.g., security, IRM, compliance) have come together to work on a task – a key concept in ISACA’s digital trust initiative. Anyone citing the “three lines” as evidence of why it’s undesirable for internal audit to work with stakeholders who have primarily first- or second-line responsibility is, I believe, misreading the intent of the model. In my opinion, the model exists to foster collaboration rather than suppress it.