AI Blindspot: A Discovery Process for preventing, detecting, and mitigating bias in AI systems

Tags
research report
Date
Aug 29, 2023 7:00 PM

What are AI Blindspots?

AI Blindspots are oversights in a team’s workflow that can generate harmful unintended consequences. They can arise from our unconscious biases or structural inequalities embedded in society. Blindspots can occur at any point before, during, or after the development of a model. The consequences of blindspots are challenging to foresee, but they tend to have adverse effects on historically marginalized communities. Like any blindspot, AI blindspots are universal -- nobody is immune to them -- but harm can be mitigated if we intentionally take action to guard against them.

Organize an "AL" workshop


Download AI Blindspot cards (PDF)

PLANNING

In the initial stages of your project, it is important to think critically about: why you want to use a particular technology (Purpose); how accurately your data reflects affected communities (Representative Data); what vulnerabilities your system might expose (Abusability); and how to safeguard personally identifiable information (Privacy).

BUILDING

Vulnerable populations can be harmed due to the performance metric you choose (Optimization Criteria) or variables that act as proxies (Discrimination by Proxy). Depending on the sensitivity of the use case, you may need to understand and explain how the algorithm makes determinations (Explainability).

DEPLOYING

You should be vigilant about monitoring for changes that might affect the performance and impact of your system (Generalization Error), and ensure that individuals have mechanisms to challenge decisions (Right to Contest).

MONITORING

Organizations using AI systems should institute inclusive processes for stakeholder input (Consultation) and independent risk assessment (Oversight). The best way to catch blindspots is to genuinely engage with experts and affected communities as equals to define and track progress towards collective goals (Purpose).

What do we mean by AI?

Artificial intelligence has become a catch-all term for automated decision-making systems that derive patterns, insights, and predictions from big datasets. While they might aspire to emulate and automate intelligent human-like judgment, most algorithms referred to as AI are in fact imperfect models susceptible to making erroneous inferences and rendering biased decisions.

Delegating high-stakes social and commercial decisions to AI exposes everyone to the risk of unequal treatment, because these seemingly impartial algorithms are produced by computer scientists, engineers, and companies whose data and practices may amplify historical biases in society.

Fairness requires thoughtful vigilance across all sectors, especially from researchers inventing, engineers building, organizations deploying, and advocates tracking AI systems. Above all, we need to safeguard and uplift people whose lives are affected by AI.

Meet AL (resources for leading a workshop)

A post shared by AI Blindspot (@aiblindspot)

Purpose

AI systems should make the world a better place. Defining a shared goal guides decisions across the lifecycle of an algorithmic decision-making system, promoting trust amongst individuals and the public. View Card

Representative Data

For an algorithm to be effective, its training data must be representative of the communities that it may impact. The way that you collect and organize data will benefit certain groups while excluding or harming others. View Card

Privacy

AI systems often gather personal information that can invade our privacy. Systems storing confidential data can also be vulnerable to cyberattacks that expose personal information in devastating data breaches. View Card

Discrimination by Proxy

An algorithm can have an adverse effect on vulnerable populations even without explicitly including protected characteristics. This often occurs when a model includes features that are correlated with these characteristics. View Card

Explainability

The technical logic of algorithms is complex, which can make their recommendations unclear. People involved in designing and deploying algorithmic systems have a responsibility to explain high-stakes decisions that affect individuals' well-being. View Card

Optimization Criteria

There are trade-offs and potential externalities when determining an AI system's metrics for success. It is important to balance performance metrics against the risk of negatively impacting vulnerable populations. View Card

Generalization Error

Between building and deploying an AI system, conditions in the world may change or not reflect the context in which the system was designed, such that training data are no longer representative. View Card

Oversight

Ethical principles, standards, and policies are futile unless monitored and enforced. A diverse oversight body vested with formal authority can help to establish and maintain transparency, accountability, and sanctions. View Card


ABOUT

The AI Blindspot cards were developed by Ania Calderon, Dan Taber, Hong Qu, and Jeff Wen during the Berkman Klein Center Assembly program.

Learn more about the team.