Artificial intelligence (AI) is both omnipresent and conceptually slippery, making it notoriously hard to regulate. Fortunately for the rest of the world, two major experiments in the design of AI governance are currently playing out in Europe and China. The European Union (EU) is racing to pass its draft Artificial Intelligence Act, a sweeping piece of legislation intended to govern nearly all uses of AI. Meanwhile, China is rolling out a series of regulations targeting specific types of algorithms and AI capabilities. For the host of countries starting their own AI governance initiatives, learning from the successes and failures of these two initial efforts to govern AI will be crucial.
When policymakers sit down to develop a serious legislative response to AI, the first fundamental question they face is whether to take a more “horizontal” or “vertical” approach. In a horizontal approach, regulators create one comprehensive regulation that covers the many impacts AI can have. In a vertical strategy, policymakers take a bespoke approach, creating different regulations to target different applications or types of AI.
Neither the EU nor China is taking a purely horizontal or vertical approach to governing AI. But the EU’s AI Act leans horizontal, while China’s algorithm regulations lean vertical. By digging into these two experiments in AI governance, policymakers can begin to draw out lessons for their own regulatory approaches.
The EU’s Approach
The EU’s approach to AI governance centers on a single piece of legislation. At its core, the AI Act groups AI applications into four risk categories, each of which is governed by a predefined set of regulatory tools. Applications deemed to pose an “unacceptable risk” (such as social scoring and certain types of biometrics) are banned. “High risk” applications that pose a threat to safety or fundamental rights (think law enforcement or hiring procedures) are subject to certain pre- and post-market requirements. Applications seen as “limited risk” (emotion detection and chatbots, for instance) face only transparency requirements. The majority of AI uses are classified as “minimal risk” and subject only to voluntary measures.
The AI Act vaguely defines “essential requirements” for each risk tier, placing different constraints on each category. The easiest way for developers to satisfy these mandates will be to adhere to technical standards being formulated by European standards-setting bodies. This makes technical standards a key piece of the AI Act: they are where the general provisions described in legislation are translated into precise requirements for AI systems. Once the act is in force, years of work by courts, national regulators, and the technical standards bodies will clarify precisely how it applies in different contexts.
In effect, the AI Act uses a single piece of horizontal legislation to fix the broad scope of what applications of AI are to be regulated, while allowing domain- and context-aware bodies like courts, standards bodies, and developers to determine exact parameters and compliance strategies. Furthering its ability to act in more context-specific ways, the EU is also pairing the requirements in the AI Act with co-regulatory strategies such as regulatory sandboxes, an updated liability policy to deal with the challenges of AI, and associated legislation focused on data, market structures, and online platforms.
This framework strikes a balance between the dual imperatives of providing predictability and keeping pace with AI developments. Its risk-based approach allows regulators to slot new application areas into existing risk categories as AI’s uses evolve, balancing flexibility with regulatory certainty. Meanwhile, the AI Act’s relatively flexible essential requirements also alleviate the key precision challenge posed by purely horizontal frameworks, allowing compliance strategies to vary across sectors and to adapt as the technology evolves.
But the EU’s broadly horizontal approach faces several risks that other countries should watch closely. Individual regulators tasked with enforcing requirements might differ in their interpretations or capacity to regulate, undermining the key capacity and harmonization benefits of a horizontal approach. Another open question is whether the proposed central, horizontal European AI Office will be effective in supplementing the capacity of national and sectoral regulators. Delegating the creation of precise technical requirements to expert standard-setting bodies channels more technical expertise and precision into those requirements. However, the standards process has historically been driven by industry, and ensuring that governments and the public have a meaningful seat at the table will be a challenge.
China’s Approach
Over the past year, China has rolled out some of the world’s first nationally binding regulations targeting algorithms and AI. It has taken a fundamentally vertical approach: picking specific algorithm applications and writing regulations that address their deployment in certain contexts. The first two regulations in this camp targeted recommendation algorithms and deep synthesis technology, also known as generative AI.
The recommendation algorithm regulation focused on the technology’s use in disseminating information, as well as in setting prices and dispatching workers. It required that algorithm providers “vigorously disseminate positive energy” and avoid price discrimination or overly demanding workloads for delivery drivers. The second regulation targeted deep synthesis algorithms that use training data to generate new content, such as deepfakes. It again focused on concerns around “harmful information,” but it also required providers to obtain consent from individuals whose images or voices are manipulated by the technology. These application-specific requirements stem from the vertical nature of the regulations: neither would make sense if applied to the other application, let alone to the full range of uses a horizontal regulation must cover.
Yet China’s regulations do contain a horizontal element: they create certain regulatory tools that can be applied horizontally across several different vertical regulations. A prime example of this is the algorithm registry (算法备案系统, literally “algorithm filing system”). The registry was created by the recommendation algorithm regulation and reaffirmed by the deep synthesis regulation, both of which require developers to register their algorithms. It acts as a central database for Chinese officials to gather information on algorithms, such as their sources for training data and potential security risks. As such, the registry also serves as a vehicle for regulators to learn more about how AI is being built and deployed—a key goal of sectoral regulators around the globe.
Looking ahead, the registry may continue to act as a horizontal tool to gather similar information on algorithms that fall under a variety of vertical regulations. It could be adapted to require different kinds of information depending on the algorithm’s application, or it could simply provide a uniform baseline of information required on all algorithms subject to regulation.
China’s approach allows it to more precisely target regulatory requirements to specific technical capabilities. In many cases, this approach might risk rules falling behind quickly evolving technology. In China’s AI regulation, however, some requirements are defined so vaguely that they effectively shift power from technology companies to government regulators, who can wield that newfound authority to force whatever changes they wish on companies. China’s vertical regulations could also end up as building blocks for a more comprehensive AI governance regime, a pattern that played out in Chinese internet governance prior to the country’s Cybersecurity Law. But until that time, the main risk of such an approach is the creation of a patchwork of regulations that are, collectively, poorly considered and expensive to comply with.
Lessons
The EU’s and China’s approaches both have their critics. Business groups argue that the EU’s broad approach will stifle innovation, and analysts correctly assert that China’s targeted regulations will be used to tighten information controls. But by taking a step back to look at their fundamental approaches to regulating AI, policymakers can draw lessons for countries and institutions around the world.
One central lesson is that neither approach can stand fully on its own. A purely horizontal regulatory approach will not be able to set out meaningfully specific requirements for all applications of AI. Conversely, creating a grab bag of freestanding vertical regulations targeting each new application of AI will likely create a compliance mess for regulators and companies alike.
The most effective approaches incorporate both horizontal and vertical elements. Regulators taking a horizontal approach benefit from deferring the nitty-gritty work of creating specific compliance requirements to more vertically oriented organizations like sectoral regulators, standards bodies, or in some cases courts. Likewise, governments taking a vertical approach can create horizontal regulatory tools and resources that can be used across a variety of application-specific laws, reducing the burden on regulators and the unpredictability for businesses.
We can see versions of this in both the EU and Chinese approaches. While the EU AI Act creates four risk tiers and lays out broad requirements for each one, the heavy lifting of articulating specific compliance thresholds will be done by Europe’s main standardization bodies. And while China has issued vertical regulations targeted at recommendation engines and generative AI, those regulations lean on horizontal tools like the algorithm registry in order to bring coherence across them.
Beyond this overarching need for supplementing both approaches, horizontal and vertical regulatory regimes each present different pros and cons. The choice of which regulatory regime will work best often comes down to the structure and culture of each government. How nimble or gridlocked are a country’s legislative bodies? How empowered and coordinated are its sectoral regulators? The answers to these questions inform which approach will work best.
Horizontal approaches provide predictability to developers and businesses by laying out a fixed set of governance tools. Despite the diversity of AI applications, the risks they pose often involve similar themes of transparency, robustness, and accountability, and horizontal strategies can help governments concentrate limited resources on these repeating themes. Horizontal approaches can also reduce the chance of regulatory gaps emerging when overtaxed sector-specific regulators lack the capacity to consider new technologies such as AI.
But governments will need a few characteristics in order to reap the benefits of horizontal approaches. Their legislative bodies will need the ability to amend or add to the main horizontal regulation in order to keep pace with the technology. A static horizontal regulation for a fast-evolving technology is unlikely to hold up. In addition, when a horizontal regulation delegates the setting of precise compliance requirements to other institutions, those institutions need to be able to resist industry capture. For example, if industry-dominated standards bodies set weak standards for compliance, the regulation itself becomes weak.
Vertical AI regulations have the benefit of being tailored to mitigate the specific harms posed by different AI applications. Though more limited in their ultimate ambition, these individual regulations can tackle existing harms without the added burden of being generalizable to all applications of the technology. As regulators and legislators learn which interventions prove successful, they can incorporate those tools or requirements into future legislation targeting other harms. And if this piecemeal approach ultimately proves insufficient, they can bring these lessons to bear on shaping a more horizontal regulatory regime.
But this vertical approach requires a certain level of legislative and interagency coordination to minimize costs for both regulators and businesses. If agencies do not coordinate to build common regulatory tools, they risk reinventing the wheel each time a new department is tasked with regulating a specific application of AI. Similarly, if vertical regulations don’t come paired with the necessary technical and financial resources, sectoral regulators will be outmatched in their efforts to meaningfully constrain businesses applying AI.
The United States’ political structure has produced a motley set of forays into AI regulation, yielding its own blend of horizontal and vertical elements. The past two presidential administrations have laid out their own politically oriented guiding principles in the form of former president Donald Trump’s guidance on AI regulation and President Joe Biden’s Blueprint for an AI Bill of Rights. These documents have served as horizontal guidance meant to inform vertical sectoral regulators in the executive branch. Both are horizontal, principle-based approaches that attempt to harmonize high-level AI regulation while preserving the flexibility to adapt to individual settings. However, if not coupled with resources and pressure to actually perform this sector-specific adaptation, this approach runs the risk of producing no meaningfully binding regulation at all.
In addition, regulation that relies on federal department rulemaking or rule interpretation runs the risk of being reversed by the next administration, as happened with explanation requirements for algorithms used in credit decisions between the Trump and Biden administrations. Meanwhile, bipartisan majorities in Congress have funded AI research efforts and best-practice guidelines such as the recent AI Risk Management Framework (RMF) from the National Institute of Standards and Technology. The RMF is nonbinding by design, but it bears similarities to the kinds of horizontal tools that could support future vertical regulations.
There’s no single formula for AI regulation that can be applied across countries, but the blends of horizontal and vertical elements in the EU’s and China’s AI regulations offer a menu of policy options. By adapting lessons from these initial approaches to their own contexts, other countries can take meaningful steps toward reducing the harms produced by AI systems.