The Future Perfect 50: Ajeya Cotra, senior research analyst at Open Philanthropy

By Sigal Samuel

Let’s say you have hundreds of millions of dollars. You want to help the world as much as possible. How do you know which causes to spend money on and how much to give?

This is exactly the situation that major charitable organizations, like Open Philanthropy, find themselves in. Should they prioritize saving kids from malaria? Preventing a manmade pandemic? What about runaway AI?

Ajeya Cotra, a senior research analyst at Open Phil, works on answering questions like these. Her investigations into specific causes, as well as her meta-investigations into how we can even think through such hard questions, are refreshingly nuanced and thoughtful.

AI risk is the specific cause that Cotra has devoted most of her time to thinking about lately. In 2020, she put out a report that aimed to forecast when we’ll most likely see the emergence of transformative AI (think: powerful enough to spark a major shift like the Industrial Revolution). The question of AI timelines is crucial for figuring out how much funding we should spend on mitigating risks from AI versus other causes — the closer transformative AI is to happening, the more pressing the need to invest in safety measures becomes.

Cotra came up with a way to estimate what might seem unknowable. She uses the computational power of the human brain as a reference point for how much computation we’d need to train an AI that performs as well as a human, then projects when that much computation will become affordable. Using this “biological anchor,” she arrived at 2050 as her median estimate for the arrival of transformative AI.
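To make the shape of that calculation concrete, here’s a toy sketch in Python. This is not Cotra’s actual model, and every number in it (the compute needed, the 2020 baseline, the growth rate) is a made-up placeholder, chosen only so the arithmetic lands near her published median.

```python
import math

# Toy sketch of the biological-anchor logic. All numbers are illustrative
# placeholders, not figures from Cotra's report.

TRAINING_FLOP_NEEDED = 1e30   # assumed FLOP to train a human-level model
FLOP_AFFORDABLE_2020 = 1e24   # assumed FLOP the biggest 2020 training run could buy
OOM_GROWTH_PER_YEAR = 0.2     # assumed orders of magnitude of growth per year

def arrival_year(needed_flop, available_flop, oom_per_year, start_year=2020):
    """First year the largest affordable training run reaches the needed compute."""
    oom_gap = math.log10(needed_flop / available_flop)
    return start_year + math.ceil(oom_gap / oom_per_year)

print(arrival_year(TRAINING_FLOP_NEEDED, FLOP_AFFORDABLE_2020, OOM_GROWTH_PER_YEAR))
# -> 2050 with these made-up inputs
```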

This year, though, she updated her timelines in light of the recent explosion in AI development. Her new median estimate is that transformative AI will emerge by 2040, which would make what had seemed to be a long-term risk now possibly right around the corner.

There’s plenty of room to debate whether Cotra’s biological anchor approach is the right one. But if she’s anywhere in the ballpark, that’s a pretty eye-popping estimate.

And it’s got practical implications. “This update should also theoretically translate into a belief that we should allocate more money to AI risk over other areas such as bio risk,” Cotra writes, “[and] to be more forceful and less sheepish about expressing urgency when ... trying to recruit particular people to work on AI safety or policy.”

Beyond helping us think through specific causes like AI, Cotra has offered a way to think through the meta question of how to allocate resources between different causes. She calls it “worldview diversification.”

In a nutshell, it says we shouldn’t just divvy up resources based on how many beneficiaries each cause claims. If we did, we’d always prioritize longtermist causes, because anything that shapes the far future affects the hundreds of billions of people who may yet live, not just the 8 billion alive today. Instead, we should acknowledge that there are different worldviews (some prioritizing current problems like, say, malaria, and some longtermist) and that each might have something useful to offer. Then we should divvy up our budget among them based on our credence in each — how plausible we find each one.
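To see the mechanics, here’s a minimal sketch of credence-weighted budgeting in Python. The budget and the credences are invented for illustration; the point is only how a budget gets divided in proportion to how plausible you find each worldview.

```python
# Minimal sketch of worldview diversification as credence-weighted budgeting.
# The budget and credences below are invented for illustration.

budget = 100_000_000  # dollars

credences = {
    "near-termist (e.g., malaria prevention)": 0.6,
    "longtermist (e.g., AI risk)": 0.4,
}

# Normalize in case the credences don't sum to 1, then split proportionally.
total = sum(credences.values())
for worldview, credence in credences.items():
    print(f"{worldview}: ${budget * credence / total:,.0f}")
```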

This approach, which Open Phil has been gravitating toward in practice, has clear advantages over simply calculating which cause has the most beneficiaries. The charity world is better for Cotra having articulated it.
