On today’s episode of Decoder, we’re going to try to figure out “digital god.” I figured we’ve been doing this long enough, let’s just get after it. Can we build an artificial intelligence so powerful that it changes the world and answers all of our questions? The AI industry has decided the answer is yes.
In September, OpenAI’s Sam Altman published a blog post claiming we’ll have superintelligent AI in “a few thousand days.” And earlier this month, Dario Amodei, the CEO of OpenAI competitor Anthropic, published a 14,000-word post laying out exactly what he thinks such a system will be capable of when it arrives, which he says could be as soon as 2026.
What’s fascinating is that the visions laid out in both posts are so similar: they both promise dramatic superintelligent AI that will bring massive improvements to work, to science and healthcare, and even to democracy and prosperity. Digital god, baby.
But while the visions are similar, the companies are, in many ways, openly opposed: Anthropic is the original OpenAI defection story. Dario and a cohort of fellow researchers left OpenAI in 2021 after becoming concerned with its increasingly commercial direction and approach to safety, and they created Anthropic to be a safer, slower AI company. And the emphasis really was on safety until recently; just last year, a major New York Times profile of the company called it the “white-hot center of A.I. doomerism.”
But the launch of ChatGPT, and the generative AI boom that followed, kicked off a colossal tech arms race, and now Anthropic is as much in the game as anyone. It has taken in billions in funding, mostly from Amazon, and built Claude, a chatbot and language model to rival OpenAI’s GPT-4. Now, Dario is writing long blog posts about spreading democracy with AI.
So what’s going on here? Why is the head of Anthropic suddenly talking so optimistically about AI, when he was previously known for being the safer, slower alternative to the progress-at-all-costs OpenAI? Is this just more AI hype to court investors? And if AGI really is around the corner, how are we even measuring what it means for it to be safe?
To break it all down, I brought on Verge senior AI reporter Kylie Robison to discuss what it means, what’s going on in the industry, and whether we can trust these AI leaders to tell us what they really think.
If you’d like to read more about some of the news and topics we discussed in this episode, check out the links below:
Decoder with Nilay Patel /
A podcast from The Verge about big ideas and other problems.