According to Forbes, Meta’s Mark Zuckerberg has made creating smarter-than-human artificial general intelligence his new explicit goal, while OpenAI’s charter includes “planning for AGI and beyond.” Nearly 70,000 people, including AI pioneer Geoffrey Hinton and Apple co-founder Steve Wozniak, have signed the Statement on Superintelligence, which calls for a prohibition on superintelligence development. Futurist Gregory Stock, presenting at the Beneficial AGI conference in Istanbul, argued that AGI could mean the death of death itself and the end of scarcity. The debate pits “AI doomers” worried about human extinction against optimists who believe superintelligence could solve disease, hunger, and poverty. Chinese President Xi Jinping recently suggested creating a global AI governance body, though international cooperation appears unlikely.
The optimist vs. doomer divide
Here’s the thing about AGI discussions – they quickly veer into either utopian fantasy or apocalyptic nightmare. On one side, you’ve got people like Stock talking about ending death and creating new forms of romance. On the other, nearly 70,000 people signed that superintelligence statement warning about human “economic obsolescence and disempowerment.” The truth is probably somewhere in the middle, but nobody really knows. We’re talking about creating something that could rapidly become smarter than all of humanity combined. That’s either the best thing that ever happened to us or the worst. Maybe both.
The corporate control problem
Basically, we’re putting the future of humanity in the hands of a few tech companies. Meta and OpenAI are racing toward AGI, and OpenAI’s charter sounds responsible enough, but let’s be real – these are corporations with shareholders and competitive pressures. The idea that they’ll develop something as powerful as AGI entirely for the benefit of humanity seems… optimistic. International governance sounds nice in theory, but when China suggests a global AI body, the US and Europe aren’t exactly jumping to join. So we’re left hoping these companies will be benevolent. History suggests that’s not how power works.
How we’ll change, not just the machines
Stock makes an interesting point that often gets lost in these discussions: the most profound changes might not be what the machines become, but how humanity changes in response. Think about it – if AGI solves all our basic survival problems, what becomes of human purpose? If digital clones can attend meetings for us (as this piece explores), what happens to human connection? We’re already seeing AI reshape culture and the economy, and that’s with relatively dumb systems. AGI would force us to confront fundamental questions about what it means to be human when machines can do everything better than we can.
What happens next
So where does this leave us? We’ve got massive corporations racing toward AGI, thousands of experts warning about existential risks, and no real global coordination. The best-case scenario might be open-source efforts reaching AGI at the same time as the corporate labs, spreading the benefits more widely. But let’s be honest – the organizations with billions in funding and the world’s top AI talent have a significant head start. The next few years will determine whether AGI becomes humanity’s greatest achievement or our final exam. And nobody knows if we’ll pass.
