By the time Mark Zuckerberg started work on Koolau Ranch -- his sprawling 1,400-acre estate on Kauai -- the idea of Silicon Valley billionaires “prepping for doomsday” was still considered a fringe obsession.
That was 2014. A decade later, the whispers around his fortified Hawaiian compound have become part of a much larger conversation about fear, power, and the unsettling future of technology.
According to Wired, the ranch includes an underground shelter equipped with its own energy and food supply. The carpenters and electricians who built it reportedly signed strict NDAs. A six-foot wall keeps prying eyes away from the site. When asked last year whether he was building a doomsday bunker, Zuckerberg brushed it off. “No,” he said flatly. “It’s just like a little shelter, it’s like a basement.”
That explanation hasn’t stopped the speculation -- especially since, according to the BBC, he has also bought up 11 properties in Palo Alto, spending about $110 million and allegedly adding another 7,000-square-foot underground space beneath them. His neighbours have their own nickname for it: the billionaire’s bat cave.
And Zuckerberg isn’t alone. As the BBC reports, other tech heavyweights are quietly doing the same -- buying land, building underground vaults, and preparing, in some unspoken way, for a world that might fall apart.
‘Apocalypse insurance’ for the ultra-rich
Reid Hoffman, LinkedIn’s co-founder, once called it “apocalypse insurance.” He claims that roughly half of the world’s ultra-wealthy have some form of it -- and that New Zealand, with its remoteness and stability, has become a popular bolt-hole.
Sam Altman, the CEO of OpenAI, has even joked about joining German-American entrepreneur and venture capitalist Peter Thiel at a remote New Zealand property “in the event of a global disaster.”
Now, that might sound paranoid. But as the BBC points out, the fear is no longer just about pandemics or nuclear war. It’s about something else entirely -- something these men helped create.
When the people building AI start fearing it
By mid-2023, OpenAI’s ChatGPT had taken the world by storm. Hundreds of millions were using it, and the company’s scientists were racing to push updates faster than anyone could digest. Inside OpenAI, though, not everyone was celebrating.
According to journalist Karen Hao’s account, Ilya Sutskever -- OpenAI’s chief scientist and co-founder -- was growing uneasy. He believed computer scientists were closing in on Artificial General Intelligence (AGI), the theoretical point when machines match human reasoning.
In a meeting that summer, he’s said to have told colleagues: “We’re definitely going to build a bunker before we release AGI.”
It’s not clear who he meant by “we.” But the sentiment reflects a strange paradox at the heart of Silicon Valley: the same people driving the next technological leap are also the ones stockpiling for its fallout.
The countdown to AGI, and what happens after
The arrival of AGI has been predicted for years, but lately tech leaders have been saying it’s coming soon. OpenAI’s Sam Altman said in December 2024 that it would happen “sooner than most people in the world think.”
Sir Demis Hassabis of DeepMind pegs it at five to ten years. Dario Amodei, co-founder of Anthropic, says “powerful AI” could emerge as early as 2026.
Others are sceptical. Dame Wendy Hall, professor of computer science at the University of Southampton, told the BBC: “They move the goalposts all the time. It depends who you talk to.” She doesn’t buy the AGI hype. “The technology is amazing, but it’s nowhere near human intelligence.”
According to the BBC report, Babak Hodjat, CTO at Cognizant, agrees: there are still “fundamental breakthroughs” needed before AI can truly match, or surpass, the human brain.
But that hasn’t stopped believers from imagining what comes next: ASI, or Artificial Super Intelligence -- machines that outthink, outplan, and perhaps outlive us.
Utopias, dystopias, and Star Wars fantasies
The optimists paint a radiant picture. AI, they say, will cure disease, fix the climate, and generate endless clean energy. Elon Musk even predicted it could usher in an era of “universal high income.”
He compared it to every person having their own R2-D2 and C-3PO -- a Star Wars analogy for AI acting as a personal assistant to everyone, solving problems, managing tasks, translating languages, and providing guidance. “Everyone will have the best medical care, food, home transport and everything else. Sustainable abundance,” Musk said.
But as the BBC notes, there’s a darker side to this fantasy. What happens if AI decides humanity itself is the problem?
Tim Berners-Lee, the inventor of the World Wide Web, put it bluntly in a BBC interview: “If it’s smarter than you, then we have to keep it contained. We have to be able to switch it off.”
Governments are trying. President Biden’s 2023 executive order required leading AI companies to share safety test results with the federal government. But that order was later rolled back by Donald Trump, who called it a “barrier” to innovation. In the UK, the AI Safety Institute was set up to study the risks, but even there, oversight is more academic than actionable.
Meanwhile, the billionaires are digging in. Hoffman’s “wink, wink” remark about buying homes in New Zealand says it all.
One former bodyguard of a tech mogul told the BBC that if disaster struck, his team’s first priority “would be to eliminate said boss and get in the bunker themselves.” He didn’t sound like he was kidding.
Fear, fiction, and the myth of the singularity
To some experts, the entire AGI panic is misplaced. Neil Lawrence, professor of machine learning at Cambridge University, called it “nonsense.”
“The notion of Artificial General Intelligence is as absurd as the notion of an ‘Artificial General Vehicle’,” he said. “The right vehicle depends on the context: a plane to fly, a car to drive, a foot to walk.”
His point: intelligence, like transportation, is specialised. There’s no one-size-fits-all version.
For Lawrence, the real story isn’t about hypothetical superminds; it’s about how existing AI is already transforming everyday life. “For the first time, normal people can talk to a machine and have it do what they intend,” he said. “That’s extraordinary -- and utterly transformational.”
The risk, he warns, is that we’re so captivated by the myth of AGI that we ignore the real work: making AI safer, fairer, and more useful right now.
Machines that think, but don’t feel
Even at its most advanced, AI remains a pattern machine. It can predict, calculate, and mimic, but it doesn’t feel.
“There are some ‘cheaty’ ways to make a Large Language Model act as if it has memory,” Hodjat said, “but these are unsatisfying and inferior to humans.”
Vince Lynch, CEO of IV.AI, is even more blunt: “It’s great marketing. If you’re the company that’s building the smartest thing that’s ever existed, people are going to want to give you money.”
Asked if AGI is really around the corner, Lynch paused. “I really don’t know.”
Consciousness, the last frontier
Machines can now do what once seemed unthinkable: translate languages, generate art, compose music, and pass exams. But none of it amounts to understanding.
The human brain has about 86 billion neurons and roughly 600 trillion synapses, far more connections than any model built in silicon. It doesn’t pause or wait for prompts; it continuously learns, re-evaluates, and feels.
“If you tell a human that life has been found on another planet, it changes their worldview,” Hodjat said. “For an LLM, it’s just another fact in a database.”
That difference -- consciousness -- remains the one line technology hasn’t crossed.
The bunker mentality
Maybe that’s why the bunkers exist. Maybe it’s not just paranoia or vanity. Maybe, deep down, even the most brilliant technologists fear that they’ve unleashed something they can’t fully understand or control.
Zuckerberg insists his underground lair is “just like a basement.” But basements don’t come with food systems, NDAs, and six-foot walls.
The bunkers are real. The fear behind them might be too.