The OpenAI Bubble is Worse than You Think

Last Tuesday, OpenAI closed a $122 billion funding round that values the company at $852 billion.

By Shahzaib

That’s not a typo. It’s bigger than the GDP of most countries and makes OpenAI the second-most valuable private company on the planet, right behind SpaceX. Sam Altman and his team are out raising capital like it’s 1999, promising the kind of intelligence that will rewrite economies. Meanwhile, internal projections show the company losing $14 billion this year alone.
I’ve been covering tech long enough to smell a bubble when the money gets this loud. This one feels different, though. It’s not just overpriced servers or vaporware startups. It’s the gap between what OpenAI and its peers keep telling us AI will do any day now and what it actually delivers when you turn it loose in the real world. The hype isn’t harmless marketing fluff. It’s distorting investment, policy, and even how we think about work and creativity. And the longer we pretend otherwise, the harder the correction will hit.


Let’s start with what these systems actually do well, because the honest critique has to begin there. ChatGPT and its cousins are genuinely useful for a handful of narrow tasks. Programmers use GitHub Copilot to autocomplete boilerplate code and spot syntax errors faster than they could alone. Writers lean on it to brainstorm outlines or punch up a clunky paragraph. Marketers generate social posts and ad copy in seconds. Students summarize long PDFs or turn lecture notes into flashcards. These aren’t trivial wins. They shave hours off repetitive work. I’ve watched friends in small agencies cut content production time in half. That’s real productivity, the kind that shows up in quarterly reports if you measure it carefully.
The problem isn’t that the tools are useless. It’s that the story around them has inflated those incremental gains into something unrecognizable. OpenAI’s leadership talks about AGI—artificial general intelligence—being right around the corner. Altman has said they basically know how to build it and that 2026 will bring systems capable of “novel insights.” Internal benchmarks tell a different story. OpenAI’s own tests on its latest reasoning models, o3 and o4-mini, show hallucination rates of 33 percent and 48 percent respectively on questions about real people and facts. That’s worse than the older o1 model. The systems don’t just get things wrong; they get them confidently wrong even while showing you their “chain of thought.” One recent academic study found GPT-4o fabricating nearly 20 percent of citations in literature reviews and mangling another 45 percent of the real ones.


I’ve seen this play out in newsrooms and law firms. A reporter I know asked an AI to fact-check a story on local zoning changes and got back three invented city council votes. A paralegal friend fed it case law and watched it cite decisions that don’t exist. These aren’t edge cases. They’re what happens when you treat probabilistic text generators like oracles. The models predict the next word based on patterns in training data. They don’t “know” truth. When the pattern breaks or the data is thin, they fill the gap with confident nonsense.
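To make that mechanism concrete, here is a drastically simplified sketch of next-word prediction. This is a toy bigram model, nowhere near how production LLMs actually work, and the words and counts are invented for illustration. The point it demonstrates is structural: the data contains only co-occurrence frequencies, so the system can only ever produce a statistically plausible continuation, with no step anywhere that checks whether the result is true.

```python
import random

# Toy bigram "language model": maps a context word to the words that
# followed it in (hypothetical) training text, with raw counts.
# Note there is no notion of truth anywhere in this structure --
# only co-occurrence frequency.
BIGRAMS = {
    "council": {"voted": 5, "approved": 3, "met": 2},
    "voted": {"unanimously": 4, "to": 6},
}

def next_word(context: str) -> str:
    """Sample the next word in proportion to how often it followed
    `context` in the training data -- plausibility, not accuracy."""
    candidates = BIGRAMS.get(context, {})
    if not candidates:
        # Thin or missing data: a real model still emits *something*
        # fluent here, which is exactly where fabrication creeps in.
        return "<unknown>"
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("council"))  # a fluent continuation, never a fact-check
```

Real models replace the lookup table with billions of learned parameters and whole documents of context, but the training objective is the same shape: predict the likely next token. Fluency is what gets optimized; factual grounding has to be bolted on afterward, imperfectly.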
That unreliability matters more as companies try to scale beyond chat windows. Remember the hype around AI agents that would handle entire workflows? 2025 delivered the reckoning. MIT researchers looked at hundreds of enterprise pilots and found that 95 percent delivered zero measurable value after six months. Projects stalled in testing. Costs piled up. A bank rolled out an AI call-center system to replace dozens of agents, only to discover the bot couldn’t handle basic customer frustration. Managers ended up working overtime while the company rehired the laid-off staff. Taco Bell’s drive-thru AI experiment became a punchline after it repeatedly botched orders and frustrated customers. Volkswagen’s Cariad AI division burned through $7.5 billion over three years with little to show for it in vehicle software. These aren’t isolated flops. They’re the predictable result of treating narrow pattern-matchers as if they were reliable junior employees.


The economic incentives behind all this are straightforward and ugly. OpenAI is burning cash at a historic rate. Even with roughly $25 billion in annualized revenue, the company needs ever-larger data centers, ever-more chips, and ever-more electricity just to stay competitive. Their Stargate project, a $500 billion push for massive AI infrastructure, will require gigawatts of power—enough to light up entire states. A Cornell study last year projected that unchecked AI data-center growth could pump 24 to 44 million metric tons of CO2 into the atmosphere annually by 2030, the equivalent of adding millions of cars to the roads, while sucking up enough water to supply 6 to 10 million American households. Altman has called some water-use concerns “fake,” but the total energy numbers are impossible to dismiss. Global data centers already consume about 1.5 percent of world electricity. AI is driving most of the growth.
None of this shows up in the glossy demos or the valuation decks. Investors aren’t pouring in because they’ve seen transformative ROI across the economy. They’re buying the narrative that today’s losses are the price of tomorrow’s dominance. It’s the same logic that fueled the dot-com bubble: get big fast, figure out the business model later. Only this time the infrastructure costs are orders of magnitude higher and the technology still can’t reliably do the jobs we’re supposedly automating.


The overlooked consequences go beyond balance sheets. Copyright lawsuits from The New York Times and authors keep piling up, arguing that training these models on their work without permission amounts to theft at scale. Labor markets are jittery—people hear “AI will replace your job” and either panic or tune out. Meanwhile, the actual displacement happening is messy and uneven: some roles get augmented, others get deskilled, and plenty stay untouched because the AI can’t handle the messy human parts. And then there’s the quiet environmental bill that local communities will pay when those data centers light up their grids and drain their water tables.
I’m not saying AI is a dead end. The incremental stuff is genuinely interesting. NotebookLM turning dry reports into surprisingly listenable podcasts is clever. Tools that help non-coders sketch simple apps or that summarize research papers save real time. The danger isn’t the technology. It’s the story we’re telling ourselves about it. When every productivity gain gets spun as the prelude to superintelligence, we stop asking practical questions. Does this tool actually make the work better, or just faster? Who bears the cost when it fails? What are we willing to trade—energy, accuracy, jobs—for marginal improvements in convenience?


The bubble isn’t going to pop with one dramatic crash. It will deflate slowly as more companies quietly shelve their pilots, as investors start demanding actual profits instead of promises, and as the public grows tired of being told the future is here while their electric bills climb and their news feeds fill with confident lies. OpenAI will keep shipping better models. Some of them will be genuinely useful. But the valuation math only works if you believe the sci-fi version of the story.


What actually matters, I think, is learning to treat these systems like very sophisticated tools rather than oracles or employees. Use them where they shine—brainstorming, summarizing, coding assistance—and keep a human in the loop where truth, creativity, or accountability count. Build infrastructure that doesn’t cook the planet in the process. Pay attention to the real costs instead of pretending they’ll magically disappear once the next model drops.
The gap between the narrative and the reality isn’t just embarrassing. It’s expensive, in money and in trust. We’ve seen this movie before. The question is whether we’ll recognize the ending in time.
