AI is going to be a highly-competitive, extremely capital-intensive commodity market that ends up in a race to the bottom competing on cost and efficiency of delivering models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.
The simple evidence for that is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did. That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
Just in time for a Government guaranteed backstop.
Not sure why they put so much investment into videoSlop and imageSlop. Anthropic seems to be more focused at least.
Because almost everyone involved in the AI race grew up in "winner takes all" environments, typical of software, and they try really hard to make that a reality. This means your model should do everything, so you can take 90% of the market, or at least 90% of a specific niche.
The problem is, they can't find the moat, despite searching very hard: whatever you bake into your AI, your competitors will be able to replicate in a few months. This is why OpenAI is striking a deal with Disney, because copyright provides such a moat.
> your competitors will be able to replicate in a few months.
Will they really be able to replicate the quality while spending significantly less on compute? If not, then isn't the moat still how much capital you can acquire to burn on training?
Striking deals without a proper vision is a waste of resources. And that’s the path OAI is on.
Because OpenAI's brand stands for being the AI leader.
If Gemini can create or edit an image, ChatGPT needs to be able to do this too. Who wants to copy-paste prompts between AI agents?
Also, if you want more semantics, you add image, video, and audio to your model. It gets smarter because of it.
OpenAI is also considerably bigger than Anthropic and is known as a generic 'helper'. Anthropic probably saw the benefit of being more focused on developers, which allows it to stay in the game longer on the amount of money it has.
> Who wants to copy-paste prompts between AI agents?
An AI!
The specialist vs generalist debate is still open. And for complex problems, sure, having a model that runs on a small galaxy may be worth it. But for most tasks, a fleet of tailor-made smaller models being called on by an agent seems like a solidly-precedented (albeit not singularity-triggering) bet.
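As a minimal Python sketch of that fleet-of-specialists pattern (the model names and the keyword router are hypothetical placeholders; a real agent would dispatch on something smarter than substring matching):

    from dataclasses import dataclass

    @dataclass
    class Specialist:
        name: str
        keywords: list[str]

        def handles(self, task: str) -> bool:
            # Cheap routing predicate; stands in for a learned router.
            return any(k in task.lower() for k in self.keywords)

        def run(self, task: str) -> str:
            # Stub; a real agent would call a small hosted model here.
            return f"[{self.name}] handling: {task}"

    # Order matters: cheap specialists first, catch-all generalist last.
    FLEET = [
        Specialist("sql-helper", ["sql", "query"]),
        Specialist("image-editor", ["photo", "image"]),
        Specialist("generalist", [""]),  # the empty string matches any task
    ]

    def route(task: str) -> str:
        for model in FLEET:
            if model.handles(task):
                return model.run(task)
        raise AssertionError("unreachable: the generalist matches everything")

    print(route("write a SQL query for monthly revenue"))
    print(route("brighten this photo for a profile picture"))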
> Also, if you want more semantics, you add image, video, and audio to your model. It gets smarter because of it.
I think you are confusing generation with analysis. As far as I am aware, your model does not need to be good at generating images to be able to decode an image.
It is, to a first approximation, the same thing. The generative part of genAI is just running the analysis model in reverse.
Now there are all sorts of tricks to get the output of this to be good, and maybe they shouldn't be spending time and resources on this. But the core capability is shared.
> The generative part of genAI is just running the analysis model in reverse.
I think that hasn't been the case since DeepDream?
I think you're partially right, but I don't think being an AI leader is the main motivation -- that's a side effect.
I think it's important to OpenAI to support as many use-cases as possible. Right now, the experience that most people have with ChatGPT is through small-revenue individual accounts: individual subscriptions with individual needs, but modest budgets.
The bigger money is in enterprise and corporate accounts. To land these accounts, OpenAI will need to provide coverage across as many use-cases as they can so that they can operate as a one-stop AI provider. If a company needs to use OpenAI for chat, Anthropic for coding, and Google for video, what's the point? If Google's chat and coding is "good enough" and you need to have video generation, then that company is going to go with Google for everything. For the end-game I think OpenAI is playing for, they will need to be competitive in all modalities of AI.
Because as with the internet 99% of the usage won’t be for education, work, personal development, what have you. It will be for effing kitten videos and memes.
Are the posters of effing kitten videos a customer base with a significant LTV?
(The obvious well-paying market would be erotic / furry / porn, but it's too toxic to publicly touch, at least in the US.)
OpenRouter stats already suggest 52% of usage is roleplay.
As for photo/video, a very large number of people use it for friends and family (turn a photo into a creative/funny video, change a photo, etc.).
Also, I would think Photoshop-like features are coming more and more to ChatGPT and the like. For example, "take my poorly-lit photo and make it look professional and suitable for a LinkedIn profile."
If only 99% of the Internet was kitten videos and memes
Well, it sure as hell isn't all 3blue1brown, crr0ww, Feynman, and the like
The fact that they do this isn't very bullish for them achieving whatever they define as AGI.
Because for all the incessant whining about "slop," multimodal AI i/o is incredibly useful. Being able to take a photo of a home repair issue, have it diagnosed, and return a diagram showing you what to do with it is great, and it's the same algos that power the slop. "Sorry, you'll have to go to Gemini for that use case, people got mad about memes on the internet" is not really a good way for them to be a mass consumer company.
Can Claude not do that? I've sent it pictures for simpler things and got answers, usually IDing bugs and plants.
Because their main use is for advertising/propaganda, which is largely videoSlop & imageSlop even without AI.
Outside of this (https://openai.com/index/disney-sora-agreement/), I don't think there has been much of a win for them even in advertising for image/video slop.
It's like half the posters on here live in some parallel universe. I am making real money using generated image/video advertising content for both B2C and B2B goods. I am using Whisper and LLMs to review customer service call logs at scale and identify development opportunities for staff. I am using GPT/Gemini to help write SQL queries and little Python scripts to do data analysis on my customer base. My business's productivity is way up since GenAI became accessible.
that (very vocal) half tried it once and it didn’t work :)
But how much more profitable are they? We see revenue but not profits / spending. Anthropic seems to be growing faster than OpenAI did but that could be the benefit of post-GPT hype.
There is no doubt that OpenAI is taking a lot of risks by betting that AI adoption will translate into revenues in the very short term. And that could really happen, imo (with a low probability, sure, but worth the risk for VCs? Probably).
What OpenAI is promising is mathematically impossible. They know it. The goal is to be too big to fail and get bailed out by US taxpayers, who have been groomed into viewing AI as a Cold War-style arms race that America cannot lose.
> What OpenAI is promising is mathematically impossible
Citation needed.
Don’t that have to make more money in the next 10 years than any company ever has… and that is just to break even.
It’s going to crash, guaranteed
Apparently we all have enough money to put it into OpenAI.
Some players have to play, like Google; some players want to play, like the USA vs. China.
Besides that, chatting with an LLM is very, very convincing. Normal non-technical people can see what 'this thing' can already do, and as long as progress continues as fast as it currently is, it's still a very easy future to sell.
> Some players have to play, like Google
I don't think you have the faintest clue of what you're talking about right now. Google authored the transformer architecture, the basis of every GPT model OpenAI has shipped. They aren't obligated to play any more than OpenAI is, they do it because they get results. The same cannot be said of OpenAI.
This article doesn’t add anything to what we know already. It’s still an open question what happens with the labs this coming year, but I personally think Anthropic’s focus on coding represents the clearest path to subscriber-based success (typical SaaS) whereas OpenAI has a clear opportunity with advertising. Both of these paths could be very lucrative. Meanwhile I expect Google will continue to struggle with making products that people actually want to use, irrespective of the quality of its models.
To me Google is the clear winner in all this, and OpenAI is the next Netscape.
Where does Google struggle to make products people actually want to use? Is that a personal opinion?
Bard was a flop. Google search is losing market share to other LLM providers. Gemini adoption is low; people around me prefer OpenAI because it is good enough and well known.
But on the contrary, Nano Banana is very good, so I don't know. And in the end, I'm pretty confident Google will be the AI race winner, because they have the engineers, the tech background, and the money. Unless Google AdSense dies, they can continue the race forever.
What are you talking about? Gemini adoption has tripled in a few months alone, it has around 18% market share, and it's accelerating.
I've heard too many rumors that much of that adoption comes from copying MS, i.e. bundling Gemini into their office suite.
Antigravity is a flop. I mean, it uses Gemini under the hood.
But you cannot use it with an API key.
If you're on a Workspace account, you can't have a normal individual plan.
You have to take the team plan at $100/month, or nothing.
Google's product management tier is beyond me.
OK, but Gmail, Google Maps, Google Docs, Google Search, etc. are ubiquitous. 'Google' has even become a verb. Google might take a shotgun approach, but it certainly does create widely used products.
I don't. Google has at least a few advantages:
1. Google books, which they legally scanned. No dubious training sets for them. They also regularly scrape the entire internet. And they have YouTube. Easy access to the best training data, all legally.
2. Direct access to the biggest search index. When you ask ChatGPT to search for something it is basically just doing what we do but a bit faster. Google can be much smarter, and because it has direct access it's also faster. Search is a huge use case of these services.
3. They have existing services like Android, Gmail, Google Maps, Photos, Assistant/Home etc. that they can integrate into their AI.
The difference in model capability seems to be marginal at best, or even in Google's favour.
OpenAI has "it's not Google" going for it, and also AI brand recognition (everyone knows what ChatGPT is). Tbh I doubt that will be enough.
And they have hardware as well, and their own cloud platform.
In my view Google is uniquely well positioned because, contrary to the others, it controls most of the raw materials for AI.
Google's most significant advantage in this space is its organizational experience in providing services at this scale, as well as its mature infrastructure to support them. When the bubble pops, it's not lights-out or permanently degraded performance.
What Google AI products do people not want to use? Gemini is catching up to ChatGPT from a MAU perspective, AI Overviews in search are super popular and staggeringly more used than any other AI-based product out there, Google's AI Mode has decent usage, and Google Lens has surprisingly high usage. These products together dwarf everyone else out there by like 10x.
> AI Overviews in search are super popular and staggeringly more used than any other AI-based product out there
This really is the critical bit. A year ago, the spin was "ChatGPT AI results are better than search, why would you use Google?", now it's "Search result AI is just as good as ChatGPT, why bother?".
When they were disruptive, it was enough to be different to believe that they'd win. Now they need to actually be better. And... they kinda aren't, really? I mean, lots of people like them! But for Regular Janes at the keyboard, who cares? Just type your search and see what it says.
The fact is nobody has any idea what OpenAI's cash burn is. Measuring how much they're raising is not an adequate proxy.
For all we know, they could be accumulating capital to weather an AI winter.
It's also worth noting that OpenAI has not trained a new model since GPT-4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.
Why do you think they have not trained a new model since 4o? You think the GPT-5 release is /just/ routing to differently sized 4o models?
They're incorrect about the routing statement, but it is not a newly trained model.
The GPT-5 series is a new model, based on the o1/o3 series. It's very much inaccurate to say that it's a routing system and prompt chain built on top of 4o. 4o was not a reasoning model and reasoning prompts are very weak compared to actual RLVR training.
No one knows whether the base model has changed, but 4o was not a base model, and neither is 5.x. Although I would be kind of surprised if the base model hadn't also changed, FWIW: they've significantly advanced their synthetic data generation pipeline (as made obvious via their gpt-oss-120b release, which allegedly was entirely generated from their synthetic data pipelines), which is a little silly if they're not using it to augment pretraining/midtraining for the models they actually make money from. But either way, 5.x isn't just a prompt chain and routing on top of 4o.
Prior to 5.2 you couldn't expect to get good answers to questions about anything after March 2024. It was arguing with me that Bruno Mars did not have two hit songs in the last year. It's clear that in 2025 OpenAI used the old 4o base model and tried to supercharge it using RLVR. That had very mixed results.
Didn't they create Sora and other models, and literally burn so much money on their AI video app, which they wanted to turn into a social network? What ended up happening was that they burned billions of dollars.
> It's also worth noting that OpenAI has not trained a new model since GPT-4o (all subsequent models are routing systems and prompt chains built on top of 4)
At the very least they made GPT-4.5, which was pretty clearly trained from scratch. It was possibly what they wanted GPT-5 to be, but they made a wrong scaling prediction; people simply weren't ready to pay that much money.
I think you are mixing things up here, and I think your comment is based on the article from SemiAnalysis. [1]
It said:
> OpenAI's leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google's TPU fleet has managed to overcome.
However, a pre-training run is the initial, from-scratch training of the base model. You say they only added routing and prompts, but that's not what the original article says. They most likely have still done a lot of fine-tuning, RLHF, alignment, and tool-calling improvements. All that stuff is training too. And it is totally fine; just look at the great results they got with Codex-high.
If you actually got what you said from a different source, please link it. I would like to read it. If you just mixed things up, that's fine too.
[1] https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...
> The fact is nobody has any idea what OpenAI's cash burn is.
Their investors surely do (absent outrageous fraud).
> For all we know, they could be accumulating capital to weather an AI winter.
If they were, their investors would be freaking out (or complicit in the resulting fraud). This seems unlikely. In point of fact it seems like they're playing commodities market-cornering games[1] with their excess cash, which implies strongly that they know how to spend it even if they don't have anything useful to spend it on.
[1] Again, cf. fraud
How are they updating the data then? Wouldn’t the cutoff date be getting further away from today?
RAG? Even for a "fresh" model, there is no way to keep it up to date, so there has to be a mechanism by which to reference eg last night's football game.
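For what it's worth, a hedged Python sketch of that retrieval idea: the model's weights stay frozen, and freshness comes from pulling recently ingested documents into the prompt at query time. Word overlap stands in for a real embedding model, and the corpus is a made-up example:

    import re

    # Freshly ingested documents; in practice this store is updated continuously.
    CORPUS = [
        "Final score from last night: the home team won the football game 3-1.",
        "Chocolate sales rose modestly in Europe this quarter.",
    ]

    def words(text: str) -> set[str]:
        return set(re.findall(r"[a-z]+", text.lower()))

    def retrieve(query: str, k: int = 1) -> list[str]:
        # Rank documents by shared-word count (a stand-in for vector similarity).
        q = words(query)
        return sorted(CORPUS, key=lambda d: len(q & words(d)), reverse=True)[:k]

    def build_prompt(query: str) -> str:
        context = "\n".join(retrieve(query))
        return f"Context:\n{context}\n\nQuestion: {query}"

    print(build_prompt("Who won the football game last night?"))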
They're just feeding in a little bit of slop every so often. Fine-tuning rather than training a new one.
> they could be accumulating capital to weather an AI winter
Doubtful. This would be the very antithesis of the Silicon Valley way.
Wasn't 4.5 new?
Yes it was; OP didn't read the reporting closely enough. It said something to the effect of "didn't pretrain a new, broadly released, generally available model."
Wasn't 4.5 before 4o?
They're paying million-dollar salaries to engineers and building data centers; it's not a huge mystery where their expenses are.
They have not successfully trained a new model since 4o. That doesn’t mean they haven’t burned a pile of cash trying.
I know sama says they aren’t trying to train new models, but he’s also a known liar and would definitely try to spin systemic failure.
lol, the typical AI boosters are downvoting you.
OpenAI has #5 traffic levels globally. Their product-market fit is undeniable. The question is monetization.
Their cost to serve each request is roughly 3 orders of magnitude higher than that of conventional web sites.
While it is clear people see value in the product, we only know they see value at today’s subsidized prices. It is possible that inference prices will continue their rapid decline. Or it is possible that OAI will need to raise prices and consumers will be willing to pay more for the value.
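To make that concrete, the back-of-envelope arithmetic in Python; both per-request costs are illustrative assumptions, not measured figures:

    # Assumed cost of serving one classic web request, in dollars.
    conventional_cost = 0.0001
    # "Roughly 3 orders of magnitude higher" for an LLM request.
    llm_cost = conventional_cost * 1000

    monthly_requests = 30 * 20  # a heavy user making ~20 requests a day
    subscription = 20.0         # a $20/month consumer plan

    serving = monthly_requests * llm_cost
    print(f"Serving cost per user per month: ${serving:.2f}")  # $60.00
    print(f"Margin on a ${subscription:.0f} plan: ${subscription - serving:.2f}")  # -$40.00

Under those assumptions a heavy user is served at a loss, which is the "subsidized prices" point above.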
Does that cost-to-serve multiplier stay the same when conventional sites are forced to shovel AI into each request? E.g. the new Google search.
It's easy to get product-market fit when you give away dollars for the price of pennies.
Yes, but that is the standard methodology for startups in their boost phase. Burn vast piles of cash to acquire users, then find out at the end if a profitable business can be made of it.
It’s also the standard methodology for a number of scams.
Archive/Paywall: <https://archive.is/rHPk3>
thank you!
In a parallel universe, governments invest in the compute/datacenters (read: infra), and let model makers compete on the same playing field.
I’d rather stay far away from this parallel universe.
Why would you want my money to be used to build datacenters that won't benefit me? I might use an LLM once a month; many people never use one.
Let the ones who use it pay for it.
You are already paying for several national lab HPC centers. These are used for government/university research - no idea if commercial interests can rent time on them. The big ones are running weather, astronomy simulations, nuclear explosions, biological sequencing, and so on.
> The big ones are running weather, astronomy simulations, nuclear explosions, biological sequencing, and so on.
These things constitute public goods that benefit the individual regardless of participation.
The biggest ones run classified nuclear-stockpile workloads, at least in the US. They cost about half a billion apiece and draw 30 (carefully cooled and cabled) megawatts. https://en.wikipedia.org/wiki/El_Capitan_(supercomputer)
No chance they're going to take risks to share that hardware with anyone given what it does.
The scaled-down version of El Capitan, called Tuolumne, is used for non-classified workloads, some of which are proprietary, like drug simulation. Not long ago, it was nevertheless still a top-ten supercomputer.
Like OP, I also don't see why a government supercomputer does it better than the hyperscalers, CoreWeave, neoclouds, et al., who have put in a ton of capital even compared to the government. For workloads where institutional continuity is extremely important, like weather -- and maybe one day, a public LLM model or three -- maybe. But we're not there yet, and there's so much competition in LLM infrastructure that it's quite likely some of these entrants will be bag holders. This is not a world of juicy margins at all; rather, it's playing chicken with negative gross margins.
Many more people materially benefit from e.g. good weather forecasts than from video slop generation.
If datacenters are built by the government, then I think it's fair to assume there will be some level of democratic control over what those datacenters will be used for.
What's the democratic control of existing resources? I would make the opposite assumption: it would be captured by the wealthiest interests.
This is literally the current system... Adding more democratic controls is a good thing; the alternative is that only the rich control these systems. And would you look at it: only the rich control these systems.
Uncanny really.
Certainly! Your congressional representatives would be voting on how to allocate its computing power. (Do you remember who you voted for last time?)
That's like every government initiative. Same as healthcare? School? If you don't have children, why do you pay taxes for schools... and for roads if you don't drive? The examples are so many... why bring out the argument that if it doesn't benefit you directly, right now, today, it shouldn't be done?
There are arguments aplenty that schooling and a minimum amount of healthcare are public goods, as are roads built on public land (the government owns most roads after all).
What is the justification for considering data centers capable of running LLMs to be a public good?
There are many counter examples of things many people use but are still private. Clothing stores, restaurants and grocery stores, farms, home appliance factories, cell phone factories, laundromats and more.
Libraries with books are likely considered public goods right?
Why not an LLM datacenter if it also offers information? You could say it's the public library of the future maybe.
Not all libraries are publicly owned or accessible. Most are run by local municipalities because they wouldn't exist otherwise.
Data centers clearly can exist without being owned by the public.
a distinction: the data centers have become the means of production, unlike clothing from a store
How is that distinct from any of my other examples which listed factories? Very few factories in the US are publicly owned; citing data centers as places of production merely furthers the argument that they should remain private.
Healthcare, schools, roads, generative AI. One of these things is not like the others.
We gave incentives to broadband, why not generative AI?
Last-mile services like roads, electricity, water, and telecommunications are natural monopolies. Normal market forces fail somewhat and you want some government involvement to keep it running smoothly.
This is not at all true of generative AI.
If that did happen, how would the government then issue those resources?
OpenAI asks for 1M GPUs for a month, Anthropic asks for 2M, the government data center only has 500,000, and a new startup wants 750,000 as well.
Do you hand them out to the most convincing pitch? Hopefully not to the biggest donor to your campaign.
Now the most successful AI lab is the one that's best at pitching the government for additional resources.
UPDATE: See comment below for the answer to this question: https://news.ycombinator.com/item?id=46438390#46439067
National HPC labs have been oversubscribed for decades, with extensive queueing/time-sharing allocation systems.
It would still likely devolve into most-money-wins, but it is not an insurmountable political obstacle to arrange some sort of sharing.
Edit: I meant to say oversubscribed, not overprovisioned. There are far more jobs in the queue than can be handled at once.
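As an illustration of what such an allocation system has to do, a toy proportional-share allocator in Python, applied to the hypothetical numbers upthread (a sketch only; real schedulers layer peer review, queueing, and priorities on top):

    def proportional_allocation(requests: dict[str, int], capacity: int) -> dict[str, int]:
        # If total demand exceeds capacity, scale every request down proportionally.
        total = sum(requests.values())
        if total <= capacity:
            return dict(requests)
        return {who: ask * capacity // total for who, ask in requests.items()}

    # The hypothetical asks from upthread: 3.75M GPUs requested, 500k available.
    asks = {"openai": 1_000_000, "anthropic": 2_000_000, "startup": 750_000}
    print(proportional_allocation(asks, capacity=500_000))
    # -> {'openai': 133333, 'anthropic': 266666, 'startup': 100000}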
Huh, TIL - thanks for the correction.
https://www.ornl.gov/news/doe-incite-program-seeks-2026-prop...
> The Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program has announced the 2026 Call for Proposals, inviting researchers to apply for access to some of the world’s most powerful high-performance computing systems.
> The proposal submission window runs from April 11 to June 16, 2025, offering an opportunity for scientific teams to secure substantial computational resources for large-scale research projects in fields such as scientific modeling, simulation, data analytics and artificial intelligence. [...]
> Individual awards typically range from 500,000 to 1,000,000 node-hours on Aurora and Frontier and 100,000 to 250,000 node-hours on Polaris, with the possibility of larger allocations for exceptional proposals. [...]
> The selection process involves a rigorous peer review, assessing both scientific merit and computational readiness. Awards will be announced in November 2025, with access to resources beginning in 2026.
Not sure OpenAI/Anthropic etc. would be OK with a six-month gap between application and getting access to the resources, but this does indeed demonstrate that government-issued supercomputing resources are a previously solved problem.
Well, people bid for USA government resources all the time. It's why the Washington DC suburbs have some of the country's most affluent neighborhoods among their ranks.
In theory it makes the process more transparent and fair, although slower. That calculus has been changing as of late, perhaps for both good and bad. See for example the Pentagon's latest support of drone startups run by twenty-year-olds.
The question of public and private distinctions in these various schemes is very interesting and, imo, underexplored. Especially when you consider how these private LLMs are trained on public data.
In a completely alternate dimension, a quarter of the capital being invested in AI literally just goes towards making sure everyone has quality food and water.
I'd rather live in a universe where that money is taken out of the military budget.
you'll never win that argument, but I absolutely agree.
people have no idea about how big the military and defense budgets worldwide are next to any other example of a public budget.
throw as many pie charts out as you want; people just can't see the astronomical difference in budgets.
I think it's based on how the thing works; a good defense works until it doesn't, while the other systems/budgets in place fail a bit more gracefully. That dynamic produces an irrationality in people that opens up windfalls of cash.
Without capital invested in the past we wouldn't have almost any of modern technology. That has done a lot more for everyone, including food affordability, than simply buying food for people to eat once.
> governments invest in the compute/datacenters (read: infra), and let model makers compete on the same playing field
Hmm, what about member-owned coöperatives? Like what we have for stock exchanges.
Datacenters are not a natural monopoly, you can always build more. Beyond what the public sector itself might need for its own use, there's not much of a case for governments to invest in them.
That could make sense in some steady state regime where there were stable requirements and mature tech (I wouldn’t vote for it but I can see an argument).
I see no argument why the government would jump into a hype cycle and start building infra that speculative startups are interested in. Why would they take on that risk compared to private investors, and how would they decide to back that over mammoth cloning infra or whatever other startups are doing?
In a better parallel universe, we found a different innovation: one that doesn't need brute-force computation to train systems that compute things unreliably and inefficiently, and that still leaves us able to understand what we're building.
Why would they do that? Not to mention governments are already doing that indirectly by taking equity stakes in some of the companies.
Same reason they should own access lines: everyone needs rackspace/access, and it should be treated like a public service to avoid rent seeking. Having a data center in every city where all of the local lines terminate could open the door to a lot of interesting use cases, really help with local resiliency/decentralization efforts, and provide a great alternative to cloud providers that doesn't break the bank.
Should the government own all types of "public services"? E.g. the search index, video-serving infra, etc.?
Public ownership of public services hmm?
Smells like socialism. Around here we privatize the profits and only socialize the costs. Like the impending bailout of the most politically connected AI companies.
That sounds like a nightmare.
Do you like this idea?
Prediction: on this thread you'll get a lot of talk about how government would slow things down. But when the AI bubble starts to look shaky, see how fast all the tech bros line up for a "public private partnership."
That seems like a terrible idea. Data centers aren’t a natural monopoly. Regulate the externalities and let it flourish.
That's malinvestment. Too much overhead, disconnected from long term demand. The government doesn't have expertise, isn't lean and nimble. What if it all just blows over? (It won't? But who knows?)
Everything is happening exactly as it should. If the "bubble" "pops", that's just the economic laws doing what they naturally do.
The government has better things to do. Geopolitics, trade, transportation, resources, public health, consumer safety, jobs, economy, defense, regulatory activities, etc.
Burn rate often gets treated as a hard signal, but it is mostly about expectations. Once people get used to the idea of cheap intelligence, any slowdown feels like failure, even if the technology is still moving forward. That gap is usually where bubbles begin.
On the radio they mentioned that the total global chocolate market is ~$100B; I googled it when I was home, and it seems to be about ~$135B. Apparently that is... all chocolate, everywhere. OpenAI's valuation is about $500B, maybe going up to something like $835B.
I'd love to see the rationale that OpenAI (not "AI" everywhere) is more valuable than chocolate globally.
... so crash early 2026?
Ignoring that those numbers aren't directly comparable, it did make me wonder: if I had to give up either "AI" or chocolate tomorrow, which would I pick?
Even as an enormous chocolate lover (in all three senses) who eats chocolate several times a week, I'd probably choose AI instead.
OpenAI has alternatives, but also I do spend more money on OpenAI than I do on chocolate currently.
I am just trying to help you write better. Your writing says "if I had to give up either AI or chocolate [...] I would probably choose AI". However, your language and intent seem to be that you would give up chocolate.
Wait, aren't you comparing revenue and market cap?
People take old things for granted often. Explains the Coolidge effect, and why plenty of people cheat.
For what I use them for, the LLM market has become a two player game, and the players are Anthropic and Google. So I find it quite interesting that OpenAI is still the default assumption of the leader.
ChatGPT dominates the consumer market (though Nano Banana is singlehandedly breathing some life into consumer Gemini).
A small anecdote: when ChatGPT went down a few months ago, a lot of young people (especially students) just waited for it to come back up. They didn't even think about using an alternative.
When ChatGPT starts injecting ads, forcing payment, or doing anything else that annoys its userbase, then the young people won't have a problem looking for alternatives.
This "moat" that OpenAI has is really weak
They took early steps to do so (ads) just recently. User response was as you'd expect.
That's pretty nuts. With the models changing so much and so often, you have to switch it up sometimes just to see what the other company is offering.
How often do you or people you know use a search engine other than google?
2008: US Banks pump stocks -> market correction -> taxpayer bailout
2026: US AI companies pump stocks -> market correction -> taxpayer bailout
Mark my words. OpenAI will be bailed out by US taxpayers.
Not really. It was the collapse of insurance companies that was at the core of the 2008 crisis.
The same can happen now on the side of private credit that gradually offloads its junk to insurance companies (again):
> As a result, private credit is on the rise as an investment option to compensate for this slowdown in traditional LBO (Figure 2, panel 2), and PE companies are actively growing the private credit side of their business by influencing the companies they control to help finance these operations. Life insurers are among these companies. For instance, KKR's acquisition of 60 percent of Global Atlantic (a US life insurer) in 2020 cost KKR approximately $3 billion.
https://www.imf.org/en/Publications/global-financial-stabili...
paywall, no upvote
Someone posted already the non-paywall version: https://news.ycombinator.com/item?id=46438679
Why does the article use words like "burn" and "incinerate", implying that OpenAI is somehow making money disappear or something? They're spending it; someone is profiting here, even if it's not OpenAI. Is it all Nvidia?
Because those are normal idioms in financial analysis reporting?
Because typically one expects a return on investment with that level of spending. Not only have they run at a loss for years, their spending is expected to increase, with no path to profitability in sight.
Not that I disagree, but would it be fair to say that we have seen this before, where it turned out OK? Say, Uber? Amazon?
IIRC, current estimates are that OpenAI is losing as much money a year as Uber or Amazon lost in their entire lifetime of unprofitability. Also, both Uber and Amazon spent their unprofitable years having a clear roadmap to profitability. OpenAI's roadmap to profitability is "???"
I have lived through Amazon's rags-to-riches story, and there was never a clear plan to profitability. The vast majority of people were questioning the sanity of anyone investing in Amazon.
I am not saying OpenAI is Amazon, but I am saying I have seen this before, with the masses going "oh, business is bad, losses are huge, where is the path to profitability..."
To become the next Uber, do I just need to run huge losses?
I wouldn't, but a path to success can clearly come from running 10-digit losses for a loooong time, no?
I think you're saying that just running up huge losses is sufficient to create a successful company? But that you personally wouldn't want to run up huge losses? Not sure.
To my knowledge, Amazon never debt-financed their ops like this.
Amazon did borrow money, for a long time.
Where did their financing come from then?
I suspect most of it is going to utilities for power, water and racking.
That being said, if I was Sam Altman I'd also be stocking up on yachts, mansions and gold plated toilets while the books are still private. If there's $10bn a year in outgoings no one's going to notice a million here and there.
How many gold toilets do you need? I mean, I don't even own one.
Tragically I don't make CEO money so I also don't have one but I presume you'd want to have at least one per mansion and another one in the office. Maybe a separate one for special occasions.
Your burn is the money you spend that exceeds the money you earn; see also "burn rate".
“Burn rate” is a standard financial term for how much money a startup is losing. If you have $1 cash on hand and a burn rate of $2 a year, then you have six months before you either need to get profitable, raise more money, or shut down.
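The arithmetic behind that, as a tiny Python sketch using the toy numbers above (not OpenAI's actual figures):

    def runway_months(cash_on_hand: float, annual_burn: float) -> float:
        # Months until the cash runs out at the current burn rate.
        return cash_on_hand / (annual_burn / 12)

    print(runway_months(cash_on_hand=1.0, annual_burn=2.0))  # -> 6.0 months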
> They’re spending it
That's what the words mean in this context.