The rapid rise and slow decline of Sam Altman
I've been wondering for a while if Sam is going to become an Elizabeth Holmes style figure with all of his talk about "a magic intelligence in the sky" and that in 5 years AI will replace 95% of marketing work. It seems like he's set up impossible promises, it'll be interesting to see what comes from their non-delivery.
Very different to lie about what your current product actually does (especially if it's a medical testing product!) vs to give predictions about your future products that turn out to be too overhyped. A good example of the latter is Elon, which also goes to show that you only have to have the pie-in-the-sky vision succeed a couple times for a lot of people to forgive a lot of other overpromises.
Right, though both amount to lying blatantly, against one's better knowledge, with personal gain as the motivation.
Well, Musk now does both: he lies about his current products and about his future ones.
Pie in the sky is not the same as fraudulent claims about your product.
He’s the next SBF.
Losing Apple as a customer is kind of a problem.
Words are cheap. I really wish there was a way to incentivize authors like this to put their money where their mouth is, before seeking attention for their ideas.
I’m guessing it’s hard to go short OpenAI without also going short a bunch of other companies riding the AI wave that aren’t led by Sam Altman?
Would love to hear how that could work.
Hmm. Maybe he might do... a bet!
And then maybe he might ... change the bet! when he was about to lose?
Maybe!
Who's to say, really? Certainly not me! I'm just a neural network!
Shorting a security means risking unbounded losses if the stock you're shorting continues to increase in value. As the saying goes: the market can stay irrational longer than you can stay solvent.
Just short Nvidia. If the thing goes bang then that’ll be one of the big losers.
"Just short Nvidia"
Is this financial advice? :-)
If so, keep in mind that it's contingent advice. The question was how to profit from predicting an AI bubble popping [or something along those lines]. The answer is shorting Nvidia (assuming your prediction also includes a timeline).
It's always a way to lose a massive amount of money if you're wrong, so the advice is also contingent on confidence level.
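The asymmetry being described can be sketched in a few lines (illustrative numbers only, not financial advice): a short position's gain is capped at the entry price, while its loss grows without bound as the price rises.

```python
def short_pnl(entry_price: float, current_price: float, shares: int = 100) -> float:
    """Profit (positive) or loss (negative) on a short position:
    you sold borrowed shares at entry_price and must eventually
    buy them back at current_price."""
    return (entry_price - current_price) * shares

# Short 100 shares at $100:
# - best case (price goes to 0): gain is capped at $10,000
# - price triples to $300: loss is $20,000, twice the maximum
#   possible gain, and it keeps growing as long as the price rises
```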
Gary is an insufferable blowhard, but he's had skin in the AI game for a while. I believe he sold an AI startup to Uber back in the 2010s.
Many of his criticisms of OpenAI and LLMs have been apt.
What's your complaint about this article? I wish there were a way to incentivize commenters like this to put effort into specific criticism, before seeking attention for their ideas.
I'm not aiming this at GP specifically, but there seems to be a culture around gen AI that the burden of proof is on sceptics, not the people claiming we're about to invent God
It's possible to notice a trend while still having the wit to realize you can't precisely time that trend well enough to profit from it. Another example might be, "Trump is increasingly old, feeble, and incapable of doing his job... but I'm not sure how that will translate into how long he's able to keep the job. It's possible that he could be a vegetable at some point and still POTUS."
Demanding that people gamble with their often limited finances to prove a point orthogonal to the one they're actually making feels disingenuous and dismissive to me.
It's not orthogonal. And you will find people will change their mind when forced to put a little money on the line.
"Team X is definitely winning, I'm certain." "So you'll offer me 1000-1 on the opponent?" "No." "6-1?" "No." At some point they often realize they are only about 65% certain. And they often aren't being hyperbolic; they are just not thinking clearly.
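The exchange above can be made concrete (a minimal sketch; the odds figures are the ones quoted in the comment): being willing to lay N-to-1 odds against an outcome implies you believe it happens with probability at most 1/(N+1), i.e. you are at least N/(N+1) confident in your own side.

```python
def implied_confidence(odds_against: float) -> float:
    """Confidence implied by offering odds of N-1 against an outcome:
    a fair N-1 bet means the outcome happens with probability 1/(N+1),
    so the offerer is N/(N+1) confident in their own side."""
    return odds_against / (odds_against + 1)

# Offering 1000-1 claims ~99.9% confidence in your pick; refusing
# even 6-1 means you are less than ~85.7% sure, which is a long way
# from "definitely" and much closer to the 65% people settle on.
```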
While the general vibe of Sam and OpenAI losing steam rings true to me, I genuinely disagree with all the "hate" for GPT-5. It was indeed not a crazy breakthrough, but, honestly, the 5.2 model on the extended-thinking path in the Pro version is utterly scary to me. I can give it a complicated multi-tier question that requires modeling, abstract thinking, computation, and rationality; it can walk away for 45 minutes, produce a CoT the length of the Empire State Building, and give me a fairly good, well-written response. I can't speak for all industries, but 5.2-Pro-extended is really scary when it comes to math and reasoning. Here I am a bit on Sam's side when he says that most people severely underutilize modern AI by using it the same way they did in 2023. The capability of recent models seems to me way beyond current typical use cases.
IME it still can’t answer “is the state of Oregon entirely north of New York City” correctly. It’s possible they hard coded it since I posted about it online. How do you reconcile its inability to answer this simple question with your rather optimistic view of its capabilities?
Does he truly think, e.g., Opus 4.5 has "plateaued"? I really can't even vaguely take him seriously if he actually thinks that. What a silly man.
Why. Do. We. Keep. Posting. Gary. Marcus.
He's been writing variants of this kind of thing for decades and he's always been wrong.
Gary Marcus is not the only one that has been critical of Sam Altman: https://www.youtube.com/watch?v=l0K4XPu3Qhg
There's many other people but most of them are independent. It is glaringly obvious now most of the media outlets are afraid to ask the tough questions.
Sure, but why post Gary Marcus then?
That said, MPU has been pretty solidly crazy with their LLM critiques lately (did you know it uses all the drinking water?!?!?). There are plenty of sane, grounded-in-reality critiques. Why not focus on those?
Here is a more fun one to watch:
https://www.youtube.com/watch?v=zrgEZ8FeZEc
Any examples? I've never heard of him, but that seems like a big statement.
I'm not a particularly big Gary Marcus fan, but I'm even less of a fan of the term "always".
I browsed back looking for posts of his that most obviously made predictions, and "GPT-5… now arriving Gate 8, Gate 9, Gate 10" makes a few very clear predictions and was absolutely correct [0] about them.
This was in June 2024, and Marcus claims two major predictions:
- "As of today, I am more confident than ever that GPT-5 won’t land this year."
- "Gary Marcus is still betting that GPT-5 will continue to hallucinate and make a bunch of wacky errors, whenever it finally drops."
GPT-5 wouldn't finally land until over a year after he posted that article, and his overall prediction that the result would be lackluster was also spot on.
Again, I don't particularly care one way or another about Gary Marcus, but flat out dismissing his writing doesn't really hold up.
0. https://garymarcus.substack.com/p/gpt-5-now-arriving-gate-8-...
He also posted https://garymarcus.substack.com/p/lets-be-honest-generative-..., which got flagged as low-effort spam about 30 minutes ago. https://news.ycombinator.com/item?id=46605587
The man brought us LLMs and ever more capable models. I don't know about the critics, but my work life completely changed from Dec 2022 on, ever since ChatGPT was released. I cannot imagine working without LLMs and agents anymore. They make me literally 10-100x more productive: transforming text, generating text, doing research, writing code, documenting systems, doing web search, and so much more.
And for that I am forever thankful to Sam Altman and all the people who made this possible.
> The man brought us LLMs and ever more capable models. ... And for that I am forever thankful to Sam Altman and all the people who made this possible.
Why thank Altman and not thank Joe Biden? He was running the country when these models you praise were released, and they both had about as much real involvement in their actual construction.
The CEO didn't do the work, he shouldn't get all (or even most) of the credit.
Not sure why you’re getting downvoted. Sam Altman despite his flaws played a key role in kickstarting the capital war that’s led to the insane investment in infrastructure that’ll carry us to a new age once the hype dies down.
He created and continues to create an atmosphere for innovation inside OpenAI that showed the way for the fast-followers.
He lit a fire under the ass of Google, for God's sake.
Whatever he did or didn’t invent, he made a ton of invention possible.
Where things go from here is uncertain, but without sama maybe ChatGPT doesn't happen the same way - or maybe it launches, crashes, and people shrug. Maybe that other timeline leads to another AI winter. But one thing is for certain: without sama the whole thing would've been a lot smaller.
“Whatever he did or didn’t invent, he made a ton of invention possible.”
I think it’s time to pony up.
Where are your vibe coded databases that take on SQLite and Postgres?
Where are your vibe coded Operating Systems?
Where are your vibe coded browsers?
Where are your vibe coded literally anything?
My pony’s doing just fine.
At a friend’s birthday last year, I wrote in the space of 8 minutes - then performed - a 3-minute long verse about said friend and their puppy. I didn’t get the verse from ChatGPT. I had it help me find rhyming words that fit the rhythm, had it help me find synonyms, and find punchy ends to sentences.
I made a xylophone iPhone app way back in mid-2024 by copy-pasting code to Claude, and errors from Xcode, just to show off what AI can do. Someone asked me to make it support dragging your finger across the screen to play lots of notes really fast - Claude did that in one shot. In mid-2024, six months before Claude Code.
I made a sorting hat for my sisters’ kids for Christmas a few weeks ago. I found a voice cloning website, had Claude write some fun dialogue, and vibe coded an app with the generated recordings of the sorting hat voice saying various virtues and Harry Potter house names. The cloned voice was so good, it sounded exactly like the actor in the movie. I loaded the app on my phone and hid a Bluetooth speaker in a Santa hat - tapping a button in the app would play a voice recording from the sorting hat AI voice. The kids unwrapped the hat and it declared itself as the sorting hat. Put the hat on a kid’s head, tap a button, hat talks! With a little sleight of hand, the kids really believed it was the hat talking all by itself. Laughing together with my whole family as the hat declared my cheeky niece “Slytherin!!!” was one of the most humanising things I’ve ever seen.
I’ve made event posters for my Burning Man camp. Zillions of dumb memes for group chats. You always have to do some inpainting or touch it up in an image editor, but it’s worth it for the lulz.
And right now I’m using Claude Code for my startup, ApprovIQ. Dario Amodei was right in a way: 99% of the code was written by Claude.
But sorry, no multi million line vibe coded codebases. For that my friend, you’ll be waiting until after the next AI winter.
The downvotes probably come from the idea that I’m crediting one person. Obviously LLMs were built by many people, but Altman raised the money, pulled the org together, and shipped something that millions were using within days.
It can be a bit tricky to attribute the product of an entire company to the CEO... even if the CEO is a founder!
Computers existed before Steve Jobs made them usable. Sam Altman created a company that created a product that millions of people started to use within days.
Do people actually find this kind of dunk content interesting? It's super popular but I wonder if people find it insightful or entertaining. I watched a little of Stephen Colbert and John Oliver and it's mostly boring stuff that is pretty much characterized as "This guy IS AN IDIOT!" and "They're LOSING" and whatnot. It seems like the equivalent of those "Charlie Kirk owns feminists with FACTS and LOGIC". If you go to /r/all you'll see that it's almost all dunk content.
I get the appeal in an abstract sense. I get a real kick out of watching yet another Manchester United manager cock it up. But all I do is see the scores and enjoy a bit of schadenfreude. The audience for these guys seems highly enthusiastic about their content. It's like if I watched every United game to get as much enjoyment out of watching them suck.
It's particularly annoying because people are clearly not posting this guy because he's right often or because he's good at predicting stuff. They're posting because he's dunking on people they dislike.
Haven't read this one, and there is certainly plenty of useless rage bait on the internet. But in general it is important and serves a purpose to criticize people wielding a lot of power. This doesn't have to be constructive, if those people are idiots and doing harmful things.
People must find it click-worthy, otherwise why would we get the amount of "we got him"-type of content we do?
You're conflating Charlie Kirk's efforts to foment anger and humiliate random non-public figures with criticism of one of the most public and powerful people in the tech industry.
Sam has made bombastic claims that, along with his shrewdness in the business and tech world, were key to his success. Reporting on this, and looking at how his public and private profiles have changed, is an important part of an open and free society: it helps the general public understand the people and forces shaping their world.
Is this particular article important in the grand scheme of things? ¯\_(ツ)_/¯ But I wouldn't throw out the entire genre.
Gary Marcus is a terrific self-promoter and grifter, and there are very large audiences for that sort of stuff. It's simple.
He does more than dunks though, I don't think that's really fair to him. He's trying to position himself as a public intellectual and expert.
I did a quick Algolia search https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
If I'm being honest, the combination of the HN audience and him is that he's just a dunk machine. It's all right. I get it. People here like this kind of content. I have to killfile the domain and users who post positively about it if I want to improve my personal feed. That's on me to curate rather than to post Yet Another Complaint Comment (the lesser known yacc).