Bots have ruined reddit but that is what the owners wanted.
The API protest in 2023 took away tools from moderators. I noticed increased bot activity after that.
The IPO in 2024 means that they need to increase revenue to justify the stock price. So they allow even more bots to increase traffic which drives up ad revenue. I think they purposely make the search engine bad to encourage people to make more posts which increases page views and ad revenue. If it was easy to find an answer then they would get less money.
At this point I think reddit themselves are creating the bots. The posts and questions are so repetitive. I've unsubscribed from a bunch of subs because of this.
It's been really sad to see Reddit go like this, because it was pretty much the last bastion of the human internet. I hated Reddit back in the day but later got into it for that reason. It's why all our web searches turned into "cake recipe reddit." But boy did they throw it in the garbage fast.

One of their new features is that you can read AI-generated questions with AI-generated answers. What could the purpose of that possibly be?

We still have the old posts... for the most part (a lot of answers were purged during the protest), but what's left is also slipping away fast for various reasons. Maybe I'll try to get back into the Gemini protocol or something.
> Bots have ruined reddit but that is what the owners wanted.
Adding the option to hide profile comments/posts was also a terrible move for several reasons.
The biggest change Reddit made was ignoring subscriptions and just showing anything the algorithm thinks you will like, resulting in complete no-name subreddits showing up on your front page. It means moderators no longer control content for quality, which is both a good and a bad thing, but it also means more garbage makes it to your front page.
I can't remember the last time I was on the Reddit front page and I use the site pretty much daily. I only look at specific subreddit pages (barely a fraction of what I'm subscribed to).
These are some pretty niche communities with only a few dozen comments per day at most. If Reddit becomes inhospitable to them then I'll abandon the site entirely.
Why would you look at the "front page" if you only want to see things you subscribed to? That's what "latest" and whatever the other one is are for.
they have definitely made reddit far worse in lots of ways, but not this one.
The Reddit bot story has been an interesting thing to follow. Either Reddit is lying about views, or more than 99% of views are from bots, or the click-through conversion of Reddit humans is something like two orders of magnitude lower than that of the typical human.
If they show those view numbers to advertisers, then it's fraud, especially if they know the views are bots. What's also surprising is that advertisers should be noticing the low conversion and should stop buying Reddit ads.
I suspect that company is a whistleblower away from an enormous scandal.
> allow even more bots to increase traffic which drives up ad revenue
Isn't that just fraud?
It is. Reddit is probably 99% fraud/bots at this point.
I think you are overestimating humanity.
At the moment I am on a personal finance kick. Once in a while I find myself in the Bogleheads subreddit. If you don't know, Bogleheads have a cult-like devotion to Jack Bogle, the founder of Vanguard, whose advice, shockingly, is to buy index funds and never sell.
Most of it is people arguing about VOO vs VTI vs VT (lol). But people come in with their crazy scenarios, which are far too varied to be a bot, although the answers could easily be given by one!
Wouldn’t taking the API away hurt the bots?
the bots just scrape
Isn't showing ads to bots...pointless?
If the advertisers don't know the difference between a human and a bot then they will still pay money to display the ad.
Steve Huffman is an awful CEO. That said, I've always been curious how much of the rest of the industry (for example, the web-wide practice of autoplaying videos) was constructed to catch up with Facebook's fraudulent metrics. Their IPO was possibly fraud (and Zuckerberg is certainly known to lie about things), and we know they lied about their own video metrics, to the point it's suspected CollegeHumor shut down because of it.
[dead]
> The use of em-dashes, which on most keyboard require a special key-combination that most people don’t know
Most people probably don't know, but I think on HN at least half of the users know how to do it.
It sucks to do this on Windows (Alt+0151 on the numeric keypad), but on a Mac it's super easy and the shortcut (Shift-Option-Hyphen) makes perfect sense.
I don't have strong negative feelings about the era of LLM writing, but I resent that it has taken the em-dash from me. I have long used them as a strong disjunctive pause, stronger than a semicolon. I have gone back to semicolons after many instances of my comments or writing being dismissed as AI.
I will still sometimes use a pair of them for an abrupt appositive that stands out more than commas, as this seems to trigger people's AI radar less?
Now I'm actually curious to see statistics regarding the usage of em-dashes on HN before and after AI took over. The data is public, right? I'd do it myself, but unfortunately I'm lazy.
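For anyone less lazy: the data really is public through the HN Algolia search API (hn.algolia.com/api/v1), which lets you page through all comments by date, so you could bucket them by year and compare. The counting step itself is trivial; here is a minimal sketch of just that step (the function name and sample data are my own, not from any existing tool):

```python
def em_dash_share(comments):
    """Fraction of comments containing at least one em-dash (U+2014)."""
    if not comments:
        return 0.0
    hits = sum(1 for c in comments if "\u2014" in c)
    return hits / len(comments)

# Toy sample standing in for one year's worth of fetched comments.
sample = [
    "Plain comment, no dashes.",
    "A comment\u2014with an em-dash\u2014in it.",
    "Hyphen-heavy but dash-free.",
    "Another\u2014dash.",
]
print(em_dash_share(sample))  # 0.5
```

Run that over, say, 2021 comments vs 2024 comments and the before/after comparison falls out directly.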
I prefer a Dark Forest theory [1] of the internet. Rather than being completely dead and saturated with bots, the internet has little pockets of human activity like bits of flotsam in a stream of slop. And that's how it is going to be from here on out. Occasionally the bots will find those communities and they'll either find a way to ban them or the community will be abandoned for another safe harbour.
To that end, I think people will work on increasingly elaborate methods of blocking AI scrapers and perhaps even search engine crawlers. To find these sites, people will have to resort to human curation and word-of-mouth rather than search.
[1] https://en.wikipedia.org/wiki/Dark_forest_hypothesis
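A low-effort first rung on that ladder already exists: a robots.txt that disallows the self-identified AI crawlers by user-agent. A sketch (GPTBot, CCBot, Google-Extended, and ClaudeBot are real published crawler tokens, but honoring robots.txt is voluntary, so this only stops the polite scrapers):

```text
# Block self-identified AI training crawlers; compliance is voluntary.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Everyone else, including regular search crawlers, stays allowed.
User-agent: *
Allow: /
```

The "perhaps even search engine crawlers" step would mean extending the block list to Googlebot and friends, at which point word-of-mouth really is the only way in.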
On one hand, we are past the Turing Test threshold if we can't distinguish whether we are talking to an AI, to a real human, or to any of the things that were already rampant on the internet: spam and scam campaigns, targeted opinion manipulation, and plenty of other content that wasn't, let's say, the honest opinion of a single person identifiable with an account.
On the other hand, the fact that we can't tell says less about how good the AIs are than about how bad most of our (at least online) interaction is. How much (Thinking, Fast and Slow) System 2 am I putting into these words? How much is just repeating and combining patterns in a given direction, pretty much like an LLM does? In the end, that is what most internet interactions consist of, whether produced directly by humans, by algorithms, or by other means.
There are bits and pieces of exceptions to that rule, and maybe closer to the beginning, before widespread use, the exceptions were a bigger share; but today, at scale, our usage is not so different from what LLMs do.
> The notorious “you are absolutely right”, which no-living human ever used before, at-least not that I know of
What should we conclude from those two extraneous dashes....
The funny thing is I knew people that used the phrase 'you're absolutely right' very commonly...
They were sales people, and part of the pitch was getting the buyer to come to a particular idea "all on their own" then make them feel good on how smart they were.
The other funny thing about em-dashes is that there are a number of HNers who use them, and I've seen them called bots. But when you dig into their post history, they were using em-dashes 10 years back... Unless they are way ahead of the game in LLMs, it's a safe bet they are human.
These phrases came from somewhere, and when you look at large enough populations you're going to find people that just naturally align with how LLMs also talk.
This said, when the number of people that talk like that become too high, then the statistical likelihood they are all human drops considerably.
I'm a confessed user of em-dashes (or en-dashes in fonts that feature overly accentuated em-dashes). It's actually kind of hard not to use them if you've ever worked with typography and know your dashes and hyphenation. —[sic!] Also, those dashes are conveniently accessible on a Mac keyboard. There may be some Win/PC bias in the em-dash giveaway theory.
I use them -but I generally use the short version (I'm lazy), while AI likes the long version (which is correct -my version is not).
You don't use em dashes then, you use en dash.
I think they are saying they are using an en dash where they should use an em dash.
Yup. Note that I didn't name the dash.
> part of the pitch was getting the buyer to come to a particular idea "all on their own" then make them feel good on how smart they were.
I can usually tell when someone is leading like this and I resent them for trying to manipulate me. I start giving the opposite answer they’re looking for out of spite.
I’ve also had AI do this to me. At the end of it all, I asked why it didn’t just give me the answer up front. It was a bit of a conspiracy theory, and it said I’d believe it more if I was led to think I got there on my own with a bunch of context, rather than being told something fairly outlandish from the start. The fact that AI does this to better reinforce belief in conspiracy theories is not good.
An LLM cannot explain itself and its explanations have no relation to what actually caused the text to be generated.
That I'm a real human being that is stupid in English sometimes? :)
That's just what an AI would say :)
Nice article, though. Thanks.
Much like someone from Schaumburg, Illinois can say they are from Chicago, Hacker News can call itself social media. You fly that flag. Don’t let anyone stop you.
If you can ride the Metra from your city to Chicago proper, you're in Chicago!
I’m a bit scared of this theory. I think it will come true: AI will eat the internet, and then they’ll paywall it.
Innovation outside of rich corporations will end. No one will visit forums; innovation will die in a vacuum; only the richest will have access to what the internet was. Raw innovation will be mined through EULAs, and people striving to make things will just have their ideas stolen as a matter of course.
That’s why we need a parallel internet.
What safeguards would be in place to prevent this parallel internet from also, with time, becoming a dead internet?
When it becomes a dead parallel internet, we'll make an internet'' and go again.
What would stop them from scraping it and infecting it?
Poison Fountain: https://rnsaffn.com/poison2/
https://www.theregister.com/2026/01/11/industry_insiders_see...
A̶O̶L̶ Humans Online
Good post, thank you. May I say Dead, Toxic Internet, with social media adding the toxicity? Cory Doctorow's theory of enshittification sums up how this process unfolds (look it up on Wikipedia).
I am curious when we will land on a Dead GitHub Theory. I am watching the growth of self-hosted projects, and it seems many of them are simply AI slop now or slowly moving there.
But what about the children improving their productivity 10x? What about their workflows?
Think of the children!!!