So far I've found Firefox reader mode does what this aims to sufficiently well if I plan to read the page online, and saving a minimal version to Joplin works if I want it offline.
My idea is admittedly half-baked, but I've been thinking of the concept of an alternative web. I don't know how it would work. Ignoring implementation for a second, what if you could stay within a network of non-corporate content, search engines, etc.? I feel like everyday I find neat websites on Hacker News built by real people for fun and community, but then my daily web browsing activity doesn't feel like this. If I do one Google or DuckDuckGo search, I'm inevitably going to land on some site that is at least cluttered with SEO garbage, even if it's an individual person's blog (cooking blogs are a great example). I guess my criteria for what sites would get included in this are not well defined. Maybe I just want the old web back.
You could be right. But plenty of websites with useful information intentionally obfuscate their pages to trick the browser into disabling reader mode.
Does this do anything that Firefox Reader Mode doesn't? Just seems like a non-deterministic, slower, non-free version.
Also, I understand writing READMEs is boring, but please at least edit what the LLM produces. You do not need this many content-free emoji bullet point sections.
For the love of God, people need to get back to writing their own Readmes and not just taking LLM output unedited. I do want to read, but I don't want to read ChatGPTese.
Sorry, let me ask ChatGPT to put it in terms people seem to prefer now (I don't think this stuff is actually quite right but who cares anymore):
## 1. They Optimize for Politeness, Not Usefulness
ChatGPT READMEs tend to:
- Over-explain obvious things
- Avoid strong claims
- Hedge unnecessarily
The result is text that feels safe but not informative. A good README should reduce uncertainty quickly, not pad it with disclaimers and filler.
## 2. They Follow Templates Instead of Intent
Most generated READMEs look structurally correct but contextually shallow:
- Generic section headings (“Installation”, “Usage”, “Contributing”) regardless of relevance
- Boilerplate language that could apply to almost any project
- No clear prioritization of what actually matters
This signals that the README was assembled, not written with purpose.
## Summary
ChatGPT READMEs are usually:
- Correct but unhelpful
- Polished but shallow
- Complete but low-signal
The modern web is broken. Before you can read anything, you hit a wall: popups,
ads, paywalls, tracking scripts, and navigation clutter designed to keep you clicking
instead of reading.
Declutter strips all of it away using AI. But here's the thing, you don't need
frontier models. Even small, cheap models like Gemini Flash or Claude Haiku do an
excellent job extracting pure content. That means you can archive whatever you want
to read for pennies per month.
Just point it at a URL, and out comes beautifully formatted Markdown, HTML, or PDF.
Offline. Local. Clean.
I built this because I was tired of the fight. The web doesn't have to be this way.
I just want to read. Please let me read without distractions.
I just can't enjoy reading something so obviously AI generated. Write it yourself.
Or use Safari’s reader mode, it works well!
I've been using a webapp I build to do the same thing, and I've really enjoyed it.
https://yazzy.carter.works/
Example: https://yazzy.carter.works/https://paulgraham.com/submarine....
https://github.com/carterworks/yazzy
It uses Steph Ango's/kepano's defuddle under the hood.
The live version is hosted on a single free-tier fly.io node, but it is easily self-hosted.
So far I've found Firefox reader mode does what this aims to sufficiently well if I plan to read the page online, and saving a minimal version to Joplin works if I want it offline.
I feel the author could have applied the declutter tool to their readme.
Or perhaps the LLM should have known to do that.
My idea is admittedly half-baked, but I've been thinking of the concept of an alternative web. I don't know how it would work. Ignoring implementation for a second, what if you could stay within a network of non-corporate content, search engines, etc.? I feel like everyday I find neat websites on Hacker News built by real people for fun and community, but then my daily web browsing activity doesn't feel like this. If I do one Google or DuckDuckGo search, I'm inevitably going to land on some site that is at least cluttered with SEO garbage, even if it's an individual person's blog (cooking blogs are a great example). I guess my criteria for what sites would get included in this are not well defined. Maybe I just want the old web back.
So like: https://geminiprotocol.net/
They claim the protocol is resilient to enshittification.
I agree with the premise and don't want to knock someone's project.
It does seem like a lot of computational effort to achieve what F9 / Reader View does in FF.
You could be right. But plenty of websites with useful information intentionally obfuscate their pages to trick the browser into disabling reader mode.
Does this do anything that Firefox Reader Mode doesn't? Just seems like a non-deterministic, slower, non-free version.
Also, I understand writing READMEs is boring, but please at least edit what the LLM produces. You do not need this many content-free emoji bullet point sections.
Edit: Looking at the prompt made me realize that the output of this would obviously be completely untrustworthy: https://github.com/subranag/declutter/blob/main/src/llm.ts#L...
README oozes ai generated copy.
Nah. Hard pass on even more AI inbetween me and the few bits of original content that are still out there. Reader mode and we're good.
For the love of God, people need to get back to writing their own Readmes and not just taking LLM output unedited. I do want to read, but I don't want to read ChatGPTese.
Sorry, let me ask ChatGPT to put it in terms people seem to prefer now (I don't think this stuff is actually quite right but who cares anymore):