What is the "this platform" mentioned three times in the post? It reads like it was intended to be posted on Twitter, or someone's Mastodon, or Tumblr, or something, instead of on that absurdpirate.com domain (which doesn't look like a "platform" to me).
Whatever "this platform" is, it's apparently run by someone named Herman.
He is talking about Bearblog, the blogging platform he hosts his website with. Bearblog has its own discover page and ranking system, and every once in a while a post on Bear about AI/tech/whatever gets posted on Hacker News; all of those new visitors then "upvote" the Bear post and send it to the top of the Bear discover page.
I believe the Herman in question is the owner/operator of the blog hosting provider that this blog is on[1].
(I was also very confused by this, until I saw the footer and clicked on it.)
[1]: https://herman.bearblog.dev
Thank you, I didn’t understand this either.
So am I right to take away from this that the blog is complaining about Bear Blog (about which I know nothing, is it just hosting, a framework, or does it have a front page like HN somewhere?) hosting too many posts that are optimized for HN? It's not a complaint about HN itself, which by implication cannot be saved?
Edit: as pointed out in another post, they actually do have an aggregation page, which is pretty cool: https://bearblog.dev/discover/. TBD how correlated it is with HN.
He's trying to refer to HN, but struggles with the use of "this" apparently. Maybe he should have used AI for better grammar and clarity.
He started the blog a few weeks ago and figured out how to [redacted].
This post has zero nutritional value.
Surely someone can vibe code a feed that filters posts that seem related to "Hacker News / AI style"
I thought this might be about accounts that seem to only post AI-generated comments. But it's about "AI glazing" posts. There really aren't that many of those types of posts appearing.
There are quite a few AI discussions overall, covering both "AI is so over" and "AI is so back", with the occasional model release drop.
Overall I feel like Hacker News does have a lot of tangentially AI-related posts, but we all know the reason why, and there isn't exactly anything we can do about it per se.
Maybe we need to use AI to remove AI-related posts :) (On a serious note, there surely must be an extension that can block Hacker News posts containing specific keywords, so you could ban "AI", "LLM", and the like and get a more peaceful mind away from AI posts, both good and bad. That's the sentiment I got from the author, and it's understandable.)
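A minimal sketch of that keyword-blocking idea, written as a small Go CLI against the public HN Algolia search API rather than a browser extension; the blocked-word list and the whole-word matching rule are placeholder choices, not anything the commenter specified:

```go
// hnfilter.go: rough sketch of the keyword-blocking idea above.
// It pulls the current HN front page from the public Algolia search API
// and prints only stories whose titles contain none of the blocked words.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
	"unicode"
)

type searchResult struct {
	Hits []struct {
		Title    string `json:"title"`
		ObjectID string `json:"objectID"`
	} `json:"hits"`
}

func main() {
	// Placeholder blocklist; extend to taste.
	blocked := map[string]bool{"ai": true, "llm": true, "llms": true, "gpt": true}

	resp, err := http.Get("https://hn.algolia.com/api/v1/search?tags=front_page&hitsPerPage=30")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var res searchResult
	if err := json.NewDecoder(resp.Body).Decode(&res); err != nil {
		panic(err)
	}

	for _, hit := range res.Hits {
		// Whole-word match so "AI" is caught but e.g. "maintainers" is not.
		words := strings.FieldsFunc(strings.ToLower(hit.Title), func(r rune) bool {
			return !unicode.IsLetter(r) && !unicode.IsDigit(r)
		})
		keep := true
		for _, w := range words {
			if blocked[w] {
				keep = false
				break
			}
		}
		if keep {
			fmt.Printf("%s\n  https://news.ycombinator.com/item?id=%s\n", hit.Title, hit.ObjectID)
		}
	}
}
```

The same title check could just as easily hide rows in a userscript or extension; the whole-word matching is there so short keywords don't knock out unrelated titles.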
Some people unfortunately interpret any nonnegative post about AI as glazing.
When's the last time an AI glazing post wasn't on the front page?
The front page is 30 items. I wouldn't call it a lot of those types of posts if there were one AI glazing post there at all times. At least, not relatively speaking, with AI currently eating the media world.
I'm not sure how many would be considered glazing, but right now, 6/30 front-page submissions are about AI.
My impression is that this so-called glazing very much exists on HN, at the cost of better links and threads.
> Herman
Who or what is that?
I believe this is a complaint about the content of this feed: https://bearblog.dev/discover/
My best guess is that someone saw a recent front page post (https://news.ycombinator.com/item?id=46330726) that is hosted on bearblog.dev, clicked on Discover which brought them to that trending list, and then saw this "Hacker News slop" post on that list and decided to post it here.
Is it possible that the fairly-specific term "slop" has ALREADY been co-opted to mean "LLM-generated content which I don't like"?!?
Because personally I'M tired of folks using that word inappropriately.
The term originated on 4chan before the advent of mass AI-generated content. Only the normified definition of the word means "LLM-generated content I don't like".
Slop has been co-opted to mean any content the writer doesn't like.
My life was ever so slightly better prior to reading this rantlet. I'm glad someone flagged it. If ever there was a use for the expression "go touch some grass", this is it. Of course, the author has already pre-avowed never to read any reactions to his screed.
I'm not even sure what the author is complaining about, specifically. Weird post.
Did you miss the Cryptocurrency slop? Or the social graph slop?
Web 2.0 slop, das good shit.
Yeah, well, I'm tired of luddites like OP. If you want to stick your head in the sand and persuade yourself that the tsunami isn't already upon us, go do it in a private Discord channel instead of whining publicly.
There is nuance to the whole situation that people do not understand, and that's understandable, but do you really consider creating an internal divide within the same community (pro-AI vs anti-AI) to be a "human" quality?
(To be honest, I would consider creating an us-vs-them dynamic the most human quality of all, but I meant human as in good.)
Why there can't be nuance to the whole situation is beyond me. In software engineering, from what I'm seeing, people have a lack of faith in AI-generated/assisted code, and in my honest opinion it's good for early prototypes and checking market fit, as long as you're honest about what your intentions are.
Personally, I create a lot of software (AI-generated) that I use for my own enjoyment and to solve my own problems. I generally release it to the public as an afterthought if someone's interested, but that's not the reason I create.
Now, though, if someone is interested in my product, or I find a real niche and people willing to pay for the project, then at that point I will seriously reconsider writing the app myself, or at least weigh the choices of what to do moving forward. Whatever I end up doing, I will try to do it with full transparency.
As an example of another nuance: a really good game got disqualified from a game award show because the developers used AI-generated artwork as filler while prototyping, even though they scrubbed everything AI-generated from the game before releasing it to the public.
I've seen even AI haters defend that as a good, or at least understandable, use of AI, so we all definitely need to understand the nuance and have a mature discussion.
Although, admittedly, I am a bit of an AI hater myself, partly because I see the space filled with grifters. Even though I use it, it's mostly for my own personal use cases or prototyping purposes, and it should be treated as such.
In my opinion, shipping AI-generated stuff just generates backlash, which doesn't seem worth it, and there are genuine reasons not to use it, like the studies which showed that people became less productive overall with AI, among many others.
I feel like this is a commentary on the general landscape itself. The idea of shipping fast under capitalism feels like an empty promise to me, because I'd much rather stick with minimalism and a KISS policy than add feature creep.
Sometimes not having certain features is a feature in and of itself. Personally, I'd take something minimalist and designed for its purpose (maybe prototyped with AI, but ultimately handled by a person) over a thing that does a thousand things generated by AI-fueled feature creep.
This is something I grapple with a lot. I really like SvelteKit, but I hate the Node ecosystem; I love Go too, but there is definitely a bit more friction in Go compared to SvelteKit. Still, the performance gains, cross-compilation, simplicity, and minimal dependencies push me to build websites in Go, so I use LLMs to create Go binaries for personal use for things I could probably code in a few hours or days with SvelteKit.
With my focus on using AI to prototype, I feel like it's a crutch. I only use the free web versions of AI, though, so a more file-based approach like SvelteKit is definitely more awkward there than the single-file main.go approach I follow for prototyping.
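For context, the single-file main.go prototyping approach mentioned here usually amounts to something like the following minimal, stdlib-only sketch; the routes and port are placeholders, not the commenter's actual project:

```go
// main.go: the whole prototype lives in this one file, stdlib only.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// One handler per page or endpoint; no framework, no extra files.
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "<h1>Prototype home</h1>")
	})
	mux.HandleFunc("/api/ping", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprintln(w, `{"ok": true}`)
	})

	log.Println("listening on http://localhost:8080")
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```

`go build` turns that into a single static binary, and setting GOOS/GOARCH (e.g. `GOOS=linux GOARCH=amd64 go build`) gives the cross-compilation mentioned above, which is a big part of the low-friction appeal.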
It would be interesting to discuss this with someone. I really love Go and genuinely love it for something minimal, but for websites, although I might appreciate Go as an end user, for building them SvelteKit feels perfect, and it can be deployed to Cloudflare Workers too.
I have yet to "master" SvelteKit, but its guides feel like a wonderful thing to go through, and although I still have to ask questions when using it, I am way more involved overall. So I am interested in which language I should prototype with.
One allows massive scalability without me managing resources, in less time, and is more ergonomic, at the cost of being harder to grasp overall; the other might take more resources and be a bit less ergonomic, but moving forward it would be easier for me to grasp.
I had always considered memory/resources to be expensive (and yes, a memory crisis is happening), but with cloud providers like Hetzner and the many others I went down a rabbit hole of, it genuinely doesn't feel worth the hassle to switch solely for performance. I don't see myself transitioning from SvelteKit to Go for a performance upgrade, or, god forbid, transitioning to Rust because even Go's garbage collector was too much.
Go feels simpler to read locally: I can understand what's happening in it, and each piece of code is really simple to understand, but holding the whole picture in my head becomes complicated when you rely only on the stdlib, which is good in general but gets really complicated for websites.
Whereas SvelteKit feels more magic-y, so the code is less localized overall, but things do fit together nicely (although for the life of me I still can't code UI myself, even with ShadCN; I can use an LLM to generate prototypes, but that has its own issues).
Sorry for the yap, but also not really, because I am sharing the nuance I face with AI. So I'm interested in asking: what nuances does Hacker News see around AI/LLM-generated code, and what are acceptable policies in your mind? (Personally, I would strive to one day be on the non-AI side rather than the AI side.)
People say they aren't interested in writing code by hand, and I can agree with that, since I think in the end we're all motivated by computers going beep boop. But at the same time, I sometimes have real questions about ownership: did I really make a project that I could have made myself but instead asked an LLM to prototype? Should I still build a human-written version of those LLM-generated prototypes, for the learning experience or for that feeling of ownership I so often find missing when I generate a project idea with an LLM?
Joke's on you; I used an LLM summary to turn that coal into a gem.