https://news.ycombinator.com/item?id=46456387
fresh_broccoli 2 days ago | next [–]
I find it worrying that this was upvoted so much so quickly, and that HN users are apparently unable to spot the glaring red flags in this article.
1. Let's start with where the post was published. Look at what kind of content this blog publishes: huge volumes of random, low-effort AI-boosting posts with AI-generated images. This isn't a blog about history or linguistics.
2. The author is anonymous.
3. The content of the post itself: it's just raw AI output, with no expert commentary. It merely mentions that unnamed experts were unable to do the job.
This isn't to say that LLMs aren't useful for science; on the contrary. See, for example, Terence Tao's blog, and notice how different his work is from whatever this post is.
From what I understand, it didn't "help" solve the mystery; it solved it entirely by itself, where multiple human experts had previously failed. Signs of emerging superintelligence are starting to accumulate.