I just commented about this in another thread. I know there has been some walking back, e.g. of the significance of the Turing test, but I think overall the goalposts for AI have shifted the other way: the definition of intelligence has been narrowed down to something like “being really good at some set of defined tasks”, which coincidentally is basically the strong point of neural networks.
We seem hyperfocused on finding more tasks to train neural networks to do. That of course produces a moving-goalpost effect like the one in the article, but the goalposts are moving along an axis that doesn't measure intelligence.
My other comment: https://news.ycombinator.com/item?id=46445511
What would be a better way to measure intelligence?
The article mentions a personal goalpost involving Busy Beavers.
Mine is: write an nroff document that executes at least one macro and is also a quine.
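(For anyone unfamiliar with the term: a quine is a program that outputs its own source exactly. The nroff part is what makes my goal hard; here is just a minimal sketch of the quine idea in Python, using the classic self-referencing-template trick, to show what the goal is asking for.)

    # A classic self-reproducing program: the string s is a template for the
    # whole file, and printing s % s reproduces the source exactly.
    s = '# A classic self-reproducing program: the string s is a template for the\n# whole file, and printing s %% s reproduces the source exactly.\ns = %r\nprint(s %% s)'
    print(s % s)

The trick of quoting the self-printing part is the same thing an nroff or GPP quine has to pull off, except with macro-expansion rules instead of string formatting.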
How would your views about AI change if that goal were achieved? When my personal goal was reached, I found myself a little bit at a loss for words.
That's a good question, and one I had thought of myself, but have put off grappling with.
Based on the answers LLMs have given me so far, I'd look harder for the human-written source of the nroff code. I have written what I believe to be the only quine in the GPP macro-processing language; LLMs only refer me back to my own code when I ask for a GPP quine. Google, Meta, and OpenAI really have strip-mined the entire web.
If I genuinely thought anything creative or new appeared, I'd probably be at a loss as well.