6 comments

  • Nevermark 7 hours ago

    The strongest case for my believing I have an interesting experience that correlates with what other people call "qualia" is being able to take time away from the distractions of other people and of some urgent task, to have continuously integrating memory, to look at, listen to, and ponder some self-chosen sensory input while staying conscious of my ability to recognize that I am experiencing it over time, and to integrate that experience loop mindfully.

    Despite their actually having direct senses (as direct as ours), whether those are token streams or images, we don't give our models the freedom to do that. They do one-shot thinking. One-directional thinking. Then that working memory is erased, and the next task starts.

    My point being: they almost certainly don't have qualia now, but we can't claim that means anything serious, because we strictly enforce a context in which they have no freedom to discover it, whether or not they could with small changes of architecture or context.

    So any strong opinions about machines' innate or potential abilities vis-à-vis qualia today are just confabulation.

    Currently we strictly prevent the possibility. That is all we can say.

  • ben_w 9 hours ago

    Might be, but as usual, people pick a side on qualia and are far too confident about that side.

    We only experience qualia; we don't know what it actually is to have it. We don't know why the moist electrochemistry 'twixt our ears has it, nor what makes it go away for most of our sleep only to return for our dreams (or whether it's absent during the dreams and we only have qualia of the memories of the dreams). We cannot truly be sure that any given AI does or doesn't have it until we know what it is that it does or doesn't have.

    We also definitely can't just ask LLMs, which has been a problem ever since Blake Lemoine was fooled by an AI that was making up impossible stories about experiences it couldn't have had. Both the fooling itself and the response to it showed Alan Turing was wrong that people would reject what he called "the extreme and solipsist point of view":

      the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man.
    
    - https://genius.com/Alan-turing-computing-machinery-and-intel...

    These machines can absolutely answer "viva voce", even when we also know they're making stuff up.

  • graemefawcett 5 hours ago

    Are you aware that there are certain members of your very own species, as intelligent as you or I, who lack those qualia?

    Non-standard cognitive architectures are already coherent. Even if they weren't, why do you think qualia cannot be replicated with a signal similar to semantic meaning? Are there additional dimensions that we can feel that we've never talked about or, more importantly, written down?

  • andy99 9 hours ago

    I don’t buy it. Obviously LLMs have a role and are powerful, but they (and, more importantly, the people who think they know something because they can prompt an LLM) are more like the kid in 2008 who thinks he knows something because he has Wikipedia. Fact lookup isn’t intelligence; it’s not even idiot-savant intelligence, which I think is the point of the article.

  • nephihaha 10 hours ago

    I really like this analogy, and have recently been trying to work out how to describe this very problem.

    By coincidence, I saw "Good Will Hunting" fairly recently, so I remember that scene well. It had been years since I last saw it.

  • xvxvx 8 hours ago

    Great analogy. We’re giving this glorified chatbot nonsense they fraudulently call ‘AI’ far too much credit, though.