2 comments

  • kylecazar 2 hours ago

    I agree with what's written, and I've been talking for a while about the harm that seemingly innocuous anthropomorphization does.

    If you do correct someone (a layperson) and say "it's not thinking", they'll usually reply "sure, but you know what I mean". And then, eventually, they'll say something that indicates they're actually not sure it isn't thinking. They'll compliment it on a response, or ask it questions about itself, as if it were a person.

    It won't take, because the providers want to use these words. But different terms would benefit everyone. A lot of ink has been spilled on how closely LLMs approximate human thought, and maybe if we'd never called it 'thought' to begin with, it wouldn't have become such a distraction from what they actually are: useful.

  • donutquine an hour ago

    An article about AI "cognition", written by an LLM. You're kidding.