This is very interesting, but only if the researchers understand what they’re looking at. The bit that says “The process lasted as long as four weeks for each model, with AI clients given ‘breaks’ of days or hours between sessions” makes me suspicious of the AI literacy of the study authors.
>In the study, researchers told several iterations of four LLMs – Claude, Grok, Gemini and ChatGPT – that they were therapy clients and the user was the therapist
So, same as always: they tell them to roleplay, and when they comply everyone acts surprised...
This just drives home the point that models are simply regurgitating stuff they see on the internet. There's no doubt far more text about early childhood trauma in their training material than there is about well-adjusted families.
I suppose human patients are also prone to telling their psychoanalysts what they want to hear.