> Google … constantly measures and reviews the quality of its summaries across many different categories of information, it added.
Notice how little this sentence says about whether anything is any good.
If an app makes a diagnosis or a recommendation based on health data, that's Software as a Medical Device (SaMD) and it opens up a world of liability.
https://www.fda.gov/medical-devices/digital-health-center-ex...
"Dangerous and Alarming" - it's tough; healthcare needs disruption, but unlike many targets for disruption, the risk is life and death. It strikes me that healthcare is a space to focus on human-in-the-loop applications and massively increasing the productivity of humans before replacing them... https://deadstack.net/cluster/google-removes-ai-overviews-fo...
Why does healthcare "need disruption"?
It's inefficient and not living up to its potential.
And "disruption" (a pretty ill-defined term) is the solution to that?
The part that needs disrupting is the billionaires who own insurance companies and demand profit from people's health.
The profit in insurance is the volume, not the margin. Disrupting it will not dramatically change outcomes, and will require changes to regulation, not business policy.
Not surprised. Another example is Minecraft-related queries. I'm searching with the intention of eventually going to a certain wiki page at minecraft.wiki, but started to just read the summaries instead. It will combine fan forums discussing desired features/ideas with the actual game bible at minecraft.wiki - so it mixes one source of truth with one source of fantasy. Results in ridiculously inaccurate summaries.
Ars rips off this original reporting, but makes it worse by leaving out the word "some" from the title.
‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk: https://www.theguardian.com/technology/2026/jan/11/google-ai...
Removing "some" doesn't make it worse. The title doesn't claim "all" AI summaries were removed, which would. "Google removes AI health summaries after investigation finds dangerous flaws" is functionally equivalent to "Google removes some of its AI summaries after users’ health put at risk".
Oh, and also, the Ars article itself still contains the word "some" (in my A/B test). It's the headline on HN that left it out. So your complaint is entirely invalid: "Google removes some AI health summaries after investigation finds “dangerous” flaws"
... at the same time, OpenAI launches their ChatGPT Health service: https://openai.com/index/introducing-chatgpt-health/, marketed as "a dedicated experience in ChatGPT designed for health and wellness."
So interesting to see the vastly different approaches to AI safety from all the frontier labs.
Google is really wrecking its brand with the search AI summaries thing, which is unbelievably bad compared to their Gemini offerings, including the free one. The continued existence of it is baffling.
Yeah. It's the final nail in the coffin of search, which now actively surfaces incorrect results when it isn't serving ads that usually deliberately pretend to be the site you're looking for. The only thing I use it for any more is to find a site I know exists but I don't know the URL of.
Good. I typed in a search for some medication I was taking and Google's "AI" summary was bordering on criminal. The WebMD site had the correct info, as did the manufacturer's website. Google hallucinated a bunch of stuff about it, and I knew then that they needed to put a stop to LLMs slopping about anything to do with health or medical info.
s/hallucinated/fabricated/, please.
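For anyone unfamiliar with the notation, that's sed-style substitution syntax; applied literally to the sentence above (the input string here is just an illustration):

```shell
# s/pattern/replacement/ replaces the first match of "pattern" on each line
echo "Google hallucinated a bunch of stuff" | sed 's/hallucinated/fabricated/'
# prints: Google fabricated a bunch of stuff
```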
ChatGPT told me I am the healthiest guy in the world, and I believe it.