17 comments

  • Terretta a few seconds ago

    > Google … constantly measures and reviews the quality of its summaries across many different categories of information, it added.

    Notice how little this sentence says about whether anything is any good.

  • myhf 18 minutes ago

    If an app makes a diagnosis or a recommendation based on health data, that's Software as a Medical Device (SaMD) and it opens up a world of liability.

    https://www.fda.gov/medical-devices/digital-health-center-ex...

  • dreadsword an hour ago

    "Dangerous and Alarming" - it tough; healthcare is needs disruption but unlike many places to target for disruption, the risk is life and death. It strikes me that healthcare is a space to focus on human in the loop applications and massively increasing the productivity of humans, before replacing them... https://deadstack.net/cluster/google-removes-ai-overviews-fo...

      bandrami 40 minutes ago

      Why does healthcare "need disruption"?

        wswin 27 minutes ago

        It's inefficient and not living up to its potential.

          bandrami 15 minutes ago

          And "disruption" (a pretty ill-defined term) is the solution to that?

        miltonlost 29 minutes ago

        The part that needs disrupting is the billionaires who own insurance companies and demand profit from people's health.

          zdragnar 2 minutes ago

          The profit in insurance comes from volume, not margin. Disrupting it will not dramatically change outcomes; that would require changes to regulation, not business policy.

  • InMice 24 minutes ago

    Not surprised. Another example is Minecraft-related queries. I'm searching with the intention of eventually landing on a certain wiki page at minecraft.wiki, but started to just read the summaries instead. They combine fan forums discussing desired features/ideas with the actual game bible at minecraft.wiki - mixing one source of truth with one source of fantasy. The result is ridiculously inaccurate summaries.

  • xnx an hour ago

    Ars rips off this original reporting, but makes it worse by leaving out the word "some" from the title.

    ‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk: https://www.theguardian.com/technology/2026/jan/11/google-ai...

      miltonlost 32 minutes ago

      Removing "some" doesn't make it worse. They didn't include "all" AI titles which it would. "Google removes AI health summaries after investigation finds dangerous flaws " is functionally equivalent to "Google removes some of its AI summaries after users’ health put at risk"

      Oh, and also, the Ars article itself still contains the word "Some" (on my AB test). It's the headline on HN that left it out. So your complaint is entirely invalid: "Google removes some AI health summaries after investigation finds “dangerous” flaws"

  • ipython 16 minutes ago

    ... at the same time, OpenAI launches their ChatGPT Health service: https://openai.com/index/introducing-chatgpt-health/, marketed as "a dedicated experience in ChatGPT designed for health and wellness."

    So interesting to see the vastly different approaches to AI safety from all the frontier labs.

  • jeffbee 28 minutes ago

    Google is really wrecking its brand with the search AI summaries thing, which is unbelievably bad compared to its Gemini offerings, including the free one. Its continued existence is baffling.

      gvedem 2 minutes ago

      Yeah. It's the final nail in the coffin of search, which now actively surfaces incorrect results when it isn't serving ads that deliberately pretend to be the site you're looking for. The only thing I use it for anymore is finding a site I know exists but don't know the URL of.

  • leptons 40 minutes ago

    Good. I typed in a search for some medication I was taking, and Google's "AI" summary was bordering on criminal. The WebMD site had the correct info, as did the manufacturer's website. Google hallucinated a bunch of stuff about it, and I knew then that they needed to put a stop to LLM slop about anything to do with health or medical info.

      chrisjj 32 minutes ago

      s/hallucinated/fabricated/, please.

  • jnamaya an hour ago

    ChatGPT told me I'm the healthiest guy in the world, and I believe it.