This thread reads like an advertisement for ChatGPT Health.
I came to share a blog post I just published, titled: "ChatGPT Health is a Marketplace, Guess Who is the Product?"
OpenAI is building ChatGPT Health as a healthcare marketplace where providers and insurers can reach users with detailed health profiles, powered by a partner whose primary clients are insurance companies. Despite the privacy reassurances, your health data sits outside HIPAA protection, in the hands of a company facing massive financial pressure to monetize everything it can.
> This thread reads like an advertisement for ChatGPT Health.
This thread has a theme I see a lot in ChatGPT users: They're highly skeptical of the answers other people get from ChatGPT, but when they use it for themselves they believe the output is correct and helpful.
I've written before on HN about my friend who decided to take his health into his own hands because he trusted ChatGPT more than his doctors. By the end he was on so many supplements and "protocols" that he was doing enormous damage to his liver and immune system.
The more he conversed with ChatGPT, the better he got at getting it to agree with him. When it started to disagree or advise caution, he'd blame it on overly sensitive guardrails, delete the conversation, and start over with an adjusted prompt. He'd repeat this until he had something to copy and paste to us to "prove" that he was on the right track.
As a broader anecdote, I'm seeing "I thought I had ADHD and ChatGPT agrees!" at an alarming rate in a couple of communities I'm in with a lot of younger people. This, combined with the TikTok trend of diagnosing everything as a symptom of ADHD, is becoming really worrying. In some cohorts, it's a rarity for someone to believe they don't have ADHD. There are also a lot of complaints from people who are angry their GP wouldn't just write a prescription for Adderall, plus tips for shopping around to find doctors who won't ask too many questions before prescribing.
Great write up. I'd even double down on this statement: "You can opt in to chat history privacy". This is really "You can opt in to chat history privacy on a chat-by-chat basis, and there is no way to set a default opt-out for new chats".
There are lots of companies that do this. Doesn't make it right.
The real "evil" here is that companies like Meta, Google, and now OpenAI sell people a product or service that the customer thinks is the full transaction. I search with Google, they show me ads - that's the transaction. I pay for Chatgpt, it helps me understand XYZ - that's the transaction.
But it isn't. You give them your data and they sell it - that's the transaction. And that obscurity is not ethical in my opinion.
> You give them your data and they sell it - that's the transaction
I think that's the wrong framing. Let's get real: They're pimping you out. Google and Meta are population-scale fully-automated digital pimping operations.
They're putting everyone's ass on the RTB street and in return you get this nice handbag--err, email account/YouTube video/Insta feed. They use their bitches' data to run an extremely sophisticated matchmaking service, ensuring the advertiser Johns always get to (mind)fuck the bitches they think are the hottest.
What's even more concerning about OpenAI in particular is they're poised to be the biggest, baddest, most exploitative pimp in world history. Instead of merely making their hoes turn tricks to get access to software and information, they'll charge a premium to Johns to exert an influence on the bitches and groom them to believe whatever the richest John wants.
Goodbye democracy, hello pimp-ocracy. RTB pimping is already a critical national security threat. Now AI grooming is a looming self-governance catastrophe.
No, but if I hear you telling someone you have the flu and are picking up flu medicine after work then I have a portion of your medical records. Why is it hard for people on HN to believe that normal people do not protect their medical data and email about it or search Google for their conditions? People in the "real world" hook up smart TVs to the internet and don't realize they are being tracked. They use cars with smart features that let them be tracked. They have apps on their phone that track their sentiments, purchases, and health issues... All we are seeing here is people getting access to smart technology for their health issues in such a manner that they might lower their healthcare costs. If you are an American you can appreciate ANY effort in that direction.
Depends on your goals. If you are starting a business and you see a company surpass the market cap of Apple, again, then you might view their business model as successful. If you are a privacy advocate then you will hate their model.
Well you said "is this any _worse_" (emphasis mine) and I could only assume you meant ethically worse. At which point the answer is kind of obvious because Google hasn't proven to be the most ethical company w.r.t. user data (and lots of other things).
I get that impression too - but also it's HN and enthusiastic early adoption is unsurprising.
My concern, and the reason I would not use it myself, is the all-too-frequent skirting of externalities. For every person who says "I can think for myself and therefore understand if GPT is lying to me," there are ten others who will take it as gospel.
The worry I have isn't that people are misled - this happens all the time especially in alternative and contrarian circles (anti-vaxx, homeopathy, etc.) - it's the impact it has on medical professionals who are already overworked who will have to deal with people's commitment to an LLM-based diagnosis.
The patient who blindly trusts what GPT says is going to be the patient who argues tooth and nail with their doctor about GPT being an expert, because they're not power users who understand the technical underpinnings of an LLM.
Of course, my angle completely ignores the disruption angle - tech and insurance working hand in hand to undercut regulation, before it eventually pulls the rug.
My uncle had an issue with his balance and slurred speech. Doctors claimed dementia and sent him home. It kept getting worse and worse. Then one day I entered the symptoms into ChatGPT (or was it Gemini?) and asked it for the top 3 hypotheses. The first one was related to dementia. The second was something else (I forget the long name). I took all 3 to his primary care doc, who had kept ignoring the problem, and asked her to try the other 2 hypotheses. She hesitantly agreed to explore the second one, and referred him to a specialist in that area. And guess what? It was the second one! They did some surgery and now he's fit as a fiddle.
I've heard a lot of such anecdotes. I'm not saying it's ill-intentioned, but the skeptic in me is cautious that this is the type of reasoning which propels the anti-vax movement.
I wish / hope the medical community will address stories like this before people lose trust in them entirely. How frequent are misdiagnoses like this? How often is "user research" helping or hurting the process of getting good health outcomes? Are there medical boards that are sending PSAs to help doctors improve common misdiagnoses? What's the role of LLMs in all of this?
I think the ultimate answer is that people must take responsibility for their own health and that of their children and loved ones. That includes research and double-checking your doctors. True, the result is that a good number of people will be convinced they have something (e.g. autism) that they don't. But the anecdotes are piled up into giant mountains at this point. A good number of people in my family have had at least one doctor that has been useless in dealing with a particular problem. It required trying to figure out what was wrong, then finding a doctor that could help before there were correct diagnoses and treatments.
Patients should always advocate for their own care.
This includes researching their own condition, looking into alternate diagnoses/treatments, discussing them with a physician, and potentially getting a second opinion.
Especially the second opinion. There are good and bad physicians everywhere.
But advocating also does not mean ignoring a physician's response. If they say it's unlikely to be X because of Y, consider what they're saying!
Physicians are working from a deep well of experience in treating the most frequent problems, and some will be more or less curious about alternate hypotheses.
When it comes down to it, House-style medical mysteries are mysteries because they're uncommon. For every "doc missed Lyme disease" story there are many more "it's just flu."
Nothing he stated suggests this. Not giving a nod to how difficult it is doesn't mean people don't care. Unfortunately it is still true, we all have to advocate for our own care and pay attention to ourselves. The fact that this negatively affects the people who need the most care and attention is a harrowing part of humanity we often gloss over.
A boxing referee says "Protect yourself at all times."
They do this not because it isn't their job to protect fighters from illegal blows, but because the consequences of illegal blows are sometimes unfixable.
An encouragement for patients to co-own their own care isn't a removal of a physician's responsibility.
It's an acknowledgement that (1) physicians are human, fallible, and not omniscient, (2) most health systems have imperfect information syncing across multiple parties, and (3) no one is going to care more about you than you (although others might be much more informed and capable).
Self-advocacy isn't a requirement for good care -- it's due diligence and personal responsibility for a plan with serious consequences.
If a doc misses a diagnosis and a patient didn't spend any effort themselves, is that solely the doctor's fault?
PS to parent's insinuation: 20 years in the industry and 15 years of managed cancer in immediate family, but what do I know?
This applies to all areas of life, not just medicine.
We trade away our knowledge and skills for convenience. We throw money at doctors so they'll solve the issue. We throw money at plumbers to turn a valve. We throw money at farmers to grow our veggies.
Then we wonder why we need help to do basic things.
> researching their own condition
what a joke. so if I am suffering with cancer, I should learn the lay of the land, treatments available ... wow. if I need to do everything, what am I paying for?
Face-time. Their knowledge, training, and ability to write letters. Just because it's expensive, doesn't mean they are spending their evenings researching possible patient conditions and expanding their knowledge. Some might, but this isn't TV.
Anyway, what are you paid for? Guessing a programmer, you just sit in a chair all day and press buttons on a magical box. As your customer, why am I having to explain what product I want and what my requirements are? Why don't you have all my answers immediately? How dare you suggest a different specialism? You made a mistake?!?
We are idiots who will bear the consequences of our own idiocy. The big issue with all transactions done under significant information asymmetry is moral hazard. The person performing the service has far less incentive to ensure a good outcome past the conclusion of the transaction than the person who lives with the outcome.
Applies doubly now that many health care interactions are transactional and you won't even see the same doctor again.
On a systemic level, the likely outcome is just that people who manage their health better will survive, while people who don't will die. Evolution in action. Managing your health means paying attention when something is wrong and seeking out the right specialist to fix it, while also discarding specialists who won't help you fix it.
But the effects aren't just financial; look in an ER. People who for one reason or another haven't been able to take care of themselves end up in the emergency room for things that aren't an emergency, and it means your standard of care is going to take a hit.
Neither does collective responsibility, for the same reason, particularly in any sort of representative government. Or did you expect people to pause being idiots as soon as they stepped into the ballot box to choose the people they wanted to have collective responsibility?
>But the anecdotes are piled up into giant mountains at this point
This is disorganized thinking. Anecdotes about what? Does my uncle having an argument with his doctor over needing more painkillers, combine with an anecdote about my sister disagreeing with a midwife over how big her baby would be, combined with my friend outliving their stage 4 cancer prognosis all add up to "therefore I'm going to disregard nutrition recommendations"? Even if they were all right and the doctors were all wrong, they still wouldn't aggregate in a particular direction the way that a study on processed foods does.
And frankly it overlooks psychological and sociological dynamics that drive this kind of anecdotal reporting, which I think are more about tribal group emotional support in response to information complexity.
In fact, reasoning from separate instances that are importantly factually different is a signature line of reasoning used by alien abduction conspiracy theorists. They treat the cultural phenomenon of "millions" of people reporting UFOs or abduction experiences over decades as "proof" of aliens writ large, when the truth is they are helplessly incompetent interpreters of social data.
> Does my uncle having an argument with his doctor over needing more painkillers, combine with an anecdote about my sister disagreeing with a midwife over how big her baby would be, combined with my friend outliving their stage 4 cancer prognosis all add up to "therefore I'm going to disregard nutrition recommendations"?
Not sure about your sister and uncle, but from my observations the anecdotes combine into “doctor does not have time and/or doesn’t care”. People rightfully give exactly zero fucks about Bayes theorem, national health policy, insurance companies, social dynamics or whatever when the doctor prescribes Alvedon after 5 minutes of listening to indistinct story of a patient with a complicated condition which would likely be solved with additional tests and dedicated time. ChatGPT is at least not in a hurry.
As of course you should be. Doctors, who are generally pretty caring and empathetic humans, try to invoke the mantra "You can't care about your patient's health more than they do" due to how deeply frustrating it is to try to treat someone who's not invested in the outcome.
It's when "being your own health advocate" turns into "being your own doctor" that the system starts to break down.
They’re not saying you’re crazy, they’re saying you may be helplessly incompetent when it comes to interpreting social data. You probably aren’t a good reader either, if crazy was your takeaway.
> I wish / hope the medical community will address stories like this before people lose trust in them entirely.
Too late for me. I have a similar story. ChatGPT helped me diagnose an issue which I had been suffering with my whole life. I'm a new person now. GPs don't have the time to spend hours investigating symptoms for patients. ChatGPT can provide accurate diagnoses in seconds. These tools should be in wide use today by GPs. Since they refuse, patients will take matters into their own hands.
GPs don't have time to do the investigation, but they also have biases.
My own story is one of bias. I spent much of the last 3 years with sinus infections (the parts when I wasn't on antibiotics). I went to a couple ENTs and one observed an allergic reaction in my sinuses and did a small allergy panel, but that came back negative. He ultimately wanted to put me on a CPAP and nebulizer treatments. I fed all the data I got into ChatGPT deep research and it came back with an NIH study that said 25% of people in a study had localized allergic reactions that would show up one place, but not show up elsewhere on the body in an allergy test. I asked my ENT about it and he said "That's not how allergies work."
I decided to just try second generation allergy tablets to see if they helped, since that was an easy experiment. It's been over 6 months since I've had a sinus infection, where before this I couldn't go 6 weeks after antibiotics without a reoccurrence.
There are over a million licensed physicians in the US. If we assume that each one interacts with five patients per weekday, then in the six months since you had this experience there would conservatively have been six hundred million patient interactions.
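Just to make that arithmetic explicit (a rough sketch in Python; the physician count and visits-per-weekday figures are the assumptions stated above, not measured data):

    # Back-of-the-envelope scale of US patient interactions over six months.
    # All inputs are assumptions from the comment above.
    physicians = 1_000_000        # roughly a million licensed US physicians
    patients_per_weekday = 5      # assumed average caseload
    weekdays = 26 * 5             # about 26 weeks of weekdays in six months

    interactions = physicians * patients_per_weekday * weekdays
    print(f"{interactions:,}")    # 650,000,000 -- on the order of 600M+

    # Even a 0.1% rate of genuinely bad interactions would produce
    # hundreds of thousands of memorable horror stories in six months.
    print(f"{interactions // 1000:,}")  # 650,000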
Now, obviously none of this math would actually hold up to any scrutiny, and there's a bevy of reasons that the quality of those interactions would not be random. But just as a sense of scale, and bearing in mind that a lot of people will easily remember a single egregious interaction for the rest of their life, and (very reasonably!) be eager to share their experience with others, it would require a frankly statistically impossible error rate for threads like these not to fill up with the most heinous, unpleasant, ignorant, and incompetent anecdotes anyone could ever imagine.
And this is just looking at the sheer scale of medical care, completely ignoring the long hours and stressful situations many doctors work in, patients' imperfect memories and one-sided recollections (that doctors can never correct), and the fundamental truth that medicine is always, always a mixture of probabilistic and intuitive judgement calls that can easily, routinely be wrong, because it's almost never possible to know for sure what's happening in a given body, let alone what will happen.
That E.N.T. wasn't up to date on the latest research on allergies. They also weren't an allergy specialist. They also were the one with the knowledge, skills, and insight to consider and test for allergies in the first place.
Imagine if we held literally any other field to the standard we hold doctors. It's, on the one hand, fair, because they do something so important and dangerous and get compensated comparatively well. But on the other hand, they're humans with incomplete, flawed information, channeling an absurdly broad and deep well of still insufficient education that they're responsible for keeping up-to-date, while looking at a unique system in unique circumstances and trying to figure out what, if anything, is going wrong. It's frankly impressive that they do as well as they do.
If you fully accept everything BobaFloutist says, what do you do differently?
Nothing. You just... feel more sympathetic to doctors and less confident that your own experience meant anything.
Notice what's absent: any engagement with whether the AI-assisted approach actually worked, whether there's a systemic issue with ENTs not being current on allergy research, whether patients should try OTC interventions as cheap experiments, whether the 25% localized-reaction finding is real and undertaught.
The actual medical question and its resolution get zero attention.
Also though...
You are sort of just telling people "sometimes stuff is going to not work out, oh also there's this thing that can help, and you probably shouldn't use it?"
What is the action you would like people to take after reading your comment? Not use ChatGPT to attempt to solve things they have had issues solving with their human doctors?
This is a doctor feeding the LLM a case scenario, which means the hard part of identifying relevant signal from the extremely noisy and highly subjective human patient is already done.
For every one "ChatGPT accurately diagnosed my weird disease" anecdote, how many cases of "ChatGPT hallucinated obvious bullshit we ignored" are there? 100? 10,000? We'll never know, because nobody goes online to write about the failure cases.
> nobody goes online to write about the failure cases.
Why wouldn't they? This would seem to be engagement bait for a certain type of Anti-AI person? Why would you expect this to be the case? "My dad died because he used that dumb machine" -- surely these will be everywhere right?
Let's make our beliefs pay rent in anticipated experiences!
Failure cases aren't just "patient died." They also include all the times where ChatGPT's "advice" aligned with their doctor's advice, and when ChatGPT's advice was just totally wrong and the patient correctly ignored it. Nobody knows how numerous these cases are.
These are failures to provide useful advice over and above what could be gotten from a professional. In the sense that ChatGPT is providing net-neutral (maybe slightly positive since it confirms the doctor's diagnosis) or net-negative benefits (in the case that it's just wasting the user's time with garbage).
> The study, from UVA Health’s Andrew S. Parsons, MD, MPH and colleagues, enlisted 50 physicians in family medicine, internal medicine and emergency medicine to put Chat GPT Plus to the test. Half were randomly assigned to use Chat GPT Plus to diagnose complex cases, while the other half relied on conventional methods such as medical reference sites
This is not ChatGPT outperforming doctors. It is doctors using ChatGPT.
If you keep hearing anecdotes, at what point is it statistically important? IBM 15 years ago was selling a story about a search engine they created specifically for the medical field (they had it on Jeopardy) where doctors had spent 10 years before they figured out this poor patient's issue. They plugged the original doctor's notes into it and the 4th result was the issue that took a decade to figure out. Memorizing dozens of medical books and being able to recall and correlate all that information in a human brain is a rare skill to be good at. The medical system works hard to ensure everyone going through can memorize, but clearly search engines/LLMs can be a massive help here.
> If you keep hearing anecdotes, at what point is it statistically important?
Fair question, but one has to keep in mind ALL the other situations we do NOT hear about, namely all the failed attempts that did take time from professionals. That doesn't mean the successful attempts aren't justified, only that a LOT of positive anecdotes might give the wrong impression when the far more numerous negative ones are simply not shared. It's hard to draw conclusions either way without both.
I hear about people winning the lottery all the time. There were two $100m+ winners just this week. The anecdotes just keep piling up! That doesn't mean the lottery is a valid investment tool. People just do not understand how statistically insignificant anecdotes are in a sufficiently large dataset. Just for the US population, a 1 in a million chance of something happening to a person should happen enough to be reported on a new person every weekday of the year.
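To put numbers on that (a minimal sketch; the population figure is a round approximation):

    # How often should a one-in-a-million event surface in the US?
    population = 340_000_000          # approximate US population
    p_event = 1 / 1_000_000

    expected_per_year = population * p_event   # 340 affected people per year
    weekdays_per_year = 52 * 5                 # 260 weekdays

    print(expected_per_year)                      # 340.0
    print(expected_per_year / weekdays_per_year)  # ~1.3 -- more than one per weekday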
You guys are getting downvoted but you're 100% right. You never hear the stories about someone typing symptoms into ChatGPT and getting back wrong, bullshit answers--or the exact answer their doctor would have told them. Because those stories are boring. You only hear about the miraculous cases where ChatGPT accurately diagnosed an unusual condition. What's the ratio of miracle:bullshit? 1:100? 1:10,000?
> the skeptic in me is cautious that this is the type of reasoning which propels the anti-vax movement
I think there's a difference between questioning your doctor, and questioning advice given by almost every doctor. There are plenty of bad doctors out there, or maybe just doctors who are bad fits for their patients. They don't always listen or pay close attention to your history. And in spite of their education they don't always choose the correct diagnosis.
I also think there's an ever-increasing difference between AI health research and old-school WebMD research.
well, to the credit of Bayes, dementia is likely a safe choice (depending on age, etc.), but dementia is largely a diagnosis of exclusion, and most doctors, besides being unfamiliar with Bayes, are also just plain lazy and/or dumb and shouldn't immediately jump to the most likely explanation when it's one with the worst prognosis and fewest treatments...
I work in biomed. Every textbook on epidemiology or medical statistics that I've picked up has had a section on Bayes, so I'm not inclined to believe this.
Here is research about doctors interpreting test results. It seems to favor GP's view that many doctors struggle to weigh test specificity and sensitivity vs disease base rate.
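For anyone unfamiliar with the failure mode that research describes, here is a minimal worked example of Bayes' theorem applied to a test result. The base rate, sensitivity, and specificity below are illustrative numbers, not taken from the study:

    # Probability of disease given a positive test (positive predictive value).
    # Illustrative inputs: a rare condition and a seemingly accurate test.
    base_rate = 0.01      # 1% of the tested population has the disease
    sensitivity = 0.90    # P(test positive | disease)
    specificity = 0.91    # P(test negative | no disease)

    p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    ppv = sensitivity * base_rate / p_positive

    # A positive result from a "90% accurate" test implies only ~9%
    # chance of disease: false positives from the healthy 99% swamp
    # the true positives. This is the base-rate trap the study points at.
    print(f"P(disease | positive test) = {ppv:.2f}")   # 0.09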
I'm on some anti-rejection meds post-transplant and ChatGPT'd some of my symptoms, and it said they were most likely caused by my meds. Two different nephrologists told me that the meds I'm on didn't cause those symptoms, before looking it up themselves and confirming they do. I think LLMs have a place in this as far as being able to quickly come up with hypotheses that can be looked into and confirmed/disproved. If I hadn't had ChatGPT, I wouldn't have brought it up, or my team would have just blamed lifestyle rather than meds.
I can see why, but this is doc+patient in collab. And driven by using science in the form of applying llm-as-database-of-symptoms-and-treatments.
Anti-vax, otoh, is driven by ignorance and a failure to trust science, in the form of trusting neither doctors nor new types of science. Plus, anti-vax works like flat earth: a signaling mechanism of poor epistemic judgment.
Linking this anecdote to anti-vaxxing really seems like a stretch, and I would like to see the reasoning behind that. My impression is that anti-vaxxers have more issues with vaccines themselves than with the doctors who recommend them.
I think that completely misreads a comment that was already painstakingly clear, they're specifically talking about the phenomenon of reasoning by anecdote. It wasn't a one-to-one equivalence between LLM driven medicine consultations and the full range of dynamics found in the anti-vax movement. Remember to engage in charitable interpretation.
They are closely related. The authority of the medical establishment is more and more questioned. And whenever it is correctly questioned, they lose a bit of their authority. It is only their authority that gets people vaccinated.
The fact is that many doctors do suck. Nearly all of my family members have terrible doctor stories, one even won a huge malpractice law suit. We can’t hide the real problems because we’re afraid of anti-vaxxers.
Generally the medical system is in a bad place. Doctors are often frustrated with patients who demand more attention to their problems. You can even see it for yourself on doctor subreddits when things like fibromyalgia are brought up. They ridicule these patients for trying to figure out why their quality of life has dropped like a rock.
I think, similar to tech, doctors are attracted to the money, not the work. The AMA (I think, possibly another org) artificially restricts the number of slots for new doctors, limiting doctor supply, while private equity squeezes hospitals and buys up private practices. The failure doctors sit on the insurance side, trying to prevent care from being performed, and it's up to the doctor who has the time/energy to fight insurance and the hospital to figure out what's wrong.
The AMA has no authority over the number of slots for new doctors. The primary bottleneck is the number of residency slots. Teaching hospitals are free to add more slots but generally refuse to do so due to financial constraints without more funding from Medicare. At one point the AMA lobbied Congress to restrict that funding but they reversed that position some years back. If you want more doctors then ask your members of Congress to boost residency funding.
Yeah, especially because he's not saying what the diagnosis was. If you want to say the doctors were unscientific, at least be scientific and give a proper medical account of the symptoms and diagnosis.
Humans are extraordinarily lazy sometimes too. A good LLM does not possess that flaw.
A doctor can also have an in-the-moment negatively impactful context: depression, exhaustion, or any number of life events going on, all of which can drastically impact their performance. Doctors get depressed like everybody else. They can care less due to something affecting them. These are not problems a good LLM has.
> cautious that this is the type of reasoning which propels the anti-vax movement
I hear you but there are two fundamentally different things:
1. Distrust of / disbelief in science
2. Doctors not incentivized to spend more than a few minutes on any given patients
There are many many anecdotes related to the second, many here in this thread. I have my own as well.
I can talk to ChatGPT/whatever at any time, for any amount of time, and present in *EXHAUSTIVE* detail every single datapoint I have about my illness/problem/whatever.
If I was a billionaire I assume I could pay a super-smart, highly-experienced human doctor to accommodate the same.
But short of that, we have GPs who have no incentive to spend any time on you. That doesn't mean they're bad people. I'm sure the vast majority have absolutely the best of intentions. But it's simply infeasible, economically or otherwise, for them to give you the time necessary to actually solve your problem.
I don't know what the solution to this is. I don't know nearly enough about the insurance and health industries to imagine what kind of structure could address this. But I am guessing that this might be what is meant by "outcome-based medicine," i.e., your job isn't done until the patient actually gets the desired outcome.
Right now my GP has every incentive to say "meh" and send me home after a 3-minute visit. As a result I more or less stopped bothering making doctor appointments for certain things.
The anecdote in question is not about mis-diagnosis, it's about a delayed diagnosis. And yeah, the inquiry sent a doctor down three paths, one of which led to a diagnosis, so let's be clear: no, the doctor didn't get it completely on their own, and: ChatGPT was, at best, 33% correct.
The biggest problem in medicine right now (that's creating a lot of the issues people have with it I'd claim) is twofold:
- Engaging with it is expensive, which raises the expectations of quality of service substantially on the part of the patients and their families
- Virtually every doctor I've ever talked to complains about the same things: insufficient time to give proper care and attention to patients, and the overbearingness of insurance companies. And these two lead into each other: so much of your doc's time is spent documenting your case. Basically every hour of patient work on their part requires a second hour of charting to document it. Imagine having to write documentation for an hour for every hour of coding you did, I bet you'd be behind a lot too. Add to it how overworked and stretched every medical profession is from nursing to doctors themselves, and you have a recipe for a really shitty experience on the part of the patients, a lot of whom, like doctors, spend an inordinate amount of time fighting with insurance companies.
> How often is "user research" helping or hurting the process of getting good health outcomes?
Depends on the quality of the research. In the case of this anecdote, I would say middling. I would also say though if the anecdotes of numerous medical professionals I've heard speak on the topic are to be believed, this is an outlier in regard to it actually being good. The majority of "patient research" that shows up is new parents upset about a vaccine schedule they don't understand, and half-baked conspiracy theories from Facebook. Often both at once.
That said, any professional, doctors included, can benefit from more information from whomever they're serving. I have a great relationship with my mechanic because by the time I take my car to him, I've already ruled out a bunch of obvious stuff, and I arrive with detailed notes on what I've done, what I've tried, what I've replaced, and most importantly: I'm honest about it. I point exactly where my knowledge on the vehicle ends, and hope he can fill in the blanks, or at least he'll know where to start poking. The problem there is the vast majority of the time, people don't approach doctors as "professionals who know more than me who can help me solve a problem," they approach them as ideological enemies and/or gatekeepers of whatever they think they need, which isn't helpful and creates conflict.
> Are there medical boards that are sending PSAs to help doctors improve common misdiagnoses?
Doctors have shitloads of journals and reading materials that are good for them to go through, which also factors into their overworked-ness but nevertheless; yes.
> What's the role of LLMs in all of this?
Honestly I see a lot of applications of them in the insurance side of things, unless we wanted to do something cool and like, get a decent healthcare system going.
I'm married to a provider. It is absolutely insane what she has to do for insurance. She's not a doctor, but she oversees extensive therapy for 5-10 kids at a time. Insurance companies completely dictate what she can and can't do, and frequently she is unable to do more in-depth, best-practice analysis because insurance won't pay for it. So her industry ends up doing a lot of therapy based on educated guesswork.

Every few months, she has to create a 100+ page report for insurance. And on top of it, insurance denies the first submissions all the time, which then causes her to burn a bunch of time on calls with the company appealing the peer review. And the "peer review" is almost always done by people who have no background in her field. It's basically akin to a cardiologist reviewing a family therapist's notes and deciding what is or isn't necessary. Except that my wife's job can be the difference between a child ever talking or not, or between a child being institutionalized or not when they become an adult.

People who think private insurance companies are more efficient than government-run healthcare are nuts. Private insurance companies are way worse and actively degrade the quality of care.
> Insurance companies completely dictate what she can and can't do, and frequently she is unable to do more in-depth, best-practice analysis because insurance won't pay for it.
The distinction between "can't do" and "can't get paid for" seems to get lost a lot with medical providers. I'm not saying this is necessarily what's happening with your wife, but I've had it happen to me where someone says, "I can't do this test. Your insurance won't pay for it," and then I ask what it costs and it's a few hundred or a couple thousand dollars and I say, "That's OK. I'll just pay for the test myself," and something short-circuits and they still can't understand that they can do it.
The most egregious example was a prescription I needed that my insurance wouldn't approve. It was $49 without insurance. But the pharmacy wouldn't sell it to me even though my doctor had prescribed it because they couldn't figure out how to take my money directly when I did have insurance.
I get that when insurance doesn't cover something, most patients won't opt to pay for it anyway, but it feels like we need more reminders on both the patient and the provider side that this doesn't mean it can't be done.
> The distinction between "can't do" and "can't get paid for" seems to get lost a lot with medical providers. I'm not saying this is necessarily what's happening with your wife, but I've had it happen to me where someone says, "I can't do this test. Your insurance won't pay for it," and then I ask what it costs and it's a few hundred or a couple thousand dollars and I say, "That's OK. I'll just pay for the test myself," and something short-circuits and they still can't understand that they can do it.
Tell me you've never lived in poverty without telling me.
An unexpected expense of several hundred to a couple thousand dollars, for most of my lived life both as a child and a young adult, would've ruined me. If it was crucial, it would've been done, and I would've been hounded by medical billing and/or gone a few weeks without something else I need.
I generally agree (and sympathize with your wife), but let's not present an overly rosy view of government run healthcare or single-payer systems. In many countries with such systems, extensive therapy simply isn't available at all because the government refuses to pay for it. Every healthcare system has limited resources and care is always going to be rationed, the only question is how we do the rationing.
Every healthcare system has problems, yes. However the spectre of medical debt and bankruptcy is a uniquely American one, so, IMHO, even if we moved to single-payer healthcare and every other problem stayed the same, but we no longer shoved people into the capitalist fuck-barrel for things completely outside their control, I think that's an unmitigated, massive improvement.
Well now you're talking about a different problem and moving the goalposts. It would be impossible for every other problem to stay the same under a single-payer system. That would solve some existing problems and create other new problems. In particular the need to hold down government budgets would necessarily force increased care rationing and longer queues. Whether that would be a net positive or negative is a complex question with no clear answers.
The statistics you see about bankruptcy due to medical debt are highly misleading. While it is a problem, very few consumers are directly forced into bankruptcy by medical expenses. What tends to happen is that serious medical problems leave them unable to work and then with no income and then with no income all of their debts pile up. What we really need there is a better disability welfare system to keep consumers afloat.
> Well now you're talking about a different problem and moving the goalposts.
I am absolutely not. I am reacting to what's been replied to what I've said. In common vernacular, this is called a "conversation."
To recap: the person who replied to me left a long comment about the various struggles and limitations of healthcare when subjected to the whims of insurance companies. You then replied:
> I generally agree (and sympathize with your wife), but let's not present an overly rosy view of government run healthcare or single-payer systems. In many countries with such systems, extensive therapy simply isn't available at all because the government refuses to pay for it. Every healthcare system has limited resources and care is always going to be rationed, the only question is how we do the rationing.
Which, at least how I read it, attempts to lay the blame for the lack of availability of extensive therapies at the feet of a government's unwillingness to pay, citing that every system has limited resources and care is always being rationed.
I countered, implying that while that may or may not be true, that lack of availability is effectively the status quo for the majority of Americans under our much more expensive and highly exploitative insurance-and-pay-based healthcare system, and that, even if those issues around lack of availability persisted through a transition to a single-payer system, it would at least relieve us of the uniquely American scourge of people being sent to the poorhouse, sometimes poor-lack-of-house, for suffering illnesses or injuries they are in no way responsible for, which in my mind is still a huge improvement.
> The statistics you see about bankruptcy due to medical debt are highly misleading. While it is a problem, very few consumers are directly forced into bankruptcy by medical expenses. What tends to happen is that serious medical problems leave them unable to work and then with no income and then with no income all of their debts pile up.
I mean, we can expand this if you like into a larger conversation about how insurance itself being tied to employment, and everyone being kept broke on purpose to incentivize them to take on debt to survive, places them on a debt treadmill their entire lives, which has been demonstrably shown to reduce quality and length of life, and introduces the notion that missing any amount of work, for no matter how valid a reason, has the potential to ruin your life. That is probably a highly suboptimal and inhumane way to structure a society.
> What we really need there is a better disability welfare system to keep consumers afloat.
I get where you’re coming from. I would argue the mistakes doctors make and the amount of times they are wrong literally dwarfs the amount of anti vaxers in existence.
Also the anti vax movement isn’t completely wrong. It’s now confirmed (officially) that the covid-19 vaccine isn’t completely safe and there are risks taking it that don’t exist in say something like the flu shot. The risk is small but very real and quite deadly. Source: https://med.stanford.edu/news/all-news/2025/12/myocarditis-v... This was something many many doctors originally claimed was completely safe.
The role of LLMs is they take the human bias out of the picture. They are trained on formal medical literature and actual online anecdotal accounts of patients who will take a shit on doctors if need be (the type of criticism a doctor rarely gets in person). The generalization that comes from these two disparate sets of data is actually often superior to a doctor.
Key word is “often”. Less often (but still often in general) the generalization can be a hallucination.
Your post irked me because I almost got the sense that there’s a sort of prestige, admiration and respect given to doctors that in my opinion is unearned. Doctors in my opinion are like car mechanics and that’s the level of treatment they deserve. They aren’t universally good, a lot of them are shitty, a lot are manipulative and there’s a lot of great car mechanics I respect as well. That’s a fair outlook they deserve… but instead I see them get these levels of respect that matches mother Theresa as if they devoted their careers to saving lives and not money.
No one and I mean no one should trust the medical establishment or any doctor by default. They are like car mechanics and should be judged on a case by case basis.
You know, for the parent post, how much money do you think those fucking doctors got to make a wrong diagnosis of dementia? Well over 700 for less than an hour of their time. And they don’t even have the kindness to offer the patient a refund for incompetence on their part.
> This was something many many doctors originally claimed was completely safe.
I never heard any doctors claim any of the covid vaccines were completely safe. Do you mind if I ask which doctors, exactly? Not institutions, not vibes, not headlines. Individual doctors. Medicine is not a hive mind, and collapsing disagreement, uncertainty, and bad messaging into “many doctors” is doing rhetorical work that the evidence has to earn.
> The role of LLMs is they take the human bias out of the picture.
That is simply false. LLMs are trained on human writing, human incentives, and human errors. They can weaken certain authority and social pressures, which is valuable, but they do not escape bias. They average it. Sometimes that helps. Sometimes it produces very confident nonsense.
> Your post irked me because I almost got the sense that there’s a sort of prestige, admiration and respect given to doctors that in my opinion is unearned. Doctors in my opinion are like car mechanics and that’s the level of treatment they deserve.
> No one and I mean no one should trust the medical establishment or any doctor by default. They are like car mechanics and should be judged on a case by case basis.
You are entitled to that opinion, but I wanted to kiss the surgeon who removed my daughter’s gangrenous appendix. That reaction was not to their supposed prestige, it was recognition that someone applied years of hard won skill correctly at a moment where failure had permanent consequences.
Doctors make mistakes. Some are incompetent. Some are cynical. None of that justifies treating the entire profession as functionally equivalent to a trade whose failures usually cost money rather than lives.
And if doctors are car mechanics, then patients are machines. That framing strips the humanity from all of us. That is nihilism.
No one should trust doctors by default. Agreed. But no one should distrust them by default either. Judgment works when it is applied case by case, not when it is replaced with blanket contempt.
> I never heard any doctors claim any of the covid vaccines were completely safe. Do you mind if I ask which doctors, exactly? Not institutions, not vibes, not headlines. Individual doctors. Medicine is not a hive mind, and collapsing disagreement, uncertainty, and bad messaging into “many doctors” is doing rhetorical work that the evidence has to earn.
There’s no data here. Many aspects of life are not covered by science because trials are expensive and we have to go with vibes.
And even on just vibes we often can get accurate judgements. Do you need clinical trials to confirm the ground is there when you leap off your bed? No. Only vibes, unfortunately.
If you ask people (who are not doctors) to remember this time they will likely tell you this is what they remember. I also do have tons of anecdotal accounts of doctors saying the Covid 19 vaccine is safe and you can find many yourself by searching. Here’s one: https://fb.watch/Evzwfkc6Mp/?mibextid=wwXIfr
The pediatrician failed to communicate the risks of the vaccine above and made the claim it was safe.
At the time, to my knowledge, the actual risks of the vaccine were not fully known and the safety was not fully validated. The overarching intuition was that the risk of detrimental effects from the vaccine was less than the risk+consequence of dying from Covid. That is still the underlying logic (and best official practice) today, even with the knowledge about the heart risk covid vaccines pose.
This doctor above did not communicate this risk at all. And this was just from a random google search. Anecdotal but the fact that I found one just from a casual search is telling. These people are not miracle workers.
> That is simply false. LLMs are trained on human writing, human incentives, and human errors. They can weaken certain authority and social pressures, which is valuable, but they do not escape bias. They average it. Sometimes that helps. Sometimes it produces very confident nonsense.
No it’s not false. Most of the writing on human medical stuff is scientific in nature. Formalized with experimental trials, which is the strongest form of truth humanity has, both practically and theoretically. This “medical science” is even more accurate than other black box sciences like psychology, as clinical trials have ultra high thresholds and even test for causality (in contrast to much of science, which only covers correlation and assumes causality through probabilistic reasoning).
This combined with anecdotal evidence that the LLM digests in aggregate is a formidable force. We as humans cannot quantify all anecdotal evidence. For example, I heard anecdotal evidence of heart issues with mRNA vaccines BEFORE the science confirmed it, and LLMs were able to aggregate this sentiment through sheer volumetric training on all complaints about the vaccine online and confirm the same thing BEFORE that Stanford confirmation was available.
> You are entitled to that opinion, but I wanted to kiss the surgeon who removed my daughter’s gangrenous appendix. That reaction was not to their supposed prestige, it was recognition that someone applied years of hard won skill correctly at a moment where failure had permanent consequences.
Sure, I applaud that. True hero work for that surgeon. I’m talking about the profession in aggregate. In aggregate in the US, 800,000 patients die or get permanently injured from a misdiagnosis every year. Physicians fuck up, and it’s not occasional. It’s often and all the fucking time. You were safer getting on the 737 MAX the year before they diagnosed the MCAS errors than you are getting an important diagnosis from a doctor and not dying from a misdiagnosis. Those engineers, despite widespread criticism, did more for your life and safety than doctors in general. That is not only a miracle of engineering but it also speaks volumes about the medical profession itself, which does NOT get equivalent criticism for mistakes. That 800,000 statistic is swept under the rug like car accidents.
I am entitled to my own opinion just as you are to yours but I’m making a bigger claim here. My opinion is not just an opinion. It’s a ground truth general fact backed up by numbers.
> And if doctors are car mechanics, then patients are machines. That framing strips the humanity from all of us. That is nihilism.
There is nothing wrong with car mechanics. It’s an occupation and it’s needed. And those cars if they fail they can cause accidents that involve our very lives.
But car mechanics are fallible and that fallibility is encoded into the respect they get. Of course there are individual mechanics who are great and on a case by case basis we pay those mechanics more respect.
Doctors need to be treated the same way. It’s not nihilism. It’s a quantitative analysis grounded in reality. The only piece of evidence you provided me in your counter is your daughter’s life being saved. That evidence warrants respect for the single doctor who saved your daughter’s life, not for the profession in general. The numbers agree with me.
And the treatment of, say, the corporation responsible for the MCAS failures versus the profession responsible for the medical misdiagnoses that killed people is disproportionate. Your own sentiment and respect for doctors in general is one piece of evidence for this.
> If you ask people (who are not doctors) to remember this time they will likely tell you this is what they remember. I also do have tons of anecdotal accounts of doctors saying the Covid 19 vaccine is safe and you can find many yourself by searching. Here’s one: https://fb.watch/Evzwfkc6Mp/?mibextid=wwXIfr
> No it’s not false. Most of the writing on human medical stuff is scientific in nature. Formalized with experimental trials, which is the strongest form of truth humanity has, both practically and theoretically. This “medical science” is even more accurate than other black box sciences like psychology, as clinical trials have ultra high thresholds and even test for causality (in contrast to much of science, which only covers correlation and assumes causality through probabilistic reasoning).
Sorry, but these kinds of remarks wreck your credibility and make it impossible for me to take you seriously.
If you disagree with me then it is better to say you disagree and state your reasoning why. If the disagreement is too foundational then it is better to state it as such and exit.
Saying something like my "credibility is wrecked" and impossible to take me "seriously" crosses a line into deliberate attack and insult. It's like calling me an idiot but staying technically within the HN rules. You didn't need to go there and breaking those rules in spirit is just as bad imo.
Yeah I agree I think the conversation is over. I suggest we don't talk to each other again as I don't really appreciate how you shut down the conversation with deliberate and targeted attacks.
With the pandemic, I've lost my faith in the medical community. They recommended a lot of unproven medicines. They were more based in ideology than in science. I trust an LLM more than the average doctor.
That's a tip I recommend people try when they are using LLMs to solve stuff. Instead of asking "how to...", ask "what alternatives are there to...". A top-k answer is way better, and you get to engage more with whatever you are trying to learn/solve.
Same if you are coding: ask "Is it possible", not "How do I", as the latter will more quickly result in hallucinations when you are asking for something that isn't possible.
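As a concrete sketch of the two framings (using the OpenAI Python client; the model name and prompt wording here are illustrative placeholders, not a recommendation):

    # Contrast a leading "how do I" prompt with an open "is it possible /
    # what alternatives" prompt. Requires the `openai` package and an API key.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Leading framing: presupposes it's possible, which invites confabulation.
    print(ask("How do I roll back a committed transaction in Postgres?"))

    # Open framing: invites a ranked set of real options, or a plain "you can't".
    print(ask("Is it possible to undo a committed transaction in Postgres? "
              "What alternatives are there?"))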
General doctors aren't trained for problem solving, they're trained for memorization. The doctors that are good at problem solving aren't general doctors.
That's a sweeping generalization unsupported by facts.
In reality you'll find the vast majority of GPs are highly intelligent and quite good at problem solving.
In fact, I'd go so far as to say their training is so intensive and expansive that laypeople who make such comments are profoundly lacking in awareness on the topic.
Physicians are still human, so like anything there's of course bad ones, specialists included. There's also healthcare systems with various degrees of dysfunction and incentives that don't necessarily align with the patient.
None of that means GPs are somehow less competent at solving problems; not only is it an insult but it's ridiculous on the face of it.
Even if they are good at problem solving, a series of 10-minute appointments spaced out in 2-3 month intervals while they deal with a case load of hundreds of other patients will not let them do it. That's the environment that most GPs work under in the modern U.S. health care system.
Pay for concierge medicine and a private physician and you get great health care. That's not what ordinary health insurance pays for.
>You followed up a sweeping generalization with a sweeping generalization and a touch of bias.
As opposed to what, proving that GPs are highly trained, not inherently inferior to other types of physicians, and regularly conduct complex problem solving?
Heck, while I'm at it I may as well attempt to prove the sky is blue.
>I imagine the issue with problem solving more lays in the system doctors are stuck in and the complete lack of time they have to spend on patients.
Maybe they are, but for most of my interactions with GPs in recent years, and several with specialists, for anything much beyond the very basics, I've had to educate them, and it didn't require much knowledge to exceed theirs on specific conditions.
In one case, a specialist made arguments that were trivially logically fallacious and went directly against the evidence from treatment outcomes.
In other cases, sheer stupidity of pattern matching, with rational thinking seemingly totally turned off. E.g. hearing I'd had a sinus infection for a long time, and insisting that this meant it was chronic, and chronic meant the solution was steroids rather than antibiotics, despite a previous course having done nothing and despite the fact that an antibiotic course had removed most of the symptoms, both indicating the opposite. In the end, after bypassing my GP at the time and explaining and begging an advanced nurse practitioner, I got two more courses of antibiotics and the infection finally fully went.
I'm sure all of them could have done better, and that a lot of it is down to dysfunction, such as too little time allotted to actually look at things properly, but some of the interactions (the logical fallacy in particular) have also clearly been down to sheer ignorance.
I also expect they'd eventually get there, but doing your own reading and guiding things in the right direction can often short-circuit a lot of bullshit that might even deliver good outcomes in a cost effective way on a population level (e.g. I'm sure the guidance on chronic sinus issues is right the vast majority of time - most bacterial sinus infections either clear by themselves or are stopped early enough not to "pattern match" as chronic), but might cause you lots of misery in the meantime...
Your personal experience is anecdotal and thus not as reliable as statistical facts. This alone is not a good metric.
However, your anecdotal experience is not only in line with my own experience. It is actually in line with the facts as well.
When the person you’re responding to said that what you said wasn’t backed up by facts, I’m going to tell you straight up that that statement was utter bullshit. Everything you’re saying here is true, generally true, and something many many patients experience.
>When the person you’re responding to said that what you said wasn’t backed up by facts, I’m going to tell you straight up that that statement was utter bullshit.
The person you just replied to here isn't the same person I replied to.
> In reality you'll find the vast majority of GPs are highly intelligent and quite good at problem solving.
Is this statement supported by facts? If anything, this statement is just your internal sentiment. If you claim his statement isn’t supported by facts, the proper thing to do is offer facts to counter it. Don’t claim his statement isn’t supported by facts, then make a counterclaim without facts yourself.
Read that fact: 800,000 deaths from misdiagnosis a year is pretty pathetic. And that's just deaths. I can guarantee you the number of unreported mistakes that don't result in death dwarfs that figure.
Boeing, the airplane manufacturer responsible for the 737 MAX MCAS crashes, had BETTER outcomes than this. In the year those planes crashed, you had a 135x better survival rate getting on a 737 MAX than getting an important diagnosis from a doctor and not dying from a misdiagnosis. Yet doctors are universally respected, and Boeing as a corporation was universally reviled that year.
I will say this: GPs are in general not very competent. They are about as competent and trustworthy as a car mechanic. There are good ones, bad ones, and also ones that bullshit and lie. Don't expect anything more than that, and this is supported by facts.
Yeah, the main fact here is called medical school.[0]
>Read that fact: 800,000 deaths from misdiagnosis a year is pretty pathetic. And that's just deaths.
Okay, and if that somehow flows from GPs (but not specialists!) being uniquely poor at problem solving relative to all other types of physicians—irrespective of wider issues inherent in the U.S. healthcare system—then I stand corrected.
>135x better survival rate getting on a 737 MAX
The human body isn't a 737.
>I will say this: GPs are in general not very competent. They are about as competent and trustworthy as a car mechanic.
How is going to medical school a measurement of problem solving ability? You need to cite a metric involving ACTUAL problem solving. For example, a misdiagnosis is a FAILURE at solving a problem.
Instead you say “medical school” and cite the Harvard handbook, as if everyone went to Harvard and the handbook were a quantitative metric of problem-solving success or failure. Come on man. Numbers. Not manuals.
> The human body isn't a 737
Are you joking? You know a 737 is responsible for ensuring the survival of human bodies hurtling through the air at hundreds of miles per hour, at altitudes higher than Mount Everest? The fact that your risk of dying is lower going through that than getting a correct diagnosis from a doctor is quite pathetic.
This statement you made here is manipulative. You know what I mean by that comparison. Don’t try to spin it like I'm not talking about human lives.
> Ignorant.
Being a car mechanic is a respectable profession. They get the typical respect of any other occupation and nothing beyond that. I’m saying doctors deserve EXACTLY the same thing. The problem is doctors sometimes get more than that and that is not deserved at all. Respect is earned and the profession itself doesn’t earn enough of that respect.
Are you yourself a doctor? If so, your response speaks volumes about the treatment your patients will get.
The one I had as a kid, well. He was old, stuck in old ways, but I still think he was decent at it.
But seeing the doctor is a bit more difficult these days, since the assistants are backstopping. They do some heavy lifting / screening.
I think an LLM could help with sorting symptoms and pointing at the most probable cause, but either way I wouldn't take it too seriously. And that is the general issue with ML: people take the output too seriously, at face value. What matters is: what are the cited sources?
You still need to score two standard deviations above the average college student to get into med school. As a rough proxy for intelligence, the bottom threshold for doctors is certainly higher than for lawyers.
In general your comment was false. You're just lying and making things up. There are lower-tier medical schools in California, Massachusetts, and almost every other state. The state, whether it's Kansas or somewhere else, is almost totally irrelevant to the quality of physicians produced.
No I'm not. I'm referring to a specific bad school (or schools) in Kansas. I never made a comment about Kansas itself.
I never said the state is correlated with the quality of the doctor, or even if the quality of the school is associated with the quality of the doctor. You made that up. Which makes you the liar.
>If you're referring to a specific school then name the school instead of making lame low-effort comments about a state.
You're fucking right. I should've named the specific school. (And I didn't make a comment about the state; I made a comment about school(s) in the state, which is not a comment about all schools in the state.)
That's what I should do. What you should do is: don't accuse me of lying and then lie yourself. Read the comment more carefully. Don't assume shit.
No point in continuing this. We both get it and this thread is going nowhere.
After undergoing stomach surgery 8 years ago I started experiencing completely debilitating stomach aches. I had many appointments with my GP and a specialist, leading to endoscopies, colonoscopies, CAT scans, and MRI scans, all to no avail, and they just kept prescribing more and more antacids and stronger painkillers.
It was after seven years of this that I paid for a private food allergy test, only to find that I am allergic to soya protein. Once I stopped eating anything with soya in it, the symptoms almost completely vanished.
At my next GP appointment I asked why no-one had suggested it could be an allergic reaction only to be told that it is not one of the things they check for or even suggest. My faith in the medical community took a bit of a knock that day.
On a related note, I never knew just how many foods you wouldn't expect contain soya flour until I started checking.
They're not even trained for memorization; they're trained for mitigation, and given the crap pay they receive I don't really blame them. Over the course of a 40-year career they basically make what a typical junior dev makes. It's fast becoming a rich man's hobby career.
It's about the same pay as a (professional) engineer. In the US, both engineers and doctors are very highly paid. In the UK and Japan they are paid about 50-100k if experienced, which is somewhere about 2-4x less than their US counterparts.
"According to the Government of Canada Job Bank, the median annual salary for a General Practitioner (GP) in Canada is $233,726 (CAD) as of January 23, 2024."
That's roughly $170,000 in the US. If you adjust for anything reasonable, such as GDP per capita or median income between the US & Canada, that $170k figure matches up very well with the median US general practitioner figure of around $180k-$250k (sources differ, all tend to fall within that range). The GPs in Canada may in fact be slightly better paid than in the US.
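To make the adjustment concrete, here's a rough back-of-envelope sketch in Python; the exchange rate and income ratio below are illustrative assumptions, not sourced figures:

    # Back-of-envelope GP salary comparison; FX rate and income ratio are assumed.
    cad_salary = 233_726     # median Canadian GP salary (CAD), per the Job Bank quote
    fx_rate = 0.73           # assumed CAD -> USD exchange rate
    income_ratio = 1.25      # assumed US/Canada ratio of median income (or GDP per capita)

    usd_nominal = cad_salary * fx_rate         # ~170,600 USD at market rates
    usd_adjusted = usd_nominal * income_ratio  # ~213,000 USD, inside the 180k-250k US range

    print(f"{usd_nominal:,.0f} USD nominal, {usd_adjusted:,.0f} USD adjusted")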
I wouldn't be surprised if AI were better than going to a GP, or many other specialists, in the majority of cases.
And the issue is not with the doctors themselves, but with the complexity of the human body.
For example, many digestive issues can cause migraines or a ton of other problems, yet I have yet to see someone referred to a gut-health professional because of a migraine.
And there are a lot of similar cases where some seemingly random system causes issues in an apparently unrelated one.
A lot of these problems are not life-threatening and thus just get ignored, as they would take too much effort and cost to pinpoint.
AI, on the other hand, should be pretty good at figuring out those vague issues that you would never have figured out otherwise.
> AI, on the other hand, should be pretty good at figuring out those vague issues that you would never have figured out otherwise.
Not least because it almost certainly has orders of magnitude more data to work with than your average GP (who definitely doesn't have the time to keep up with reading all the papers and case studies you'd need to even approach a "full view".)
And speaking of migraines, even neurological causes can apparently be tricky: Around here, cluster headaches would go without proper diagnosis for about 10 years on average. In my case, it also took about 10 years and 3 very confused GPs before one would refer me to a neurologist who in turn would come up with the diagnosis in about 30 seconds.
Since someone else asked and you said you didn't remember, do you think he may have had Normal Pressure Hydrocephalus (NPH)? And the surgery he had may have been a VP shunt (ventriculoperitoneal) -- something to move fluid away from his brain?
Quite a mouthful for the layman, and the symptoms you are describing would fit. NPH has one of my favorite mnemonics in medicine for students learning about the condition, describing the hallmark symptoms as: "Wet, Wobbly and Wacky."
Wet referring to urinary incontinence, Wobbly referring to ataxia/balance issues and Wacky referring to encephalopathy (which could mimic dementia symptoms).
Now that you mention it, it may have been NPH. The thing is, I did the chatting with ChatGPT and handed the printout to the doc. Biology was never my strong suit, so my eyes glaze over when I see words like "Hydrocephalus" :-D
The first was "dementia" (or something related to it, I don't remember the exact medical term). The second was something to do with fluid in some spinal column (I am sorry once again, I do not remember the medical term; they operated on him to drain it, which is why I remember it). I don't remember the third one, unfortunately.
Perhaps a CSF leak due to a dural sac tear in the spine? Was his symptom only having headaches while standing? Happened to my wife. 6 weeks of absolute hell.
On second thought — the opposite. A bulge/blockage of CSF?
Apparently, as I've recently learned due to a debilitating headache, CSF pressure (both high and low) can cause a whole host of symptoms, ranging from mild headache and blurred vision to coma and death.
It's pretty wild that a doctor wouldn't have that as a hypothesis.
Thanks for sharing. I struggled with long-term undiagnosed issues for so long. It took me 15 years of trying with doctors until one did a colonoscopy and found an H. pylori infection in 2018. He prescribed the right kind of antibiotics and it changed my life. In hindsight, my symptoms matched many of the infection's. No doctor figured it out.
So many doctors never bothered to conduct any tests. Many said it was in my head. Some told me to just exercise. I tried general doctors, specialists. At some point, I was so desperate that I went the homeopathy route.
15 years wasted. Why did the current system take 15 years?
I'd bet that if I had ChatGPT earlier, it could have helped me in figuring out the issue much faster. When you're sick, you don't give a damn who might have your health data. You just want to get better.
We programmers have the benefit of being able to torture and kill our patients at scale (unit and integration testing); doctors, less so. The diagnostic skill you hit in any given doctor may be relatively shallow, and they may be tired, overworked, or annoyed by a patient's self-expression... the results I've seen are commonly abysmal, and care providers are never shocked by poor care and misdiagnosis from other practitioners.
I have some statistically very common conditions and a family medical history with explicit confirmation of inheritable genetic conditions. Yet, if I explain my problems A to Z I’m a Zebra whose female hysteria has overwhelmed his basic reasoning and relationship to reality. Explained Z to A, well I can’t get past Z because, holy crap is this an obvious Horse and there’s really only one cause of Horse-itis and if your mom was a Horse then you’re Horse enough for Chronic Horse-itis.
They don’t have time to listen, their ears aren’t all that great, and the mind behind them isn’t necessarily used to complex diagnostics with misleading superficial characteristics. Fire that through a 20 min appointment, 10 of which is typing, maybe in a second language or while in pain, 6 plus month referral cycles and… presto: “it took a decade to identify the cause of the hoof prints” is how you spent your 30s and early 40s.
I thought H. pylori was diagnosed from a stool sample, which in my experience is the first thing you're asked for if you have any gastric issues. Was it only possible to find via the colonoscopy in your case, or did the doctors never do a stool test?
The sensitivity of stool samples seems to be less than 80%. The gold standard is gastroscopy, which is often performed anyway to rule out ulcers etc. This is the first time I've heard of a colonoscopy for H. pylori.
You could do a breath test for H. pylori. The colonoscopy was done as a general check by a specialist, so the doctor wasn't sure, but the colonoscopy covered the H. pylori check.
That story says a lot about where the gaps really are. Most doctors aren’t lacking raw intelligence, they’re just crushed for time and constrained by whatever diagnostic playbook their clinic rewards. A chatbot isn’t magic insight, it’s just the only “colleague” people can brainstorm with for as long as they need. In your uncle’s case it nudged the GP out of autopilot and back into actual differential diagnosis. I’d love a world where physicians get protected time and incentives to do that kind of broader reasoning without a patient having to show up with a print‑out from Gemini, but until then these tools are becoming the second opinion patients can actually obtain.
I can give you the exact opposite anecdote for myself. Spent weeks with Dr Google and one or another LLMs (few years ago so not current SOTA) describing myself and getting like 10 wrong possibilities. Took my best guess with me to a doctor who listened to me babble for 5 minutes and immediately gave me a correct diagnosis of a condition I had not remotely considered. Problem was most likely that I was not accurately describing my symptoms because it was difficult to put it into words. But also I was probably priming queries with my own expected (and mistaken) outcomes. Not sure if current models would have done a better job, but in my case at least, a human doctor was far superior.
Here’s something: my chatGPT quietly assumed I had ADHD for around 9 months, up until October 2025. I don’t suffer from ADHD. I only found out through an answer that began “As you have ADHD..”
I had it stop right there, and asked it to tell me exactly where it got this information; the date, the title of the chat, the exact moment it took this data on as an attribute of mine. It was unable to specify any of it, aside from nine months previous. It continued to insist I had ADHD, and that I told it I did, but was unable to reference exactly when/where.
I asked “do you think it’s dangerous that you have assumed I have a medical / neurological condition for this long? What if you gave me incorrect advice based on this assumption?” to which it answered a paraphrased mea culpa, offered to forget the attribute, and moved the conversation on.
It likely just hallucinated the ADHD thing in this one chat and then made this up when you pushed it for an explanation. It has no way to connect memories to the exact chats they came from AFAIK.
ChatGPT used the name on my credit card, a name which isn't uncommon, and started talking about my business, XYZ, that I don't have and never claimed to.
Did some digging and there was an obscure reference to a company that folded a long time ago associated with someone who has my name.
What makes it creepier is that they have the same middle name, which isn't in my profile or on my credit card.
When I signed up for ChatGPT, not only did I turn off personalization and training on my data, I even filled out the privacy request opt-out[1] that they're required to adhere to by law in several places.
Also, given that my name isn't rare, there are unfortunately some people with unsavory histories documented online with the name. I can't wait to be confused for one of them.
“ When I signed up for ChatGPT, not only did I turn off personalization and training on my data, I even filled out the privacy request opt-out …”
You did all of that but then you gave them your real name?
The Visa/MC payment network has no ability to transfer or check the cardholder name. Merchants act as if it does, but it doesn't. You can enter Mickey Mouse as your first and last name... it won't make any difference.
Only AMEX and Discover have the ability to validate names.
FWIW, I have a paid account with OpenAI, for using ChatGPT, and I gave them no personal information.
Do you think the majority of those people are lying or do you think it's possible that our pursuit of algorithmic consumption is actually rewiring our neural pathways into something that looks/behaves more like ADHD?
Personally, I'm on the fence. I suspect that I've always had a bit of that, but anecdotally, it does seem to have gotten worse in the past decade, but perhaps it's just a symptom of old age (31 hehehe).
> Do you think the majority of those people are lying
I don’t think they’re lying, but it is very clear that ADHD has entered the common vernacular and is now used as a generic term like OCD.
People will say “I’m OCD about…” as a way of saying they like to be organized or that they care about some detail.
Now it’s common to say “My ADHD made me…” to refer to getting distracted or following an impulse.
> or do you think it's possible that our pursuit of algorithmic consumption is actually rewiring our neural pathways into something that looks/behaves more like ADHD?
Focus is, and always has been, something that can be developed through practice. Ability to focus starts to decrease when you don’t practice it much.
The talk about “rewiring the brain” and blaming algorithms is getting too abstract, in my opinion. You’re just developing bad habits and not investing time and energy into maintaining the good habits.
If you choose to delete those apps from your phone or even just use your phone’s time limit features today, you could start reducing time spent on the bad habits. If you find something to replace it with like reading a book (ideally physical book to avoid distractions) or even just going outside for a 10 minute walk with your phone at home, I guarantee you’ll find that what you see as an adult-onset “ADHD” will start to diminish and you will begin returning to the focus you remember a decade ago.
Or you could continue scrolling phones and distractions, which will probably continue the decline.
This is a good place to note that a lot of people think getting a prescription will fix the problem, but a very common anecdote in these situations is that the stimulant without a concomitant habit change just made them hyperfocus on their distractions or even go deeper into more obsessive focus on distractions. Building the better habits is a prerequisite and you can’t shortcut out of it.
> Focus is, and always has been, something that can be developed through practice. Ability to focus starts to decrease when you don’t practice it much.
> The talk about “rewiring the brain” and blaming algorithms is getting too abstract, in my opinion. You’re just developing bad habits and not investing time and energy into maintaining the good habits.
> If you choose to delete those apps from your phone ...
I would like to add that focus is only one of the many aspects of ADHD, and for many people, it isn't even the biggest thing.
For many people, it's about the continuous noise in their mind. Brown noise or music can partly help with that.
For many, it's about emotional responses. It's the difference between hearing your boss criticise you and getting heart palpitations while mentally thinking "Shit, I'm going to get fired again", vs "Ahh next time I'll take care of this specific aspect". (Googling "RSD ADHD" will give more info.)
It's the difference between wanting to go to the loo because you haven't peed in 6 hours but you can't pull yourself off your chair, and... pulling yourself off your chair.
Focus is definitely one aspect. But between the task-positive network, norepinephrine, and the non-focus aspects of dopamine (including, believe it or not, more strength and less slouching!), there are a lot of differences.
Medications can help with many of these, albeit at the "risk" of tolerance.
(I agree this is a lot of detail and nuance for a random comment online, but I just felt it had to be said. Btw - all those examples... might've been from personal experience - without vs with meds.)
I have what you would call metric shittons of ADHD. Medically diagnosed. Was kicked outta university for failing grades and all. Pills saved me. If you think you have it, the best thing you can do for yourself is at least get a diagnosis done. In b4 people chime in that it can be faked: yes, the symptoms can be faked. But why would you, if you really want to know whether anything is wrong with you? (Hoping you aren't a TikTok content creator lurking here.)
I really hope this doesn't get lost in the sea of comments and don't feel pressured to answer any of them but:
what would you recommend if one is against the idea of medication in general for neurological issues that aren't detrimental to one's life?
do you feel the difference between being medicated and (strong?) coffee?
have you felt the effects weaken over time?
if you did drink coffee, have you noticed the medication's effects weakening on the same scale as caffeine's?
is making life easier with medication worth the cost over just dealing with it naturally, by adapting to it over time (if that's even possible in your case)?
this is a personal pet-project of observing how different people deal with ADHD.
I take Ritalin as needed, 20-30mg a day. A black coffee will usually make me just a little sleepier, if anything at all; a couple more will do the same. Ritalin can make me sleepy if I'm already deeply tired, but after ~30 min it will actually let me partially focus on off days, and get more work done on normal days. I may not need it every day.
> is making life easier with medication worth the cost over just dealing with it naturally, by adapting to it over time (if that's even possible in your case)?
I am now 20, admittedly "early" in my career. Through high school and the first two years of university I banged my head against ADHD and tried to just "power through it" or adapt. Medication isn't a magic bullet, but it is clear to me now that I can at least rely on it as a crutch while I improve myself and my lifestyle, to deal with what is, at least for me, truly a disability. Maybe one day I won't need it, but in the meantime I see no reason why attempt #3289 at turning my life around would suddenly work for real this time.
> what would you recommend if one is against the idea of medication in general for neurological issues that aren't detrimental to one's life?
Given that people with ADHD commit suicide 2x-4x more often than the general population [0], keep in mind that it's not detrimental until it suddenly is.
Also, it gets worse with age, so it's better to get under a doctor's care sooner rather than later.
ADHD is a debilitating neurological disorder, not a mild inconvenience.
Believe me, I wish that just drinking coffee and "trying harder" was a solution. I started medication because I spent two decades actively trying every other possible solution.
> what would you recommend if one is against the idea of medication in general for neurological issues that aren't detrimental to one's life?
If your neurological issues aren't impacting your life negatively, they aren't neurological issues. I don't know what else to say to this. Of course you shouldn't treat non-disorders with medication.
> do you feel the difference between being medicated and (strong?) coffee?
These do not exist in the same universe. It's not remotely comparable.
> have you felt the effects weaken over time?
Only initially, after the first few days. It stabilizes pretty well after that.
> if you did drink coffee, have you noticed the medication's effects weakening on the same scale as caffeine's?
Again, not even in the same universe. Also, each medication has different effects in terms of how it wears off at the end of the day. For some it's a pretty sudden crash, for others it tapers, and some are mostly designed to keep you at a long term level above baseline (lower peaks, but higher valleys).
> is making life easier with medication worth the cost over just dealing with it naturally, by adapting to it over time (if that's even possible in your case)?
If I could have solved the biological issue "naturally" I would have. ADHD comes with really pernicious effects that makes adaptation very challenging.
Unmanaged ADHD is dangerous, and incredibly detrimental to people's lives, but the extent of that may not be entirely apparent to somebody until after they receive treatment. I think the attitude of being against medication for neurological issues where it is recommended by medical professionals (including for something perceived not to be detrimental enough) is, to say the least, risky.
I would perhaps encourage you to do some reading into the real-world ways ADHD affects people's lives beyond just what medical websites say.
To answer your questions, though:
* Medication vs coffee: yes, I don't notice any effect from caffeine
* Meds weakening over time: nope
* Medication cost: so worth it (£45/mo for the drugs alone in the UK) because I was increasingly not able to adapt or cope and continuing to try to do so may well have destroyed me
Probably a bit of both: it's trendy to have a quirk, and modern life fucks up your attention span. Everyone wants to put a label on everything; remember when Facebook had a dropdown of like 60+ genders? I also know people who talk about "being on the spectrum" all the time. At first I thought it was a meme, but they genuinely believe they're autistic because they're #notliketheothers. At the end of the day everything is a spectrum and nobody is normal, and I'm not sure it's healthy to want to put a label on everything or medicate to fall back on the baseline.
The meme of ADHD as the "fucked up attention span disorder" has done immeasurable damage to people, neurotypical and ADHD alike. It is the attribute that is the least important to my life, but the most centered towards the neurotypical, or the other people it bothers.
> modern life fucks up your attention span
That said, this statement is true; it just feeds the fundamental misunderstanding of ADHD as a "dog-like instinct to go chase a squirrel" or whatever. Google is free, and so is ChatGPT if that's too hard.
> I'm not sure it's healthy to want to put a label on everything
I don't particularly care for microlabeling, but it's usually harmless, and nothing suggests the alternative of "just stop talking about your problems" is better. People create language because they want to label a shared idea. This is boomer talk (see "remember Facebook?"... no).
> or medicate to fall back on the baseline
I'm not sure "If you have ADHD you should simply suffer because medicine is le bad" is a great stance, but you're allowed I suppose
> it is the attribute that is the least important to my life
still one of the most common symptoms, and the one everyone uses to self-diagnose...
> because medicine is le bad
idk man, I've seen the ravages of medicine on people close to me. Years of ADHD medicine, antidepressant pills, anti-obesity treatments... They're still non-functional, obese and depressed, but now they're broke and think there truly is no way out of the pit because they "tried everything" (everything besides not playing video games 16 hours a day, eating junk food 24/7 and never leaving their bedroom, but the doctors don't seem to view this as a root cause).
Whatever you think, I believe some things are over-prescribed to the point of being a net negative to society. I never said ADHD doesn't exist or shouldn't be treated, btw; you seem to be projecting a lot. If it works for you, good. Personally, I prefer to change my environment to fit how my brain/body works, not to influence my brain/body by swallowing side-effect-riddled pills until death to fit into the fucked-up world we created and call "normality".
Unfortunately I don't think that's a good solution. Memories are an excellent feature, and you see them on most similar services now.
Yes, projects have their uses. But as an example: I do Python across many projects and non-projects alike. I don't want to have to tell ChatGPT exactly how I like my Python each and every time, or within each project. If it were just one or two items like that, fine, I could update its custom-instruction personalization. But there are tons of nuances.
The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful. When I randomly ask ChatGPT "Hey, could I automate this sprinkler?" it knows I use Home Assistant, I've done XYZ projects, I prefer Python, and I like DIY projects to a certain extent but am willing to buy, in which case the suggestion should be prosumer. Etc. etc. It's more like a real human assistant than a dumb bot.
I have not really seen ChatGPT learn who I “am”, what I “like” etc. With memories enabled it seems to mostly remember random one-off things from one chat that are definitely irrelevant for all future chats. I much prefer writing a system prompt where I can decide what's relevant.
I know what you mean, but the issue the parent comment brought up is real and "bad" chats can contaminate future ones. Before switching off memories, I found I had to censor myself in case I messed up the system memory.
I've found a good balance with the global system prompt (with info about me and general preferences) and project level system prompts. In your example, I would have a "Python" project with the appropriate context. I have others for "health", "home automation", etc.
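As a purely illustrative sketch of how that layering can look (these prompts are made up for the example, not my actual setup or anything OpenAI ships):

    Global custom instructions:
      "I'm a software developer. Keep answers concise, cite sources for
       factual claims, and don't assume medical conditions I haven't stated."

    "Python" project prompt:
      "Target Python 3.12, use type hints, prefer the standard library,
       follow PEP 8."

    "Health" project prompt:
      "Medications: <list>. Relevant history: <summary>. Flag possible
       interactions and link to primary sources where available."

The global layer carries who I am; each project carries only the context that domain needs.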
Maybe if they worked correctly they would be. I've had answers to questions be influenced needlessly by past chats and I had to tell it to answer the question at hand and not use knowledge of a previous chat that was completely unrelated other than being a programming question.
This idea that it is so much better for OpenAI to have all this information about you because it can make some suggestions seems ludicrous. How has humanity survived thus far without this? It sounds like you just need more connections with real people.
> The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful.
I could not disagree more. A major failure mode of LLMs in my experience is their getting stuck on a specific train of thought. Being forced to re-explain context each time is a very useful sanity check.
Not the parent poster but I’ve disabled memory and history and I can still see ChatGPT reference previous answers or shape responses based on previous instructions. I don’t know what I’m doing wrong or how to fix it.
Wasn’t there a static memory store from before the wider memory capabilities were released?
I remember having conversations asking ChatGPT to add and remove entries from it, and it eventually admitting it couldn’t directly modify it (I think it was really trying, bless its heart) - but I did find a static memory store with specific memories I could edit somewhere.
Googling and reading yourself allows you to assess and compare sources, and apply critical thinking and reasoning specific to yourself and your own condition. Using AI takes all this control away from you and trusts a machine to do the reasoning and assessing, which may be based on huge amounts of data which differ from yourself.
Googling allows you to choose the sources you trust, AI forces you to trust it as a source.
I know in Europe we have the GDPR regulations and in theory you can get bad information corrected but in practice you still need to know that someone is holding it to take action.
Then there's laundering of data between brokers.
One broker might acquire data via dubious means and then transfer it to another. In some jurisdictions, once that happens, the second company can do what they like with it without having to worry about the original source.
I feel like the right legal solution is to make the service providers liable, the same way that if you offered a service where you got diagnosed by a human and they fucked up, the service would be liable. And real liability, with developers and execs going to jail or being fined heavily.
The AI models are just tools, but the providers who offer them are not just providing a tool.
This also means if you run the model locally, you're the one liable. I think this makes the most sense and is fairly simple to draw a line.
This seems to be a memory problem with ChatGPT; in your case, I bet it was changing a lot of answers because of it. For me, it really liked referring to the fact that I have an ADU in my backyard, almost pointlessly, with something like "Since you walk the dogs before work, and you have a backyard ADU, you should consider these items for breakfast..."
I wonder if that's because so many people claim to have ADHD for dubious reasons, often some kind of self-diagnosis. Maybe because being "neurodivergent" is somewhat trendy, or maybe to get some amphetamines.
ChatGPT may have picked that up and now gives people ADHD for no good reason.
I help take care of my 80-ish year old mother. ChatGPT figured out in 5 minutes the reason behind a pretty serious chronic problem that her very good doctors hadn't been able to figure out in 3 years. Her doctors came around to the possibility, tested out the hypothesis, and it was 100% right. She's doing great now (at least with that one thing).
That's not to say that it's better than doctors or even that it's a good way to address every condition. But there are definitely situations where these models can take in more information than any one doctor has the time to absorb in a 12-minute appointment and consider possibilities across silos and specialties in a way that is difficult to find otherwise.
Something to think about: perhaps the problem is with the duration of the appointment, and the difficulty of getting one in the first place? Elsewhere in the world, doctors can and do spend more than 12 minutes figuring out what's wrong with their patients. It's the healthcare system that's broken, and it _can_ be fixed without resorting to chatgpt. That it won't is the reality, though
Can't really compete with LLMs on duration of attention - SOTA LLMs can ingest years of research on the spot, and spend however long you need on your case. No place on Earth has that many specialists available to people (much less affordable); you'd have to have 50% of the population become MDs, and that would still cover just one sub-specialty of one specialization.
> Elsewhere in the world, doctors can and do spend more than 12 minutes figuring out what's wrong with their patients.
Where? According to "International variations in primary care physician consultation time: a systematic review of 67 countries" Sweden is the only country on the planet with an average consultation length longer than the US.
"We found that 18 countries representing about 50% of the global population spend 5 min or less with their primary care physicians."
GP sessions being around 20 minutes is pretty standard in North American and European countries. You can't have standard hour-long GP sessions, as it'd become impossible to make a timely appointment, no matter which system.
Can confirm, having experienced both the US and Dutch systems now. In both countries my visit is only about 20 minutes, plus another 15-30 sitting in the lobby because the doctor is always running behind schedule.
In theory, the Dutch system will take care of you more quickly in "real" emergencies, as their urgent care (spoedpost) is heavily gatekept and you can only walk into a hospital if you're in the middle of a crisis. I tried to walk into the ER once because I needed an inhaler, and they told me to call the urgent-care hotline... this was a couple of months after I moved.
That said, I much prefer paying €1800/year in premiums with a €450 deductible compared to the absolute shitshow that is healthcare in the USA. Now that I've figured out how to operate within the system, it's not so bad. But when you're in the middle of a health crisis, it can be very disorienting to try and figure out how it all works.
Ever wonder why famous people and celebrities always seem so healthy? They have unfettered access to well-paid doctors. People with lots of money can spend literal days with GPs, constantly trying and testing things in feedback loops with the same doctor.
When people are forced to have a consultation, diagnosis, and treatment in 20 minutes, things are rushed and missed. Amazing things happen when trained doctors can spend unlimited time with a patient.
You make a good point, but the key here is that there are far fewer people with that kind of money. The lower volume of patients is what makes it possible. There are a lot more people in the middle class, so sessions have to be limited to ensure everyone has fair, equal and timely access to a doctor.
And of course, GPs typically diagnose more common problems, and refer patients to specialists when needed. Specialists have a lower volume of patients, and are able to take more time with each person individually.
Ever wonder why famous people and celebrities seem so unhealthy with mental health and substance abuse conditions? I'm all for improving affordable access to healthcare but most people wouldn't benefit from spending more time with doctors. It's a waste of scarce resources catering to the "worried well".
While some people are impacted by rare or complex medical conditions, that isn't the norm. The health and wellness issues that most consumers have aren't even best handled by physicians in the first place. Instead they could get better results at lower cost from nutritionists, personal trainers, therapists, and social workers.
Having worked in rare-disease diagnostics in a non-US country with good public healthcare, I saw that most patients had to fight their way to the correct specialty to get their diagnosis. Without the persistence of family or specific doctors, it's not possible.
AI might provide the most scalable way to give this level of access and quality to a much wider range of people. If we integrate it well and provide easy ways for doctors to interface with this type of system, it should be much more scalable, as verification should be faster.
The American Medical Association has long lobbied to reduce the number of medical schools, reduce the number of positions for new doctors, and limit what tasks nurse practitioners can do [1].
I had a friend who has now gotten several out-of-pocket MRIs, essentially against medical advice, because she believes her persistent headaches are from brain cancer.
Even after the first MRI essentially ruled this out, she fed the MRI to chatGPT which basically hallucinated that a small artifact of the scan was actually a missed tumor and that she needed another scan. Thousands wasted on pointless medical expenses.
Having friends in healthcare, I've heard how common this is now: someone coming in and demanding a set of tests based on ChatGPT.
They have explained that (a) tests with false positives can actually be worse for you, as they trigger even more invasive tests, and (b) insurance won't cover any of your ChatGPT requests.
Again, being involved in your care is important but disregarding the medical professional in front of you is a great way to set yourself up for substandard care.
No. Absolutely not. The government owes its people a certain duty of care to say “just because you can doesn’t mean you should.”
LLMs are good for advice 95% of the time, and soon that'll be 99%. But it is not the job of OpenAI or any LLM creator to determine the rules of what good healthcare looks like.
It is the job of the government.
We have certification rules in place for a reason. And until we can figure out how to independently certify these quasi-counselor robots to some degree of safety, it’s absolutely out of the question to release this on the populace.
We may as well say “actually, counseling degrees are meaningless. Anyone can charge as a therapist. And if they verifiably recommend a path of self-harm, they should not be held responsible.”
I know someone who used ChatGPT to diagnose themselves with a rare and specific disease. They paid out of pocket for some expensive and intrusive diagnostics that their doctor didn't want to perform and it came out, surprise, that they didn't have this disease. The faith of this person in ChatGPT remains nonetheless just as high.
I'm constantly amazed at the attitude that doctors are useless and that their multiple years of medical school and practical experience amounts to little more than a Google search. Or as someone put it, "just because a doctor messed up once it doesn't mean that you are the doctor now".
They're not useless but they're also human with limited time and limited amount of inputs.
To me it's crazy that doctors rarely ask me if I'm taking any medications for example, since meds can have some pretty serious side effects. ChatGPT Health reportedly connects to Apple Health and reads the medications you're on; to me that's huge.
> To me it's crazy that doctors rarely ask me if I'm taking any medications for example, since meds can have some pretty serious side effects.
This sounds very strange to me. Every medical appointment I've ever been to has required me to fill out an intake form where I list medications I'm taking.
Understanding drug interactions is the job of pharmacists (who are also doctors…of pharmacy). Instead of asking apple health or ChatGpt about your meds, please try talking to your pharmacist.
Doctors are wrong all the time as well. There are quite a few studies on this.
I would in no way trust a doctor over ChatGPT at this point. At least with ChatGPT I can ask it to cite the sources proving its conclusions. Then I can verify them. I can’t do that with a doctor it’s all “trust me bro”
Many, many, many doctors (including at a top-rated children's hospital in the US) spent 4+ years unsuccessfully trying to diagnose a very rare disease that my younger daughter had. Scores of appointments and tests. By the time she was 13, she weighed 56 lbs (25 kg) and was barely able to walk 100 yards. Psychiatrists even tried to imply that it was all imaginary and/or that she had an eating disorder.
Eventually, one super-nerdy intern walking rounds with the resident in the teaching hospital remembered a paper she had read, mentioned it during the case review, and they ran tests which confirmed it. They began a course of treatment and my daughter now lives normally (with the aid of daily medication.)
I fed a bunch of the early tests and case notes to ChatGPT and it diagnosed the disease correctly in minutes.
I surely wish we had had this technology a dozen years ago.
Same here, right now (couldn't get up without numb back pain, can barely walk). ChatGPT educated me on the quadratus lumborum muscle and how to address it... which was a lot better than my brain going "well, I'm wheelchair-bound".
Yep same, with the caveat that any actionable advice requires actual research from reliable sources afterwards (or at least making it cite sources).
I mean, I kinda get the concerns about misleading people, but... are people really that dumb? Okay, if it's telling you to drink more water: common sense. If you're scrubbing up to perform an at-home leg amputation because it misidentified a bruise, then that's really on you.
Yes, absolutely. The US has measles back in rotation because people are "self-educating" (aka taking to heart whatever idiocy they read online without a 2nd thought), and you think people self diagnosing with a sycophant sentence generator is anything but a recipe for disaster?
If we build a bridge over this river, sure, people can get across the river, but what about if the bridge fails and people fall into the water! Let's not build the bridge instead.
Same here. It's a double-edged sword, though. I know some people who work in health care, including some doctors. They deal with a lot of hypochondriacs: people who imagine they have all sorts of issues and then try to MacGyver themselves to better health. You can't read an HN thread on health care issues without dozens of those coming out of the woodwork to share their magical, special way of beating the system. Silicon Valley has a long history of people who did all sorts of weird crap. There's a well-known anecdote about Steve Jobs turning orange when he restricted himself to a diet of carrots because he believed god knows what. In the end he died young of pancreatic cancer. Probably not connected, but a smart person who did some wacky stuff that probably wasn't good for him.
I'm on statins that have side effects that I'm experiencing. That's a common thing. ChatGPT was useful for me to figure out some of that. I've had other minor issues where even just trying to understand what the medication I'm being prescribed is supposed to do can be helpful. Doctors aren't great at explaining their decisions. "Just take pill x, you'll be fine".
Doctors have to diagnose patients in a way that isn't that different from how I would diagnose a technical issue. Except they are starved for information and have to get all their information out of a 10-15 minute consult with a patient that is only talking about vague symptoms. It's easy to see how that goes wrong sometimes or how they would miss critical things. And they get to deal with all the hypochondriacs as well. So they have to poke through that as well and can't assume the patient is actually being truthful/honest.
LLMs are useful tools if you know how to use them. But they can also lead to a lot of confirmation bias. The best doctors tell you what you need to hear, not what you want to hear. So, tools like this are great and now a reality that doctors need to deal with whether they like it or not.
Some of the Covid crisis intersected with early ChatGPT usage. It wasn't pretty. People bought into a lot of nonsense that they came up with while doom scrolling Reddit, or using early versions of LLMs. But things have improved since then. LLMs are better and less likely to go completely off the rails.
I try to look at this a bit rationally: I know I don't get the best care possible all the time because doctors have to limit time they spend on me and I'm publicly insured in Germany so subject to cost savings. I can help myself to some extent by doing my homework. But in the end, I have to trust my doctor to confirm things. My mode is that I use ChatGPT to understand what's going on and then try to give my doctor a complete picture so he has all the information needed to help me.
I personally don’t care who has access to my health data, but I understand those who might.
Either way, I’m excited for some actual innovation in the personal health field. Apple Health is more about aggregating data than actually producing actionable insights. 23andme was mostly useless.
Today I have a ChatGPT project with my health history as a system prompt and it’s been very helpful. Recently I snapped a photo of an obscure instrument screen after taking a test and was able to get more useful information than what my doctor eventually provided (“nothing to worry about”, etc.) ChatGPT was able to reference papers and do data analysis which was pretty amazing, right from my phone (e.g fitting my data to a model from a paper and spitting out a plot).
If an insight led you or a family member to be misdiagnosed and crippled, would you just say it's their fault, or your own? If it were a doctor, would you have the same opinion?
> But I don’t know if I should be denied access because of those people.
That's the majority of people, though. If you really think that, I assume you wouldn't have a problem with needing to be licenced to have this kind of access, right?
Depends. If you're talking about a free online test I can take to prove I have basic critical thinking skills, maybe, but that's still a slippery slope. As a legal adult with the right to consent to all sorts of things, I shouldn't have to prove my competence to someone else's satisfaction before I'm allowed autonomy to make my own personal decisions.
If what you're suggesting is a license that would cost money and/or a non-trivial amount of time to obtain, it's a nonstarter. That's how you create an unregulated black market and cause more harm than leaving the situation alone would have. See: the wars on drugs, prostitutes, and alcohol.
People are very good at ignoring warnings, I see it all the time.
There's no way to design it to minimise misinformation, the "ground truth" problem of LLM alignment is still unsolved.
The only system we currently have to allow people to verify they know what they are doing is through licencing: you go to training, you are tested that you understand the training, and you are allowed to do the dangerous thing. Are you ok with needing this to be able to access a potentially dangerous tool for the untrained?
There is no way to stop this at this point. Local and/or open models are capable enough that there is just a short window before attempts at restricting this kind of thing will just lead to a proliferation of services outside the reach of whichever jurisdiction decides to regulate this.
If you want working regulation for this, it will need to focus on warnings and damage mitigation, not denying access.
> Recently I snapped a photo of an obscure instrument screen after taking a test and was able to get more useful information than what my doctor eventually provided (“nothing to worry about”, etc.) ChatGPT was able to reference papers and do data analysis which was pretty amazing, right from my phone (e.g fitting my data to a model from a paper and spitting out a plot).
If you don't mind sharing, what kind of useful information is ChatGPT giving you based off of a photo that your doctor didn't give you? Could you have asked the doctor about the data on the instrument and gotten the same info?
I'm mildly interested in this kind of thing, but I have a severe health anxiety and do not need a walking hypochondria-sycophant in my pocket. My system prompts tell the LLMs not to give me medical advice or indulge in diagnosis roulette.
In one case it was a urinary flow test (uroflowmetry). The results go to a lab and then the doctor gets the summary. I was able to identify the issue, its prevalence, etc., and educate myself about treatment and risks before seeing a doctor. Papers gave me distributions of flow by age, sex, etc., so I knew mine was out of range.
In another case I uploaded a CSV of CGM data, analyzed it and identified trends (e.g. Saturday morning blood sugar spikes). All in five minutes on my phone.
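For anyone who'd rather keep that data local, the same trend-spotting is only a few lines of pandas. A minimal sketch, assuming a CSV with "timestamp" and "glucose_mg_dl" columns (real CGM exports vary, so adjust the names):

    import pandas as pd

    # Minimal sketch; column names are assumptions - match them to your CGM export.
    df = pd.read_csv("cgm_export.csv", parse_dates=["timestamp"])

    # Keep morning readings (06:00-10:59) and average glucose by day of week.
    mornings = df[df["timestamp"].dt.hour.between(6, 10)]
    by_day = (mornings
              .groupby(mornings["timestamp"].dt.day_name())["glucose_mg_dl"]
              .mean()
              .round(1))

    # A Saturday at the top would match the spike pattern described above.
    print(by_day.sort_values(ascending=False))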
What evidence do you have that providing your health information to this company will help you or anyone (other than those with financial interest in the company)?
There is a very real, near definite, chance that giving your, and others', health data to this company will hurt you and others.
Will you still hold this, "I personally don’t care who has access to my health data", position?
I personally have been helped by talking to ChatGPT about my healthcare. That's the evidence. I will take concrete positive health outcomes now, over your fears of the future.
I'm definitely a privacy-first person, but can you explain how health data could hurt you, besides obvious things like being discriminated against for insurance if you have a drug habit or whatever? Like, I'm a fitness-conscious 30-something white male; what risk is there in my appendix operation being common knowledge, or in it being known that I need more iron or something?
Well maybe your health data picks up a heart condition you didn't know about.
Maybe you don't know but your car insurance drops you due to the risk you'll have a cardiac event while driving. Their AI flagged you.
You need a new job but the same AI powers the HR screening and denies you because you'll cost more and might have health problems. You'd never know why.
You try to take out a second mortgage on the house to pay for expenses, just to get back on your feet, but the AI-powered risk officer judges your payback potential to be 0.001% underneath the target, and you are denied.
The previously treatable heart condition is now dire due to the additional stress of no job, no car and no house, and the financial situation continues to erode.
You apply for assistance but are denied because the heart condition is treatable and you're then obviously capable of working and don't meet the standard.
What if you were a woman seeking medical treatment for an ectopic pregnancy?
"Being able to access people's medical records is just another tool in law enforcement's toolbox to prosecute people for stigmatized care."
They are already using the legal system to force their way into your medical records and prosecute you under their new anti-abortion rulings.
Is your point 'I have no major health conditions, so nobody could be hurt by releasing health data'? If so, I don't think I need to point out the gap in this logic.
Actually, maybe you do. I am extremely privacy conscious, so I'm on your side on this one, but health data is a bit different from handing over all your email and purchase information to Google. In that scenario the danger is that the political or religious or whatever attributes I may have could be exposed to a future regime that considers what is acceptable today to no longer be so, and uses them to profile and... whatever me, right? What actual danger is there from a government or a US tech company having my blood work details when I have nothing to hide, like drug abuse or alcohol etc.? Health data seems much less risky than my political views, religion, sexuality, minor crimes committed and so on.
Something that is not yet known to be an indicator that you’re at risk of a condition.
Perhaps you were given some medication that is later proven harmful. Maybe there’s a sign in your blood test results that in future will strongly correlate with a condition that emerges in your 50s. Maybe a study will show that having no appendix correlates with later issues.
How confident are you that the data will never be used against you by future insurance, work screening, dating apps, immigration processes, etc
Depends on the data - if you had genetic data they might run PGS and infer that even though you are healthy now, your genes might predispose you to something bad and deny insurance based on that. If you truly do not see dangers of health data access remember that they could genotype you even when you came just for ordinary bloodwork.
Fortunately I live in a country where one cannot be denied insurance, but yeah, I didn't think of these really. It was a bit of a "typed before I really thought" moment; maybe I should put the keyboard down ;).
It seems like an easy fix with legislation, at least outside the US, though. Mandatory insurance for all with reasonable banded rates, and maximum profit margins for insurers?
I wasn't saying there is no danger, just that I hadn't really thought about it or seen the problem; your sibling comments have changed that. Maybe I am naive, but I was asking genuinely, not stating that I think otherwise. Unfortunately I have family members in the US, and pretty much all of them happily sent their DNA off to various services, so I'm fucked either way at this point...
Right. So able bodied, and the gender and race least associated with violence from the state.
> being discriminated against for insurance if you have a drug habit
"drug habit", Why choose an example that is often admonished as a personal failing? How about we say the same, but have something wholly, inarguably, outside of your control, like race, be the discriminating factor?
Now imagine a, let's say, 'sympathetic to the Nazi agenda' administration takes control of the US government's health and state-sanctioned-violence services. They decide to use those tools to address all of the people they consider 'undesirables'.
Your DNA says you have "one drop" of the undesirables' blood, from some ancient ancestor you were unaware of, and this administration tells you they are going to discriminate against your insurance because of it, based on some racist pseudoscience.
You say, "but I thought i was a 30 something WHITE male!!" and they tell you "welp, you were wrong, we have your medical records to prove it", you get irate that somehow your medical records left the datacenter of that llm company you liked to have make funny cat pictures for you and got in their hands, and they claim your behavior caused them to fear for their lives and now you are in a detention center or a shallow grave.
"That's an absurd exaggeration." You may say, but the current admin is already removing funding, or entire agencies, based on policy(DEI etc) and race(singling out Haitian and Somali immigrants), how is it much different from Jim Crow era policies like redlining?
If you find yourself thinking, "I'm a fitness conscious 30 something white male, why should I care?", it can help to develop some empathy, and stop to think "what if I was anything but a fitness conscious 30 something white male?"
If there's no evidence that it will help you or others, then that's a pretty hard position to argue against. The parent commenter asked about this, and the response basically was that it didn't seem likely to be harmful, and now you're responding to that.
Yes, of course. "Assuming it's entirely useless, why give your data to anyone" is a hard position to argue against, but unfortunately it's also completely pointless because of the unproven assumption. Besides, there are already enough indications in this thread alone that it is very useful to many.
Quite - personal data should remain under your control so it's always going to be a bad deal to "give" your data to someone else. It may well make sense to allow them to "use" your data temporarily and for a specific purpose though.
> I personally don’t care who has access to my health data
There's a reason this data is heavily regulated. It's deeply intimate and gives others enormous leverage over you. This is also why the medical industry can charge premium rates while often providing poor service. Something as simple as knowing whether you need insulin to survive might seem harmless, but it creates an asymmetric power dynamic that can be exploited. And we know these companies will absolutely use this data to extract every possible gain.
I'm sorry, but seriously? How could you not care who has your health data?
I think the more plausible reading is "I've been protected my whole life by health data privacy laws, so I have no idea what the other side looks like."
Quite frankly, this is even worse, as it can and will override doctors' orders and feed into people's delusions as an "expert."
I’d rather have all my health data be used in a way that can actually help me, even with a risk of a breach or misuse, than having it in a folder somewhere doing nothing.
In general, health insurance companies (at least in the US) are pretty much prevented from using any health data to set premiums. In fact, many US states prevent insurers from charging smokers higher premiums.
It doesn't have to get to your employer; it just has to get to the enormous industry of grey-market data brokers, who will supply the information to a third party, who will supply it to another third party, who performs the recruitment-based analytics your employer (or their contracted recruitment firm) uses. Employers already use demographic data to bias their decisions all the time. If your objection is "There's no way conversations with ChatGPT would escape the interface in the first place," are you... familiar with Web 2.0?
I've had mixed experiences with doctors. Often they're glancing at my chart for two minutes before an appointment, and that's the extent of their concern for me.
I’ve also lived in places where I don’t have a choice in doctor.
What is it with you people and privacy? Sure, it's a problem, but to be _this_ affected by it? Your hospitals already have your data. Google probably already has whatever health questions you've searched.
What's the worst that can happen with OpenAI having your health data, versus the best case? You're no different from AI doomers who claim AI will take over the world: nonsensical predictions that give undue weight to the worst possible outcomes.
Your health data could be used in the future, when technology is more advanced, to infer things about you that we don't even know about, and target you or your family for it.
Health data could also be used now to spot trends and problems that an assembly-line health system doesn't optimize for.
I think in the US, you get out of the system what you put into it - specific queries and concerns with as much background as you can muster for your doctor. You have to own the initiative to get your reactive medical provider to help.
Using your own AI subscription to analyze your own data seems like immense ROI versus a distant theoretical risk.
It feels like everyone is ignoring the major part of the other side's argument. Sure, shared health data can be used against you in the future, but it can be used to help you right now as well. Anyone who has lived with any sort of pain will try any available method to get rid of it. And that's fair when those methods are useful, even with a 50% success rate.
I'm in the same boat as them, I honestly wouldn't care that much if all my health data got leaked. Not saying I'm "correct" about this (I've read the rest of the thread), just saying they're not alone.
It's always been interesting to me how religiously people manage to care about health data privacy, while not caring at all if the NSA can scan all their messages, track their location, etc. The latter is vastly more important to me. (Yes, these are different groups of people, but on a societal/policy level it still feels like we prioritize health privacy oddly more so than other sorts of privacy.)
This is exaggerated. AI is accurate enough that our sniff tests will get us far. ChatGPT just doesn't hallucinate all that often.
You can have the same problem with doctors who don't give you even 5 minutes of their time and who don't have time to read through all your medical history.
AI-guided self-medication is certainly problematic. Rubber-ducking your symptoms for free for as long as you need and then asking a doctor for their 2-minute opinion is IMHO the best way to go about healthcare in 2026.
I live in a place where I can get anything related to healthcare and even surgery within the same day at an affordable price, and even here I've wasted days going to various specialists who just tried to give me useless meds.
If you live in a place where you need an appointment 3 months in advance, you will most certainly benefit from going in with your last ChatGPT summary.
Thailand is my go-to for healthcare in private hospitals. I heard good things about Singapore too. Taiwan's public hospitals were great too, albeit not as flashy.
I've had serious trouble with my knee and elbow for years, and ChatGPT helped me immensely after a good couple dozen doctors just told me to take ibuprofen and rest, and never talked to me for longer than 3 minutes. As with most things LLM, there are many opponents who say "if you do what an LLM says you will die," which is correct, while most people who look positively on using LLMs for health advice report that they used ChatGPT to diagnose something. Having a conversation with ChatGPT based on reports and scans, and figuring out what follow-up tests to ask for or what questions to put to a doctor, makes sense for many people. Just like asking an LLM to review your code is awesome and helpful, while asking an LLM to write your code is an invitation for trouble.
I understand all the chatter about LLMs hallucinating, or making assumptions, or not being able to understand or provide the more human/emotional element of health care.
But the question I ask myself is: is this better than the alternative? if I wasn't asking ChatGPT, where would I go to get help?
The answers I can anticipate are: questionably trustworthy web content; an overconfident friend who may have read questionably trustworthy web content; my mom, referencing health recommendations from 1972. As best I can imagine, LLMs are likely to provide health advice that's as good as, and probably better than, any of those alternatives.
With that said, I acknowledge that people are likely more inclined to trust ChatGPT like a licensed medical provider, at which point the comparison becomes murkier, especially with higher-severity health concerns.
ChatGPT helped me solve a side effect I had with a medication just by suggesting a change to dose timing. Solid improvement to my QoL from one small change. My doctor completely agreed with the suggestion.
When I got worried about an exercise suggestion from an app I'm using (the weight used for prone dumbbell leg curls), ChatGPT confirmed there is a suggested upper limit on weight for that exercise and that I should switch it out. I appreciate not injuring myself. (Gemini gave a horrible response, heh...)
ChatGPT is dangerous because it is still too agreeable, and when you go outside what it knows, the answers get wrong fast; but when it is useful, it is very useful.
There is nothing wrong with obtaining additional, even false, information from any source that is available to you. (AI, Search, Websites/Blogs, Podcasts, influencers, word-of-mouth, etc)
It's what you do with that information that is important - the correct path is to take your questions to a medical professional. Only a medical professional can give you a diagnosis, they can also answer other questions and address incorrect information.
ChatGPT is very good for providing you with new avenues to follow-up upon, it may even help discover the correct condition which a doctor had missed. However it is not able to deliver a diagnosis, always leave that to a medical professional.
This actually differs very little from people Googling their symptoms, where the result was the same: take the new information to your medical professional, and remember to get a second opinion (or more) for any serious medical condition, or for issues which do not seem to be fully resolved.
This is the same as Googling your symptoms, but on a broader scale. I think the issue here is how many people are going to give themselves self-induced health anxiety because of the results.
There is no denying the positive cases of people actually being helped by ChatGPT. It's well known that doctors can often dismiss symptoms of rare conditions, and those people specifically find far more success on the internet, because people with similar conditions tend to gather there. This effect will repeat with ChatGPT.
This isn't feasible for a huge swathe of the USA, often because of costs/insurance but sometimes literally just accessibility/availability. A few years ago it took me nearly 8 months to find a PCP in my city that was accepting new patients (and, wee, they dropped my insurance less than a year after).
Is there a proven and guaranteed way to do this? Because otherwise it sounds very idealistic, almost like "if everything were somehow better, then things would be less bad". Doctor time will always be scarce. It sounds like it delays helping people in the here and now in order to solve some very complicated system-wide problem.
LLMs might make doctors cheaper (and reduce their pay) by lowering demand for them. The law of supply and demand then implies that care will be cheaper. Do we not want cheaper care? Similarly, LLMs reduce the backlog, so patients who do need to see a doctor can be seen faster, and they don't need as many visits.
LLMs can also break the stranglehold of medical schools: It's easier to become an auto-didact using an LLM since an LLM can act like a personal tutor, by answering questions about the medical field directly.
LLMs might be one of the most important technologies in medicine.
Maybe time to ask AI why you’re looking for a technical solution rather than addressing the gaslighting that has left you with such piss-poor medical care in the richest country on earth?
If it's not solved in the richest country, maybe it's not so easy to solve, unless you want to hand-wave the difficult parts and just describe it as "rich people being greedy."
It's such a dysfunctional situation that the "rich people being greedy" is the most likely explanation. Either that or the U.S. citizenry are uniquely stupid amongst rich countries.
It’s a physician who gets paid a subscription by a small panel of patients.
Pros: more time spent with patients, access to a physician basically 24/7, sometimes included are other amenities (labs, imaging, sometimes access to rx at doctors office for simple generics, gym discounts, eye doctor discounts, etc)
Cons: it's an extra yearly cost to get access to that physician, ranging from a few hundred US dollars to $1.5k-3k (or tens of thousands or more); those who aren't financially lucky enough to be that well off don't get such access.
—-
That said, some of us do this on the side to augment our salary a bit, as medicine has become too much of a business based on quantity and not quality. It's sad to hear from patients that a simple small-town family doc like myself can spend 20-30 minutes with a patient when other providers barely spend 3. My regular patients usually get 20-30 minutes with me on a visit, unless it's a quick one for refills, and I don't leave until they are done and have no questions. My concierge patients get 1 hour minimum, and longer if they like. I offer free in-depth medical record review, where I sometimes get boxes of old records to go through someone's medical history if they are a new concierge patient.

Had a lady recently dealing with neuropathy and paresthesias for years. Normal blood counts. Long story short: she had moderate iron deficiency, vitamin B6 deficiency from a history of taking isoniazid for TB in a different country, and biopsy-proven celiac disease. Neuropathy basically gone with iron and B6 supplements and a celiac diet, after I recommended a GI eval for endoscopy. It takes time to dig into charts like this, and CMS doesn't pay the bills to keep the clinic lights on while seeing patients like that all the time. This is why we are in such a bad place healthcare-wise in the USA: we have chosen quantity over quality, and the powers that be are number crunchers, not actual health care providers. It serves us right for letting admins take over, and we are all paying the price.
So much more I want to say, but I don't think many will read this. If you do read this and don't like your doctor, please look around. There are still some of us out there who care about quality medicine and try our best to spend time with the patient. If you got one of those "3-minute doctors," look for another, or consider establishing care with a resident clinic at an academic center, where you can be seen by resident doctors and their attending physicians. It's not the most efficient, but I can almost guarantee those resident physicians will spend a good chunk of time with you to help you as much as they can.
> It’s a physician who gets paid a subscription by a small panel of patients
That's how it works here too, in PCP-Centric plans. The PCP gets paid, regardless if the patient shows up or not. But is also responsible to be the primary contact point for the patient with the health system, and referrals to specialists.
Yes, in my area if you need to find a new doctor you literally can't. This is a major city. The online booking for any major hospital network literally shows no results because the next appointment would be 90+ days out. If you have an existing relationship maybe you can get in in two weeks.
If the GP can handle my problem, I probably didn't need to go to the doctor anyway. A lot of care is done by specialists, and it can _easily_ take weeks or months to get an appointment with one. This is strongly dependent on one's insurance network though.
While you are technically correct, we live in the real world. People are busy and/or broke. Many cannot afford to go to the doctor every time they get the sniffles or have a question. Doing some preliminary research is fine and, I’d argue, responsible.
For better or worse, even before the advent of LLMs, people were simply Googling whatever their symptoms were and finding a WebMD or Mayo Clinic page. Well, if they were lucky. If they weren't lucky, they would find some idiotic blog post by someone who claimed that they cured their sleep apnea by drinking cabbage juice.
I vibe coded an app and recorded all the things happening to my 50-something body. I shared that list with a few MDs -- they were useless. They literally can't handle anything except acute cases.
It's like telling someone to ask their doctor about nutrition. It's not in their scope any longer. They'll tell you to try things and figure it out.
The US medical industry abdicated their thing a long time ago. Doctors do something I'm sure, but discuss/advise/inquire isn't really one of them.
This was multiple doctors, in multiple locations, in various modalities, after blood tests and MRIs and CT scans. I live with literally zero of my issues resolved even a little tiny bit. And I paid a lot of money out of pocket (on top of insurance) for this experience.
I babbled some symptoms I did not understand to a doctor who correctly diagnosed me with a very rare condition in 30 seconds. And that's after spending weeks prodding LLMs (~2 years ago) and getting nowhere.
I think the main point is not to "of course" either side of this. Use every tool and resource available to you, but don't bag on people for doing or not doing one or the other. "Ask your doctor" is presumptuous for people who already have, and need more.
It can go both ways. The difference is that Dr. Chat's opinion takes 5 seconds and is free. It can be just as useless as a doctor who prescribes some med to mask your symptoms instead of understanding why you have them.
Medical training is designed to produce operators who add value to corporate health systems: prescribe pills, do procedures, or anything else that generates "billable hours." Actually educating patients to be healthy would only reduce corporate health system profits. Why do you think we have been fighting the "war on cancer" since the 60s? Now "personalized medicine" and synthetic peptide and complex immunotherapies are the latest twist, with costs into 5 figures (orders of magnitude greater than standard therapies) and efficacy better by a factor of 2 at best. Many treatments promise "partial response rate" increases from 10% to 50%, yet a partial response is not a significant improvement.
AI is a disaster waiting to happen. As it is simply a regurgitation of what has already been said by real scientists, researchers, and physicians, it will be the "entry drug" used to advertise expensive therapies.
Thank goodness our corporations have not stooped to providing healthcare in exchange for blood donation, skin donation, or other organ donation. But I can imagine UnitedHealthcare merging with Blackstone so that people who need healthcare can get "health care loans."
Actual access to reliable healthcare is a massive assumption to make; not everyone has incredible health insurance or lives in a country with sufficient doctors and medical staff. Most places are in crisis for lack of resources. I'd rather ask ChatGPT or Gemini about something urgent than wait 5+ hours in the ER for the doctor to say "just take some aspirin and go to a walk-in tomorrow."
Not to mention, going to an ER for something that doesn't turn out to be an emergency carries a high risk of coming back home with something significantly worse.
Last time I was in the ER, I accompanied my wife; we got bounced due to lack of an appropriate doctor on site, she ended up getting help in another hospital, and I came back home with a severe case of COVID-19.
Related: every pediatrician I've been to with my kids during flu season says the same thing: if you can't get an appointment at a local clinic, stay home; avoid hospitals unless the kid develops life-threatening symptoms, as visiting such places carries a high risk of the kid catching something even worse (usually RSV).
There are only two places I still routinely wear a mask (n95) these days: Airplanes from waiting at the gate until about 10 minutes after takeoff when the air handling system has had time to clear things out (and the same after landing), and hospital/doctors visits. It's such a high ROI.
We used to observe that our kid(s) got sick every time we flew over the winter break to visit family. We no longer have this problem. (we do still have kids.) Not getting sick turns out to be really quite nice. :-) Hanging out in the pediatrician's office surrounded by snotty, coughing children who are not mine...
I'm also Australian and some of these comments have really made me re-appreciate what we have in Medicare. Damn, it's got its issues, but the American attitudes towards their healthcare system are downright bleak. Deeply worrying that the prevailing attitude seems to be "But ChatGPT is so good" rather than "Our healthcare system is so bad." Remind me to visit my GP next week to thank them.
If you don't need to be physically seen to make a determination, most hospitals and networks operate phone lines where you can speak with a nurse who will triage symptoms and either recommend home remedies or an appointment as needed.
I'm not sure if this has switched entirely to video calls or not, but when it became popular it was a great way to avoid overloading urgent care and general physicians with non-urgent help requests.
As someone who was recently injured and waited three months to see a specialist in Seattle, these lines were not helpful ("yes, you should make an appointment"). The only way I was able to see someone was to write a script that blew up my phone when I got a cancellation window email (the first two I missed even though I responded within 30 seconds).
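For the curious, the "blow up my phone" watcher can be as simple as polling the inbox and firing a loud push notification. A minimal sketch, assuming an IMAP mailbox, an ntfy.sh topic the phone subscribes to, and a hypothetical "cancellation" subject-line keyword used by the clinic's emails (all placeholders, not the commenter's actual script):

```python
# Poll an IMAP inbox for cancellation-window emails and alert the phone.
import imaplib
import time
import requests

IMAP_HOST = "imap.example.com"                        # hypothetical mail server
USER, PASSWORD = "me@example.com", "app-password"     # placeholders
NTFY_TOPIC = "https://ntfy.sh/my-appointment-alerts"  # phone subscribes to this topic

def unseen_cancellation_ids(mail):
    mail.select("INBOX")
    # Search unread mail whose subject mentions a cancellation.
    status, data = mail.search(None, '(UNSEEN SUBJECT "cancellation")')
    return data[0].split() if status == "OK" else []

while True:
    mail = imaplib.IMAP4_SSL(IMAP_HOST)
    mail.login(USER, PASSWORD)
    for msg_id in unseen_cancellation_ids(mail):
        # "Blow up the phone": ntfy's urgent priority re-alerts loudly.
        requests.post(NTFY_TOPIC,
                      data="Appointment slot opened - book NOW",
                      headers={"Priority": "urgent"})
        mail.store(msg_id, "+FLAGS", "\\Seen")  # don't alert twice
    mail.logout()
    time.sleep(30)  # cancellations go fast, so poll every 30 seconds
```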
Yeah, those lines are for triage, not specialty care. It's nice when you've got an infant and are a new parent and everything is terrifying, or a fever and want to know if it is bad enough to warrant going in somewhere.
Exactly, they're not an alternative to a doctor, which is the point... it's nearly impossible to see a provider these days if you don't have a pre-existing relationship. I moved recently and finding a PCP who is accepting new patients is also maddening.
I'm not fond of the fact that it's owned by Amazon but I use OneMedical and I can get a call to a doctor ~immediately, or to my regular doctor within a day or so.
I took an at-home flu test, messaged my doctor at no cost telling him I'd tested positive (he didn't even ask for a picture), and paid $25 from a tax-free account the same day. My doctor is part of a large hospital system too; he didn't want me to come in, just sent the Rx.
I have no job and no health insurance. After crafting my prompt correctly (I have W symptoms, X blood markers, Y lifestyle, and Z demography), ChatGPT accurately diagnosed my problem. (You have RED-S and need to eat more food, dumbass.)
Or, I could've gone to a doctor and overloaded our healthcare system even more.
It depends on where you live and what the issue is.
Where I live, doctors are only good for life threatening stuff - the things you probably wouldn't be asking ChatGPT anyway. But for general health, you either:
1. Have to book in advance, wait, and during the visit doctor just says that it's not a big deal, because they really don't have time or capacity for this.
2. You go private, the doctor goes on a wild hunt with you, you spend a ton of time and money, and then 3 months later you get the answer ChatGPT could have told you in a few minutes for $20/mo (and probably with better-backed, more recent research).
If anything, the only time ChatGPT answers wrong on health-related matters is when it tries to be careful and omits details because of the "be advised, I'm not a doctor, I can't give you this information" bullshit.
Until very recently, it took a week to get an appointment with my primary care doctor, and calls weren't an option. Now that video calls are an option, I can get one in a day or two. I could always go to urgent care to get an answer faster, but that costs more.
"we’ve worked with more than 260 physicians" yet not a single one of their names is proudly featured in this article. Well, the article itself does not even have an author listed. Imagine trusting someone who doesn't even disclose their identity with your sensitive data.
I'm kind of torn on this. On one side, I can't seem to trust doctors any more. I recently had a tooth removed (on the advice of two different doctors) on the claim that it would resolve my pain, which it did not, and now 3 different doctors don't know what's causing my pain.
Most doctor advice boils down to "drink some water and take a painkiller," delivered after glancing at my medical history for 15 seconds and dedicating 7 minutes to me, after which they move on to yet another patient.
So compared to this, an AI that can analyze all my medical history and has access to all publicly available medical research could be a very good tool to have.
But at the same time technofeudalism, dystopia, etc.
Unfortunately, doctors are fallible humans, but also the infallible gatekeepers of pharmaceuticals and surgeries. I've become relatively knowledgeable about a condition I have, to the extent that I'll often have much more subject knowledge than a given non-specialist doctor. All the same, they have to make on-the-spot decisions about me that will have serious consequences for both of us, in between twenty similar cases on either side.
As an aside, I'm very sorry for what you're going through. Empathy is easy when you've had something similar! I'll say that in my case, removing a tooth that was impinging on a nerve did have substantial benefits that only became clear months down the line. I'm not saying that will happen for you, but a bit of irrational optimism seems to be an empirically useful policy.
Before I moved to where I live now, a doctor's office opened in my neighborhood that I could walk to. At first I thought it was amazing, and I started going there. It was a really fancy place, state of the art: loads of diagnostic equipment and a limited on-site lab, almost a hospital. But pretty soon I realized I was almost always seeing nurse practitioners, or doctors so fresh out of medical school they were still wet behind the ears.
Even worse, they were almost always wrong about the diagnosis, and I'd find myself on 3 or 4 rounds of antibiotics, or would go to the pharmacy to pick something up and be told the cocktail I had just been prescribed had dangerous contraindications. I finally stopped going when I caught a doctor searching WebMD during my fourth return visit for a simple sinus infection that had turned into a terrible ear infection.
My next doctor wasn't much better. And I had really started to lose trust in the medical system and in medical training.
We moved a few years ago to a different city, and I hadn't found a doctor yet. One day I took sick with something, went to a local walk-in clinic in a strip mall used mostly by the local underprivileged immigrant community.
Luck would have it I now found an amazing doctor who's been 100% correct in every diagnosis and line of care for both me and my wife since - including some difficult and sometimes hard to diagnose issues. She has basically no equipment except a scale, a light, a sphygmomanometer, and a stethoscope. Does all of her work using old fashioned techniques like listening to breathing or palpation and will refer to the local imaging center or send out to the local lab nearby if something deeper is needed.
The difference is absolutely wild. I sometimes wonder if she and my old doctors are even in the same profession.
I guess what I'm trying to say is, if you don't like your doctor, try some other ones until you find a good one, because they can be a world difference in quality -- and don't be moved by the shine of the office.
Yes, I've found the more financially motivated doctors in the higher end "concierge" type centers are not as skilled or experienced or overall motivated as the ones who seek out the patients with difficult cases at government reimbursement rates. The irony...
This is part of the reason alternative medicine has become so popular. There are definitely still trustworthy doctors out there, but I share your experience of feeling left with no recourse but to take care of things myself, after seeing multiple doctors who made it very clear they had no interest or time to listen to me.
You've clearly touched the problem with healthcare in general though. If it's not life threatening, it's not taken seriously.
There are a lot of health related issues humans can experience that affect their lives negatively that are not life threatening.
I'm gonna give you a good example: I've suffered from mild skin issues for as long as I can remember. It's not a big deal, but I want my skin to be in better condition. I went through tens of doctors, and they all prescribed essentially some variation of the Tylenol equivalent for skin. With AI, I've been able to identify the core problems that every licensed professional overlooked.
Brusqueness? More like insensitivity, lack of empathy, and ignorance.
My 12-year-old daughter (correctly) diagnosed her own food allergy after multiple trips to the ER for stomach pains that resulted in "a few Tylenol/Advil with a glass of water."
This isn't a criticism of you, I don't know your full story. But I think many people have a misconception of the role of an ER. I know an ER doctor well, and the role of an ER is to, in approximate order of priority:
1. Prevent someone from dying
2. Treat severe injuries
3. Identify if what someone is experiencing is life-threatening or requires immediate treatment to prevent their condition worsening
4. Provide basic treatment and relief for a condition which is determined not to be an imminent threat
In particular, they are not for diagnosing chronic conditions. If an ER determines that someone's stomach pain is not an imminent, severe threat to their health, then they are sending them out of the ER with medication for short-term relief in order to make room for people who are having an emergency. The ER doc I know gets very annoyed at recurring patients who expect the ER to help them diagnose and treat their illness. If you go the ER, they send you home, and the thing happens again, make an appointment with a physician (and also go to the ER if you think it's serious).
Unfortunately, the medical system is very confusing and difficult to navigate. This is a big part of why so many people end up at ERs who should be making appointments with non-emergency doctors - finding a doctor and making appointments is often hard and stressful, while an ER will look at anyone who walks through the doors.
That's kind of how allergies are discovered, though. Doctors will tell you to go on a restrictive diet and binary-search for the trigger if it doesn't cause anaphylaxis. Based on my experience with allergies, if it's not anaphylaxis, then allergies aren't considered super important for doctors to resolve. Finally, the immune system is complicated, and your daughter may have an unusual reaction that is not IgE-mediated. In other words, it could be a reaction to a foreign protein rather than an antibody-driven histamine spike, in which case: yes, it's extremely unpleasant and feels like an allergy, but because it doesn't lead to anaphylaxis it's not treated as a medical concern.
Doctors who treat shit that you can treat on the spot, where it either gets better or it doesn't, tend to be really good. Surgeons in particular. Doctors who treat shit without clear causes, where you give medicine and sometimes it kinda improves, tend to be pretty bad.
This is both a liability and a connectedness issue.
All the downsides you listed can be solved by public open source models.
The ones we have are pretty good already and I would hope that they only get better in the near future.
Once you can run it on your machine, you can safely give it all your data and much more.
Of course, I would still like a human doctor, but it might be a better tool for personal research before you see an expert than what we had in the past.
I’m married to a doctor. We both complain about the medical system. It’s terrible. One of my biggest complaints is doctors seem to have no clue how to get symptoms out of patients in a way that translates to diagnosis.
I’ve had longstanding GI issues. I have no idea how to describe my symptoms. They sure seem like a lot of things, so I bring that list to my doc and I’m met with “eh, sounds weird”.
By contrast, I solved my issues via a few sessions with Claude. I was able to rattle off a whole list of symptoms, details about how it's progressed, diets I've tried, recent incidents, supplements/meds I've taken. It comes up with hypotheses, research references, and forum discussions (forums are so useful for understanding, even if they can be dangerously wrong). We dive into the science of those leading causes to understand the biochemistry involved. That leads to a really deep understanding of what we can test.
Turns out there's a very clear correlation between histamine levels and the issues I deal with. I realized a bunch of stuff that I thought was healthy (and is, for most people) was probably destroying my health. I cut those out, and my GI issues have literally disappeared. Massive, massive life improvement from a relatively simple intervention (primarily just avoiding chicken and eggs).
I tell my doctor this and it’s just a blank stare “interesting”.
I seem to struggle with high-histamine and histamine-liberating foods. This was challenging because histamine can build up quickly in certain foods that are labelled low-histamine, and some foods can "liberate" histamine even if they're low in histamine themselves. For example, chicken is technically low-histamine but quickly builds up histamine and can cause histamine liberation. A "safe" meal like a chicken and rice bowl would destroy me.
Quantity and environment also played a huge role. If my histamine levels were low, I could often tolerate many of my trigger foods. However, if they were high (like during allergy season), the same foods would trigger me.
It took a very, very long time to narrow in on the underlying issue.
> But at the same time technofeudalism, dystopia, etc.
You could run LLMs locally to mitigate this. Of course, running large models like GLM-4.6 is not feasible for most people, but smaller models can run even on Macbooks and sometimes punch way above their weight
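As a sketch of how low the barrier is: with something like Ollama serving a local model, keeping the chat on-device is a single HTTP call to localhost. The model name below is illustrative, not a recommendation, and assumes Ollama is installed with that model pulled; nothing in this flow leaves the machine.

```python
# Query a locally served model via Ollama's HTTP API; health data stays on-device.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",   # Ollama's default local endpoint
    json={
        "model": "llama3.2",             # illustrative small model
        "stream": False,
        "messages": [
            {"role": "system",
             "content": "You are a cautious medical information assistant. "
                        "Always recommend confirming with a clinician."},
            {"role": "user",
             "content": "Here are my lab results: ..."},  # never leaves the machine
        ],
    },
)
print(resp.json()["message"]["content"])
```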
Based on recent experiences guiding my parents and younger brother through the medical world, I'm happy with AI as an alternative or complement. There are good doctors out there, but they're often booked solid or you only see them for 5-20 minutes in your parade of specialists you're forced to see to extract as much money from health insurance as possible
A bit invasive and scary to suggest, but I'd push them to rule out any form of cancer. A family member of mine went through a similar ordeal, having multiple teeth removed due to continuous pain, and only later, after much back and forth between doctors, did they run extensive tests and find he had mandibular cancer.
This is exactly my feelings! I think medicine is a crucial field, but I have very little respect for individual doctors themselves.
- Humans are self-healing: if a doctor does absolutely nothing, most issues will resolve on their own, or, if they're chronic (e.g. back pain), the patient won't be worse off compared to the alternative. I'm in a country with subsidised health care, and people go to the doctor immediately for absolutely anything. A doctor could have a 99% success record by handing out placebos.
- Most patients have common issues. I.e. maybe 30 people visit the clinic on a given day, it's possible that all 30 of them have come because they have the flu. Doctors are human, nobody's going to investigate potential pneumonia 30 times a day every day for 6 months. So doctors don't: someone comes in and is coughing, they say it's flu, on to the next patient. If the person really has pneumonia, they'll come back when it gets worse.
- Clinics are overbooked. I don't know if it's licensing, GDP, artificial scarcity, cost regulations or what, but doctors probably don't actually have time to investigate anyways.
- Doctors don't receive any rigorous continuing education. I'm sure there are some requirements, but I've gone to doctors in the last year and gotten the "stress causes ulcers" explanation for what turned out to be food sensitivity issues (there was no visible ulcer, mind you, so it was concluded that it was an invisible ulcer). Slow, gradual maintenance and heavy reading are hard things that humans are necessarily bad at.
- Patients don't want to hear the truth. Lifestyle changes, the fact that nothing can be done, there's no pills to cure you, etc. Even if doctors could give a proper diagnosis, it could end up being bad PR so doctors are conditioned away from it.
- Doctors don't follow up; they get absolutely no feedback on whether most of their treatments actually work. Patients also don't come back when their issue is resolved, but even if they do, doctors don't care. Skin issue: doctor prescribed steroidal cream, redness disappeared, doctor declared I was cured, redness came back worse a week later. For a scientific field, there's no excuse for anything but evidence-based medicine, yet I haven't seen a single doctor even attempt to improve things statistically.
I've heard things like: doing tests for each patient would be prohibitively expensive (yes, but it should at least be an option patients can pay for), or the number of things medicine can actually cure today is very small, so the ROI on additional work would be low (yes, but in the long term the information could further research).
I think these are obvious and unavoidable issues (at least with the current system), but at the same time if a doctor who ostensibly became a doctor out of a desire to help people willingly supports this system I think they share some of the blame.
I don't trust AI. Part of me goes, well what if the AI suddenly demands I have some crazy dental surgery? And then I go, wait, the last dentist I went to said I need some crazy dental surgery. That none of the other 3 dentists I went to after that even mentioned. And as you said an AI will at least consider more info...
So I do support this as well. I'd like to have an AI do a proper diagnosis, then maybe a human rubber stamp it or handle escalation if I think there's something wrong...
Most doctors, like most mechanics, are the worst debuggers in humankind.
I once went to the UCSF ER twice in the same weekend for an issue. On day one, I was told to take some ibuprofen and drink water; nothing to worry about. I went home, knowing this was not a solution. On day two I returned to the ER because my situation had gotten 10x worse. The (new) doctor said "we need to resolve this ASAP," and I was moved into a room where they gave me a throat-numbing breathing apparatus and then shoved a massive spinal-tap needle into my throat to drain 20ml of fluid from behind my tonsil. I happened to bump into the doctor from the previous day on my way out and gave him a nice tap on the shoulder, saying, thanks for all the help, doc. UCSF tried to bill me twice for this combined event, but I told them to get fucked due to the negligence on day one. The billing issue disappeared.
I had a Jeep that I took into the shop 2, 3, 4 times for a crazy issue. The radio, seat belt chime, and emergency flashers all became possessed at the same time. Using my turn signal would cause the radio to cut in and out. My seat belt kept saying it was disconnected. No one could fix it. What was the issue? A loose ground on the chassis that all of those different systems were sharing. https://www.wranglerforum.com/threads/2015-rubicon-with-elec...
These are just two examples from my life, but there are countless. I just do everything myself now, because I trust no one else.
I have a very similar story. It went from ibuprofen and water, to antibiotics the following day, to different antibiotics the next day, and finally to having my tonsils purged (i.e., cut open) under local anesthesia within 4 days. By then I could no longer speak. It was the most pain I have ever felt in my life.
I still trust doctors, but this made me much more demanding towards them.
That (peritonsillar abscess) was also the most painful thing I have ever experienced in my life. I was genuinely hoping a lightning bolt would come out of the heavens and kill me.
Ah, the rare 'HN person who knows more about everything than everyone', what a joy to see one in the wild.
If you're getting bad troubleshooting it's because you're going to places that value moving units (people) as fast as possible to pay the rent. I assure you neither most mechanics nor most doctors are 'the worst debuggers in humankind'. There are plenty of mechanics and doctors that will apply a 65% solution to you and hope that it pays off, but they're far from the majority.
> Most doctor advice boil down to drink some water and take a painkiller, while glancing for 15 seconds at my medical history before they dedicate me 7 minutes, after which they move to yet another patient.
Most of the time, that's the correct approach. However, you can actually do better by avoiding painkillers, since they can have side effects. There are illnesses that are easily diagnosable and have established medications; doctors typically prescribe what pharmaceutical companies have demonstrated to them. But the rest of the "illnesses," which make up the majority, are pretty much still a mystery.
For the most part, neither you nor your doctor can do much about these. Modern medicine often feels like just a painkiller subscription.
... at the same time ... OpenAI is a business with limited financial success relative to its expenses, and insurance companies are thirsty for improved models. Currently the entire might of the US government is occupied with Central American misadventures, so do you think they're going to stop them?
Doctors are usually just normal people that had the means, memory, drive and endurance to get through an exclusive education that will guarantee them a life in relative affluence and maximized adoration.
Connected thinking, interest in helping, and extensive depth or breadth of knowledge beyond what they need for their chosen specialization's day-to-day work are rare and coincidental.
> Doctors are usually just normal people that had the means, memory, drive and endurance to get through an exclusive education that will guarantee them a life in relative affluence and maximized adoration.
Is that all?!
Also, everyone is just “normal people” in aggregate.
Unfortunately, I feel like I'm in the minority here, but AI has been really helpful to me and my doctor visits when it comes to preparing for a ~10-minute appointment that historically always felt like it was never long enough. I can sit down with an LLM for as long as I need and discuss my concerns and any potential suggestions, have it summarize them in a structure that's useful for my doctor, and send an email ahead of the appointment. For small things, she doesn't even need me to come in anymore and a simple phone call to confirm is enough. With the amount of pressure the healthcare system is under, I think this approach can free up a lot of valuable time for her to spend with the patients who need it most.
Not everyone who uses ChatGPT gets advice on harming or killing themselves, but some people sure do. And some act on it.
Just as we don't necessarily want to eliminate a technology because of a small percentage of bad cases, we shouldn't push a technology just because of a small percentage of good anecdotes.
This genie isn't going back in the bottle but I sure hope it ends up more useful than harmful. Which now that I write it down is kind of the motto of modern LLM companies.
This is just marketing nonsense. You don't have to train models not to retain personal information; they simply have no memory. To have a chat with an LLM, the whole conversation history gets reprocessed every time: it is not just the last question/answer that gets sent to the LLM, but all the preceding back and forth.
But what they do is exfiltrate facts and emotions from your chats to create a profile of you and feed it back into future conversations, making them more engaging and giving them a personal feeling. This is intentionally programmed.
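To make the statelessness concrete, here's a minimal sketch of a chat loop against an OpenAI-style completions API. Note that the client, not the model, carries the memory by resending the whole transcript every turn; any cross-conversation "memory" feature is extra engineering layered on top of this. The model name is illustrative.

```python
# The model itself is stateless; the client re-sends the full transcript each turn.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    history.append({"role": "user", "content": input("> ")})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=history,      # the WHOLE conversation, every single time
    ).choices[0].message.content
    print(reply)
    history.append({"role": "assistant", "content": reply})
```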
> In order to have a chat with an LLM, every time the whole conversation history gets reprocessed - it is not just the last answer / question gets send to the LLM but all preceding back and forth.
Btw, context caching can avoid this reprocessing, e.g. https://ai.google.dev/gemini-api/docs/caching . However, that means the (large) state must persist server-side, so it may have costs associated with it.
I think they mean that they trained the tool-calling capabilities to skip personal information in tool call arguments (for RAG), or something like that. You need to intentionally train it to skip certain data.
>every time the whole conversation history gets reprocessed
Unless they're talking about the memory feature, which is some kind of RAG that remembers information between conversations.
I used to work in healthtech. Information that can be used to identify a person is regulated in America under the Health Insurance Portability and Accountability Act (HIPAA). These regulations are much stricter than the free-for-all that constitutes usage of information in companies that are dependent on ad networks. These regulations are strict and enforceable, so a healthcare company would be fined for failing to protect HIPAA data. OpenAI isn't a healthcare provider yet, but I'm guessing this is the framework they're basing their data retention and protection around for this new app.
Going to a probabilistic system for something that can/should be deterministic sets off a lot of red flags.
I've worked on medical software packages, specifically a drug interaction checker for hospitals. The system cannot be written like a social media website: it has to fail by default, and only succeed when an exact, correct solution has been determined. The result must be repeatable given the same inputs. The consequence otherwise is that people die.
Health and medicine is very far from deterministic. Your drug interaction checker is deterministic because the non-determinism is handled at a higher level (the doctor / patient interaction) in the health care system. Individual patients often respond wildly differently to the same medicine even in the absence of drug interactions.
Similarly the non-determinism in ChatGPT should be handled at a higher level. It can suggest possibilities for diagnosis and treatment, but you should still evaluate those possibilities with the help of a trained physician.
A drug interaction checker can be deterministic, based on a static corpus of drug interaction data.
A diagnostic system should not necessarily be deterministic, because it always operates on incomplete data and necessarily produces estimates of probability as its output.
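A toy illustration of the difference, assuming a tiny vetted interaction table (the entries below are invented fragments, nothing like a production corpus): the checker is a pure lookup that fails closed on anything it doesn't know, which is exactly the guarantee an LLM cannot make.

```python
# Deterministic, fail-by-default interaction check: same inputs, same output.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "MAJOR: increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "MAJOR: myopathy risk",
}
KNOWN_DRUGS = {drug for pair in INTERACTIONS for drug in pair}

def check(drug_a: str, drug_b: str) -> str:
    a, b = drug_a.lower(), drug_b.lower()
    # Fail closed: refuse to answer for any drug outside the vetted corpus,
    # rather than guessing.
    if a not in KNOWN_DRUGS or b not in KNOWN_DRUGS:
        raise ValueError("unknown drug - escalate to a pharmacist")
    return INTERACTIONS.get(frozenset({a, b}), "no interaction on record")

print(check("Warfarin", "Aspirin"))  # -> MAJOR: increased bleeding risk
```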
Humans are probabilistic systems?! You might want to inform the world's top neuroscientists and philosophers to down tools. They were STILL trying to figure this out but you've already solved it! Well done.
There's nothing naive about it. Most doctors work off statistics and probabilities stemming from population-based studies. Literally the entire field of medicine is probabilistic, and that's what angers people. Yes, there's a 95% chance you're not suffering from something horrible, but a lot of people would want to continue diagnostics to rule out the 5% chance that you have cancer and the doctor sent you home with antibiotics thinking it was just some infection.
I don't think it's a naive response. Perhaps it's obvious to you that human doctors can't produce an "exact correct solution", but quite a lot of people do expect this, and get frustrated when a doctor can't tell them exactly what's wrong with them or recommends a treatment that doesn't end up working.
Despite this, given that health care prices in the USA continue to accelerate from relentless government interference and regulation, perhaps if the deterministic side of things could be figured out, we could monumentally decrease cost and increase access.
Integration with Function is a great use-case. There is a huge category of pre-diagnostic health questions (“Medicine 3.0” as Attia puts it) where personalization and detailed interpretation of results is important, yet insurance typically won’t cover preemptive treatment.
Not to mention that doctors generally don’t have time to explain everything. Recently I’ve been doing my own research and (important failsafe) running the conclusions by my doctor to validate. Tighter integration between physician notes, ChatGPT conversations, and ongoing biomarkers from e.g. function and Apple Health would make it possible to craft individualized health plans without requiring six-figure personal doctor subscriptions.
A great opportunity to improve the status quo here.
Of course - as with software, quality control will be the crux. We don’t want “vibe diagnosing”.
I see many comments like this in here. Where is this so common? I'm not from the US, but I had the impression that healthcare there, while expensive, is good. If I assume most of these comments come from the US, then it is just expensive.
I cannot imagine a doctor evaluating just one possibility.
My cousin just finished years of medical school, residency, and his first job as a psychiatrist. He opened up a private practice a year ago and has been working hard to acquire a client base. I fear this will destroy his livelihood. He can't compete on the convenience. To see him, a person has to reach him via phone or email, process their healthcare information, and then physically visit him. All while this tool has been designed to process health information, which can also speak out loud with the patient instantly. Sure he can prescribe medications, but many people he sees do not need medication. Even if the doctor is better, the convenience of this tool will likely win out.
If America wants to take care of its people, it needs to tear down the bureaucracy that is our healthcare system and streamline a single payer system. Otherwise, doctors will be unable to compete with tools like this because our healthcare system is so inconvenient.
As long as the liability precedents set by prior case law and current regulations hold, there should be no problem. OpenAI and the hordes of lawyers working for and with them will have ensured that every appropriate and legally required step has been taken, and at least for now, these are software tools used by individuals. AI is not an agent of itself or the platform hosting it; the user's relative level of awareness of this fact shouldn't be legally relevant as long as OpenAI doesn't make any claims to the contrary.
You also have to imagine that they've got their zero guardrails superpowered internal only next generation bot available to them, which can be used by said lawyer horde to ensure their asses are thoroughly covered. (It'd be staggeringly stupid not to use their AI for things like this.)
The institutions that have artificially capped levels of doctors, strangled and manipulated healthcare for personal gain, allowed insurance and health industries to become cancerous - they should be terrified of what's coming. Tools like this will be able to assist people with deep, nuanced understanding of their healthcare and be a force multiplier for doctors and nurses, of which there are far too few.
It'll also be WebMD on steroids, and every third person will likely be convinced they have stereochromatic belly button cancer after each chat, but I think we'll be better off, anyway.
I'm dealing with a severe health ailment with my cat right now and ChatGPT has been pretty invaluable in helping us understand what's going on. We've been keeping our own detailed medical log that I paste in with the lab and radiology results and it gives pretty good responses on everything so far. Of course I'm treating the results skeptically but so far it has been helpful and kept us more informed on what's going on. We've found it works best if you give it the raw facts and lab results.
The main issue is that medicine and diseases come with so many "it depends" and caveats. Like right now my cat won't eat anything, is it because of nausea from the underlying disease, from the recent stress she's been through, from the bad reaction to the medicine she really doesn't like, from her low potassium levels, something else, all of the above? It's hard to say since all of those things mention "may cause nausea and loss of appetite". But to be fair, even the human vets are making their own educated guesses.
Maybe the OpenAI moat is the data we shared along the way.
No, seriously: OpenAI seemingly lost interest in having the "best" model, instead optimizing for other traits such as speech and general human-likeness. There's obviously Codex, but from my experience it's slower and worse than the other big two in every single way: cost, speed, and accuracy. Codex does seem to be loved most by vibe coders who don't really know how to code at all, so maybe that is also who they're optimizing for, and why it doesn't personally suit me.
Others might have better models, but OpenAI has users emotionally attached to its models at this point, whether they know it or not. There were several times I recommended switching, and the response I got was that "ChatGPT knows me better."
I haven't done bug hunting with OpenAI, so I can't comment, but given the right tools, Opus can most definitely solve bugs as well.
I think Opus is special because it was explicitly trained to rely heavily on such tools, while GPT seems to "know" how things work a lot better, reducing the need for tools.
Being able to dedicate a lot more parameters to these things makes Opus a better model if you give it what it needs, but that's just my observation. It's also much faster, as a bonus.
Gemini helped diagnose me with eosinophilic esophagitis. I have had problems with swallowing all my life, and doctors kept dismissing it as a psychological problem. I think there is a great space for AI medical help.
There are pills and a minor balloon dilation. The point is I would not have been treated without it. Now I can go out to dinner with my girlfriend; that would have been hugely stressful before.
This was expected. People are going to be convinced that this AI knows more than any doctor, will self medicate, and will die, harm others, their kids, etc.
Please look at the post. This is about a GPT designed to give you health advice, with all the hallucinations, miscommunication, bad training data, and lack of critical thinking (or any thinking, obviously).
I pity the doctors who will now have to deal with such self-diagnosed "patients." I wonder if general medicine doctors will see a drop in patients as AI convinces people to see a specialist armed with its diagnosis.
> researchers found that searching symptoms online modestly boosted patients’ ability to accurately diagnose health issues without increasing their anxiety or misleading them to seek care inappropriately [...] the results of this survey study challenge the common belief among clinicians and policy-makers that using the Internet to search for health information is harmful. [0]
For example, "man Googles rash, discovers he has one-in-a-million rare disease" [1].
> Ian Stedman says medical professionals shouldn't dismiss patients who go looking for answers outside the doctor's office - even if they resort to 'Dr. Google.'
> "Whenever I hear a doctor or nurse complain about someone coming in trying to diagnose themselves, it boils my blood. Because I think, I don't know if I'd be dead if I didn't diagnose myself. You can't expect one person to know it all, so I think you have to empower the patient."
If there's a reasonable audit trail for the doctor to verify that valid differential reasoning was done that they can quickly verify, there's relatively few downsides and lots of upsides for them.
Some physicians are absolutely useless and sometimes worse than not receiving any treatment at all. Medicine is dynamic and changes all the time. Some doctors refuse to move forward.
When I was younger I've had a sports injury. I was misdiagnosed for months until I did my own research and had the issue fixed with a surgery.
I have many more stories of doctors being straight up wrong about basics too.
I see physicians in a major metro area at some of the best hospital networks in the US.
I sadly have to agree with you. I had a 30+ year orthopedic surgeon confidently tell me my ACL wasn't torn.
Two years later when I got it fixed the new surgeon said there was nothing left of the old one on the MRI so it must have been torn 1.5-2+ years ago.
On the other hand, to be fair to doctors, I had a phase of looking into supplements and learned the hard lesson that you really need to dig into the research or find a very trusted source to have any idea of what's real because I definitely thought for a bit a few were useful that were definitely not :)
And also to be fair to doctors I have family members who are the "never wrong" types and are always talking about whatever doctor of the day is wrong about what they need.
My current opinion is using LLMs for this, in regards to it informing or misinforming, is no different than most other things. For some people this will be valuable and potentially dramatically help them, and for others it might serve to send them further down roads of misinformation / conspiracies.
I guess I ultimately think this is a good thing because people capable of informing themselves will be able to do so more effectively and, sadly, the other folks are (realistically) probably a lost cause but at the very least we need to do better educating our children in critical thinking and being ok with being wrong.
I was misdiagnosed and took the wrong medicine for 4 years for a problem, under a top specialist; the doctor kept making up reasons why I wasn't getting better.
By luck I consulted another specialist because the former doctor wasn't available at an odd time, and some re-tests helped determine that I needed a different class of medicines. I was better within months.
4 years of wrong medicines and overconfidence from a leading doctor. Now I have a tool to double-check what the doctor has recommended.
I usually look up my symptoms (not on ChatGPT) and when I finally go to a doctor I just let them do their job, but I usually do it just to have some idea of what's going on by the time I go there. My wife's a nurse (not a Doctor) so sometimes she can tell me if what I read sounds crazy or what have you based on her own personal experience with patients.
Another interesting aspect is that the NHS app makes all your detailed health history (doctors notes, scan results etc.) available to you as the patient.
Which in turn means you have the option of feeding it into ChatGPT. This feels potentially very valuable and a nice way of working around issues with whether doctors themselves are allowed to do it.
I'm not sure this applies to every surgery, but certainly my dad had access to everything immediately when he had a scan.
Out of curiosity, do you know from when this data starts being available? Is it just from when you install the NHS app and create some form of login, or will everything be there?
Something to note here is that just yesterday (January 6 2026) the FDA announced changes around regulation of wearable & AI-enabled devices: https://www.statnews.com/2026/01/06/fda-pulls-back-oversight... ("FDA announces sweeping changes to oversight of wearables, AI-enabled devices. The changes could allow unregulated generative artificial intelligence tools into clinical workflows")
I always check my blood test and MRI results with ChatGPT before showing them to the doctor. The doctor says the same thing ChatGPT says, and ChatGPT gives clearer and more detailed information. However, we shouldn't trust ChatGPT's results 100%. It's just good for getting an idea. Also, we shouldn't trust any doctor 100%.
the number of people willing to delegate to chatgpt tells me in the near future only rich people will be able to speak with a real doctor. the current top comment about someone's uncle being saved due to chatgpt guidance says it all.
Unfortunately, that's kind of already the case. The standard of care for wealthy people, who often purchase "Personal Medicine" services, can be astoundingly better than what is available to the general public. It's more like having a health team behind you than just a lone GP. They can push you through the system, get treatments, ask colleagues, and collaborate with other teams, way quicker.
Many people in the comments are worried about laypeople using this for medical advice.
I'm worried that hospital admins will see this as a way to boost profit margins. Replace all the doctors with CNAs armed with ChatGPT. Yes, doctors are in short supply, are overworked, and make mistakes. The solution isn't to get rid of them, but to increase the supply of doctors.
I wonder whether this will have the same pitfalls as regular ChatGPT.
The latter implicitly assumes all your questions are personal. It seems to have no concept of context for its longer-term retention.
Certainly for health, non-acute things seem to matter a lot. This is why your personal doctor, who has known you for decades, will spot things beyond your current symptoms.
But ChatGPT will uncritically retain from that time you helped your teacher relative build her lesson plans that you "are a teacher in secondary education", or from that time you helped diagnose a friend's car trouble that you "drive a high performance car", just the same as your regular "successfully built a proxmox datacenter".
With health there will be many users asking on behalf of, or helping out, an elderly relative. I wonder whether all those 'diagnoses' and 'issues' will be correctly attributed to the right 'patient' or just be mixed together and assumed to be all about 'you'.
Would you like to provide actual proof that your favorite toy benefits people's health before daring others to challenge you? The imagined data you’ve yet to provide can't possibly justify the harm it's causing by pushing people on the edge to suicide.
The article is paywalled but appears to concern abusing a cocktail of kratom, alcohol, and Xanax. I don't really think that's the same. Also, this feature isn't really about making ChatGPT start answering medical questions anyhow, since people are already doing that.
I'm certain it will kill people, but medical error already kills a huge number of people - the exact number is heavily disputed, but in the US the lower bound is in the tens of thousands annually.
Might be useful if they start letting me write my own prescriptions or can send a robot to my house to run tests or perform surgery. Otherwise, I don't really see how this changes anything for me; the doctor - that I already have to see - should just check their analysis with AI on my behalf.
I use it for health advice sometimes.. but.. doesn't this seem like a massive source of liability? Are they just assuming the investor dollars will pay for the lawyers?
I trust they considered the bias in medical research that exists in their training data. I wonder if OpenAI will implement morbidity & mortality (M&M) rounds to learn from mistakes and missed diagnoses.
Based on the reports of various failings on the safety front, I sure hope users will take that into account before they get advice to take 500g of aspirin.
ChatGPT has become an indispensable health tool for me. It serves as a great complement to my doctor. And there have been at least two cases in our house where it provided recommendations that were of great value (one possibly life saving and the other saving us from an unnecessary surgery). I think that specialized LLMs will eventually be the front-line doctor/nurse.
Using AI to analyze health data has such a huge potential upside, but it has to be done locally.
I use [insert LLM provider here] all the time to ask generic, health-related questions but I’m careful about what I disclose and how I disclose it to the models. I would never connect data from my primary care’s EHR system directly to one of these providers.
That said, it’ll be interesting to see how the general population responds to this and whether they embrace it or have some skepticism.
I’m not confident we’ll have powerful/efficient enough on-device models to build this before people start adopting the SaaS-based AI health solutions.
ChatGPT’s target market is very clearly the average consumer who may not necessarily care what they do with their data.
I barely trust them with any personal info. Are we (HN crowd) biased in a way that will cause us to miss out? I wonder if we are insiders - having a privileged perspective given our industry is on the front lines - or if we’re just becoming jaded.
>You can further strengthen access controls by enabling multi-factor authentication
Pushing 2FA on users doesn't remove the need for more details on the above.
---
>to enable access to trusted U.S. healthcare providers, we partner with b.well
>wellness apps—like Apple Health, Function, and MyFitnessPal
Right...?
---
>health conversations protected and compartmentalized
Yet OAI will share those conversations with enabled apps, along with "relevant information from memories" and your "IP address, device/browser type, language/region settings, and approximate location"? (per https://help.openai.com/en/articles/20001036-what-is-chatgpt...)
AI already saved me from having an unnecessary surgery by recommending various modern-medicine (not alternative medicine) alternatives which ended up being effective.
Between genetics; blood, stool, and urine tests; scans (ultrasound, MRI, x-ray, etc.); and medical history... doctors don't have time for a patient with non-trivial or non-obvious issues. AI has the time.
Doctors should be expected to use these tools as part of general competency. Someone I knew had herpes zoster on her face and the doctor had no idea and gave her diabetes medication. A five-second inference with Gemini by me, uploading the picture only (after disabling app activity to protect her privacy), and it said what it was. It's deplorable. And there's no way for this doctor to lose their job. They can keep being worse than Gemini Flash and making money.
There's "Ask ChatGPT" overlay at the bottom of the page, so I asked "how risky is giving my medical data to OpenAI?" Interestingly ChatGPT advised caution ;) In short it said: 1. Safer than standard AI chats, 2. Not as safe as regulated healthcare systems (it reminded that OpenAI is not regulated and does not follow e.g. HIPAA), 3. Still involves inherent cloud risks.
I am amazed at the false dichotomy (ChatGPT vs. doctors) discussed in the comments. Let's look at it from the perspective of how complex software is developed by a team of software engineers with AI assistance. There is GitHub, Slack discussions, pull requests, reviews, agents, etc...
Why can't a patient's health be seen the same way? Roughly (see the sketch below):
Version-controlled source code -> medical records, health data
Slack channels -> a discussion forum dedicated to one patient's health, among human doctors (specialists), AI agents, and the patient.
In my opinion we are in the stone age compared to the above.
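To make the analogy concrete, here is a rough, purely illustrative sketch - every name in it is hypothetical, and nothing like this exists in any shipping product:

    # Purely illustrative: a patient's health treated like a software project.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class RecordVersion:              # a "commit" in the record history
        timestamp: datetime
        author: str                   # clinician, AI agent, or the patient
        summary: str                  # e.g. "added 2026-01 blood panel"

    @dataclass
    class PatientRepo:                # medical records, version controlled
        patient_id: str
        history: list[RecordVersion] = field(default_factory=list)

    @dataclass
    class CaseThread:                 # the "Slack channel" for one patient
        participants: list[str]       # specialists, agents, the patient
        messages: list[tuple[str, str]] = field(default_factory=list)

        def post(self, author: str, text: str) -> None:
            self.messages.append((author, text))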
I think you vastly overestimate the number of orgs using "agents" at all in software development, let alone as an active part of the code review process, and ESPECIALLY the number who consider such bots contributors as valuable as humans.
They are tools, they are sometimes useful tools in particular domains and on particular teams, but your comment reads like one that assumes they are universally agreed upon already, and thus that the health industry has a trusted example it could follow by watching our industry. I firmly disagree with that premise.
Maybe. But I am working with these agents and I see the sophisticated patterns of LLMs, ground truth data, and human experts working together. It could be much more effective than either 'patient asking ChatGPT' or 'general doctor one-shotting a problem without AI assistance and enough context'.
You put the "agent" word into apostrophes as if I use it as a marketing buzzword. No. An agent is an LLM in a loop with memory usage, with file system access, etc, which is usually more effective than just using an LLM as is, especially if you orchestrate these agents and subagents in a good way. In my opinion using the word 'ChatGPT' (a specific user-facing LLM brand) in these discussions is much more buzzwordy than using the word agent.
All the Americans here arguing why this is a good thing, how your system is so flawed, etc. remember that this will be accessible to people in countries with good, free healthcare.
This is going to be the alternative to going to a doctor that is 10 minutes by car away, that is entirely and completely free, and who knows me, my history, and has a couple degrees. People are going to choose asking ChatGPT instead of their local doctor who is not only cheaper(!!!) but also actually educated.
People saying that this is good because the US system specifically is so messed up and useless are missing that the US makes up ~5% of the world's population, yet you think that a medical tool made for the issues of 5% of the population will be AMAZING and LIFE SAVING for the other 95%, more than harmful? Get a grip.
Not to mention shitty doctors, which exist everywhere, likely using this instead of their own brains. Great work guys.
I suspect the rationale at OpenAI at the moment is "If we don't do it, someone else will!", which I last heard in an interview with someone who produces and sells fentanyl.
>> This is going to be the alternative to going to a doctor that is 10 minutes by car away, that is entirely and completely free, and who knows me, my history, and has a couple degrees.
Well then I suppose they'd have no need or motivation to use it, right?
Argh, of course right after I posted this I tried one more time and the page finally worked. Ignore the above; the page seems to work now (it points to a waitlist).
The old ChatGPT models scanning the NIH PubMed repositories with proper prompting (e.g. "...backed by randomized control trial data") were an amazing health care tool. The stripped-down cheaper versions today are junk and I've had to start relying on Grok :-( I'm not convinced OpenAI can make this work.
> Most doctors are just people who had a strong ability to pass tests and finish medical school.
This is a tautology. “Most doctors are just (doing lots of work there) people who had the ability to meet the prerequisites for…becoming a doctor.”
Are doctors fallible? Of course. Is there room for improvement with regard to LLM’s? Hopefully. Does that mean there’s reason to spread general distrust in doctors? Fuck no. Until now they were the only chance you had at getting health care.
Zero mention of actual compliance standards or HIPAA on the launch page of a product that is supposed to interconnect with medical records and other health apps! No thanks...
As someone who pays for ChatGPT and Claude, and uses them EVERY DAY... I'm still not sure how I feel about these consumer apps having access to all my health data. OpenAI doesn't have the best track record on data safety.
Sure OpenAI business side has SOC2/ISO27001/HIPAA compliance, but does the consumer side? In the past their certifications have been very clearly "this is only for the business platform". And yes, I know regular consumer don't know what SOC2 is other than a pair of socks that made it out of the dryer.... but still. It's a little scary when getting into very personal/private health data.
Gattaca is supposed to be a warning, not a prediction. Then again neither was Idiocracy, yet here we are.
ChatGPT arguably saved my fathers life two weeks ago. He was in a rehab center after breaking his hip and his condition suddenly deteriorated.
While waiting for the ambulance at the rehab center, I plugged in all his health data from his MyChart and described the symptoms. It accurately predicted (in its top two possibilities) a C. diff infection.
Fast forward two days: the ER had prescribed general antibiotics. I pushed the doctors to check for C. diff, and sure enough he tested positive for it, and they got him on the right antibiotic for it.
I think it was just in time as he ended up going to the ICU before he got better.
Maybe they would have tested for C. diff anyway, but it definitely made me trust ChatGPT. Throughout his stay, after every single update in his MyChart, I copied and pasted the PDF into the long-running thread about his health.
I think ChatGPT Health being able to import this automatically and directly will be a huge game changer. Health and wellness is probably my number one use case for AI.
My dad is getting discharged tomorrow (to a different rehab center, thankfully)
I'd rather not talk to a commercial LLM about personal (health) details. Guess how they will (or already do) try to make these completely overhyped chat bots profitable: OpenAI could sell relevant personal health data to insurance companies, or maybe to the HR department of your next job. Just saying...
Is FDA approval worth anything to consumers anymore? The way things are run today, OpenAI could buy any stamp of approval through money and appeasement.
Regulatory oversight of medical tech is important. But it's not because of the law, it's because it's a matter of life and death. The legality aspect becomes less interesting when there's no functioning regulatory framework that protects the people.
Which is to say, OpenAI getting approval wouldn't make this any better if that approval isn't actually worth the paper it's written on.
Yeah, gotta say I'm not enthusiastic about handing over any health data to OpenAI. I'd be more likely to trust Google or maybe even Anthropic with this data and that's saying something.
At first I was reading this like 'oh boy here we go, a marketing ploy by ChatGPT when Gemini 3 does the same thing better', but the integration with data streams and specialized memory is interesting.
One thing I've noticed in healthcare is for the rich it is preventative but for everyone else it is reactive. For the rich everything is an option (homeopathics/alternatives), for everyone else it is straight to generic pharma drugs.
AI has the potential to bring these to the masses and I think for those who care, it will bring a concierge style experience.
I would not trust a company with no path to profitability with my medical health records, because they are more likely to do something unethical and against my interests, like selling insights about me to other companies, out of desperation for new revenue streams.
You are right, what was I even thinking... I just asked ChatGPT if eating fat will cause obesity and it went all-in on counting calories. The standards are hilariously low. Lots of doctors use Google, and we both know Google doesn't work. It will probably tell you humans are made from Froot Loops, chips, cola, and quarter pounders - but do count those calories or it gets too big!
Since there are a lot of positive comments at the top (not surprising given the astroturfing on HN), please watch this video from ChubbyEmu, a real doctor, about a case study from someone who self-diagnosed using AI: https://www.youtube.com/watch?v=yftBiNu0ZNU
For every positive experience there are many more that are negative, if not life threatening or simply deadly.
Another nail in the coffin for apps that depend on AI APIs because the AI companies themselves are working on products using their own APIs (unless you can make the UX significantly better). UX now seems like the prime motivator when building apps.
LLMs themselves are the coffin for most apps, because by its very nature, AI subsumes products.
UX is not going to be a prime motivator, because the product itself is the very thing that stands between user and the thing they want. UX-wise, for most software, it's better for users to have all these products to be reduced to tool calls for AI agents, accessible via a single interface.
The very concept of a product limits users to the interactions allowed by the product vendor[0] - meanwhile, using products as tools for AI agents allows them to be combined in the ways users need[1].
--
[0] - Something that, thanks to move to the web and switching data exchange model from "saving files" to "sharing documents", became the way for SaaS businesses to make money by taking user data hostage - a raison d'être for many products. AI integration threatens that.
[1] - And vendors would very much like users to not be able to. There's going to be some interesting fights here, as general-purpose AI tools are an existential threat to most of the software industry itself.
Depends on the product itself. For example I use an LLM for tracking calorie data by telling it or providing a picture of the food I had, and it does a web search to find those details. The problem is it doesn't actually remember past meals, so I wrapped this API call in a simple app that just tracks the calorie numbers like a traditional app.
Just having an LLM is not the right UX for the vast majority of apps.
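For the curious, the wrapper really can be that thin. A rough sketch of what I mean - estimate_calories stands in for the LLM call, and all the names and plumbing here are hypothetical:

    # Thin wrapper: the LLM only estimates calories; the app remembers them.
    import json, datetime, pathlib

    LOG = pathlib.Path("meals.json")

    def log_meal(description: str, estimate_calories) -> int:
        kcal = estimate_calories(description)   # hypothetical LLM-backed call
        meals = json.loads(LOG.read_text()) if LOG.exists() else []
        today = datetime.date.today().isoformat()
        meals.append({"when": today, "what": description, "kcal": kcal})
        LOG.write_text(json.dumps(meals, indent=2))
        return sum(m["kcal"] for m in meals if m["when"] == today)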
Right. But what if you dropped the human-facing UI, and instead exposed the backend (i.e. a database + CRUD API + heavy domain flavoring) to LLM as a tool? Suddenly you not only get a more reliable recognition (you're more likely to eat something that you've eaten before than completely new), but also the LLM can use this data to inform answers to other topics (e.g. diet recommendations, restaurant recommendations), or do arbitrary analytics on-demand, leveraging other tools at its disposal (e.g. Python, JS, SQL, Excel are the most obvious ones), etc. Suddenly the LLM would be more useful at maintaining shopping lists and cross-referencing with deals in local grocery stores - which actually subsumes several classes of apps people use. And so on.
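To make that inversion concrete, a hypothetical sketch of the same calorie tracker flipped inside-out. Tool-definition formats are provider-specific, so this only shows the shape of the idea:

    # The same backend exposed to an LLM as tools instead of behind a UI.
    MEAL_TOOLS = [
        {"name": "log_meal",
         "description": "Record a meal and its estimated calories.",
         "parameters": {"description": "string", "kcal": "integer"}},
        {"name": "query_meals",
         "description": "Return meals in a date range, with totals.",
         "parameters": {"start": "date", "end": "date"}},
    ]
    # The model can now combine these with its other tools (search, Python)
    # to answer "what can I still eat today?" or to cross-reference meals
    # with grocery deals - the app subsumption described above.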
> Just having an LLM is not the right UX for the vast majority of apps.
I argue it is, as most things people do in software don't need to be hands-on. Intuition pump: if you can imagine asking someone else - a spouse, a friend, an assistant - to use some app to do something for you, instead of using the app yourself, then turning that app into a set of tools for an LLM would almost certainly improve UX.
But I agree it's not fully universal. If e.g. you want to browse the history of your meals, then having to ask an LLM for it is inferior to tapping a button and seeing some charts. My perspective is that tool for LLM > app when you have some specific goal you can express in words, and thus could delegate; conversely, directly operating an app is better when your goal is unclear or hard to put in words, and you just need to "interact with the medium" to achieve it.
Your first paragraph describes features I already have slated to work on, as I also ran into the same things, i.e., if I have 500 calories left for the day, what can I eat within that limit? But I'm not sure why I need to ditch the UI entirely; my app would show the foods as a scrollable list, and you'd click on one to get more info. I suppose that is sort of replicating the LLM UI in a way, since it also produces lists of items, but apps with interactive UX over just typing still feel natural to most.
A solution could be, can the AI generate the UI then on the fly? That's the premise of generative UI, which has been floating around even on HN. Of course the issue with it is every user will get different UIs, maybe even in the same session. Imagine the placement of a button changing every time you use an app. And thus we are back to the original concept, a UX driven app that uses AI and LLMs as informational tools that can access other resources.
Yep, building a wrapper for LLM API is never going to build a sustainable business, there's absolutely zero moat. You need more, preferably physical world elements / something else that increases the switching costs.
>>Designed with privacy and security at the core ...Health is built as a dedicated space with added protections for sensitive health information and easy-to-use controls.
Good words at a high level, but it would really help to have some detail about the "dedicated space with added protections".
How isolated is the space? What are the added protections? How are they implemented? What are the ways our info could leak? And many more.
I wish I didn't have to be so skeptical of something that should be a great good providing more health info to more people, but the leadership of this industry really has, "gone to the dark side".
I've already been using ChatGPT to evaluate my blood results and give me some solutions for my diet and workouts. It's been great so far without any special model.
To be more precise: America could easily afford healthcare, but American elites regularly accept pathetically tiny bribes to pretend that universal healthcare is impossible. They also accept pathetically tiny bribes to allow companies like OpenAI to do whatever they please without meaningful legal consequence, so OpenAI is able to keep getting away with shipping products that cannot work as advertised and will kill people.
this is great. I've been using AI for health optimization (exercise routine, diet, etc.) based on my biomarkers for a while now.
Once ChatGPT recommended a simple solution to a chronic health issue: a probiotic with a specific bacteria strain. And while I'd used probiotics before, apparently they all have different strains of bacteria. The one ChatGPT recommended really worked.
Such a dystopian nightmare we live in now. The US is cutting our actual healthcare services to subsidize this shit so billionaires can become trillionares while the rest of us suffer
LLMs have actually not been that great so far on the preventative / risk-prediction side. They are very good if you already have the disease, but if you are merely trending toward it, they were chill about it. The better way would be to first calculate the risks via deterministic algorithms and then do the differential diagnosis - what a specialized doc would do. This is an example of an online tool that works like that: https://www.longevity-tools.com/liver-function-interpreter
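To illustrate the "deterministic first, LLM second" idea: FIB-4 is a standard liver-fibrosis risk score (Sterling et al.) of the kind a tool like that can compute before any differential discussion. A minimal sketch; the cutoffs are the commonly cited ones and vary by age and context, and none of this replaces a clinician:

    # Deterministic first pass: compute FIB-4, then hand the score (not the
    # raw labs) to an LLM for differential discussion.
    import math

    def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
        return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

    score = fib4(age_years=55, ast_u_l=40, alt_u_l=35, platelets_10e9_l=180)
    risk = "low" if score < 1.30 else "high" if score > 2.67 else "indeterminate"
    print(f"FIB-4 = {score:.2f} ({risk} risk of advanced fibrosis)")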
I strained my groin/abs a few weeks ago and asked ChatGPT to adjust my training plan to work around the problem. One of its recommendations was planks, which is exactly the exercise that injured me.
My cleaning lady's daughter had trouble with her ear. ChatGPT suggested injecting some oil into it. She did and it became a huge problem, so that she had to go to the hospital.
I'm sure ChatGPT can be great, but take it with a huge grain of salt.
This thread reads like an advertisement for ChatGPT Health.
I came to share a blog post I just posted titled: "ChatGPT Health is a Marketplace, Guess Who is the Product?"
OpenAI is building ChatGPT Health as a healthcare marketplace where providers and insurers can reach users with detailed health profiles, powered by a partner whose primary clients are insurance companies. Despite the privacy reassurances, your health data sits outside HIPAA protection, in the hands of a company facing massive financial pressure to monetize everything it can.
https://consciousdigital.org/chatgpt-health-is-a-marketplace...
> This thread reads like an advertisement for ChatGPT Health.
This thread has a theme I see a lot in ChatGPT users: They're highly skeptical of the answers other people get from ChatGPT, but when they use it for themselves they believe the output is correct and helpful.
I've written before on HN about my friend who decided to take his health into his own hands because he trusted ChatGPT more than his doctors. By the end he was on so many supplements and "protocols" that he was doing enormous damage to his liver and immune system.
The more he conversed with ChatGPT, the better he got at getting it to agree with him. When it started to disagree or advise caution, he'd blame it on overly sensitive guardrails, delete the conversation, and start over with an adjusted prompt. He'd repeat this until he had something to copy and paste to us to "prove" that he was on the right track.
As a broader anecdote, I'm seeing "I thought I had ADHD and ChatGPT agrees!" at an alarming rate in a couple communities I'm in with a lot of younger people. This combined with the TikTok trend of diagnosing everything as a symptom of ADHD is becoming really alarming. In some cohorts, it's a rarity for someone to believe they don't have ADHD. There are also a lot of complaints from people who are angry their GP wouldn't just write a prescription for Adderall and tips for doctor shopping around to find doctors who won't ask too many questions before dispensing prescriptions.
Great write up. I'd even double down on this statement: "You can opt in to chat history privacy". This is really "You can opt in to chat history privacy on a chat-by-chat basis, and there is no way to set a default opt-out for new chats".
This. It’s the same play with their browser. They are building the most comprehensive data profile on their users and people are paying them to do it.
Is this any worse than Google? Seems like the same business model.
There are lots of companies that do this. Doesn't make it right.
The real "evil" here is that companies like Meta, Google, and now OpenAI sell people a product or service that the customer thinks is the full transaction. I search with Google, they show me ads - that's the transaction. I pay for Chatgpt, it helps me understand XYZ - that's the transaction.
But it isn't. You give them your data and they sell it - that's the transaction. And that obscurity is not ethical in my opinion.
> You give them your data and they sell it - that's the transaction
I think that's the wrong framing. Let's get real: They're pimping you out. Google and Meta are population-scale fully-automated digital pimping operations.
They're putting everyone's ass on the RTB street and in return you get this nice handbag--err, email account/YouTube video/Insta feed. They use their bitches' data to run an extremely sophisticated matchmaking service, ensuring the advertiser Johns always get to (mind)fuck the bitches they think are the hottest.
What's even more concerning about OpenAI in particular is they're poised to be the biggest, baddest, most exploitative pimp in world history. Instead of merely making their hoes turn tricks to get access to software and information, they'll charge a premium to Johns to exert an influence on the bitches and groom them to believe whatever the richest John wants.
Goodbye democracy, hello pimp-ocracy. RTB pimping is already a critical national security threat. Now AI grooming is a looming self-governance catastrophe.
I think you just wrote a treatment for the next HBO Max Sunday drama
Does Google have your medical records? It doesn't have mine.
They tried to at one point with "google health". They are still somewhat trying to get that information with the fitbit acquisition.
People email about their medical issues and google for medical help using Gmail/Google Search. So yes, Google has people's medical records.
If you hear me talking to someone about needing to pick up some flu medicine after work do you have my medical records?
No, but if I hear you telling someone you have the flu and are picking up flu medicine after work, then I have a portion of your medical records. Why is it hard for people on HN to believe that normal people do not protect their medical data, and email about it or search Google for their conditions? People in the "real world" hook up smart TVs to the internet and don't realize they are being tracked. They use cars with smart features that let them be tracked. They have apps on their phone that track their sentiments, purchases, and health issues... All we are seeing here is people getting access to smart technology for their health issues in a manner that might lower their healthcare costs. If you are an American you can appreciate ANY effort in that direction.
how do you know they don't?
Since when is Google the model to emulate?
Depends on your goals. If you are starting a business and you see a company surpass the market cap of Apple, again, then you might view their business model as successful. If you are a privacy advocate then you will hate their model.
Well you said "is this any _worse_" (emphasis mine) and I could only assume you meant ethically worse. At which point the answer is kind of obvious because Google hasn't proven to be the most ethical company w.r.t. user data (and lots of other things).
since always
May your piece stay at the highest level of this comment section.
I get that impression too - but also it's HN and enthusiastic early adoption is unsurprising.
My concern, and the reason I would not use it myself, is the all-too-frequent skirting of externalities. For every person who says "I can think for myself and therefore understand if GPT is lying to me," there are ten others who will take it as gospel.
The worry I have isn't that people are misled - this happens all the time especially in alternative and contrarian circles (anti-vaxx, homeopathy, etc.) - it's the impact it has on medical professionals who are already overworked who will have to deal with people's commitment to an LLM-based diagnosis.
The patient who blindly trusts what GPT says is going to be the patient who argues tooth and nail with their doctor about GPT being an expert, because they're not power users who understand the technical underpinnings of an LLM.
Of course, my take completely ignores the disruption angle - tech and insurance working hand in hand to undercut regulation, before it eventually pulls the rug.
My uncle had an issue with his balance and slurred speech. Doctors claimed dementia and sent him home. It kept becoming worse and worse. Then one day I entered the symptoms in ChatGPT (or was it Gemini?) and asked it for the top 3 hypotheses. The first one was related to dementia. The second was something else (I forget the long name). I took all 3 to his primary care doc who had kept ignoring the problem, and asked her to try the other 2 hypotheses. She hesitantly agreed to explore the second one, and referred him to a specialist in that area. And guess what? It was the second one! They did some surgery and now he's fit as a fiddle.
I've heard a lot of such anecdotes. I'm not saying it's ill-intentioned, but the skeptic in me is cautious that this is the type of reasoning which propels the anti-vax movement.
I wish / hope the medical community will address stories like this before people lose trust in them entirely. How frequent are misdiagnoses like this? How often is "user research" helping or hurting the process of getting good health outcomes? Are there medical boards sending PSAs to help doctors improve on common misdiagnoses? What's the role of LLMs in all of this?
I think the ultimate answer is that people must take responsibility for their own health and that of their children and loved ones. That includes research and double-checking your doctors. True, the result is that a good number of people will be convinced they have something (eg. autism) that they don't. But the anecdotes are piled up into giant mountains at this point. A good number of people in my family have had at least one doctor that has been useless in dealing with a particular problem. It required trying to figure out what was wrong, then finding a doctor that could help before there were correct diagnoses and treatments.
Patients should always advocate for their own care.
This includes researching their own condition, looking into alternate diagnoses/treatments, discussing them with a physician, and potentially getting a second opinion.
Especially the second opinion. There are good and bad physicians everywhere.
But advocating also does not mean ignoring a physician's response. If they say it's unlikely to be X because of Y, consider what they're saying!
Physicians are working from a deep well of experience in treating the most frequent problems, and some will be more or less curious about alternate hypotheses.
When it comes down to it, House-style medical mysteries are mysteries because they're uncommon. For every "doc missed Lyme disease" story there are many more "it's just flu."
> Patients should always advocate for their own care. This includes researching their own condition
I believe you do not fully appreciate how long and exhausting this is, especially when sick...
Nothing he stated suggests this. Not giving a nod to how difficult it is doesn't mean people don't care. Unfortunately it is still true, we all have to advocate for our own care and pay attention to ourselves. The fact that this negatively affects the people who need the most care and attention is a harrowing part of humanity we often gloss over.
A boxing referee says "Protect yourself at all times."
They do this not because it isn't their job to protect fighters from illegal blows, but because the consequences of illegal blows are sometimes unfixable.
An encouragement for patients to co-own their own care isn't a removal of a physician's responsibility.
It's an acknowledgement that (1) physicians are human, fallible, and not omniscient, (2) most health systems have imperfect information sync'ing across multiple parties, and (3) no one is going to care more about you than you (although others might be much more informed and capable).
Self-advocacy isn't a requirement for good care -- it's due diligence and personal responsibility for a plan with serious consequences.
If a doc misses a diagnosis and a patient didn't spend any effort themselves, is that solely the doctor's fault?
PS to parent's insinuation: 20 years in the industry and 15 years of managed cancer in immediate family, but what do I know?
This applies to all areas of life, not just medicine.
We trade away our knowledge and skills for convenience. We throw money at doctors so they'll solve the issue. We throw money at plumbers to turn a valve. We throw money at farmers to grow our veggies.
Then we wonder why we need help to do basic things.
> researching their own condition
What a joke. So if I am suffering with cancer, I should learn the lay of the land, the treatments available... wow. If I need to do everything, what am I paying for?
Face-time. Their knowledge, training, and ability to write letters. Just because it's expensive, doesn't mean they are spending their evenings researching possible patient conditions and expanding their knowledge. Some might, but this isn't TV.
Anyway, what are you paid for? Guessing you're a programmer: you just sit in a chair all day and press buttons on a magical box. As your customer, why am I having to explain what product I want and what my requirements are? Why don't you have all my answers immediately? How dare you suggest a different specialism? You made a mistake?!?
But we are idiots.
There's a reason why flour has iron and salt has iodine, right? Individual responsibility simply does not scale.
We are idiots who will bear the consequences of our own idiocy. The big issue with all transactions done under significant information asymmetry is moral hazard. The person performing the service has far less incentive to ensure a good outcome past the conclusion of the transaction than the person who lives with the outcome.
Applies doubly now that many health care interactions are transactional and you won't even see the same doctor again.
On a systemic level, the likely outcome is just that people who manage their health better will survive, while people who don't will die. Evolution in action. Managing your health means paying attention when something is wrong and seeking out the right specialist to fix it, while also discarding specialists who won't help you fix it.
> We are idiots who will bear the consequences of our own idiocy
This is just factually not true. Healthy people subsidize the unhealthy (even those made unhealthy by their own idiocy) to a truly absurd degree.
Well, the biggest consequences aren't financial, they're losing your quality of life, or your life itself.
But the effects aren't just financial; look in an ER. People who for one reason or another haven't been able to take care of themselves end up in the emergency room for things that aren't an emergency, and that means your standard of care is going to take a hit.
Ah yeah, good point.
Sure?
So they do end up bearing most of the brunt of their own decisions. But you're also right, it's not entirely on them.
Neither does collective responsibility, for the same reason, particularly in any sort of representative government. Or did you expect people to pause being idiots as soon as they stepped into the ballot box to choose the people they wanted to have collective responsibility?
>But the anecdotes are piled up into giant mountains at this point
This is disorganized thinking. Anecdotes about what? Does my uncle having an argument with his doctor over needing more painkillers, combine with an anecdote about my sister disagreeing with a midwife over how big her baby would be, combined with my friend outliving their stage 4 cancer prognosis all add up to "therefore I'm going to disregard nutrition recommendations"? Even if they were all right and the doctors were all wrong, they still wouldn't aggregate in a particular direction the way that a study on processed foods does.
And frankly it overlooks psychological and sociological dynamics that drive this kind of anecdotal reporting, which I think are more about tribal group emotional support in response to information complexity.
In fact, reasoning from separate instances that are importantly factually different is a signature line of reasoning used by alien abduction conspiracy theorists. They treat the cultural phenomenon of "millions" of people reporting UFOs or abduction experiences over decades as "proof" of aliens writ large, when the truth is they are helplessly incompetent interpreters of social data.
> Does my uncle having an argument with his doctor over needing more painkillers, combine with an anecdote about my sister disagreeing with a midwife over how big her baby would be, combined with my friend outliving their stage 4 cancer prognosis all add up to "therefore I'm going to disregard nutrition recommendations"?
Not sure about your sister and uncle, but from my observations the anecdotes combine into "doctor does not have time and/or doesn't care". People rightfully give exactly zero fucks about Bayes' theorem, national health policy, insurance companies, social dynamics, or whatever when the doctor prescribes Alvedon after 5 minutes of listening to the indistinct story of a patient with a complicated condition which would likely be solved with additional tests and dedicated time. ChatGPT is at least not in a hurry.
You can tell me that I'm as crazy as people who believe they've been abducted, but I'm still going to be my own health advocate. :)
As of course you should be. Doctors, who are generally pretty caring and empathetic humans, try to invoke the mantra "You can't care about your patient's health more than they do" due to how deeply frustrating it is to try to treat someone who's not invested in the outcome.
It's when "being your own health advocate" turns into "being your own doctor" that the system starts to break down.
They’re not saying you’re crazy they’re saying you may be helplessly incompetent when it comes to interpreting social data. You probably aren’t a good reader either if crazy was your takeaway.
> I wish / hope the medical community will address stories like this before people lose trust in them entirely.
Too late for me. I have a similar story. ChatGPT helped me diagnose an issue which I had been suffering with my whole life. I'm a new person now. GPs don't have the time to spend hours investigating symptoms for patients. ChatGPT can provide accurate diagnoses in seconds. These tools should be in wide use today by GPs. Since they refuse, patients will take matters into their own hands.
FYI, there are now studies showing ChatGPT outperforms doctors in diagnosis. (https://www.uvahealth.com/news/does-ai-improve-doctors-diagn...) I can believe it.
GPs don't have time to do the investigation, but they also have biases.
My own story is one of bias. I spent much of the last 3 years with sinus infections (for the parts I wasn't on antibiotics). I went to a couple of ENTs; one observed an allergic reaction in my sinuses and did a small allergy panel, but that came back negative. He ultimately wanted to put me on a CPAP and nebulizer treatments. I fed all the data I got into ChatGPT deep research and it came back with an NIH study that said 25% of people in a study had localized allergic reactions that would show up in one place but not show up elsewhere on the body in an allergy test. I asked my ENT about it and he said "That's not how allergies work."
I decided to just try second generation allergy tablets to see if they helped, since that was an easy experiment. It's been over 6 months since I've had a sinus infection, where before this I couldn't go 6 weeks after antibiotics without a reoccurrence.
There are over a million licensed physicians in the US. If we assume that each one interacts with five patients per weekday, then in the six months since you had this experience, that would conservatively be six-hundred-million patient interactions in that time.
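(Spelled out, since the assumptions do all the work here:)

    physicians = 1_000_000            # "over a million licensed physicians"
    per_weekday = 5                   # assumed interactions per physician
    weekdays = 26 * 5                 # roughly six months of weekdays
    print(physicians * per_weekday * weekdays)   # 650,000,000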
Now, obviously none of this math would actually hold up to scrutiny, and there's a bevy of reasons the quality of those interactions would not be random. But just as a sense of scale, and bearing in mind that a lot of people will easily remember a single egregious interaction for the rest of their life and (very reasonably!) be eager to share their experience with others, it would require a frankly statistically impossible error rate for threads like these not to fill up with the most heinous, unpleasant, ignorant, and incompetent anecdotes anyone could ever imagine.
And this is just looking at the sheer scale of medical care, completely ignoring the long hours and stressful situations many doctors work in, patients' imperfect memories and one-sided recollections (which doctors can never correct), and the fundamental truth that medicine is always, always a mixture of probabilistic and intuitive judgement calls that can easily, routinely be wrong, because it's almost never possible to know for sure what's happening in a given body, let alone what will happen.
That E.N.T. wasn't up to date on the latest research on allergies. They also weren't an allergy specialist. They also were the one with the knowledge, skills, and insight to consider and test for allergies in the first place.
Imagine if we held literally any other field to the standard we hold doctors to. It's, on the one hand, fair, because they do something so important and dangerous and get compensated comparatively well. But on the other hand, they're humans with incomplete, flawed information, channeling an absurdly broad and deep well of still-insufficient education that they're responsible for keeping up to date, while looking at a unique system in unique circumstances and trying to figure out what, if anything, is going wrong. It's frankly impressive that they do as well as they do.
If you fully accept everything BobaFloutist says, what do you do differently?
Nothing. You just... feel more sympathetic to doctors and less confident that your own experience meant anything.
Notice what's absent: any engagement with whether the AI-assisted approach actually worked, whether there's a systemic issue with ENTs not being current on allergy research, whether patients should try OTC interventions as cheap experiments, whether the 25% localized-reaction finding is real and undertaught.
The actual medical question and its resolution get zero attention.
Also though...
You are sort of just telling people "sometimes stuff is going to not work out, oh also there's this thing that can help, and you probably shouldn't use it?"
What is the action you would like people to take after reading your comment? Not use ChatGPT to attempt to solve things they have had issues solving with their human doctors?
This is a doctor feeding the LLM a case scenario, which means the hard part of identifying relevant signal from the extremely noisy and highly subjective human patient is already done.
For every one "ChatGPT accurately diagnosed my weird disease" anecdote, how many cases of "ChatGPT hallucinated obvious bullshit we ignored" are there? 100? 10,000? We'll never know, because nobody goes online to write about the failure cases.
> nobody goes online to write about the failure cases.
Why wouldn't they? This would seem to be engagement bait for a certain type of Anti-AI person? Why would you expect this to be the case? "My dad died because he used that dumb machine" -- surely these will be everywhere right?
Let's make our beliefs pay rent in anticipated experiences!
Failure cases aren't just "patient died." They also include all the times where ChatGPT's "advice" aligned with their doctor's advice, and when ChatGPT's advice was just totally wrong and the patient correctly ignored it. Nobody knows how numerous these cases are.
So your failure cases are now "it agreed with the doctor" and "the patient correctly identified bad advice."
Where's the failure?
These are failures to provide useful advice over and above what could be gotten from a professional. In the sense that ChatGPT is providing net-neutral (maybe slightly positive since it confirms the doctor's diagnosis) or net-negative benefits (in the case that it's just wasting the user's time with garbage).
> The study, from UVA Health’s Andrew S. Parsons, MD, MPH and colleagues, enlisted 50 physicians in family medicine, internal medicine and emergency medicine to put Chat GPT Plus to the test. Half were randomly assigned to use Chat GPT Plus to diagnose complex cases, while the other half relied on conventional methods such as medical reference sites
This is not ChatGPT outperforming doctors. It is doctors using ChatGPT.
The problem doctors have is that 99/100 times ABC is caused by xyz, so they prescribe 123 and the problem goes away.
Over time, as humans, the doctors just turn into ABC -> 123 machines.
If you keep hearing anecdotes, at what point is it statistically important? IBM 15 years ago was selling a story about a search engine they created specifically for the medical field (they had it on Jeopardy) about a patient whose doctors spent 10 years before they figured out the poor patient's issue. They plugged the original doctors' notes into it, and the 4th result was the issue that had taken a decade to figure out. Memorizing dozens of medical books and being able to recall and correlate all that information in a human brain is a rare skill. The medical system works hard to ensure everyone going through can memorize, but clearly search engines/LLMs can be a massive help here.
> If you keep hearing anecdotes at what point is it statistically important ?
Fair question, but one has to keep in mind ALL the other situations we do NOT hear about, namely all the failed attempts that did take time from professionals. It doesn't mean the successful attempts are not justified, solely that a LOT of positive anecdotes might give the wrong impression if they are not weighed against the radically more numerous negative ones that are simply not shared. It's hard to draw conclusions either way without both.
I hear about people winning the lottery all the time. There were two $100m+ winners just this week. The anecdotes just keep piling up! That doesn't mean the lottery is a valid investment tool. People just do not understand how statistically insignificant anecdotes are in a sufficiently large dataset. Just for the US population, a 1 in a million chance of something happening to a person should happen enough to be reported on a new person every weekday of the year.
You guys are getting downvoted but you're 100% right. You never hear the stories about someone typing symptoms into ChatGPT and getting back wrong, bullshit answers--or the exact answer their doctor would have told them. Because those stories are boring. You only hear about the miraculous cases where ChatGPT accurately diagnosed an unusual condition. What's the ratio of miracle:bullshit? 1:100? 1:10,000?
> You guys are getting downvoted but you're 100% right.
Classic HN. /s
> the skeptic in me is cautious that this is the type of reasoning which propels the anti-vax movement
I think there's a difference between questioning your doctor, and questioning advice given by almost every doctor. There are plenty of bad doctors out there, or maybe just doctors who are bad fits for their patients. They don't always listen or pay close attention to your history. And in spite of their education they don't always choose the correct diagnosis.
I also think there's an ever-increasing difference between AI health research and old-school WebMD research.
I also don't know. Additional point to consider: vast majority of doctors have no clue about Bayes theorem.
well, to the credit of Bayes, dementia is likely a safe choice (depending on age/etc.) but dementia is largely a diagnosis of exclusion and most doctors, besides being unfamiliar with Bayes, are also just plain lazy and/or dumb and shouldn't immediately jump to the most likely explanation when it's one with the worst prognosis and fewest treatments...
I work in biomed. Every textbook on epidemiology or medical statistics that I've picked up has had a section on Bayes, so I'm not inclined to believe this.
Here is research about doctors interpreting test results. It seems to favor GP's view that many doctors struggle to weigh test specificity and sensitivity vs disease base rate.
https://bmjopen.bmj.com/content/bmjopen/5/7/e008155.full.pdf
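For anyone who hasn't seen why base rates bite, the classic worked example (illustrative numbers, not taken from the linked paper):

    # A test with 90% sensitivity and 91% specificity for a condition with
    # 1% prevalence: most positive results are still false positives.
    prevalence, sensitivity, specificity = 0.01, 0.90, 0.91

    true_pos  = prevalence * sensitivity                 # 0.009
    false_pos = (1 - prevalence) * (1 - specificity)     # 0.0891
    ppv = true_pos / (true_pos + false_pos)
    print(f"P(disease | positive test) = {ppv:.0%}")     # ~9%, not 90%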
I'm on some anti-rejection meds post-transplant and ChatGPT'd some of my symptoms, and it said they were most likely caused by my meds. Two different nephrologists told me that the meds I'm on didn't cause those symptoms, before looking it up themselves and confirming they do. I think LLMs have a place in this as far as quickly coming up with hypotheses that can be looked into and confirmed/disproved. If I hadn't had ChatGPT, I wouldn't have brought it up, and my team would have just blamed lifestyle rather than meds.
I can see why, but this is doc + patient in collaboration, and driven by using science in the form of applying the LLM as a database of symptoms and treatments.
Anti-vax, on the other hand, is driven by ignorance and a failure to trust science, in the form of trusting neither doctors nor new kinds of science. Plus, anti-vax works like flat earth: a signaling mechanism of poor epistemic judgment.
Linking this anecdote to anti-vaxxing really seems a stretch, and I would like to see the reasoning behind that. My impression is that anti-vaxxers have more issues with vaccines themselves than with doctors who recommend them
I think that completely misreads a comment that was already painstakingly clear, they're specifically talking about the phenomenon of reasoning by anecdote. It wasn't a one-to-one equivalence between LLM driven medicine consultations and the full range of dynamics found in the anti-vax movement. Remember to engage in charitable interpretation.
“Asking inquisitive questions and thinking for themselves? Must be an anti-vaxxer!”
They are closely related. The authority of the medical establishment is more and more questioned. And whenever it is correctly questioned, they lose a bit of their authority. It is only their authority that gets people vaccinated.
"My impression is that anti-vaxxers have more issues" - I think you could have left it at that!
The fact is that many doctors do suck. Nearly all of my family members have terrible doctor stories; one even won a huge malpractice lawsuit. We can't hide the real problems because we're afraid of anti-vaxxers.
Generally the medical system is in a bad place. Doctors are often frustrated with patients who demand more attention to their problems. You can even see it for yourself on doctor subreddits when things like fibromyalgia are brought up. They ridicule these patients for trying to figure out why their quality of life has dropped like a rock.
I think, similar to tech, doctors are attracted to the money, not the work. The AMA (I think; possibly another org) artificially restricts the number of slots for new doctors, restricting doctor supply, while private equity squeezes hospitals and buys up private practices. The failure doctors sit on the side of insurance trying to prevent care from being performed, and it's up to the doctor who has the time/energy to fight insurance and the hospital to figure out what's wrong.
The AMA has no authority over the number of slots for new doctors. The primary bottleneck is the number of residency slots. Teaching hospitals are free to add more slots but generally refuse to do so due to financial constraints without more funding from Medicare. At one point the AMA lobbied Congress to restrict that funding but they reversed that position some years back. If you want more doctors then ask your members of Congress to boost residency funding.
https://savegme.org/
You must not be involved in the medical field, or you would realize how bad it is, especially when it comes to diagnosis.
Yeah, especially because he is not saying what the diagnosis was. If you want to say doctors were unscientific, at least be scientific and give a proper medical account of the symptoms and diagnosis.
The fact is that doctors are human, so they have cognitive biases and make mistakes and sometimes miss things, just like all other humans.
Humans are extraordinarily lazy sometimes too. A good LLM does not possess that flaw.
A doctor can also have an in-the-moment context that negatively impacts them: depression, exhaustion, or any number of life events, all of which can drastically impact their performance. Doctors get depressed like everybody else. They can care less because of something affecting them. These are not problems a good LLM has.
Did you get the flu shot this year tho? Be honest.
> cautious that this is the type of reasoning which propels the anti-vax movement
I hear you but there are two fundamentally different things:
1. Distrust of / disbelief in science
2. Doctors not incentivized to spend more than a few minutes on any given patient
There are many many anecdotes related to the second, many here in this thread. I have my own as well.
I can talk to ChatGPT/whatever at any time, for any amount of time, and present in *EXHAUSTIVE* detail every single datapoint I have about my illness/problem/whatever.
If I was a billionaire I assume I could pay a super-smart, highly-experienced human doctor to accommodate the same.
But short of that, we have GPs who have no incentive to spend any time on you. That doesn't mean they're bad people. I'm sure the vast majority have absolutely the best of intentions. But it's simply infeasible, economically or otherwise, for them to give you the time necessary to actually solve your problem.
I don't know what the solution to this is. I don't know nearly enough about the insurance and health industries to imagine what kind of structure could address this. But I am guessing that this might be what is meant by "outcome-based medicine," i.e., your job isn't done until the patient actually gets the desired outcome.
Right now my GP has every incentive to say "meh" and send me home after a 3-minute visit. As a result I more or less stopped bothering to make doctor appointments for certain things.
> ...this is the type of reasoning which propels the anti-vax movement.
So what? Am I supposed to clutch pearls and turn off my brain at the stopword now?
> How frequent are mis-diagnosis like this?
The anecdote in question is not about a mis-diagnosis, it's about a delayed diagnosis. And yeah, the inquiry sent a doctor down three paths, one of which led to a diagnosis. So let's be clear: no, the doctor didn't get it completely on their own, and ChatGPT was, at best, 33% correct.
The biggest problem in medicine right now (that's creating a lot of the issues people have with it I'd claim) is twofold:
- Engaging with it is expensive, which raises the expectations of quality of service substantially on the part of the patients and their families
- Virtually every doctor I've ever talked to complains about the same things: insufficient time to give proper care and attention to patients, and the overbearingness of insurance companies. And these two lead into each other: so much of your doc's time is spent documenting your case. Basically every hour of patient work on their part requires a second hour of charting to document it. Imagine having to write documentation for an hour for every hour of coding you did, I bet you'd be behind a lot too. Add to it how overworked and stretched every medical profession is from nursing to doctors themselves, and you have a recipe for a really shitty experience on the part of the patients, a lot of whom, like doctors, spend an inordinate amount of time fighting with insurance companies.
> How often is "user research" helping or hurting the process of getting good health outcomes?
Depends on the quality of the research. In the case of this anecdote, I would say middling. I would also say, though, that if the anecdotes of the numerous medical professionals I've heard speak on the topic are to be believed, this one is an outlier in regard to it actually being good. The majority of "patient research" that shows up is new parents upset about a vaccine schedule they don't understand, and half-baked conspiracy theories from Facebook. Often both at once.
That said, any professional, doctors included, can benefit from more information from whomever they're serving. I have a great relationship with my mechanic because by the time I take my car to him, I've already ruled out a bunch of obvious stuff, and I arrive with detailed notes on what I've done, what I've tried, what I've replaced, and most importantly: I'm honest about it. I point to exactly where my knowledge of the vehicle ends, and hope he can fill in the blanks, or at least know where to start poking. The problem is that, the vast majority of the time, people don't approach doctors as "professionals who know more than me who can help me solve a problem"; they approach them as ideological enemies and/or gatekeepers of whatever they think they need, which isn't helpful and creates conflict.
> Are there medical boards that are sending PSAs to help doctors improve common mis-diagnosis?
Doctors have shitloads of journals and reading materials that are good for them to go through, which also factors into their overworked-ness, but nevertheless: yes.
> Whats the role of LLMs in all of this?
Honestly I see a lot of applications of them in the insurance side of things, unless we wanted to do something cool and like, get a decent healthcare system going.
I'm married to a provider. It is absolutely insane what she has to do for insurance. She's not a doctor, but she oversees extensive therapy for 5-10 kids at a time. Insurance companies completely dictate what she can and can't do, and frequently she is unable to do more in-depth, best-practice analysis because insurance won't pay for it. So her industry ends up doing a lot of therapy based on educated guesswork.

Every few months, she has to create a 100+ page report for insurance. And on top of it, insurance denies the first submissions all the time, which then causes her to burn a bunch of time on calls with the company appealing the peer review. And the "peer review" is almost always done by people who have no background in her field. It's basically akin to a cardiologist reviewing a family therapist's notes and deciding what is or isn't necessary. Except that my wife's job can be the difference between a child ever talking or not, or between a child being institutionalized or not when they become an adult.

People who think private insurance companies are more efficient than government-run healthcare are nuts. Private insurance companies are way worse and actively degrade the quality of care.
> Insurance companies completely dictate what she can and can't do, and frequently she is unable to do more in-depth, best-practice analysis because insurance won't pay for it.
The distinction between "can't do" and "can't get paid for" seems to get lost a lot with medical providers. I'm not saying this is necessarily what's happening with your wife, but I've had it happen to me where someone says, "I can't do this test. Your insurance won't pay for it," and then I ask what it costs and it's a few hundred or a couple thousand dollars and I say, "That's OK. I'll just pay for the test myself," and something short-circuits and they still can't understand that they can do it.
The most egregious example was a prescription I needed that my insurance wouldn't approve. It was $49 without insurance. But the pharmacy wouldn't sell it to me even though my doctor had prescribed it because they couldn't figure out how to take my money directly when I did have insurance.
I get that when insurance doesn't cover something, most patients won't opt to pay for it anyway, but it feels like we need more reminders on both the patient and the provider side that this doesn't mean it can't be done.
> The distinction between "can't do" and "can't get paid for" seems to get lost a lot with medical providers. I'm not saying this is necessarily what's happening with your wife, but I've had it happen to me where someone says, "I can't do this test. Your insurance won't pay for it," and then I ask what it costs and it's a few hundred or a couple thousand dollars and I say, "That's OK. I'll just pay for the test myself," and something short-circuits and they still can't understand that they can do it.
Tell me you've never lived in poverty without telling me.
An unexpected expense of several hundred to a couple thousand dollars, for most of my lived life both as a child and a young adult, would've ruined me. If it was crucial, it would've been done, and I would've been hounded by medical billing and/or gone a few weeks without something else I need.
This is inhumanity, plain as.
I generally agree (and sympathize with your wife), but let's not present an overly rosy view of government run healthcare or single-payer systems. In many countries with such systems, extensive therapy simply isn't available at all because the government refuses to pay for it. Every healthcare system has limited resources and care is always going to be rationed, the only question is how we do the rationing.
Every healthcare system has problems, yes. However the spectre of medical debt and bankruptcy is a uniquely American one, so, IMHO, even if we moved to single-payer healthcare and every other problem stayed the same, but we no longer shoved people into the capitalist fuck-barrel for things completely outside their control, I think that's an unmitigated, massive improvement.
Well now you're talking about a different problem and moving the goalposts. It would be impossible for every other problem to stay the same under a single-payer system. That would solve some existing problems and create other new problems. In particular the need to hold down government budgets would necessarily force increased care rationing and longer queues. Whether that would be a net positive or negative is a complex question with no clear answers.
The statistics you see about bankruptcy due to medical debt are highly misleading. While it is a problem, very few consumers are directly forced into bankruptcy by medical expenses. What tends to happen is that serious medical problems leave them unable to work, and then, with no income, all of their debts pile up. What we really need there is a better disability welfare system to keep consumers afloat.
> Well now you're talking about a different problem and moving the goalposts.
I am absolutely not. I am reacting to what's been replied to what I've said. In common vernacular, this is called a "conversation."
To recap: the person who replied to me left a long comment about the various struggles and limitations of healthcare when subjected to the whims of insurance companies. You then replied:
> I generally agree (and sympathize with your wife), but let's not present an overly rosy view of government run healthcare or single-payer systems. In many countries with such systems, extensive therapy simply isn't available at all because the government refuses to pay for it. Every healthcare system has limited resources and care is always going to be rationed, the only question is how we do the rationing.
Which, at least how I read it, attempts to lay the blame for the lack of availability of extensive therapies at the feet of a government's unwillingness to pay, citing that every system has limited resources and care is always being rationed.
I countered, implying that, while that may or may not be true, that lack of availability is effectively the status quo for the majority of Americans under our much more expensive and highly exploitative insurance-and-pay-based healthcare system. Even if those issues around lack of availability persisted through a transition to a single-payer healthcare system, it would at least relieve us of the uniquely American scourge of people being sent to the poorhouse, sometimes poor-lack-of-house, for suffering illnesses or injuries they are in no way responsible for, which in my mind is still a huge improvement.
> The statistics you see about bankruptcy due to medical debt are highly misleading. While it is a problem, very few consumers are directly forced into bankruptcy by medical expenses. What tends to happen is that serious medical problems leave them unable to work, and then, with no income, all of their debts pile up.
I mean, we can expand this if you like into a larger conversation about how insurance itself is tied to employment, and how everyone is kept broke on purpose to incentivize them to take on debt to survive, placing them on a debt treadmill their entire lives, which has been demonstrably shown to reduce quality and length of life, as well as introducing the notion that missing any amount of work, for no matter how valid a reason, has the potential to ruin your life. That is probably a highly sub-optimal and inhumane way to structure a society.
> What we really need there is a better disability welfare system to keep consumers afloat.
On that at least, we can agree.
I get where you’re coming from. I would argue the mistakes doctors make and the number of times they are wrong literally dwarf the number of anti-vaxxers in existence.
Also, the anti-vax movement isn’t completely wrong. It’s now confirmed (officially) that the covid-19 vaccine isn’t completely safe, and there are risks in taking it that don’t exist in, say, something like the flu shot. The risk is small but very real and quite deadly. Source: https://med.stanford.edu/news/all-news/2025/12/myocarditis-v... This was something many many doctors originally claimed was completely safe.
The role of LLMs is they take the human bias out of the picture. They are trained on formal medical literature and actual online anecdotal accounts of patients who will take a shit on doctors if need be (the type of criticism a doctor rarely gets in person). The generalization that comes from these two disparate sets of data is actually often superior to a doctor's.
Key word is “often”. Less often (but still often in general), the generalization can be a hallucination.
Your post irked me because I almost got the sense that there’s a sort of prestige, admiration and respect given to doctors that in my opinion is unearned. Doctors in my opinion are like car mechanics and that’s the level of treatment they deserve. They aren’t universally good, a lot of them are shitty, a lot are manipulative, and there are a lot of great car mechanics I respect as well. That’s the fair outlook they deserve… but instead I see them get levels of respect that match Mother Teresa, as if they devoted their careers to saving lives and not money.
No one and I mean no one should trust the medical establishment or any doctor by default. They are like car mechanics and should be judged on a case by case basis.
You know, for the parent post, how much money do you think those fucking doctors got to make a wrong diagnosis of dementia? Well over 700 for less than an hour of their time. And they don’t even have the kindness to offer the patient a refund for incompetence on their part.
How much did ChatGPT charge?
> This was something many many doctors originally claimed was completely safe.
I never heard any doctors claim any of the covid vaccines were completely safe. Do you mind if I ask which doctors, exactly? Not institutions, not vibes, not headlines. Individual doctors. Medicine is not a hive mind, and collapsing disagreement, uncertainty, and bad messaging into “many doctors” is doing rhetorical work that the evidence has to earn.
> The role of LLMs is they take the human bias out of the picture.
That is simply false. LLMs are trained on human writing, human incentives, and human errors. They can weaken certain authority and social pressures, which is valuable, but they do not escape bias. They average it. Sometimes that helps. Sometimes it produces very confident nonsense.
> Your post irked me because I almost got the sense that there’s a sort of prestige, admiration and respect given to doctors that in my opinion is unearned. Doctors in my opinion are like car mechanics and that’s the level of treatment they deserve.
> No one and I mean no one should trust the medical establishment or any doctor by default. They are like car mechanics and should be judged on a case by case basis.
You are entitled to that opinion, but I wanted to kiss the surgeon who removed my daughter’s gangrenous appendix. That reaction was not to their supposed prestige, it was recognition that someone applied years of hard won skill correctly at a moment where failure had permanent consequences.
Doctors make mistakes. Some are incompetent. Some are cynical. None of that justifies treating the entire profession as functionally equivalent to a trade whose failures usually cost money rather than lives.
And if doctors are car mechanics, then patients are machines. That framing strips the humanity from all of us. That is nihilism.
No one should trust doctors by default. Agreed. But no one should distrust them by default either. Judgment works when it is applied case by case, not when it is replaced with blanket contempt.
> I never heard any doctors claim any of the covid vaccines were completely safe. Do you mind if I ask which doctors, exactly? Not institutions, not vibes, not headlines. Individual doctors. Medicine is not a hive mind, and collapsing disagreement, uncertainty, and bad messaging into “many doctors” is doing rhetorical work that the evidence has to earn.
There’s no data here. Many aspects of life are not covered by science because trials are expensive and we have to go with vibes.
And even on just vibes we can often get accurate judgements. Do you need clinical trials to confirm there’s a ground when you leap off your bed? No. Only vibes, unfortunately.
If you ask people (who are not doctors) to remember this time they will likely tell you this is what they remember. I also do have tons of anecdotal accounts of doctors saying the Covid 19 vaccine is safe and you can find many yourself by searching. Here’s one: https://fb.watch/Evzwfkc6Mp/?mibextid=wwXIfr
The pediatrician failed to communicate the risks of the vaccine above and made the claim it was safe.
At the time, to my knowledge, the actual risks of the vaccine were not fully known and the safety was not fully validated. The overarching intuition was that the risk of detrimental effects from the vaccine was less than the risk and consequence of dying from Covid. That is still the underlying logic (and best official practice) today, even with the knowledge about the heart risk covid vaccines pose.
This doctor above did not communicate this risk at all. And this was just from a random google search. Anecdotal but the fact that I found one just from a casual search is telling. These people are not miracle workers.
> That is simply false. LLMs are trained on human writing, human incentives, and human errors. They can weaken certain authority and social pressures, which is valuable, but they do not escape bias. They average it. Sometimes that helps. Sometimes it produces very confident nonsense.
No, it’s not false. Most of the writing on human medical stuff is scientific in nature: formalized with experimental trials, which is the strongest form of truth humanity has, both practically and theoretically. This “medical science” is even more accurate than other black-box sciences like psychology, as clinical trials have ultra-high thresholds and even test for causality (in contrast to much of science, which only covers correlation and assumes causality through probabilistic reasoning).
This, combined with anecdotal evidence that the LLM digests in aggregate, is a formidable force. We as humans cannot quantify all anecdotal evidence. For example, I heard anecdotal evidence of heart issues with mRNA vaccines BEFORE the science confirmed it, and LLMs were able to aggregate this sentiment through sheer volumetric training on all complaints about the vaccine online and confirm the same thing BEFORE that Stanford confirmation was available.
> You are entitled to that opinion, but I wanted to kiss the surgeon who removed my daughter’s gangrenous appendix. That reaction was not to their supposed prestige, it was recognition that someone applied years of hard won skill correctly at a moment where failure had permanent consequences.
Sure, I applaud that. True hero work by that surgeon. I’m talking about the profession in aggregate. In aggregate, in the US, 800,000 patients die or get permanently injured from a misdiagnosis every year. Physicians fuck up, and not occasionally. It’s often and all the fucking time. You were safer getting on the 737 MAX the year before they diagnosed the MCAS errors than you are NOT getting a misdiagnosis and dying from a doctor. Those engineers, despite widespread criticism, did more for your life and safety than doctors in general. That is not only a miracle of engineering, but it also speaks volumes about the medical profession itself, which does NOT get equivalent criticism for mistakes. That 800,000 statistic is swept under the rug like car accidents.
I am entitled to my own opinion just as you are to yours, but I’m making a bigger claim here. My opinion is not just an opinion. It’s a ground-truth general fact backed up by numbers.
> And if doctors are car mechanics, then patients are machines. That framing strips the humanity from all of us. That is nihilism.
There is nothing wrong with car mechanics. It’s an occupation and it’s needed. And those cars if they fail they can cause accidents that involve our very lives.
But car mechanics are fallible and that fallibility is encoded into the respect they get. Of course there are individual mechanics who are great and on a case by case basis we pay those mechanics more respect.
Doctors need to be treated the same way. It’s not nihilism. It’s a quantitative analysis grounded in reality. The only piece of evidence you provided me in your counter is your daughter’s life being saved. That evidence warrants respect for the single doctor who saved your daughter’s life, not for the profession in general. The numbers agree with me.
And the treatment of, say, the corporation responsible for the MCAS failures versus the profession responsible for medical misdiagnoses that killed people is disproportionate. Your own sentiment and respect for doctors in general is one piece of evidence for this.
> If you ask people (who are not doctors) to remember this time they will likely tell you this is what they remember. I also do have tons of anecdotal accounts of doctors saying the Covid 19 vaccine is safe and you can find many yourself by searching. Here’s one: https://fb.watch/Evzwfkc6Mp/?mibextid=wwXIfr
> No, it’s not false. Most of the writing on human medical stuff is scientific in nature: formalized with experimental trials, which is the strongest form of truth humanity has, both practically and theoretically. This “medical science” is even more accurate than other black-box sciences like psychology, as clinical trials have ultra-high thresholds and even test for causality (in contrast to much of science, which only covers correlation and assumes causality through probabilistic reasoning).
Sorry, but these kinds of remarks wreck your credibility and make it impossible for me to take you seriously.
If you disagree with me then it is better to say you disagree and state your reasoning why. If the disagreement is too foundational, then it is better to state it as such and exit.
Saying something like my "credibility is wrecked" and impossible to take me "seriously" crosses a line into deliberate attack and insult. It's like calling me an idiot but staying technically within the HN rules. You didn't need to go there and breaking those rules in spirit is just as bad imo.
Yeah I agree I think the conversation is over. I suggest we don't talk to each other again as I don't really appreciate how you shut down the conversation with deliberate and targeted attacks.
With the pandemic, I've lost my faith in the medical community. They recommended a lot of unproven medicines. They were more driven by ideology than science. I trust an LLM more than the average doctor.
That's a tip I recommend people try when they are using LLMs to solve stuff. Instead of asking "how to...", ask "what alternatives are there to...". A top-k answer is way better, and you get to engage more with whatever you are trying to learn/solve.
Same if you are coding: ask "Is it possible" rather than "How do I", as the latter will more quickly result in hallucinations when you are asking for something that isn't possible.
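To make that concrete, here's a minimal sketch of the two phrasings side by side using the openai Python client; the model name and the example question are placeholder assumptions for illustration, not a recommendation:

  # Minimal sketch of the phrasing tip above; assumes the openai v1.x client
  # and OPENAI_API_KEY set in the environment. Model and prompts are placeholders.
  from openai import OpenAI

  client = OpenAI()

  def ask(prompt: str) -> str:
      resp = client.chat.completions.create(
          model="gpt-4o",
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content

  # Leading phrasing: invites the model to invent a "how" even if none exists.
  print(ask("How do I mutate a frozen dataclass field in Python?"))

  # Open phrasing: leaves room for "you can't", plus the actual alternatives.
  print(ask("Is it possible to mutate a frozen dataclass field in Python? "
            "What alternatives are there?"))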
"Is it possible" is the conservative choice if you don't want to get an explanation of something that in fact, cannot be done.
But it seems "is it possible" also leads it into answering "no, it can't" probably modelling a bunch of naysayers.
Sometimes, if you coax it a little bit, it will tell you how to do a thing which is quite esoteric.
General doctors aren't trained for problem solving, they're trained for memorization. The doctors that are good at problem solving aren't general doctors.
That's a sweeping generalization unsupported by facts.
In reality you'll find the vast majority of GPs are highly intelligent and quite good at problem solving.
In fact, I'd go so far as to say their training is so intensive and expansive that laypeople who make such comments are profoundly lacking in awareness on the topic.
Physicians are still human, so like anything there's of course bad ones, specialists included. There's also healthcare systems with various degrees of dysfunction and incentives that don't necessarily align with the patient.
None of that means GPs are somehow less competent at solving problems; not only is it an insult but it's ridiculous on the face of it.
Even if they are good at problem solving, a series of 10-minute appointments spaced out in 2-3 month intervals while they deal with a case load of hundreds of other patients will not let them do it. That's the environment that most GPs work under in the modern U.S. health care system.
Pay for concierge medicine and a private physician and you get great health care. That's not what ordinary health insurance pays for.
You followed up a sweeping generalization with a sweeping generalization and a touch of bias.
I imagine the issue with problem solving lies more in the system doctors are stuck in and the complete lack of time they have to spend on patients.
>You followed up a sweeping generalization with a sweeping generalization and a touch of bias.
As opposed to what, proving that GPs are highly trained, not inherently inferior to other types of physicians, and regularly conduct complex problem solving?
Heck, while I'm at it I may as well attempt to prove the sky is blue.
>I imagine the issue with problem solving lies more in the system doctors are stuck in and the complete lack of time they have to spend on patients.
Bingo.
Maybe they are, but in most of my interactions with GPs in recent years, and several with specialists, for anything much beyond the very basics I've had to educate them, and it didn't require much knowledge to exceed theirs on specific conditions.
In one case, a specialist made arguments that were trivially logically fallacious and went directly against the evidence from treatment outcomes.
In other cases, sheer stupidity of pattern matching with rational thinking seemingly totally turned off. E.g. hearing I'd had a sinus infection for a long time and insisting that this meant it was chronic, and that chronic meant the solution was steroids rather than antibiotics, despite a previous course of steroids having done nothing, and despite the fact that a course of antibiotics had removed most of the symptoms, both indicating the opposite. In the end, after bypassing my GP at the time and explaining and begging an advanced nurse practitioner, I got two more courses of antibiotics and the infection finally fully went.
I'm sure all of them could have done better, and that a lot of it is down to dysfunction, such as too little time allotted to actually look at things properly, but some of the interactions (the logical fallacy in particular) have also clearly been down to sheer ignorance.
I also expect they'd eventually get there, but doing your own reading and guiding things in the right direction can often short-circuit a lot of bullshit. Standard guidance might even deliver good outcomes cost-effectively at a population level (e.g. I'm sure the guidance on chronic sinus issues is right the vast majority of the time - most bacterial sinus infections either clear by themselves or are stopped early enough not to "pattern match" as chronic), but it might cause you lots of misery in the meantime...
Your personal experience is anecdotal and thus not as reliable as statistical facts. On its own it is not a good metric.
However, your anecdotal experience is not only in line with my own experience, it is actually in line with the facts as well.
When the person you're responding to said that what you wrote wasn't backed up by facts, I'm going to tell you straight up: that statement was utter bullshit. Everything you're saying here is true, generally true, and something many many patients experience.
>When the person you're responding to said that what you wrote wasn't backed up by facts, I'm going to tell you straight up: that statement was utter bullshit.
The person you just replied to here isn't the same person I replied to.
> In reality you'll find the vast majority of GPs are highly intelligent and quite good at problem solving.
Is this statement supported by facts? If anything, this statement is just your internal sentiment. If you claim his statement isn’t supported by facts, the proper thing to do is offer facts to counter it. Don’t claim his statement isn’t supported by facts and then make a counter-claim without facts yourself.
https://www.statnews.com/2023/07/21/misdiagnoses-cost-the-u-...
Read that fact. 800,000 deaths from misdiagnosis a year is pretty pathetic. And this is just deaths. I can guarantee you the number of unreported mistakes that don’t result in deaths dwarfs that number.
Boeing, the airplane manufacturer responsible for the crashing 737 MAX MCAS units, has BETTER outcomes than this. In the year that those planes crashed you had a 135x better survival rate getting on a 737 MAX than getting an important diagnosis from a doctor and not dying from a misdiagnosis. Yet doctors are universally respected and Boeing as a corporation was universally reviled that year.
I will say this: GPs are in general not very competent. They are about as competent and trustworthy as a car mechanic. There are good ones, bad ones, and also ones that bullshit and lie. Don’t expect anything more than that, and this is supported by facts.
>Is this statement supported by facts?
Yeah, the main fact here is called medical school.[0]
>Read that fact. 800,000 deaths from misdiagnosis a year is pretty pathetic. And this is just deaths.
Okay, and if that somehow flows from GPs (but not specialists!) being uniquely poor at problem solving relative to all other types of physicians—irrespective of wider issues inherent in the U.S. healthcare system—then I stand corrected.
>135x better survival rate getting on a 737 MAX
The human body isn't a 737.
>I will say this: GPs are in general not very competent. They are about as competent and trustworthy as a car mechanic.
Ignorant.
[0] https://medstudenthandbook.hms.harvard.edu/md-program-object...
How is going to medical school a measurement of problem-solving ability? You need to cite a metric involving ACTUAL problem solving. For example, a misdiagnosis is a FAILURE at solving a problem.
Instead you say “medical school” and cite the Harvard handbook, as if everyone went to Harvard and as if the handbook were a quantitative metric of problem-solving success or failure. Come on man. Numbers. Not manuals.
> The human body isn't a 737
Are you joking? You know a 737 is responsible for ensuring the survival of human bodies hurtling through the air at hundreds of miles per hour at altitudes higher than Mount Everest? The fact that your risk of dying is lower going through that than getting a correct diagnosis from a doctor is quite pathetic.
This statement you made here is manipulative. You know what I mean by that comparison. Don’t try to spin it like I'm not talking about human lives.
> Ignorant.
Being a car mechanic is a respectable profession. They get the typical respect of any other occupation and nothing beyond that. I’m saying doctors deserve EXACTLY the same thing. The problem is doctors sometimes get more than that and that is not deserved at all. Respect is earned and the profession itself doesn’t earn enough of that respect.
Are you yourself a doctor? If so your response speaks volumes about the treatment your patients will get.
My current one sure as hell is.
My previous one was, too.
The one I had as a kid, well. He was old, stuck in his old ways, but I still think he was decent at it.
But seeing the doctor is a bit more difficult these days, since the assistants are backstopping. They do some heavy lifting / screening.
I think an LLM could help with symptoms and then looking at the most probable cause, but either way I wouldn't take it too seriously. And that is the general issue with ML: people take the output too seriously, at face value. What matters is: what are the cited sources?
You still need to be 2 standard deviations above the average college student to get into med school, as a rough proxy for intelligence. The bottom threshold for doctors is certainly higher than for lawyers.
It doesn't matter how intelligent they are, if they only have 5 minutes to spend on your case.
It’s 120 to 130, which is similar to engineers.
They aren’t that much smarter. The selection criteria is more about the ability to handle pressure than it is about raw intelligence.
Tons of bottom feeders go to medical schools in say Kansas, so there’s a lot of leeway here in terms of intelligence.
What a weird comment. There are several good medical schools in Kansas. In particular the University of Kansas School of Medicine is top notch.
Only weird for you because you got triggered by the Kansas part. In general the comment is true.
There’s a school in Kansas that sits right on top of Caribbean schools in terms of reputation. I know several people who had to go there.
In general your comment was false. You're just lying and making things up. There are lower-tier medical schools in California, Massachusetts, and most every other state. The state, whether it's Kansas or somewhere else, is almost totally irrelevant to the quality of physicians produced.
No I'm not. I'm referring to a specific bad school(s) in Kansas. I never made a comment about Kansas itself.
I never said the state is correlated with the quality of the doctor, or even that the quality of the school is associated with the quality of the doctor. You made that up. Which makes you the liar.
If you're referring to a specific school then name the school instead of making lame low-effort comments about a state.
>If you're referring to a specific school then name the school instead of making lame low-effort comments about a state.
You're fucking right. I should've named the specific school. (And I didn't make a comment about the state; I made a comment about school(s) in the state, which is not a comment about all schools in the state.)
That's what I should do. What you should do is: don't accuse me of lying and then lie yourself. Read the comment more carefully. Don't assume shit.
No point in continuing this. We both get it and this thread is going nowhere.
I can attest to this from personal experience.
After undergoing stomach surgery 8 years ago I started experiencing completely debilitating stomach aches. I had many appointments with my GP and a specialist, leading to endoscopies, colonoscopies, CAT scans, and MRI scans, all to no avail, and they just kept prescribing more and more antacids and stronger painkillers.
It was after seven years of this that I paid for a private food allergy test, only to find that I am allergic to Soya protein. Once I stopped eating anything with Soya in it, the symptoms almost completely vanished.
At my next GP appointment I asked why no-one had suggested it could be an allergic reaction only to be told that it is not one of the things they check for or even suggest. My faith in the medical community took a bit of a knock that day.
On a related note, I never knew just how many foods contain Soya flour that you wouldn't expect until I started checking.
Soy is in just about everything; it is a staple food. You're unlucky, really. You're in good company though; the allergy affects about one in three hundred people.
They’re not even trained for memorization. They’re trained for mitigation and I don’t really blame them for the crap pay they receive. Over the course of a 40-year career they basically make what a typical junior dev makes. It’s fast becoming a rich man’s hobby career.
What? In most countries, including the U.S., they are a very highly paid profession (I'm not talking about the internship phase).
"Rich doctor" is a thing only in the U.S., and that's due to collusion and price fixing, not because American doctors are better somehow.
In the rest of the world doctors are basically like white-collar car mechanics, and often earn less money and respect.
It's about the same pay as a (professional) engineer. In the US, both engineers and doctors are very highly paid. In the UK and Japan they are paid about 50-100k if experienced, which is somewhere about 2-4x less than their US counterparts.
This is a reach. Can you share a few examples of Western countries where that is the case?
That's false. One example:
"According to the Government of Canada Job Bank, the median annual salary for a General Practitioner (GP) in Canada is $233,726 (CAD) as of January 23, 2024."
That's roughly $170,000 in the US. If you adjust for anything reasonable, such as GDP per capita or median income between the US & Canada, that $170k figure matches up very well with the median US general practitioner figure of around $180k-$250k (sources differ, all tend to fall within that range). The GPs in Canada may in fact be slightly better paid than in the US.
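For anyone checking the arithmetic, here's the back-of-envelope version; the exchange rate is my assumption based on roughly early-2024 levels, not an official figure:

  # Back-of-envelope check of the CAD -> USD conversion above.
  # The exchange rate is an assumed early-2024 figure, not authoritative.
  cad_salary = 233_726
  usd_per_cad = 0.73  # assumed approximate rate

  usd_salary = cad_salary * usd_per_cad
  print(f"~${usd_salary:,.0f}")  # ~$170,620, matching the figure above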
Are there any mortgage products for software developers that let them get a jumbo mortgage right out of school with 100 percent LTV?
https://www.pnc.com/insights/personal-finance/borrow/physici...
It's an incentives issue, not a training issue
This.
I wouldn't be surprised if AI was better than going to a GP or many other specialists in the majority of cases.
And the issue is not with the doctors themselves, but with the complexity of the human body.
Many digestive issues, for example, can cause migraines or a ton of other problems. I have yet to see someone referred to a gut-health professional because of a migraine.
And there are a lot of similar cases where a seemingly random system causes issues in an apparently unrelated one.
A lot of these problems are not life-threatening and thus just get ignored, as they would take too much effort and cost to pinpoint.
AI, on the other hand, should be pretty good at figuring out those vague issues that you would never have figured out otherwise.
> AI, on the other hand, should be pretty good at figuring out those vague issues that you would never have figured out otherwise.
Not least because it almost certainly has orders of magnitude more data to work with than your average GP (who definitely doesn't have the time to keep up with reading all the papers and case studies you'd need to even approach a "full view".)
And speaking of migraines, even neurological causes can apparently be tricky: Around here, cluster headaches would go without proper diagnosis for about 10 years on average. In my case, it also took about 10 years and 3 very confused GPs before one would refer me to a neurologist who in turn would come up with the diagnosis in about 30 seconds.
Since someone else asked and you said you didn't remember, do you think he may have had Normal Pressure Hydrocephalus (NPH)? And the surgery which he had may have been a VP shunt (ventricular-peritoneal) -- something to move fluid away from his brain?
Quite a mouthful for the layman, and the symptoms you are describing would fit. NPH has one of my favorite mnemonics in medicine for students learning about the condition, describing the hallmark symptoms as: "Wet, Wobbly and Wacky."
Wet referring to urinary incontinence, Wobbly referring to ataxia/balance issues and Wacky referring to encephalopathy (which could mimic dementia symptoms).
Now that you mention it, it may have been NPH. The thing is, I did the chatting with ChatGPT and handed the printout to the doc. Biology was never my strong suit, so my eyes glaze over when I see words like "Hydrocephalus" :-D
You might find it in the chat history.
Glad to hear your uncle improved! Would you mind sharing the other two hypotheses and what the diagnosis ultimately was?
The first was "dementia" (or something related to it, I don't remember the exact medical term). The second was something to do with fluid in some spinal column (I am sorry once again, I do not remember the medical term; they operated on him to drain it, which is why I remember it). I don't remember the third one, unfortunately.
Perhaps a CSF leak due to a dural sac tear in the spine? Was his symptom only having headaches while standing? Happened to my wife. 6 weeks of absolute hell.
On second thought — the opposite. A bulge/blockage of CSF?
Apparently, as I've recently learned due to a debilitating headache, CSF pressure (both high and low) can cause a whole host of symptoms, ranging from mild headache and blurred vision to coma and death.
It's pretty wild that a doctor wouldn't have that as a hypothesis.
Thanks for sharing. I struggled with long-term undiagnosed issues for so long. It took me 15 years of trying with doctors until one did a colonoscopy and found an H. pylori infection in 2018. He prescribed the right kind of antibiotics and it changed my life. In hindsight, my symptoms matched many of the infection's. No doctor figured it out.
So many doctors never bothered to conduct any tests. Many said it's in my head. Some told me to just exercise. I tried general doctors, specialists. At some point, I was so desperate that I went the homeopathy route.
15 years wasted. Why did it take 15 years for the current system?
I'd bet that if I had ChatGPT earlier, it could have helped me in figuring out the issue much faster. When you're sick, you don't give a damn who might have your health data. You just want to get better.
H. Pylori is like one of the most common infections there is. How did your doctors not look for that?
Programmers have the benefit of being able to torture and kill our patients at scale (unit and integration testing); doctors, less so. The diagnostic skill one hits in any given doctor may be relatively shallow, plus they may be tired, overworked, or annoyed by a patient's self-expression… the results I've seen are commonly abysmal, and care providers are never shocked by poor diagnoses and misdiagnoses from other practitioners.
I have some statistically very common conditions and a family medical history with explicit confirmation of inheritable genetic conditions. Yet, if I explain my problems A to Z I’m a Zebra whose female hysteria has overwhelmed his basic reasoning and relationship to reality. Explained Z to A, well I can’t get past Z because, holy crap is this an obvious Horse and there’s really only one cause of Horse-itis and if your mom was a Horse then you’re Horse enough for Chronic Horse-itis.
They don’t have time to listen, their ears aren’t all that great, and the mind behind them isn’t necessarily used to complex diagnostics with misleading superficial characteristics. Fire that through a 20 min appointment, 10 of which is typing, maybe in a second language or while in pain, 6 plus month referral cycles and… presto: “it took a decade to identify the cause of the hoof prints” is how you spent your 30s and early 40s.
No clue. But H. pylori can cause many different symptoms. Sometimes it causes no symptoms and people don't notice.
Anyways, that's why I'm so bullish on LLMs for healthcare.
15 years of funding someone else’s pensions on healthcare dividends.
I thought H. pylori was diagnosed from a stool sample, which in my experience is the first thing you're asked for if you have any gastric issues. Was it only possible to find via the colonoscopy in your case, or did the doctors never do a stool test?
The sensitivity of stool samples seems to be less than 80%. The gold standard is gastroscopy, which is often performed anyway to rule out ulcers etc. This is the first time I've heard of a colonoscopy for H. pylori.
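To put that sensitivity figure in context, a toy calculation; the prevalence and specificity values are made-up assumptions purely for illustration, only the ~80% sensitivity comes from the comment above:

  # Toy illustration of what ~80% sensitivity means for a stool test.
  # Prevalence and specificity below are assumed for illustration only.
  sensitivity = 0.80   # fraction of true infections the test catches
  specificity = 0.95   # assumed; not from the comment above
  prevalence = 0.30    # assumed prevalence among those actually tested

  # Out of 1000 tested patients:
  infected = 1000 * prevalence                          # 300 infected
  missed = infected * (1 - sensitivity)                 # 60 infections missed
  false_alarms = (1000 - infected) * (1 - specificity)  # 35 false positives
  print(missed, false_alarms)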
Good to know, thanks!
You could do a breath test for H. pylori. The colonoscopy was done as a general check by a specialist doctor. So the doctor wasn't sure, but the colonoscopy covered the H. pylori check.
That story says a lot about where the gaps really are. Most doctors aren’t lacking raw intelligence, they’re just crushed for time and constrained by whatever diagnostic playbook their clinic rewards. A chatbot isn’t magic insight, it’s just the only “colleague” people can brainstorm with for as long as they need. In your uncle’s case it nudged the GP out of autopilot and back into actual differential diagnosis. I’d love a world where physicians get protected time and incentives to do that kind of broader reasoning without a patient having to show up with a print‑out from Gemini, but until then these tools are becoming the second opinion patients can actually obtain.
I can give you the exact opposite anecdote for myself. Spent weeks with Dr Google and one or another LLMs (few years ago so not current SOTA) describing myself and getting like 10 wrong possibilities. Took my best guess with me to a doctor who listened to me babble for 5 minutes and immediately gave me a correct diagnosis of a condition I had not remotely considered. Problem was most likely that I was not accurately describing my symptoms because it was difficult to put it into words. But also I was probably priming queries with my own expected (and mistaken) outcomes. Not sure if current models would have done a better job, but in my case at least, a human doctor was far superior.
what was the second diagnosis?
Here’s something: my chatGPT quietly assumed I had ADHD for around 9 months, up until October 2025. I don’t suffer from ADHD. I only found out through an answer that began “As you have ADHD..”
I had it stop right there, and asked it to tell me exactly where it got this information; the date, the title of the chat, the exact moment it took this data on as an attribute of mine. It was unable to specify any of it, aside from nine months previous. It continued to insist I had ADHD, and that I told it I did, but was unable to reference exactly when/where.
I asked “do you think it’s dangerous that you have assumed I have a medical / neurological condition for this long? What if you gave me incorrect advice based on this assumption?” to which it answered a paraphrased mea culpa, offered to forget the attribute, and moved the conversation on.
This is a class action waiting to happen.
> nine months previous
It likely just hallucinated the ADHD thing in this one chat and then made this up when you pushed it for an explanation. It has no way to connect memories to the exact chats they came from AFAIK.
ChatGPT used the name on my credit card, a name which isn't uncommon, and started talking about my business, XYZ, that I don't have and never claimed to.
Did some digging and there was an obscure reference to a company that folded a long time ago associated with someone who has my name.
What makes it creepier is that they have the same middle name, which isn't in my profile or on my credit card.
When I signed up for ChatGPT, not only did I turn off personalization and training on my data, I even filled out the privacy request opt-out[1] that they're required to adhere to by law in several places.
Also, given that my name isn't rare, there are unfortunately some people with unsavory histories documented online with the name. I can't wait to be confused for one of them.
[1] https://privacy.openai.com/policies/en/
“ When I signed up for ChatGPT, not only did I turn off personalization and training on my data, I even filled out the privacy request opt-out …”
You did all of that but then you gave them your real name?
The Visa/MC payment network has no ability to transfer or check the cardholder name. Merchants act as if it does, but it doesn't. You can enter Mickey Mouse as your first and last name… It won't make any difference.
Only AMEX and Discover have the ability to validate names.
FWIW, I have a paid account with OpenAI, for using ChatGPT, and I gave them no personal information.
Wasn't aware of this, I've had payments denied because of simple typos in my name on the card or in my billing address.
I wouldn't be surprised if it's because people self-diagnose and talk about their """adhd""" all the time on reddit &co., where chatgpt was trained a lot.
Do you think the majority of those people are lying or do you think it's possible that our pursuit of algorithmic consumption is actually rewiring our neural pathways into something that looks/behaves more like ADHD?
Personally, I'm on the fence. I suspect that I've always had a bit of that, but anecdotally, it does seem to have gotten worse in the past decade, but perhaps it's just a symptom of old age (31 hehehe).
> Do you think the majority of those people are lying
I don’t think they’re lying, but it is very clear that ADHD has entered the common vernacular and is now used as a generic term like OCD.
People will say “I’m OCD about…” as a way of saying they like to be organized or that they care about some detail.
Now it’s common to say “My ADHD made me…” to refer to getting distracted or following an impulse.
> or do you think it's possible that our pursuit of algorithmic consumption is actually rewiring our neural pathways into something that looks/behaves more like ADHD?
Focus is, and always has been, something that can be developed through practice. Ability to focus starts to decrease when you don’t practice it much.
The talk about “rewiring the brain” and blaming algorithms is getting too abstract, in my opinion. You’re just developing bad habits and not investing time and energy into maintaining the good habits.
If you choose to delete those apps from your phone or even just use your phone’s time limit features today, you could start reducing time spent on the bad habits. If you find something to replace it with like reading a book (ideally physical book to avoid distractions) or even just going outside for a 10 minute walk with your phone at home, I guarantee you’ll find that what you see as an adult-onset “ADHD” will start to diminish and you will begin returning to the focus you remember a decade ago.
Or you could continue scrolling phones and distractions, which will probably continue the decline.
This is a good place to note that a lot of people think getting a prescription will fix the problem, but a very common anecdote in these situations is that the stimulant without a concomitant habit change just made them hyperfocus on their distractions or even go deeper into more obsessive focus on distractions. Building the better habits is a prerequisite and you can’t shortcut out of it.
I generally and broadly agree with your comment.
> Focus is, and always has been, something that can be developed through practice. Ability to focus starts to decrease when you don’t practice it much.
> The talk about “rewiring the brain” and blaming algorithms is getting too abstract, in my opinion. You’re just developing bad habits and not investing time and energy into maintaining the good habits.
> If you choose to delete those apps from your phone ...
I would like to add that focus is one of the many aspects of adhd, and for many people, isn't even the biggest thing.
For many people, it's about the continuous noise in their mind. Brown noise or music can help with parts of that.
For many, it's about emotional responses. It's the difference between hearing your boss criticise you and getting heart palpitations while mentally thinking "Shit, I'm going to get fired again", vs "Ahh next time I'll take care of this specific aspect". (Googling "RSD ADHD" will give more info.)
It's the difference between wanting to go to the loo because you haven't peed in 6 hours but you can't pull yourself off your chair, and... pulling yourself off your chair.
Focus is definitely one aspect. But between the task positive network, norepinephrine and the non-focus aspects of dopamine (including - more strength! Less slouching, believe it or not!), there are a lot of differences.
Medications can help with many of these, albeit at the "risk" of tolerance.
(I agree this is a lot of detail and nuance for a random comment online, but I just felt it had to be said. Btw - all those examples... might've been from personal experience - without vs with meds.)
> Now it’s common to say “My ADHD made me…” to refer to getting distracted or following an impulse.
this is an older thing than "I'm OCD when ..."
I have what you would call metric shittons of ADHD. Medically diagnosed. Was kicked outta university for failing grades and all. Pills saved me. If you think you have it, the best thing you can do for yourself is at least get a diagnosis done. In b4 people come in and chime in that it can be faked. Yes, the symptoms can be faked. But why would you, if you really want to know what, if anything, is wrong with you? (Hoping you aren't a TikTok content creator lurking here)
I really hope this doesn't get lost in the sea of comments and don't feel pressured to answer any of them but:
what would you recommend if one is against the idea of medication in general for neurological issues that aren't detrimental to one's life?
do you feel the difference between being medicated and (strong?) coffee?
have you felt the effects weaken over time?
if you did drink coffee, have you noticed a difference between the medication effects weakening on the same scale as caffeine?
is making life easier with medication worth the cost over just dealing with it naturally by adapting to it over time (if even possible in your case)?
this is a personal pet-project of observing how different people deal with ADHD.
I take ritalin as needed, 20-30mg a day. A black coffee will usually make me just a little sleepier, if anything at all. A couple more will do the same. Ritalin can make me sleepy if I'm already deeply tired, but after ~30 min it will actually allow me to partially focus on off days, and to get more work done on normal days. I may not need it every day.
> is making life easier with medication worth the cost over just dealing with it naturally by adapting to it over time (if even possible in your case)?
I am now 20, admittedly "early" in my career. Through high school and the first 2 years of university I banged my head against ADHD and tried to just "power through it" or adapt. Medication isn't a magic bullet, but it is clear to me now that I can at least rely on it as a crutch to improve myself and my lifestyle and deal with what is, at least for me, truly a disability. Maybe one day I won't need it, but in the meantime I see no reason why attempt #3289 will work for real this time to turn my life around.
> what would you recommend if one is against the idea of medication in general for neurological issues that aren't detrimental to one's life?
Given that people with ADHD commit suicide 2x-4x more often than the general population [0], keep in mind that it's not detrimental until it suddenly is.
Also, it gets worse with age, so it's better to get under a doctor's care sooner rather than later.
[0]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5371172/
Not the person you asked but:
ADHD is a debilitating neurological disorder, not a mild inconvenience.
Believe me, I wish that just drinking coffee and "trying harder" was a solution. I started medication because I spent two decades actively trying every other possible solution.
> what would you recommend if one is against the idea of medication in general for neurological issues that aren't detrimental to one's life?
If your neurological issues aren't impacting your life negatively, they aren't neurological issues. I don't know what else to say to this. Of course you shouldn't treat non-disorders with medication.
> do you feel the difference between being medicated and (strong?) coffee?
These do not exist in the same universe. It's not remotely comparable.
> have you felt the effects weaken over time?
Only initially, after the first few days. It stabilizes pretty well after that.
> if you did drink coffee, have you noticed a difference between the medication effects weakening on the same scale as caffeine?
Again, not even in the same universe. Also, each medication has different effects in terms of how it wears off at the end of the day. For some it's a pretty sudden crash, for others it tapers, and some are mostly designed to keep you at a long term level above baseline (lower peaks, but higher valleys).
> is making life easier with medication worth the cost over just dealing with it naturally by adapting to it over time (if even possible in your case)?
If I could have solved the biological issue "naturally" I would have. ADHD comes with really pernicious effects that make adaptation very challenging.
Medicated, diagnosed ADHD-haver speaking.
Unmanaged ADHD is dangerous and incredibly detrimental to people's lives, but the extent of that may not be entirely apparent to somebody until after they receive treatment. I think the attitude of being against medication for neurological issues where it is recommended by medical professionals (including for something perceived as not detrimental enough) is, to say the least, risky.
I would perhaps encourage you to do some reading into the real-world ways ADHD affects people's lives beyond just what medical websites say.
To answer your questions, though:
* Medication vs coffee: yes, I don't notice any effect from caffeine
* Meds weakening over time: nope
* Medication cost: so worth it (£45/mo for the drugs alone in the UK) because I was increasingly not able to adapt or cope and continuing to try to do so may well have destroyed me
Probably a bit of both: it's trendy to have a quirk, and modern life fucks up your attention span. Everyone wants to put a label on everything; remember when Facebook had a dropdown of like 60+ genders? I also know people who talk about "being on the spectrum" all the time. At first I thought it was a meme, but they genuinely believe they're autistic because they're #notliketheothers. At the end of the day everything is a spectrum and nobody is normal, and I'm not sure it's healthy to want to put a label on everything or medicate to fall back on the baseline.
> attention span
The meme of ADHD as the "fucked-up attention span disorder" has done immeasurable damage to people, neurotypical and ADHD alike. It is the attribute that is the least important to my life, but the one most centered on neurotypical people, or whoever else it bothers.
> modern life fucks up your attention span
That said, this statement is true; it just rests on a fundamental misunderstanding of ADHD as a "dog-like instinct to go chase a squirrel" or whatever. Google is free, and so is ChatGPT if that's too hard.
> I'm not sure it's healthy to want to put a label on everything
I don't particularly care for microlabeling, but it's usually harmless, and nothing suggests the alternative of "just stop talking about your problems" is better. People create language because they want to label a shared idea. This is boomer talk (see "remember facebook?": no)
> or medicate to fall back on the baseline
I'm not sure "If you have ADHD you should simply suffer because medicine is le bad" is a great stance, but you're allowed I suppose
> it is the attribute that is the least important to my life
still one of the most common symptoms, and the one everyone uses to self-diagnose...
> because medicine is le bad
idk man, I've seen the ravages of medicine on people close to me. Years of ADHD medicine, antidepressant pills, anti-obesity treatments... They're still non-functional, obese, and depressed, but now they're broke and think there truly is no way out of the pit because they "tried everything" (everything except quitting the 16-hours-a-day video games, the 24/7 junk food, and the never leaving their bedroom, but the doctors don't seem to view any of that as a root cause)
Whatever you think, I believe some things are overprescribed to the point of being a net negative to society. I never said ADHD doesn't exist or shouldn't be treated, btw; you seem to be projecting a lot of things. If it works for you, good. Personally I prefer to change my environment to fit how my brain/body works, not influence my body/brain by swallowing side-effect-riddled pills until death to fit into the fucked-up world we created and call "normality"
Disable memories so each chat is independent.
If you want chats to share info, then use a project.
Unfortunately I don't think that's a good solution. Memories are an excellent feature, and you see them on most similar services now.
Yes, projects have their uses. But as an example: I do Python across many projects and non-projects alike. I don't want to have to tell ChatGPT exactly how I like my Python each and every time, or with each project. If it were just one or two items like that, fine, I could update its custom instruction personalization. But there are tons of nuances.
The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful. When I randomly ask ChatGPT "Hey, could I automate this sprinkler?" it knows I use Home Assistant, I've done XYZ projects, I prefer Python, and I like DIY projects to a certain extent but am willing to buy, in which case it should suggest prosumer gear. Etc. etc. It's more like a real human assistant than a dumb bot.
I have not really seen ChatGPT learn who I “am”, what I “like” etc. With memories enabled it seems to mostly remember random one-off things from one chat that are definitely irrelevant for all future chats. I much prefer writing a system prompt where I can decide what's relevant.
I know what you mean, but the issue the parent comment brought up is real and "bad" chats can contaminate future ones. Before switching off memories, I found I had to censor myself in case I messed up the system memory.
I've found a good balance with the global system prompt (with info about me and general preferences) and project level system prompts. In your example, I would have a "Python" project with the appropriate context. I have others for "health", "home automation", etc.
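To make that concrete, here is roughly the shape of my setup. The wording below is purely illustrative (these aren't my actual prompts, just a sketch of the split):

    Global custom instructions (applies to every chat):
      I'm a software developer. Keep answers concise. Prefer Python.
      Don't pull in context from unrelated past conversations.

    "Python" project prompt:
      Assume Python 3.12, pytest, type hints, ruff formatting.

    "Health" project prompt:
      Current medications, history, and relevant test results: ...

The global prompt stays small and stable; everything domain-specific lives in a project, so a health chat never bleeds into a code review.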
> Memories are an excellent feature
Maybe if they worked correctly they would be. I've had answers to questions be influenced needlessly by past chats and I had to tell it to answer the question at hand and not use knowledge of a previous chat that was completely unrelated other than being a programming question.
This idea that it is so much better for OpenAI to have all this information about you because it can make some suggestions seems ludicrous. How has humanity survived thus far without this? This seems like you just need more connections with real people.
> The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful.
I could not disagree more. A major failure mode of LLMs in my experience is their getting stuck on a specific train of thought. Being forced to re-explain context each time is a very useful sanity check.
Not the parent poster but I’ve disabled memory and history and I can still see ChatGPT reference previous answers or shape responses based on previous instructions. I don’t know what I’m doing wrong or how to fix it.
Wasn’t there a static memory store from before the wider memory capabilities were released?
I remember having conversations asking ChatGPT to add and remove entries from it, and it eventually admitting it couldn’t directly modify it (I think it was really trying, bless its heart) - but I did find a static memory store with specific memories I could edit somewhere.
It doesn’t have itself as a data source to reference, so asking “tell me when you said this” etc will never work
This actually highlights a big privacy problem with health AI.
Say I’m interested in some condition and want to know more about it so I ask a chatbot about it.
It decides "asking for a friend" means I actually have that condition, and then silently passes that information on to data brokers.
Once it's in the broker network, it's treated as truth.
We lack the proper infrastructure to control our own personal data.
Hell, I bet there isn't anyone alive who can even name every data broker, let alone contact them all to police what information they're passing around.
What's the difference between Googling diseases/symptoms and asking ChatGPT?
An opaque layer of transformation.
The former shows you things people (hopefully) have written.
The latter shows you a made-up string of text "inspired by" things people have written.
That's a good question. I guess it could be depth of discussion but in the end they both come down to trust.
I guess unless you have an offline system you are at the mercy of whoever is running the services you use.
Googling and reading yourself allows you to assess and compare sources, and apply critical thinking and reasoning specific to yourself and your own condition. Using AI takes all this control away from you and trusts a machine to do the reasoning and assessing, which may be based on huge amounts of data which differ from yourself.
Googling allows you to choose the sources you trust, AI forces you to trust it as a source.
Who is "we"? Americans?
"We" as in the whole world really.
I know in Europe we have the GDPR regulations and in theory you can get bad information corrected but in practice you still need to know that someone is holding it to take action.
Then there's laundering of data between brokers.
One broker might acquire data via dubious means and then transfer it to another. In some jurisdictions, once that happens the second company can do what they like with it without having to worry about the original source.
I feel like the right legal solution is to make the service providers liable, the same way that if you offered a service where you got diagnosed by a human and they fucked up, the service would be liable. And real liability, with developers and execs going to jail or fined heavily.
The AI models are just tools, but the providers who offer them are not just providing a tool.
This also means that if you run the model locally, you're the one liable. I think this makes the most sense and draws a fairly simple line.
This seems to be a memory problem with ChatGPT; in your case, I bet it was changing a lot of answers because of it. For me, it really liked referring to the fact that I have an ADU in my backyard, almost pointlessly, something like "Since you walk the dogs before work, and you have a backyard ADU, you should consider these items for breakfast..."
I wonder if that's because so many people claim to have ADHD for dubious reasons, often some kind of self-diagnosis. Maybe because being "neurodivergent" is somewhat trendy, or maybe to get some amphetamines.
ChatGPT may have picked that up and now gives people ADHD for no good reason.
Perhaps you do ;-)
Machine learning has been used in healthcare forever now
Machine learning isn't ChatGPT
I help take care of my 80-ish year old mother. ChatGPT figured out in 5 minutes the reason behind a pretty serious chronic problem that her very good doctors hadn't been able to figure out in 3 years. Her doctors came around to the possibility, tested out the hypothesis, and it was 100% right. She's doing great now (at least with that one thing).
That's not to say that it's better than doctors or even that it's a good way to address every condition. But there are definitely situations where these models can take in more information than any one doctor has the time to absorb in a 12-minute appointment and consider possibilities across silos and specialties in a way that is difficult to find otherwise.
Something to think about: perhaps the problem is with the duration of the appointment, and the difficulty of getting one in the first place? Elsewhere in the world, doctors can and do spend more than 12 minutes figuring out what's wrong with their patients. It's the healthcare system that's broken, and it _can_ be fixed without resorting to chatgpt. That it won't is the reality, though
Can't really compete with LLMs on duration of attention - SOTA LLMs can ingest years of research on the spot, and spend however long you need on your case. No place on Earth has that many specialists available to people (much less affordable); you'd have to have 50% of the population become MDs, and that would still cover just one sub-specialty of one specialization.
> Elsewhere in the world, doctors can and do spend more than 12 minutes figuring out what's wrong with their patients.
Where? According to "International variations in primary care physician consultation time: a systematic review of 67 countries" Sweden is the only country on the planet with an average consultation length longer than the US.
"We found that 18 countries representing about 50% of the global population spend 5 min or less with their primary care physicians."
GP sessions being around 20 minutes is pretty standard in North American and European countries. You can't have standard hour-long GP sessions, as it'd become impossible to make a timely appointment, no matter which system.
Can confirm, having experienced both the USA and Dutch systems now. In both countries my visit is only about 20 minutes, plus another 15-30 sitting in the lobby because the doctor is always running behind schedule.
In theory, the Dutch system will take care of you more quickly for "real" emergencies, as their "urgent care" (spoedpost) is heavily gatekept and you can only walk into a hospital if you're in the middle of a crisis. I tried to walk into the ER once because I needed an inhaler and they told me to call the hotline for the urgent care... this was a couple of months after I moved.
That said, I much prefer paying €1800/year in premiums with a €450 deductible compared to the absolute shitshow that is healthcare in the USA. Now that I've figured out how to operate within the system, it's not so bad. But when you're in the middle of a health crisis, it can be very disorienting to try and figure out how it all works.
Ever wonder why famous people and celebrities always seem so healthy? They have unfettered access to well-paid doctors. People with lots of money can spend literal days with GPs, constantly trying and testing things based on feedback loops with the same doctor.
When people are forced to have a consultation, diagnosis, and treatment in 20 minutes, things are rushed and missed. Amazing things happen when trained doctors can spend unlimited time with a patient.
You make a good point, but the key here is that there are far fewer people with that kind of money. The lower volume of patients is why that's possible. There are a lot more people in the middle class, so sessions have to be limited to ensure everyone has fair, equal, and timely access to a doctor.
And of course, GPs typically diagnose more common problems, and refer patients to specialists when needed. Specialists have a lower volume of patients, and are able to take more time with each person individually.
Ever wonder why famous people and celebrities seem so unhealthy with mental health and substance abuse conditions? I'm all for improving affordable access to healthcare but most people wouldn't benefit from spending more time with doctors. It's a waste of scarce resources catering to the "worried well".
While some people are impacted by rare or complex medical conditions that isn't the norm. The health and wellness issues that most consumers have aren't even best handled by physicians in the first place. Instead they could get better results at lower cost from nutritionists, personal trainers, therapists, and social workers.
Having worked in rare disease diagnostics in a non-US country with good public healthcare, most patients had to fight their way to the correct speciality to get their diagnosis. Without the persistence of family or specific doctors, it's not possible.
AI might provide the most scalable way to give this level of access/quality to a much wider range of people. If we integrate it well and provide easy ways for doctors to interface with these types of systems, it should be much more scalable, as verification should be faster than diagnosing from scratch.
The American Medical Association has long lobbied to reduce the number of medical schools, reduce the number of positions for new doctors, and limit what tasks nurse practitioners can do [1].
[1] https://petrieflom.law.harvard.edu/2022/03/15/ama-scope-of-p...
Do you mind sharing the chat log?
>She's doing great now (at least with that one thing).
This is the problem with all the old people: massive costs like this. Now there's the next thing to go to the hospital for.
I had a friend who has now gotten several out-of-pocket MRIs, essentially against medical advice, because she believes her persistent headaches are from brain cancer.
Even after the first MRI essentially ruled this out, she fed the MRI to ChatGPT, which basically hallucinated that a small artifact of the scan was actually a missed tumor and that she needed another scan. Thousands wasted on pointless medical expenses.
Having friends in healthcare, they have mentioned how common this is now: someone coming in and demanding a set of tests based on ChatGPT. They have explained that (a) tests with false positives can actually be worse for you (they trigger even more invasive tests), and (b) insurance won't cover any of your ChatGPT requests.
Again, being involved in your care is important but disregarding the medical professional in front of you is a great way to set yourself up for substandard care.
No. Absolutely not. The government owes its people a certain duty of care to say “just because you can doesn’t mean you should.”
LLMs are good for advice 95% of the time, and soon that'll be 99%. But it is not the job of OpenAI or any LLM creator to determine the rules of what good healthcare looks like.
It is the job of the government.
We have certification rules in place for a reason. And until we can figure out how to independently certify these quasi-counselor robots to some degree of safety, it’s absolutely out of the question to release this on the populace.
We may as well say "actually, counseling degrees are meaningless. Anyone can charge money as a therapist. And if they verifiably recommend a path of self-harm, they should not be held responsible."
I wouldn't say the US gov has a good track record regarding its "job" in healthcare.
Hell of a lot better than without the government though. Read more history.
Okay. How long should we wait until that happens?
These doctors? Credentialed and all? https://www.youtube.com/watch?v=U_c7CcVspfI
There’s a lot of negativity here. I’ll just say I’m extremely glad I had ChatGPT when I was going through some health issues last year.
I know someone who used ChatGPT to diagnose themselves with a rare and specific disease. They paid out of pocket for some expensive and intrusive diagnostics that their doctor didn't want to perform and it came out, surprise, that they didn't have this disease. The faith of this person in ChatGPT remains nonetheless just as high.
I'm constantly amazed at the attitude that doctors are useless and that their multiple years of medical school and practical experience amounts to little more than a Google search. Or as someone put it, "just because a doctor messed up once it doesn't mean that you are the doctor now".
They're not useless but they're also human with limited time and limited amount of inputs.
To me it's crazy that doctors rarely ask me if I'm taking any medications for example, since meds can have some pretty serious side effects. ChatGPT Health reportedly connects to Apple Health and reads the medications you're on; to me that's huge.
> To me it's crazy that doctors rarely ask me if I'm taking any medications for example, since meds can have some pretty serious side effects.
This sounds very strange to me. Every medical appointment I've ever been to has required me to fill out an intake form where I list medications I'm taking.
Understanding drug interactions is the job of pharmacists (who are also doctors…of pharmacy). Instead of asking Apple Health or ChatGPT about your meds, please try talking to your pharmacist.
Pharmacists are the last person I see on the way out. They do ask if I have any allergies, but by that time the doctor has already washed his hands.
Doctors are wrong all the time as well. There are quite a few studies on this.
I would in no way trust a doctor over ChatGPT at this point. At least with ChatGPT I can ask it to cite the sources proving its conclusions. Then I can verify them. I can’t do that with a doctor it’s all “trust me bro”
You can sue doctors for malpractice.
Money from a lawsuit is nice but I'd rather get better than have the money.
Many, many, many doctors (including at a top-rated children's hospital in the US) spent 4+ years unsuccessfully trying to diagnose a very rare disease that my younger daughter had. Scores of appointments and tests. By the time she was 13, she weighed 56 lbs (25 kg) and was barely able to walk 100 yards. Psychiatrists even tried to imply that it was all imaginary and/or that she had an eating disorder.
Eventually, one super-nerdy intern walking rounds with the resident in the teaching hospital remembered a paper she had read, mentioned it during the case review, and they ran tests which confirmed it. They began a course of treatment and my daughter now lives normally (with the aid of daily medication.)
I fed a bunch of the early tests and case notes to ChatGPT and it diagnosed the disease correctly in minutes.
I surely wish we had had this technology a dozen years ago.
(I know, the plural of anecdote is not data.)
Same here, right now (couldn't get up without numbing back pain, can barely walk). ChatGPT educated me on the quadratus lumborum muscle and how to address it... which was a lot better than my brain going "well, I'm wheelchair-bound".
Yep same, with the caveat that any actionable advice requires actual research from reliable sources afterwards (or at least making it cite sources).
I mean, I kinda get the concerns about misleading people, but… are people really that dumb? Okay, if it's telling you to drink more water, common sense. If you're scrubbing up to perform an at-home leg amputation because it misidentified a bruise, then that's really on you.
> are people really that dumb?
Yes, absolutely. The US has measles back in rotation because people are "self-educating" (aka taking to heart whatever idiocy they read online without a 2nd thought), and you think people self diagnosing with a sycophant sentence generator is anything but a recipe for disaster?
ChatGPT frequently pushes back against the things I say. I think you can dial down the level of sycophancy.
If we build a bridge over this river, sure, people can get across the river, but what about if the bridge fails and people fall into the water! Let's not build the bridge instead.
Same here. It’s a double-edged sword, though. I know some people who work in health care, including some doctors. They deal with a lot of hypochondriacs — people who imagine they have all sorts of issues and then try to MacGyver themselves to better health. You can’t read an HN thread on health care issues without dozens of those coming out of the woodwork to share their magical, special way of beating the system. Silicon Valley has a long history of people that did all sorts of weird crap. There's a great anecdote about Steve Jobs turning orange when he was restricting himself to a diet of carrots because he believed god knows what. In the end he died young of pancreatic cancer. Probably not connected but smart person that did some wacky stuff that probably wasn't that good for him.
I'm on statins that have side effects that I'm experiencing. That's a common thing. ChatGPT was useful for me to figure out some of that. I've had other minor issues where even just trying to understand what the medication I'm being prescribed is supposed to do can be helpful. Doctors aren't great at explaining their decisions. "Just take pill x, you'll be fine".
Doctors have to diagnose patients in a way that isn't that different from how I would diagnose a technical issue. Except they are starved for information and have to get all their information out of a 10-15 minute consult with a patient that is only talking about vague symptoms. It's easy to see how that goes wrong sometimes or how they would miss critical things. And they get to deal with all the hypochondriacs as well. So they have to poke through that as well and can't assume the patient is actually being truthful/honest.
LLMs are useful tools if you know how to use them. But they can also lead to a lot of confirmation bias. The best doctors tell you what you need to hear, not what you want to hear. So, tools like this are great and now a reality that doctors need to deal with whether they like it or not.
Some of the Covid crisis intersected with early ChatGPT usage. It wasn't pretty. People bought into a lot of nonsense that they came up with while doom scrolling Reddit, or using early versions of LLMs. But things have improved since then. LLMs are better and less likely to go completely off the rails.
I try to look at this a bit rationally: I know I don't get the best care possible all the time because doctors have to limit time they spend on me and I'm publicly insured in Germany so subject to cost savings. I can help myself to some extent by doing my homework. But in the end, I have to trust my doctor to confirm things. My mode is that I use ChatGPT to understand what's going on and then try to give my doctor a complete picture so he has all the information needed to help me.
I personally don’t care who has access to my health data, but I understand those who might.
Either way, I’m excited for some actual innovation in the personal health field. Apple Health is more about aggregating data than actually producing actionable insights. 23andme was mostly useless.
Today I have a ChatGPT project with my health history as a system prompt and it’s been very helpful. Recently I snapped a photo of an obscure instrument screen after taking a test and was able to get more useful information than what my doctor eventually provided (“nothing to worry about”, etc.) ChatGPT was able to reference papers and do data analysis which was pretty amazing, right from my phone (e.g fitting my data to a model from a paper and spitting out a plot).
If an insight led you or a family member to being misdiagnosed and crippled, would you just say it's their or your own fault? If it were a doctor, would you have the same opinion?
I understand enough about these systems to know they’re not perfect but I agree some people might be misled.
But I don’t know if I should be denied access because of those people.
I just had a deja vu, I'm sure I read this some months ago
Did you previously write this exact comment before?
Had a look and it does not show up anywhere else: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
> But I don’t know if I should be denied access because of those people.
That's the majority of people, though. If you really think that, I assume you wouldn't have a problem with needing to be licensed to have this kind of access, right?
Depends. If you're talking about a free online test I can take to prove I have basic critical thinking skills, maybe, but that's still a slippery slope. As a legal adult with the right to consent to all sorts of things, I shouldn't have to prove my competence to someone else's satisfaction before I'm allowed autonomy to make my own personal decisions.
If what you're suggesting is a license that would cost money and/or a non-trivial amount of time to obtain, it's a nonstarter. That's how you create an unregulated black market and cause more harm than leaving the situation alone would have. See: the wars on drugs, prostitution, and alcohol.
Yes, the threshold for restricting freedom should be harm to others not harm to oneself.
Are we at the level of needing a license to read a medical textbook too?
A medical textbook doesn't engage in trying to diagnose you.
So just diagnostic manuals then?
A diagnostic manual doesn't engage in trying to diagnose you.
If they pepper it with warnings and add safeguards, then I'm fine.
I think they can design it to minimize misinformation or at least blind trust.
People are very good at ignoring warnings, I see it all the time.
There's no way to design it to minimise misinformation, the "ground truth" problem of LLM alignment is still unsolved.
The only system we currently have to allow people to verify they know what they are doing is licensing: you go to training, you are tested that you understood the training, and then you are allowed to do the dangerous thing. Are you ok with needing this to be able to access a tool that is potentially dangerous for the untrained?
There is no way to stop this at this point. Local and/or open models are capable enough that there is just a short window before attempts at restricting this kind of thing will just lead to a proliferation of services outside the reach of whichever jurisdiction decides to regulate this.
If you want working regulation for this, it will need to focus on warnings and damage mitigation, not denying access.
> Recently I snapped a photo of an obscure instrument screen after taking a test and was able to get more useful information than what my doctor eventually provided (“nothing to worry about”, etc.) ChatGPT was able to reference papers and do data analysis which was pretty amazing, right from my phone (e.g fitting my data to a model from a paper and spitting out a plot).
If you don't mind sharing, what kind of useful information is ChatGPT giving you based off of a photo that your doctor didn't give you? Could you have asked the doctor about the data on the instrument and gotten the same info?
I'm mildly interested in this kind of thing, but I have a severe health anxiety and do not need a walking hypochondria-sycophant in my pocket. My system prompts tell the LLMs not to give me medical advice or indulge in diagnosis roulette.
In one case it was a urinary flow test (uroflowmetry). The results go to a lab and then the doctor gets the summary. Was able to diagnose the issue, prevalence, etc. and educate myself about treatment and risks before seeing a doctor. Papers gave me distributions of flow by age, sex, etc. so I knew it was out of range.
In another case I uploaded a CSV of CGM data, analyzed it and identified trends (e.g. Saturday morning blood sugar spikes). All in five minutes on my phone.
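For anyone wondering what "analyzed it" means in practice: it boils down to a few lines of pandas, the kind of thing ChatGPT writes and runs in its code interpreter. A minimal sketch of the idea, with the caveat that the column names here are my assumption and real CGM exports vary by device:

    import pandas as pd

    # Hypothetical CGM export with a timestamp and a glucose reading (mg/dL);
    # actual column names depend on the device/app.
    df = pd.read_csv("cgm_export.csv", parse_dates=["timestamp"])
    df["day"] = df["timestamp"].dt.day_name()
    df["hour"] = df["timestamp"].dt.hour

    # Mean glucose by hour of day and day of week; a Saturday-morning spike
    # shows up as a band of elevated values in the Saturday column.
    pattern = df.pivot_table(index="hour", columns="day", values="glucose", aggfunc="mean")
    print(pattern.loc[6:11].round(1))

The point isn't the code itself; it's that the model writes, runs, and interprets something like this against your own export in seconds.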
Are you giving your vitals to Sam Altman just like that? What instrument?
Yes, if it will help me and others. Again, I understand those who disagree.
But this is a 'cart before the horse' situation.
What evidence do you have that providing your health information to this company will help you or anyone (other than those with financial interest in the company)?
There is a very real, near definite, chance that giving your, and others', health data to this company will hurt you and others.
Will you still hold this, "I personally don’t care who has access to my health data", position?
I personally have been helped by talking to ChatGPT about my healthcare. That's the evidence. I will take concrete positive health outcomes now, over your fears of the future.
I'm definitely a privacy-first person, but can you explain how health data could hurt you, besides obvious things like being discriminated against for insurance if you have a drug habit or whatever? Like, I'm a fitness-conscious 30-something white male; what risk is there of my appendix operation being common knowledge, or that I need more iron or something?
Well maybe your health data picks up a heart condition you didn't know about.
Maybe you don't know but your car insurance drops you due to the risk you'll have a cardiac event while driving. Their AI flagged you.
You need a new job but the same AI powers the HR screening and denies you because you'll cost more and might have health problems. You'd never know why.
You try to take out a second mortgage on the house to pay for expenses, just to get back on your feet, but the AI-powered risk officer judges your payback potential to be 0.001% underneath the target and you are denied.
The previously treatable heart condition is now dire due to the additional stress of no job, no car and no house and the financial situation continues to erode.
You apply for assistance but are denied because the heart condition is treatable and you're then obviously capable of working and don't meet the standard.
What if you were a woman seeking medical treatment for an ectopic pregnancy?
‘Being able to access people’s medical records is just another tool in law enforcement’s toolbox to prosecute people for stigmatized care'
They are already using the legal system in order to force their way into your medical records to prosecute you under their new 'anti-abortion' rulings.
https://pennsylvaniaindependent.com/reproductive_rights/texa...
Is your point 'I have no major health conditions, so nobody could be hurt by releasing health data'? If so, I don't think I need to point out the gap in this logic.
Actually, maybe you do. I am extremely privacy conscious, so I'm on your side on this one, but health data is a bit different from handing over all your email and purchase information to Google. In that scenario the danger is that the political or religious or whatever attributes I may have could be exposed to a future regime who considers what is acceptable today to no longer be so, and uses them to profile and… whatever me, right? What actual danger is there from a government or a US tech company having my blood work details when I actually have nothing to hide like drug abuse or alcohol etc.? Health data seems much less risky than my political views, religion, sexuality, minor crimes committed, and so on.
Something that is not yet known to be an indicator that you’re at risk of a condition.
Perhaps you were given some medication that is later proven harmful. Maybe there’s a sign in your blood test results that in future will strongly correlate with a condition that emerges in your 50s. Maybe a study will show that having no appendix correlates with later issues.
How confident are you that the data will never be used against you by future insurance, work screening, dating apps, immigration processes, etc.?
Absolutely not confident at all; thanks, I hadn't considered some of those.
Depends on the data. If you had genetic data, they might run PGS and infer that even though you are healthy now, your genes might predispose you to something bad, and deny insurance based on that. If you truly do not see the dangers of health data access, remember that they could genotype you even when you came in just for ordinary bloodwork.
Fortunately I live in a country where one cannot be denied insurance, but yeah, I didn't think of these really. It was a bit of a "typed before I really thought" moment; maybe I should put the keyboard down ;).
It seems like an easy fix with legislation, at least outside the US, though. Mandatory insurance for all with reasonable banded rates, and maximum profit margins for insurers?
Isn't it more productive to regulate health insurance and make health a protected attribute of a person like disability etc?
Not danger as in being kidnapped by government agents, danger in terms of being denied a job or insurance or anything else.
Your comment is extraordinarily naive.
I wasn't saying there is no danger, just that I didn't really think about it or see the problem, and your sibling comments have changed that. Maybe I am naive, but I was asking genuinely, not stating that I think otherwise. Unfortunately I have family members in the US and pretty much all of them happily sent their DNA off to various services, so I'm fucked either way at this point…
Good point, you did ask in good faith for an explanation, and I just fired off a quick comment that didn't serve to further the discussion!
When your health data can say you are trans, and the government decides to persecute you, then yes, it's important to maintain privacy.
I find it really really really hard to believe that there exists a person in this planet who:
1. Is transsexual but does not tell anybody they are, and it is also not blatantly obvious
2. Writes down in a health record that they are transsexual (instead of whatever sex they are now)
3. Someone doxxes their medical records
4. Because of 3, and only because of 3, people find out that said person is transsexual
5. And then... the government decides to persecute them
Let's be real, you're really stretching it here. You're talking about a 0.1% of a 0.1% of a 0.1% of a 0.1% of a 0.1% situation here.
If they're an athlete this situation could literally be happening right now.
@cyberpunk's question is pretty clear.
You could try to answer that instead of making up a strawman.
Dialogue 101 but some people still ignore it.
> i’m a fitness conscious 30 something white male
Right. So able bodied, and the gender and race least associated with violence from the state.
> being discriminated against for insurance if you have a drug habit
"drug habit", Why choose an example that is often admonished as a personal failing? How about we say the same, but have something wholly, inarguably, outside of your control, like race, be the discriminating factor?
You medical records may be your DNA.
The US once had a racist legal principle called the "one drop rule": https://en.wikipedia.org/wiki/One-drop_rule
Now imagine an administration, let's say one "sympathetic to the Nazi agenda", takes control of the US gov's health and state-sanctioned-violence services. They decide to use those tools to address all of what they consider the "undesirables".
Your DNA says you have "one drop" of the undesirables' blood, some ancient ancestor you were unaware of, and this admin tells you they are going to discriminate against your insurance because of it, based on some racist pseudoscience.
You say, "but I thought I was a 30-something WHITE male!!" and they tell you "welp, you were wrong, we have your medical records to prove it". You get irate that your medical records somehow left the datacenter of that LLM company you liked to have make funny cat pictures for you and got into their hands, and they claim your behavior caused them to fear for their lives, and now you are in a detention center or a shallow grave.
"That's an absurd exaggeration," you may say, but the current admin is already removing funding, or entire agencies, based on policy (DEI etc.) and race (singling out Haitian and Somali immigrants). How is it much different from Jim Crow era policies like redlining?
If you find yourself thinking, "I'm a fitness conscious 30 something white male, why should I care?", it can help to develop some empathy, and stop to think "what if I was anything but a fitness conscious 30 something white male?"
These points seem to be arguments against giving your health data to anybody, not just to an AI company.
If there's no evidence that it will help you or others, then that's a pretty hard position to argue against. The parent commenter asked about this, and the response basically was that it didn't seem likely to be harmful, and now you're responding to that.
Yes, of course. "Assuming it's entirely useless, why give your data to anyone" is a hard position to argue against, but unfortunately it's also completely pointless because of the unproven assumption. Besides, there are already enough indications in this thread alone that it is already very useful to many.
Quite - personal data should remain under your control so it's always going to be a bad deal to "give" your data to someone else. It may well make sense to allow them to "use" your data temporarily and for a specific purpose though.
And what if it harms you?
What if you have to pay more for health insurance because of the collected data, or what if you can't get certain insurance at all?
Most people don't have a problem with someone getting their medical data, but with that information being used to their disadvantage.
I was going to be mad at this, but then I remembered our doctors are already using it without our consent
Your doctor is bound by HIPAA, you could consider doing something about it. OpenAI may not be bound by HIPAA so your available recourse is lesser.
> I personally don’t care who has access to my health data
There's a reason this data is heavily regulated. It's deeply intimate and gives others enormous leverage over you. This is also why the medical industry can charge premium rates while often providing poor service. Something as simple as knowing whether you need insulin to survive might seem harmless, but it creates an asymmetric power dynamic that can be exploited. And we know these companies will absolutely use this data to extract every possible gain.
The medical industry doesn't use your medical data to overcharge for insulin. That's more a question of your financial and insurance data.
I'm sorry, but seriously? How could you not care who has your health data?
I think the more plausible comment is "I've been protected my whole life by health data privacy laws, so I have no idea what the other side looks like".
Quite frankly, this is even worse as it can and will override doctors orders and feed into people's delusions as an "expert".
I’d rather have all my health data be used in a way that can actually help me, even with a risk of a breach or misuse, than having it in a folder somewhere doing nothing.
It can also help you in not getting a job because your health data says you'll be sick in 6 months.
It would be absolutely amazing if any sort of tech could say that I'm going to have a serious health problem 6 months ahead of time.
How do you think insurance premiums are calculated?
In general, health insurance companies (at least in the US) are pretty much prevented from using any health data to set premiums. In fact, many US states prevent insurers from charging smokers higher premiums.
(Life insurance companies are different.)
How are they calculated? Based on what data? Your Google searches? If they don't use Google search history, why would they use ChatGPT history?
Yeah man, when would technology ever be abused to monitor health data. https://www.mirror.co.uk/news/health/period-tracking-apps-ou...
How do you think that can happen realistically? Like seriously can you explain clearly how the data from ChatGPT gets to your employer?
It doesn't have to get to your employer, it just has to get to the enormous industry of grey-market data brokers who will supply the information to a third-party who will supply that information to a third-party who perform recruitment-based analytics which your employer (or their contracted recruitment firm) uses. Employers already use demographic data to bias their decisions all the time. If your issue is "There's no way conversations with ChatGPT would escape the interface in the first place," are you... familiar with Web 2.0?
Edit: Literally on the HN front page right now. https://news.ycombinator.com/item?id=46528353
You're supposed to share it with a doctor you trust; if nobody qualified has asked for it, it's probably because it's no longer relevant.
I've had mixed experiences with doctors. Oftentimes they're glancing at my chart for two minutes before an appointment, and that's the extent of their concern for me.
I’ve also lived in places where I don’t have a choice in doctor.
What is it with you people and privacy? Sure it is a minor problem but to be _this_ affected by it? Your hospitals already have your data. Google probably has your data that you have google searched.
What's the worst that can happen with OpenAI having your health data? Vs the best case? You all are no different from AI doomers who claim AI will take over the world.. really nonsensical predictions giving undue weight to the worst possible outcomes.
> What is it with you people and privacy?
There are no doubt many here that might wish they had as consequence-free a life as this question suggests you have had thus far.
I'm happy for you, truly, but there are entire libraries written in answer to that question.
I don't care either. Why should I? I go to the doctor once a year and it's always the same. Not much to do with that data
Your health data could be used in the future, when technology is more advanced, to infer things about you that we don't even know about, and target you or your family for it.
Health data could also be used now to spot trends and problems that an assembly-line health system doesn't optimize for.
I think in the US, you get out of the system what you put into it - specific queries and concerns with as much background as you can muster for your doctor. You have to own the initiative to get your reactive medical provider to help.
Using your own AI subscription to analyze your own data seems like immense ROI versus a distant theoretical risk.
It feels like everyone is ignoring the major part of the other side’s argument. Sure, sharing the health data can be used against you in the future, but it can be used to help you right now as well. Anyone with any sort of pain in the past will try any available method to get rid of it. And that’s fair when those methods, even with 50% success rate, are useful.
I'm in the same boat as them, I honestly wouldn't care that much if all my health data got leaked. Not saying I'm "correct" about this (I've read the rest of the thread), just saying they're not alone.
It's always been interesting to me how religiously people manage to care about health data privacy, while not caring at all if the NSA can scan all their messages, track their location, etc. The latter is vastly more important to me. (Yes, these are different groups of people, but on a societal/policy level it still feels like we prioritize health privacy oddly more so than other sorts of privacy.)
Unless you're a doctor, you don't know what it has made up, though.
That's the trouble with AI. You can only be impressed if you know a subject well enough to know it's not just bullshitting like usual.
This is exaggerated. AI is accurate enough that our sniff tests will get us far. ChatGPT just doesn't hallucinate all that often.
You can have the same problem with doctors who don't give you even 5 minutes of their time and who don't have time to read through all your medical history.
AI-guided self-medication is certainly problematic. Rubber-ducking your symptoms for free for as long as you need and then asking a doctor for their 2-minute opinion is IMHO the best way to go about healthcare in 2026.
I live in a place where I can get anything related to healthcare and even surgery within the same day at an affordable price, and even here I've wasted days going to various specialists who just tried to give me useless meds.
Imagine if one lives in a place where you need an appointment 3 months in advance; you most certainly will benefit from going there with your latest ChatGPT summary.
Your healthcare situation seems pretty darn good. What country is this?
Thailand is my go-to for healthcare in private hospitals. I heard good things about Singapore too. Taiwan's public hospitals were great too, albeit not as flashy.
>23andme was mostly useless.
23andme was massively successful in their mission.
Sidenote: their mission was not about helping you understand your genomic information.
Yeah, their mission was to make money and collect data for AI training and other usages.
I've had serious trouble with my knee and elbow for years, and ChatGPT helped me immensely after a good couple dozen doctors just told me to take ibuprofen and rest and never talked to me for longer than 3 minutes. As with most things LLM, there are many opponents who say "if you do what an LLM says you will die", which is correct, while most people who look positively on using LLMs for health advice report that they used ChatGPT to diagnose something. Having a conversation with ChatGPT based on reports and scans, and figuring out what follow-up tests to request or questions to ask a doctor, makes sense for many people. Just like asking an LLM to review your code is awesome and helpful, while asking an LLM to write your code is an invitation for trouble.
I understand all the chatter about LLMs hallucinating, or making assumptions, or not being able to understand or provide the more human/emotional element of health care.
But the question I ask myself is: is this better than the alternative? if I wasn't asking ChatGPT, where would I go to get help?
The answers I can anticipate are: questionably trustworthy web content; an overconfident friend who may have read questionably trustworthy web content; my mom, who is referencing health recommendations from 1972. As best I can imagine, LLMs are likely to provide health advice that's as good as, and probably better than, any of those alternatives.
With that said, I acknowledge that people are likely more inclined to trust ChatGPT more like a licensed medical provider, at which point the comparison may become somewhat more murky, especially with higher severity health concerns.
ChatGPT helped me solve a side effect I had with a medication just by suggesting a change to dose timing. Solid improvement to my QoL just from one small change. My doctor completely agreed with the suggestion.
When I got worried about an exercise suggestion from an app I'm using (the weight being used for prone dumbbell leg curls), ChatGPT confirmed there is a suggested upper limit on weight for that exercise and that I should switch it out. I appreciate not injuring myself. (Gemini gave a horrible response, heh...)
ChatGPT is dangerous because it is still too agreeable, and when you do go outside what it knows, the answers get wrong fast. But when it is useful, it is very useful.
There is nothing wrong with obtaining additional, even false, information from any source that is available to you. (AI, Search, Websites/Blogs, Podcasts, influencers, word-of-mouth, etc)
It's what you do with that information that is important - the correct path is to take your questions to a medical professional. Only a medical professional can give you a diagnosis, they can also answer other questions and address incorrect information.
ChatGPT is very good for providing you with new avenues to follow-up upon, it may even help discover the correct condition which a doctor had missed. However it is not able to deliver a diagnosis, always leave that to a medical professional.
This actually differs very little from people Googling their symptoms - where the result was the same: take the new information to your medical professional, and remember to get a second opinion (or more) for any serious medical condition, or issues which do not seem to be fully resolved.
This is the same as Googling your symptoms, but on a broader scale. I think the issue here is how many people are going to give themselves self-induced health anxiety because of this.
There's no denying the positive cases of people actually being helped by ChatGPT. It's well known that doctors can often dismiss symptoms of rare conditions, and those people specifically find way more success on the internet because people with similar conditions tend to gather there. This effect will repeat with ChatGPT.
> if I wasn't asking ChatGPT, where would I go to get help?
To an MD?
This isn't feasible for a huge swathe of the USA, often because of costs/insurance but sometimes literally just accessibility/availability. A few years ago it took me nearly 8 months to find a PCP in my city that was accepting new patients (and, wee, they dropped my insurance less than a year after).
> often because of costs/insurance but sometimes literally just accessibility/availability.
These are self-inflicted problems; we should work on them and improve them, not give up and rely on LLMs for everything
Is there a proven and guaranteed way to do this? Because otherwise it sounds very idealistic, almost like "if everything were somehow better, then things would be less bad". Doctor time will always be scarce. It sounds like it delays helping people in the here and now in order to solve some very complicated system-wide problem.
LLMs might make doctors cheaper (and reduce their pay) by lowering demand for them. The law of supply and demand then implies that care will be cheaper. Do we not want cheaper care? Similarly, LLMs reduce the backlog, so patients who do need to see a doctor can be seen faster, and they don't need as many visits.
LLMs can also break the stranglehold of medical schools: It's easier to become an auto-didact using an LLM since an LLM can act like a personal tutor, by answering questions about the medical field directly.
LLMs might be one of the most important technologies in medicine.
Every other industrialised nation on the planet has figured this out, yet some idiots play dumb and ask if the problem is really solvable
What do you do when we're finally under the critical mass of doctors needed to make new discoveries?
Who's responsible when the LLM fucks up?
&c.
All of your points sound like the classic junior "I can code that in 2 days" naive take on a problem.
Maybe time to ask AI why you’re looking for a technical solution rather than addressing the gaslighting that has left you with such piss-poor medical care in the richest country on earth?
Everyone knows this is a problem. No one it affects has enough power to change it
Already know the answer, don't need AI for that one.
Maybe time to use genuinely useful tech instead of handwaving away the difficult parts of an actually hard problem?
Seems to me like the "difficult problem" is solved in pretty much every other rich country in the world.
If it's not solved in the richest country, maybe it's not so easy to solve, unless you want to hand-wave the difficult parts and just describe it as "rich people being greedy"
It's such a dysfunctional situation that the "rich people being greedy" is the most likely explanation. Either that or the U.S. citizenry are uniquely stupid amongst rich countries.
Unless you're paying for a concierge doctor, MDs frequently will not spend the time to give you useful advice. Especially for relatively minor issues.
I've googled what a "concierge doctor" is, and it just sounds like a fancy term for a family doctor.
It’s a physician who gets paid a subscription by a small panel of patients.
Pros: more time spent with patients, access to a physician basically 24/7, sometimes included are other amenities (labs, imaging, sometimes access to rx at doctors office for simple generics, gym discounts, eye doctor discounts, etc)
Cons: it's an extra yearly cost to get access to that physician, ranging from a few hundred US dollars to sometimes thousands ($1.5k-3k, or tens of thousands or more); those who aren't financially lucky enough to be that well off don't get such access.
—-
That said, some of us do this on the side to augment our salary a bit, as medicine has become too much of a business based on quantity and not quality. It's sad to hear from patients that a simple small-town family doc like myself can spend 20-30 mins with a patient when other providers barely spend 3 mins. My regular patients usually get 20-30 mins with me on a visit, unless it's a quick one for refills, and I don't leave until they are done and have no questions. My concierge patients get 1 hour minimum, and longer if they like. I offer a free in-depth medical record review, where I sometimes get boxes of old records to go through someone's med history if they are a new concierge patient.

Had a lady recently who had dealt with neuropathy and paresthesias for years. Normal blood counts. Long story short: she had moderate iron deficiency and vitamin B6 deficiency, from a history of taking isoniazid in a different country for TB, plus biopsy-proven celiac disease. Neuropathy basically gone with iron and B6 supplements and a celiac diet, after I recommended a GI eval for endoscopy.

It takes time to dig into charts like this, and CMS doesn't pay the bills to keep the clinic lights on if you see patients like that all the time. This is why we are in such a bad place healthcare-wise in the USA: we have chosen quantity over quality, and the powers that be are number crunchers, not actual health care providers. It serves us right for letting admins take over, and we are all paying the price.
So much more I want to say, but I don't think many will read this. If you read this and don't like your doctor, please look around. There are still some of us out there who care about quality medicine and do try our best to spend time with the patient. If you got one of those "3-minute doctors", look for a better one, or consider establishing care with a resident clinic at an academic center, where you can be seen by resident doctors and their attending physicians. It's not the most efficient, but I can almost guarantee those resident physicians will spend a good chunk of time with you to help you as much as they can.
> It’s a physician who gets paid a subscription by a small panel of patients
That's how it works here too, in PCP-Centric plans. The PCP gets paid, regardless if the patient shows up or not. But is also responsible to be the primary contact point for the patient with the health system, and referrals to specialists.
Getting a potential answer right away is certainly tempting compared to waiting weeks for an appointment.
You have to wait weeks to be seen by a family doctor?
Yes, in my area if you need to find a new doctor you literally can't. This is a major city. The online booking for any major hospital network literally shows no results because the next appointment would be 90+ days out. If you have an existing relationship maybe you can get in in two weeks.
If the GP can handle my problem, I probably didn't need to go to the doctor anyway. A lot of care is done by specialists, and it can _easily_ take weeks or months to get an appointment with one. This is strongly dependent on one's insurance network though.
That's just a very arrogant take, one many patients hold, and they couldn't be more wrong.
Obviously a GP refers to specialists when necessary, but they are qualified to triage issues and perform initial treatment in many cases.
In the UK, yes.
And then 6+ months to be seen by a specialist.
In the US, yes.
While you are technically correct, we live in the real world. People are busy and/or broke. Many cannot afford to go to the doctor every time they get the sniffles or have a question. Doing some preliminary research is fine and, I’d argue, responsible.
Under non-urgent cases this sometimes takes 3-4 months in the US every time I experience the need to "ask an MD"
If the symptoms are severe enough, sure.
For better or worse, even before the advent of LLMs, people were simply Googling whatever their symptoms were and finding a WebMD or Mayo Clinic page. Well, if they were lucky. If they weren't lucky, they would find some idiotic blog post by someone who claimed they cured their sleep apnea by drinking cabbage juice.
soon(?) mostly a proxy for LLMs
> if I wasn't asking ChatGPT, where would I go to get help?
Is this a serious question? Can't you call or visit a doctor?
I vibe coded an app and recorded all the things happening to my 50-something body. I shared that list with a few MDs -- they were useless. They literally can't handle anything except acute cases.
It's like telling someone to ask their doctor about nutrition. It's not in their scope any longer. They'll tell you to try things and figure it out.
The US medical industry abdicated their thing a long time ago. Doctors do something I'm sure, but discuss/advise/inquire isn't really one of them.
This was multiple doctors, in multiple locations, in various modalities, after blood tests and MRIs and CT scans. I live with literally zero of my issues resolved even a little tiny bit. And I paid a lot of money out of pocket (on top of insurance) for this experience.
I babbled some symptoms I did not understand to a doctor who correctly diagnosed me with a very rare condition in 30 seconds. And that's after spending weeks prodding LLMs (~2 years ago) and getting nowhere.
I think the main point is to not "of course" either side of this. Use every tool and resource available to you, but don't bag on people for doing or not doing one or the other. "Ask your doctor" is presumptive for people who already have, and need more.
AI is a lot better now than it was 2 years ago. There wasn't even a reasoning model until the end of 2024!
Either way, nobody is arguing that doctors aren't great. Doctors are great!
The argument is that doctors are not accessible enough and getting additional advice from AI is beneficial.
It can go both ways. The difference is that Dr. Chat's opinion takes 5 seconds and is free. It can be just as useless as a doctor who prescribes some med to mask your symptoms instead of understanding why you have them.
Medical training is designed to produce operators who will add value to corporate health systems: prescribe pills, do procedures, or anything that can generate "billable hours". Actually educating patients to be healthy would only reduce corporate health system profits. Why do you think we have been fighting the 'war on cancer' since the 60s? Now "personalized medicine" and synthetic peptide and complex immunotherapies are the latest twist, with costs into 5 figures (orders of magnitude greater than standard therapies) and efficacy only better by a factor of 2 at best. Many treatments promise "partial response rate" increases from 10% to 50%, yet a partial response is not a significant improvement.
AI is a disaster waiting to happen. As it is simply a regurgitation of what has already been said by real scientists, researchers, and physicians, it will be the "entry drug" used to advertise expensive therapies.
Thank goodness our corporations have not stooped to providing healthcare in exchange for blood donation, skin donation, or other organ donation. But I can imagine UnitedHealthcare merging with Blackstone so that people who need healthcare can get "health care loans".
> Why do you think we have been fighting the 'war on cancer' since the 60s?
Actually, we have made huge progress in the war on cancer, so this example doesn’t seem to support your narrative.
Actual access to reliable healthcare is a massive assumption to make; not everyone has incredible health insurance or lives in a country with sufficient doctors and medical staff. Most places are in crisis for lack of resources. I'd rather ask ChatGPT or Gemini about something urgent than wait 5+ hours in the ER for the doctor to say "just take some aspirin and go to a walk-in tomorrow".
Not to mention, going to an ER for something that doesn't turn out to be an emergency carries a high risk of coming back home with something significantly worse.
Last time I was in the ER, I accompanied my wife; we got bounced due to the lack of an appropriate doctor on site, she ended up getting help at another hospital, and I came back home with a severe case of COVID-19.
Related: every pediatrician I've been to with my kids during flu season says the same thing: if you can't get an appointment at a local clinic, stay home; avoid hospitals unless the kid develops life-threatening symptoms, as visiting such places carries a high risk of the kid catching something even worse (usually RSV).
There are only two places I still routinely wear a mask (n95) these days: Airplanes from waiting at the gate until about 10 minutes after takeoff when the air handling system has had time to clear things out (and the same after landing), and hospital/doctors visits. It's such a high ROI.
We used to observe that our kid(s) got sick every time we flew over the winter break to visit family. We no longer have this problem. (we do still have kids.) Not getting sick turns out to be really quite nice. :-) Hanging out in the pediatrician's office surrounded by snotty, coughing children who are not mine...
I'm Australian, but from what I understand from my friends in America, no.
They only go when it's urgent/very worrying.
I'm also Australian and some of these comments have really made me re-appreciate what we have in Medicare. Damn, it's got its issues, but the American attitudes towards their healthcare system are downright bleak. Deeply worrying that the prevailing attitude seems to be "But ChatGPT is so good" rather than "Our healthcare system is so bad." Remind me to visit my GP next week to thank them.
If you don't need to be physically seen to make a determination, most hospitals and networks operate phone lines where you can speak with a nurse who will triage symptoms and either recommend home remedies or an appointment as needed.
I'm not sure if this has switched entirely to video calls or not, but when it became popular it was a great way to avoid overloading urgent care and general physicians with non-urgent help requests.
As someone who was recently injured and waited three months to see a specialist in Seattle, these lines were not helpful ("yes, you should make an appointment"). The only way I was able to see someone was to write a script that blew up my phone when I got a cancellation window email (the first two I missed even though I responded within 30 seconds).
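For anyone curious, the gist was just polling the inbox and making noise until I noticed. A rough sketch of the idea in Python (the host, credentials, and search terms here are placeholders for whatever your provider and clinic actually use; the real version paged my phone rather than beeping):

    import imaplib
    import time

    # Placeholder values: substitute your IMAP host and an app password.
    HOST, USER, APP_PASSWORD = "imap.example.com", "me@example.com", "app-password"

    def unseen_cancellation_emails() -> list[bytes]:
        conn = imaplib.IMAP4_SSL(HOST)
        conn.login(USER, APP_PASSWORD)
        conn.select("INBOX")
        # Match unread messages whose subject mentions an appointment opening.
        _, data = conn.search(None, '(UNSEEN SUBJECT "appointment")')
        conn.logout()
        return data[0].split()

    while True:
        if unseen_cancellation_emails():
            # "Blow up the phone": here just a terminal bell on repeat; a real
            # version would trigger a push notification or a phone call instead.
            # (It also keeps firing until you mark the email as read.)
            for _ in range(30):
                print("\a", end="", flush=True)
                time.sleep(1)
        time.sleep(60)  # poll every minute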
Yeah, those lines are for triage, not specialty care. It's nice when you've got an infant and are a new parent and everything is terrifying, or a fever and want to know if it is bad enough to warrant going in somewhere.
Exactly, they're not an alternative to a doctor, which is the point... it's nearly impossible to see a provider these days if you don't have a pre-existing relationship. I moved recently and finding a PCP who is accepting new patients is also maddening.
I'm not fond of the fact that it's owned by Amazon but I use OneMedical and I can get a call to a doctor ~immediately, or to my regular doctor within a day or so.
I took an at-home flu test, messaged my doctor at no cost telling him I'd tested positive (he didn't even ask for a picture), and paid $25 from a tax-free account the same day. My doctor is part of a large hospital system too; he didn't want me to come in, he just sent the Rx.
People with public health care may have a hard time understanding the costs of medical advice and pharma here in the US. We're in deep doo-doo.
I have no job and no health insurance. After crafting my prompt correctly (I have W symptoms, X blood markers, Y lifestyle, and Z demographics), ChatGPT accurately diagnosed my problem. (You have RED-S and need to eat more food, dumbass.)
Or, I could've gone to a doctor and overloaded our healthcare system even more.
ChatGPT serves as a good sanity check.
It depends on where you live and what the issue is.
Where I live, doctors are only good for life threatening stuff - the things you probably wouldn't be asking ChatGPT anyway. But for general health, you either:
1. Have to book in advance, wait, and during the visit doctor just says that it's not a big deal, because they really don't have time or capacity for this.
2. You go private, the doctor goes on a wild hunt with you, you spend a ton of time and money, and then 3 months later you get the answer ChatGPT could have told you in a few minutes for $20/mo (and probably with better-backed, more recent research).
If anything, the only time ChatGPT answers wrong on health-related matters is when it tries to be careful and omits details because of the "be advised, I'm not a doctor, I can't give you this information" bullshit.
A lot of doctors also give bad and incorrect advice. I actually find that to be the norm
Until very recently, it took a week to get an appointment with my primary care doctor, and calls weren't an option. Now that video calls are an option, I can get one in a day or two. I could always go to urgent care for a faster answer, but that costs more.
With what money?
Reach further in to your local community? You need to look outside the screens. Your life will be better for it.
"we’ve worked with more than 260 physicians" yet not a single one of their names is proudly featured in this article. Well, the article itself does not even have an author listed. Imagine trusting someone who doesn't even disclose their identity with your sensitive data.
I'm kind of torn on this. On one hand, I can't seem to trust doctors any more. I recently had a tooth removed (on the advice of two different doctors) with the claim that it would resolve my pain, which it did not, and now 3 different doctors don't know what's causing my pain.
Most doctor advice boils down to drink some water and take a painkiller, delivered while glancing at my medical history for 15 seconds before dedicating 7 minutes to me, after which they move on to yet another patient.
So compared to this, an AI that can analyze all my medical history and has access to the entirety of publicly available medical research could be a very good tool to have.
But at the same time technofeudalism, dystopia, etc.
Unfortunately, doctors are fallible humans, but they are also the infallible gatekeepers of pharmaceuticals and surgeries. I've become relatively knowledgeable about a condition I have, to the extent that I'll often have much more subject knowledge than a given nonspecialist doctor. All the same, they have to make on-the-spot decisions about me that will have serious consequences for both of us, in between twenty similar cases on either side.
As an aside, I'm very sorry for what you're going through. Empathy is easy when you've had something similar! I'll say that in my case, removing a tooth that was impinging on a nerve did have substantial benefits that only became clear months down the line. I'm not saying that will happen for you, but a bit of irrational optimism seems to be an empirically useful policy.
Part of being a fallible human is recognizing when you're not sure of something.
We must not ignore the complex interpersonal dynamic of the doctor's certainty vs the patient's insistence.
People get better healthcare outcomes with strong self advocacy. Not everyone is good at that.
A parallel exists where not everyone negotiates for a better job offer.
Before I moved to where I live now, I had a doctor's office open in my neighborhood I could walk to. At first I thought it was amazing and I started going there. It was a really fancy place, state of art, loads of diagnostic equipment and a limited on-site lab, almost a hospital. But pretty soon I realized I was almost always seeing Nurse Practitioners, or Doctors so fresh out of medical school they were still wet behind the ears.
Even worse, they were almost always wrong about the diagnosis, and I'd find myself on 3 or 4 rounds of antibiotics, or would go to the pharmacy to pick something up and they'd let me know the cocktail I had just been prescribed had dangerous contraindications. I finally stopped going when I caught a doctor searching WebMD when I was on my fourth return visit for a simple sinus infection that had turned into a terrible ear infection.
My next doctor wasn't much better. And I had really started to lose trust in the medical system and in medical training.
We moved a few years ago to a different city, and I hadn't found a doctor yet. One day I took sick with something, went to a local walk-in clinic in a strip mall used mostly by the local underprivileged immigrant community.
Luck would have it I now found an amazing doctor who's been 100% correct in every diagnosis and line of care for both me and my wife since - including some difficult and sometimes hard to diagnose issues. She has basically no equipment except a scale, a light, a sphygmomanometer, and a stethoscope. Does all of her work using old fashioned techniques like listening to breathing or palpation and will refer to the local imaging center or send out to the local lab nearby if something deeper is needed.
The difference is absolutely wild. I sometimes wonder if she and my old doctors are even in the same profession.
I guess what I'm trying to say is, if you don't like your doctor, try some other ones until you find a good one, because they can be a world difference in quality -- and don't be moved by the shine of the office.
Yes, I've found the more financially motivated doctors in the higher end "concierge" type centers are not as skilled or experienced or overall motivated as the ones who seek out the patients with difficult cases at government reimbursement rates. The irony...
This is part of the reason alternative medicine has become so popular. There are definitely still some trustworthy doctors out there, but I share your experience: I feel left with no recourse but to take care of things myself after seeing multiple doctors who made it very clear they had no interest or time to listen to me.
> Most doctor advice boil down to drink some water and take a painkiller
Forgive my brusqueness here, but this could only be written by someone who has not yet been seriously ill.
You've clearly touched on the problem with healthcare in general, though. If it's not life threatening, it's not taken seriously.
There are a lot of health related issues humans can experience that affect their lives negatively that are not life threatening.
I'm gonna give you a good example: I've suffered from mild skin-related issues for as long as I can remember. It's not a big deal, but I want my skin to be in better condition. I went through tens of doctors, and they all did essentially some variation of the Tylenol equivalent of skin treatment. With AI, I've been able to identify the core problems that every licensed professional overlooked.
Brusqueness? More like insensitivity, lack of empathy, and ignorance.
My 12-year-old daughter (correctly) diagnosed her own food allergy after multiple trips to the ER for stomach pains that resulted in "a few Tylenol/Advil with a glass of water".
This isn't a criticism of you, I don't know your full story. But I think many people have a misconception of the role of an ER. I know an ER doctor well, and the role of an ER is to, in approximate order of priority:
1. Prevent someone from dying
2. Treat severe injuries
3. Identify if what someone is experiencing is life-threatening or requires immediate treatment to prevent their condition worsening
4. Provide basic treatment and relief for a condition which is determined not to be an imminent threat
In particular, they are not for diagnosing chronic conditions. If an ER determines that someone's stomach pain is not an imminent, severe threat to their health, then they are sending them out of the ER with medication for short-term relief in order to make room for people who are having an emergency. The ER doc I know gets very annoyed at recurring patients who expect the ER to help them diagnose and treat their illness. If you go the ER, they send you home, and the thing happens again, make an appointment with a physician (and also go to the ER if you think it's serious).
Unfortunately, the medical system is very confusing and difficult to navigate. This is a big part of why so many people end up at ERs who should be making appointments with non-emergency doctors - finding a doctor and making appointments is often hard and stressful, while an ER will look at anyone who walks through the doors.
Where I live there is a lack of family doctors.
That's kind of how allergies are discovered, though. Doctors will tell you to go on a restrictive food diet and binary-search for the trigger if it doesn't cause anaphylaxis. Based on my experience with allergies, if it's not anaphylaxis, doctors don't consider allergies super important to resolve. Finally, the immune system is complicated, and your daughter may have an unusual reaction that is not IgE-mediated. In other words, it could be a reaction to a foreign protein rather than an antibody-driven histamine spike, in which case: yes, it's extremely unpleasant and feels like an allergy, but because it doesn't lead to anaphylaxis it's not treated as a medical concern.
> Doctors will tell you to go on a restrictive food diet
I would have mentioned that if it happened. It didn’t.
did you confirm it with a nutritionist and elimination diet?
No, I confirmed it by watching her get a rash immediately after eating anything with tartrazine.
Doctors who treat shit you can treat on the spot, where it either gets better or it doesn't, tend to be really good. Surgeons in particular. Doctors who treat shit without clear causes, where you give medicine and sometimes it kinda improves, tend to be pretty bad.
This is both a liability and a connectedness issue.
Give medicine to who?
I mean, unless you have a life-threatening emergency, it's the way the entire Danish healthcare system runs.
All the downsides you listed can be solved by public open-source models. The ones we have are pretty good already, and I would hope they only get better in the near future. Once you can run one on your machine, you can safely give it all your data and much more. Of course I would still like a human doctor, but it might be a better tool for personal research before you see an expert than what we had in the past.
I’m married to a doctor. We both complain about the medical system. It’s terrible. One of my biggest complaints is doctors seem to have no clue how to get symptoms out of patients in a way that translates to diagnosis.
I’ve had longstanding GI issues. I have no idea how to describe my symptoms. They sure seem like a lot of things, so I bring that list to my doc and I’m met with “eh, sounds weird”.
By contrast, I solved my issues via a few sessions with Claude. I was able to rattle off a whole list of symptoms, details about how it’s progressed, diets I’ve tried, recent incidents, supplements/meds I’ve taken. It comes up with hypothesis, research references, and forum discussions (forums are so useful for understanding - even if they can be dangerously wrong). We dive into the science of those leading causes to understand the biochemistry involved. That leads to a really deep understanding of what we can test.
Turns out there's a very clear correlation between histamine levels and the issues I deal with. I realized a bunch of stuff that I thought was healthy (and is, for most people) was probably destroying my health. I cut those out and my GI issues have literally disappeared. Massive, massive life improvement with a relatively simple intervention (primarily just avoiding chicken and eggs).
I tell my doctor this and it’s just a blank stare “interesting”.
Out of curiosity, what was the stuff you thought was healthy but was destroying your health?
I seem to struggle with high-histamine and histamine-liberating foods. This was challenging because histamines can build up quickly in certain foods that are labelled low-histamine, and some foods can "liberate" histamines even if they're low in histamine themselves. For example, chicken is technically low-histamine but quickly builds up histamines and can cause histamine liberation. A "safe" meal like a chicken and rice bowl would destroy me.
Quantity and environment also played a huge role. If my histamine levels were low, I could often tolerate many of my trigger foods. However, if they were high (like during allergy season), the same foods would trigger me.
It took a very, very long time to narrow in on the underlying issue.
> But at the same time technofeudalism, dystopia, etc.
You could run LLMs locally to mitigate this. Of course, running large models like GLM-4.6 is not feasible for most people, but smaller models can run even on MacBooks and sometimes punch way above their weight.
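For example, a local runtime such as Ollama serves a small HTTP API on your own machine, so the conversation never leaves it. A minimal sketch, assuming the Ollama daemon is running and a small model has already been pulled (the model name and the question are just examples):

    import requests

    def ask_local(question: str, model: str = "llama3.2") -> str:
        # Everything stays on this machine; no cloud provider sees the prompt.
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={
                "model": model,
                "messages": [{"role": "user", "content": question}],
                "stream": False,
            },
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["message"]["content"]

    print(ask_local("What could low ferritin plus low vitamin B6 suggest? Background only, not medical advice."))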
Based on recent experiences guiding my parents and younger brother through the medical world, I'm happy with AI as an alternative or complement. There are good doctors out there, but they're often booked solid or you only see them for 5-20 minutes in your parade of specialists you're forced to see to extract as much money from health insurance as possible
A bit invasive and scary to suggest, but I'd push them to rule out any form of cancer. A family member of mine went through a similar ordeal: multiple teeth removed due to continuous pain, and only later, after much back and forth between doctors, did they run extensive tests and find he had mandibular cancer.
> I'm kind of torn on this. On one hand, I can't seem to trust doctors any more.
that's alright, if this idea takes off your insurance won't see any need to pay for you to speak to one, when they can dump you onto "AI" instead
I am not a doctor but this could be a neuropathy. Try talking to a neurologist, and possibly try cutting all caffeine.
Sinus infection?
This is exactly my feelings! I think medicine is a crucial field, but I have very little respect for individual doctors themselves.
- Humans are self-healing: if a doctor does absolutely nothing, most issues will resolve on their own, or, if they're chronic (e.g. back pain), the patient won't be worse off than with the alternative. I'm in a country with subsidised health care, and people go to the doctor immediately for absolutely anything. A doctor could have a 99% success record by handing out placebos.
- Most patients have common issues. I.e. maybe 30 people visit the clinic on a given day, it's possible that all 30 of them have come because they have the flu. Doctors are human, nobody's going to investigate potential pneumonia 30 times a day every day for 6 months. So doctors don't: someone comes in and is coughing, they say it's flu, on to the next patient. If the person really has pneumonia, they'll come back when it gets worse.
- Clinics are overbooked. I don't know if it's licensing, GDP, artificial scarcity, cost regulations or what, but doctors probably don't actually have time to investigate anyways.
- Doctors don't receive any rigorous continuing education. I'm sure there are some requirements, but I've gone to doctors in the last year and gotten the "stress causes ulcers" explanation for what turned out to be food sensitivity issues (there was no visible ulcer, mind you, so it was concluded that it was an invisible ulcer). Slow, gradual maintenance and heavy reading are hard things that humans are necessarily bad at.
- Patients don't want to hear the truth. Lifestyle changes, the fact that nothing can be done, there's no pills to cure you, etc. Even if doctors could give a proper diagnosis, it could end up being bad PR so doctors are conditioned away from it.
- Doctors don't follow up, so they get absolutely no feedback on whether most of their treatments actually work. Patients also don't come back when their issue is resolved, but even if they do, doctors don't care. Skin issue: the doctor prescribed a steroidal cream, the redness disappeared, the doctor declared me cured, and the redness came back worse a week later. For a scientific field, there's no excuse for anything but evidence-based medicine, yet I haven't seen a single doctor even attempt to improve things statistically.
I've heard things like: doing tests for each patient would be prohibitively expensive (yes, but it should at least be an option patients can pay for), or the amount of things medicine can actually cure today is so small that the ROI on the extra work would be low (yes, but in the long term the information could further research).
I think these are obvious and unavoidable issues (at least with the current system), but at the same time if a doctor who ostensibly became a doctor out of a desire to help people willingly supports this system I think they share some of the blame.
I don't trust AI. Part of me goes, well what if the AI suddenly demands I have some crazy dental surgery? And then I go, wait, the last dentist I went to said I need some crazy dental surgery. That none of the other 3 dentists I went to after that even mentioned. And as you said an AI will at least consider more info...
So I do support this as well. I'd like to have an AI do a proper diagnosis, then maybe a human rubber stamp it or handle escalation if I think there's something wrong...
most doctors - like most mechanics - are the worst debuggers in humankind.
I once went to the UCSF ER twice in the same weekend for an issue. On day one, I was told to take some ibuprofen and drink water - nothing to worry about. I went home, knowing this was not a solution. On day two I returned to the ER because my situation had gotten 10x worse. The (new) doctor said "we need to resolve this asap" and I was moved into a room where they gave me a throat-numbing breathing apparatus and then shoved a massive spinal-tap needle into my throat to drain 20ml of fluid from behind my tonsil. I happened to bump into the doctor from the previous day on my way out and gave him a nice tap on the shoulder, saying, thanks for all the help, doc. UCSF tried to bill me twice for this combined event, but I told them to get fucked due to the negligence on day one. The billing issue disappeared.
I had a Jeep that I took into the shop 2, 3, 4 times for a crazy issue. The radio, seat belt chime, and emergency flashers all become possessed at the same time. Using my turn signal would cause the radio to cut in and out. My seat belt kept saying it was disconnected. No one could fix it. What was the issue? A loose ground on the chassis that all of those different systems were sharing. https://www.wranglerforum.com/threads/2015-rubicon-with-elec...
These are just two examples from my life, but there are countless. I just do everything myself now, because I trust no one else.
I have a very similar story. It went from ibuprofen and water, to antibiotics the following day, to different antibiotics the day after, and finally to having my tonsils purged (i.e., cut open) under local anesthesia within 4 days. By that point I could no longer speak. It was the most pain I have felt in my life.
I still trust doctors, but this made me much more demanding towards them.
That (a peritonsillar abscess) was also the most painful thing I have ever experienced in my life. I was genuinely hoping a lightning bolt would come out of the heavens and kill me.
Ah, the rare 'HN person who knows more about everything than everyone', what a joy to see one in the wild.
If you're getting bad troubleshooting it's because you're going to places that value moving units (people) as fast as possible to pay the rent. I assure you neither most mechanics nor most doctors are 'the worst debuggers in humankind'. There are plenty of mechanics and doctors that will apply a 65% solution to you and hope that it pays off, but they're far from the majority.
> Most doctor advice boils down to drink some water and take a painkiller, delivered while glancing at my medical history for 15 seconds before dedicating 7 minutes to me, after which they move on to yet another patient.
Most of the time, that's the correct approach. However, you can actually do better by avoiding painkillers, since they can have side effects. There are illnesses that are easily diagnosable and have established medications; doctors typically prescribe what pharmaceutical companies have demonstrated to them. But the rest of the "illnesses," which make up the majority, are pretty much still a mystery.
For the most part, neither you nor your doctor can do much about these. Modern medicine often feels like just a painkiller subscription.
Is it serious pain? Lasting a very specific amount of time? In one side of your face only? From zero to 10 pretty quickly?
... at the same time ... OpenAI is a business with limited financial success relative to its expenses, and insurance companies are thirsty for improved models. Currently the entire might of the US Government is occupied with Central American misadventures, so do you think they're gonna stop them?
Doctors are usually just normal people that had the means, memory, drive and endurance to get through an exclusive education that will guarantee them a life in relative affluence and maximized adoration.
Connected thinking, an interest in helping, and extensive depth or breadth of knowledge beyond what they need for their chosen specialization's day-to-day work are rare and coincidental.
> Doctors are usually just normal people that had the means, memory, drive and endurance to get through an exclusive education that will guarantee them a life in relative affluence and maximized adoration.
Is that all?!
Also, everyone is just “normal people” in aggregate.
Unfortunately, I feel like I'm in the minority here, but AI has been really helpful to me and my doctor visits when it comes to preparing for a ~10-minute appointment that historically always felt like it was never long enough. I can sit down with an LLM for as long as I need and discuss my concerns and any potential suggestions, have it summarize them in a structure that's useful for my doctor, and send an email ahead of the appointment. For small things, she doesn't even need me to come in anymore and a simple phone call to confirm is enough. With the amount of pressure the healthcare system is under, I think this approach can free up a lot of valuable time for her to spend with the patients who need it most.
Not everyone who uses ChatGPT gets advice on harming or killing themselves, but some people sure do. And some act on it.
Just as we don't necessarily want to eliminate a technology because of a small percentage of bad cases, we shouldn't push a technology just because of a small percentage of good anecdotes.
This genie isn't going back in the bottle but I sure hope it ends up more useful than harmful. Which now that I write it down is kind of the motto of modern LLM companies.
"we built foundational protections (...) including (...) training our models not to retain personal information from user chats"
Can someone please ELI5 - why is this a training issue, rather than basic design? How does one "train" for this?
This is just marketing nonsense. You don't have to train models not to retain personal information; they simply have no memory. In order to have a chat with an LLM, every time the whole conversation history gets reprocessed: it is not just the last question and answer that get sent to the LLM, but all the preceding back and forth.
But what they do is exfiltrate facts and emotions from your chats to create a profile of you and feed it back into future conversations to make it more engaging and give it a personal feeling. This is intentionally programmed.
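To make the statelessness concrete, here's a minimal sketch of a chat loop using the OpenAI Python SDK (the model name is just an example). The "memory" is nothing but a client-side list that gets resent in full on every turn:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    history = []       # the client, not the model, holds the "memory"

    def ask(user_text: str) -> str:
        history.append({"role": "user", "content": user_text})
        # The full history is resent each turn; the model itself retains nothing.
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer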
> In order to have a chat with an LLM, every time the whole conversation history gets reprocessed: it is not just the last question and answer that get sent to the LLM, but all the preceding back and forth.
Btw, context caching can overcome this, e.g. https://ai.google.dev/gemini-api/docs/caching . However, this means it needs to persist the (large) state in the server side, so it may have costs associated to it.
I think they mean that they trained the tool-calling capabilities to skip personal information in tool call arguments (for RAG), or something like that. You need to intentionally train it to skip certain data.
>every time the whole conversation history gets reprocessed
Unless they're talking about the memory feature, which is some kind of RAG that remembers information between conversations.
Same question. I wonder if they use ML to try to classify a chat as health information and not add it to their training data in that case.
I also wonder what the word "foundational" is supposed to mean here.
I assume they want to retain all other info from user chats, and they're using an LLM to classify the info as "personal" or not.
Could be telling the memory feature not to remember these specific details
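If that's right, the "training" largely amounts to a filter in front of the persistent memory store. A purely hypothetical sketch of that shape in Python (the function names and keyword matching are mine, not OpenAI's; a production system would presumably use a learned classifier rather than substring checks):

    def classify(fact: str) -> str:
        # Stand-in for a learned classifier; keyword matching is only for illustration.
        health_terms = ("diagnos", "medication", "symptom", "blood", "allerg")
        return "health" if any(t in fact.lower() for t in health_terms) else "general"

    def maybe_remember(memory_store: list[str], fact: str) -> None:
        # Fail closed: persist only facts classified as non-sensitive.
        if classify(fact) == "general":
            memory_store.append(fact)

    store: list[str] = []
    maybe_remember(store, "User is a software engineer")       # kept
    maybe_remember(store, "User takes medication for asthma")  # dropped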
I used to work in healthtech. Information that can be used to identify a person is regulated in America under the Health Insurance Portability and Accountability Act (HIPAA). These regulations are much stricter than the free-for-all that constitutes usage of information in companies that are dependent on ad networks. These regulations are strict and enforceable, so a healthcare company would be fined for failing to protect HIPAA data. OpenAI isn't a healthcare provider yet, but I'm guessing this is the framework they're basing their data retention and protection around for this new app.
Going to a probabilistic system for something that can/should be deterministic sets off a lot of red flags.
I've worked on medical software, specifically a drug interaction checker for hospitals. The system cannot be written like a social media website… it has to fail by default, and only succeed when an exact, correct solution has been determined. The result must be repeatable given the same inputs. The consequence otherwise is that people die.
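To make "fail by default" concrete, here's a toy sketch of that behavior in Python (the interaction data is invented for illustration; real checkers run against curated, versioned interaction databases):

    # Every drug pair either maps to a vetted answer or is refused outright.
    INTERACTIONS = {
        frozenset({"warfarin", "aspirin"}): "MAJOR: increased bleeding risk",
    }

    def check(drug_a: str, drug_b: str) -> str:
        key = frozenset({drug_a.lower(), drug_b.lower()})
        if key not in INTERACTIONS:
            # Fail by default: an unknown pair is never silently passed as safe.
            return "UNKNOWN: not in database; requires pharmacist review"
        return INTERACTIONS[key]

    # Deterministic: same inputs, same output, every time. No sampling involved.
    assert check("Aspirin", "Warfarin") == check("warfarin", "aspirin")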
Health and medicine is very far from deterministic. Your drug interaction checker is deterministic because the non-determinism is handled at a higher level (the doctor / patient interaction) in the health care system. Individual patients often respond wildly differently to the same medicine even in the absence of drug interactions.
Similarly the non-determinism in ChatGPT should be handled at a higher level. It can suggest possibilities for diagnosis and treatment, but you should still evaluate those possibilities with the help of a trained physician.
a drug interaction checker can be deterministic, based on a static corpus of drug interaction data
a diagnostic system should not necessarily be deterministic, because it always operates on incomplete data and it necessarily produces estimates of probability as an output
Human doctors are probabilistic systems.
With real consequences for mistakes, or at least a framework for accountability.
This type of naive response really is bothersome!
Humans are probabilistic systems?! You might want to inform the world's top neuroscientists and philosophers to down tools. They were STILL trying to figure this out but you've already solved it! Well done.
There's nothing naive about it. Most doctors work off of statistics and probabilities stemming from population based studies. Literally the entire field of medicine is probabilistic and that's what angers people. Yes, 95% chance you're not suffering from something horrible but a lot of people would want to continue diagnostics to rule out that 5% that you now have cancer and the doctor sent you home with antibiotics thinking it's just some infection, or whatever.
I don't think it's a naive response. Perhaps it's obvious to you that human doctors can't produce an "exact correct solution", but quite a lot of people do expect this, and get frustrated when a doctor can't tell them exactly what's wrong with them or recommends a treatment that doesn't end up working.
Despite this, given that health care prices in the USA continue to accelerate from relentless government interference and regulation, perhaps if the deterministic side of things could be figured out, we could monumentally decrease cost and increase access.
Integration with Function is a great use-case. There is a huge category of pre-diagnostic health questions (“Medicine 3.0” as Attia puts it) where personalization and detailed interpretation of results is important, yet insurance typically won’t cover preemptive treatment.
Not to mention that doctors generally don’t have time to explain everything. Recently I’ve been doing my own research and (important failsafe) running the conclusions by my doctor to validate. Tighter integration between physician notes, ChatGPT conversations, and ongoing biomarkers from e.g. function and Apple Health would make it possible to craft individualized health plans without requiring six-figure personal doctor subscriptions.
A great opportunity to improve the status quo here.
Of course - as with software, quality control will be the crux. We don’t want “vibe diagnosing”.
What you described already exists:
- for liver: https://www.longevity-tools.com/liver-function-interpreter - for insulin resistance: https://www.longevity-tools.com/glucose-metabolism-interpret...
etc
The problem with doctor appointments is that too often, physicians don't actually think carefully about your case.
It's like they one-shot it.
This is why I've had my dr change their mind between appointments, having had more time to review the data.
Or I get 3 different experts giving me 3 different (contradicting!) diagnoses.
That's also why I always hesitate listening to their first advice.
I see many comments like this in here. Where is this so common? I'm not from the US, but I had the impression that its healthcare, while expensive, is good. If I assume most of these comments come from the US, then it's just expensive.
I cannot imagine doctor evaluating just one possibility.
My cousin just finished years of medical school, residency, and his first job as a psychiatrist. He opened a private practice a year ago and has been working hard to build a client base. I fear this will destroy his livelihood. He can't compete on convenience: to see him, a person has to reach him via phone or email, process their healthcare information, and then physically visit him. All the while, this tool has been designed to process health information and can speak out loud with the patient instantly. Sure, he can prescribe medications, but many people he sees do not need medication. Even if the doctor is better, the convenience of this tool will likely win out.
If America wants to take care of its people, it needs to tear down the bureaucracy that is our healthcare system and streamline a single payer system. Otherwise, doctors will be unable to compete with tools like this because our healthcare system is so inconvenient.
> Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment.
I suspect that will be legally tested sooner rather than later.
As long as the liability precedents set by prior case law and current regulations hold, there should be no problem. OpenAI and the hordes of lawyers working for and with them will have ensured that every appropriate and legally required step has been taken, and at least for now, these are software tools used by individuals. AI is not an agent of itself or the platform hosting it; the user's relative level of awareness of this fact shouldn't be legally relevant as long as OpenAI doesn't make any claims to the contrary.
You also have to imagine that they've got their zero guardrails superpowered internal only next generation bot available to them, which can be used by said lawyer horde to ensure their asses are thoroughly covered. (It'd be staggeringly stupid not to use their AI for things like this.)
The institutions that have artificially capped levels of doctors, strangled and manipulated healthcare for personal gain, allowed insurance and health industries to become cancerous - they should be terrified of what's coming. Tools like this will be able to assist people with deep, nuanced understanding of their healthcare and be a force multiplier for doctors and nurses, of which there are far too few.
It'll also be WebMD on steroids, and every third person will likely be convinced they have stereochromatic belly button cancer after each chat, but I think we'll be better off, anyway.
And just like the CSAM of Grok, it will be exempt from consequences
It also doesn't make any sense. It's like self-driving cars that require you to pay attention at all times anyway.
How is this NOT a class action suit in the making?
I'm dealing with a severe health ailment with my cat right now and ChatGPT has been pretty invaluable in helping us understand what's going on. We've been keeping our own detailed medical log that I paste in with the lab and radiology results and it gives pretty good responses on everything so far. Of course I'm treating the results skeptically but so far it has been helpful and kept us more informed on what's going on. We've found it works best if you give it the raw facts and lab results.
The main issue is that medicine and diseases come with so many "it depends" and caveats. Like right now my cat won't eat anything, is it because of nausea from the underlying disease, from the recent stress she's been through, from the bad reaction to the medicine she really doesn't like, from her low potassium levels, something else, all of the above? It's hard to say since all of those things mention "may cause nausea and loss of appetite". But to be fair, even the human vets are making their own educated guesses.
maybe the openai moat is the data we shared along the way.
no, seriously: OpenAI has seemingly lost interest in having the "best" model, instead optimizing for other traits such as speech and general human-likeness. There's obviously Codex, but in my experience it's slower and worse than the other big 2 in every single way: cost, speed, and accuracy. Codex does seem to be loved most by vibe coders who don't really know how to code at all, so maybe that's also who they're optimizing for, and why it doesn't personally suit me.
Others might have better models, but OpenAI has its users emotionally attached to its models at this point, whether they know it or not. There were several times I recommended switching, and the response I got was that "chatgpt knows me better".
GPT 5.2 High is the gold standard in code quality right now.
The “Codex” models suck.
Claude, even Opus, creates major side-effects and misses obvious root causes of bugs.
I haven't done bug hunting with OpenAI models, so I cannot comment, but given the right tools, Opus can most definitely solve bugs as well.
I think Opus is special because it was trained explicitly to rely heavily on such tools, while GPT seems to "know" how things work a lot better, reducing the need for tools.
Opus being able to dedicate a lot more parameters to these things makes it a better model if you give it what it needs, but that's just my observation. It's also much faster, as a bonus.
Gemini helped diagnose me with eosinophilic esophagitis. I have had problems with swallowing all my life, and doctors kept dismissing them as a psychological problem. I think there is a great space for AI in medical help.
> eosinophilic esophagitis
And how do you treat that?
There are pills and a minor balloon dilation. The point is I would not have been treated without it. Now I can go out to dinner with my girlfriend; that would have been hugely stressful before.
This was expected. People are going to be convinced that this AI knows more than any doctor, will self medicate, and will die, harm others, their kids, etc.
Great work, can't wait to see what's next.
In my experience GPT is uber-careful with health related advice.
Which makes me think it's likely on the user if what you said actually happened...
Please look at the post. This is about a GPT designed to give you health advice, with all the hallucinations, miscommunication, bad training data, and lack of critical thinking (or any thinking, obviously) that implies.
I pity the doctors who will now have to deal with such self-diagnosed "patients". I wonder if general medicine doctors will see a drop in patients as AI convinces people to see a specialist based on its diagnosis?
> researchers found that searching symptoms online modestly boosted patients’ ability to accurately diagnose health issues without increasing their anxiety or misleading them to seek care inappropriately [...] the results of this survey study challenge the common belief among clinicians and policy-makers that using the Internet to search for health information is harmful. [0]
For example, "man Googles rash, discovers he has one-in-a-million rare disease" [1].
> Ian Stedman says medical professionals shouldn't dismiss patients who go looking for answers outside the doctor's office - even if they resort to 'Dr. Google.'
> "Whenever I hear a doctor or nurse complain about someone coming in trying to diagnose themselves, it boils my blood. Because I think, I don't know if I'd be dead if I didn't diagnose myself. You can't expect one person to know it all, so I think you have to empower the patient."
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC8084564/
[1] https://www.cbc.ca/radio/whitecoat/man-googles-rash-discover...
Indeed. My doctor likes it when people search the Internet - as long as they come to him before doing anything drastic.
If there's a reasonable audit trail for the doctor to verify that valid differential reasoning was done that they can quickly verify, there's relatively few downsides and lots of upsides for them.
Then you have not dealt with doctors.
Some physicians are absolutely useless and sometimes worse than not receiving any treatment at all. Medicine is dynamic and changes all the time. Some doctors refuse to move forward.
When I was younger, I had a sports injury. I was misdiagnosed for months, until I did my own research and had the issue fixed with surgery.
I have many more stories of doctors being straight up wrong about basics too.
I see physicians in a major metro area at some of the best hospital networks in the US.
I sadly have to agree with you. I had a 30+ year orthopedic surgeon confidently tell me my ACL wasn't torn.
Two years later when I got it fixed the new surgeon said there was nothing left of the old one on the MRI so it must have been torn 1.5-2+ years ago.
On the other hand, to be fair to doctors: I had a phase of looking into supplements and learned the hard lesson that you really need to dig into the research, or find a very trusted source, to have any idea of what's real, because for a while I definitely thought a few were useful that definitely were not :)
And also to be fair to doctors I have family members who are the "never wrong" types and are always talking about whatever doctor of the day is wrong about what they need.
My current opinion is that using LLMs for this, with regard to informing or misinforming, is no different from most other things. For some people this will be valuable and potentially dramatically helpful, and for others it might send them further down roads of misinformation and conspiracies.
I guess I ultimately think this is a good thing because people capable of informing themselves will be able to do so more effectively and, sadly, the other folks are (realistically) probably a lost cause but at the very least we need to do better educating our children in critical thinking and being ok with being wrong.
I was misdiagnosed by a top specialist and took the wrong medicine for 4 years; the doctor kept making up reasons why I wasn't getting better.
By luck, I consulted another specialist because the former doctor wasn't available at an odd time, and some re-tests helped determine that I needed a different class of medicines. I was better within months.
4 years of wrong medicines and overconfidence from a leading doctor. Now I have a tool to double-check what the doctor has recommended.
This is not much different than a smarter WebMD, if anything, more doctors are using ChatGPT with their patients today than you realize anyway.
Not to mention, doctors are absolutely fallible and misdiagnose constantly.
Happy for the millions of people who are getting misdiagnosed due to rush, dismissal, and missed clues
I usually look up my symptoms (not on ChatGPT) and when I finally go to a doctor I just let them do their job, but I usually do it just to have some idea of what's going on by the time I go there. My wife's a nurse (not a Doctor) so sometimes she can tell me if what I read sounds crazy or what have you based on her own personal experience with patients.
I hope this puts at least some of the scam artists known as psychiatrists out of a job.
Quite the contrary: it will give them much more work, I can guarantee.
Specifically as someone in the UK, where doctors are free but extremely hard to get hold of, this is quite interesting to me.
Another interesting aspect is that the NHS app makes all your detailed health history (doctors notes, scan results etc.) available to you as the patient.
Which in turn means you have the option of feeding it into ChatGPT. This feels potentially very valuable and a nice way of working around issues with whether doctors themselves are allowed to do it.
I'm not sure this applies to every surgery, but certainly my dad had access to everything immediately when he had a scan.
Out of curiosity, do you know from when this data starts being available? Is it just from when you install the NHS app and create some form of login, or will everything be there?
I hadn't heard of this before.
Something to note here is that just yesterday (January 6 2026) the FDA announced changes around regulation of wearable & AI enabled devices: https://www.statnews.com/2026/01/06/fda-pulls-back-oversight... (" FDA announces sweeping changes to oversight of wearables, AI-enabled devices The changes could allow unregulated generative artificial intelligence tools into clinical workflows")
I think it's only a matter of time before OpenAI starts doing hardware (phones, watches, vr)
I present to you https://openai.com/sam-and-jony/
OpenAI is cooked. They can't compete with the others, so they're just experimenting with side things...
I always check my blood test and mrı results with ChatGPT before showing them to the doctor. The doctor says the same thing ChatGPT says, and ChatGPT gives clearer, more detailed information. However, we shouldn't trust ChatGPT's results 100%; it's just good for getting an idea. Also, we shouldn't trust any doctor 100%.
I ran a part of your comment through ChatGPT, and it said you're typing on a Turkish keyboard or have it set as your language. Is this true?
I ran a part of your comment through Claude, and it said you're typing on a Hungarian keyboard or have it set as your language. Is this true?
Wait. Why? What prompted this? The phrasing?
> I always check my blood test and mrı results with ChatGPT
MRI letters
Yes, correct!
> you can sign up for the waitlist
Waitlist: 404 page not found.
Just like regular healthcare then I see.
Vibedeployments
Embarrassing
the future is now
The number of people willing to delegate to ChatGPT tells me that in the near future only rich people will be able to speak with a real doctor. The current top comment, about someone's uncle being saved thanks to ChatGPT guidance, says it all.
Unfortunately, that's kind of already the case. The standard of care for wealthy people, who often purchase "Personal Medicine" services, can be astoundingly better than what is available to the general public. It's more like having a health team behind you than just a lone GP. They can push you through the system, get treatments, ask colleagues, and collaborate with other teams, way quicker.
Many people in the comments are worried about laypeople using this for medical advice.
I'm worried that hospital admins will see this as a way to boost profit margins. Replace all the doctors by CNAs with ChatGPT. Yes, doctors are in short supply, are overworked, and make mistakes. The solution isn't to get rid of them, but to increase the supply of doctors.
I wonder whether this will have the same pitfalls as regular ChatGPT.
The latter implicitly assumes all your questions are personal. It seems to have no concept of context for its longer-term retentions.
Certainly for health, non-acute things seem to matter a lot. This is why a personal doctor who has known you for decades will spot things beyond your current symptoms.
But ChatGPT will uncritically retain, from that time you helped your teacher relative build her lesson plans, that you "are a teacher in secondary education", or from that time you helped diagnose a friend's car trouble, that you "drive a high performance car", just the same as your regular "successfully built a proxmox datacenter".
With health, there will be many users asking on behalf of, or helping out, an elderly relative. I wonder whether all those "diagnoses" and "issues" will be correctly attributed to the right "patient", or just be mixed together and assumed to all be about "you".
yikes: https://news.ycombinator.com/item?id=46524382
[Teenager died of overdose 'after ChatGPT coached him on drug-taking']
An estimated 371,000 people die every year following a misdiagnosis, and 424,000 are permanently disabled. https://qualitysafety.bmj.com/content/33/2/109?rss=1
Admittedly I am basing this on pure vibes: I'd bet that adding AI to the healthcare environment will, on balance, reduce this number, not increase it.
Spoken like a true techbro
Would you like to propose a different metric for estimating whether a given tool benefits society?
Would you like to provide actual proof that your favorite toy benefits people's health before daring others to challenge you? The imagined data you’ve yet to provide can't possibly justify the harm it's causing by pushing people on the edge to suicide.
The article is paywalled but appears to concern abusing a cocktail of kratom, alcohol, and Xanax. I don't really think that's the same. Also, this feature isn't really about making ChatGPT start answering medical questions anyhow, since people are already doing that.
This is absolutely going to kill people. In a country that had even a modicum of regulation around providing healthcare this would be illegal.
Completely agree. It is deeply irresponsible for OpenAI to release this product.
They should be held liable for malpractice for every incorrect statement and piece of advice it provides.
It's also absolutely going to save people. The question is what the acceptable (and the actual) ratio is.
The Sacklers said the same about OxyContin.
I'm certain it will kill people, but medical error already kills a huge number of people - the exact number is heavily disputed, but in the US the lower bound is in the tens of thousands annually.
You know what else kills people?
Cars, planes, food, scooters, sports, cold, heat, electricity, medication, ...
Might be useful if they start letting me write my own prescriptions or can send a robot to my house to run tests or perform surgery. Otherwise, I don't really see how this changes anything for me; the doctor - that I already have to see - should just check their analysis with AI on my behalf.
Doctors should use AI, but they often don't. We can expect slow adoption.
When it happens, the quality of AI will be determined by enterprise sales and IT.
Right now, a patient can inject AI into the system by bringing a ChatGPT transcript to their doctor.
No, doctors shouldn't use LLMs. The second I find out a doctor is doing this is the moment I switch doctors.
I use it for health advice sometimes.. but.. doesn't this seem like a massive source of liability? Are they just assuming the investor dollars will pay for the lawyers?
Local AI can’t come soon enough. My health should be between me and my AI. Keep corporations and government out of it.
I trust they considered the bias that exists in the medical research in their training data. I wonder if OpenAI will implement morbidity & mortality (M&M) rounds to learn from its mistakes and missed diagnoses.
Based on the reports of various failings on the safety front, I sure hope users will take that into account before they get advice to take 500g of aspirin.
ChatGPT has become an indispensable health tool for me. It serves as a great complement to my doctor. And there have been at least two cases in our house where it provided recommendations of great value (one possibly life-saving and the other saving us from an unnecessary surgery). I think specialized LLMs will eventually be the front-line doctor/nurse.
Using AI to analyze health data has such a huge potential upside, but it has to be done locally.
I use [insert LLM provider here] all the time to ask generic, health-related questions but I’m careful about what I disclose and how I disclose it to the models. I would never connect data from my primary care’s EHR system directly to one of these providers.
That said, it’ll be interesting to see how the general population responds to this and whether they embrace it or have some skepticism.
I’m not confident we’ll have powerful/efficient enough on-device models to build this before people start adopting the SaaS-based AI health solutions.
ChatGPT’s target market is very clearly the average consumer who may not necessarily care what they do with their data.
I really feel like no one should trust OpenAI with their health data.
I barely trust them with any personal info. Are we (HN crowd) biased in a way that will cause us to miss out? I wonder if we are insiders - having a privileged perspective given our industry is on the front lines - or if we’re just becoming jaded.
I missed out on having my transactions publicly posted on Venmo, too. I’m just a slow adopter, I guess.
no one should trust any EMR or organization with their health data, it'll all be leaked eventually.
>purpose-built encryption and isolation
Repeated 2x without explanation. Good start.
---
>You can further strengthen access controls by enabling multi-factor authentication
Pushing 2FA on users doesn't remove the need for more details on the above.
---
>to enable access to trusted U.S. healthcare providers, we partner with b.well
>wellness apps—like Apple Health, Function, and MyFitnessPal
Right...?
---
>health conversations protected and compartmentalized
Yet OAI will share those conversations with enabled apps, along with "relevant information from memories" and your "IP address, device/browser type, language/region settings, and approximate location"? (per https://help.openai.com/en/articles/20001036-what-is-chatgpt...)
It’s so foreign to me that anyone would want this in their life.
When was the last time you had to deal with healthcare for anything other than visible/testable issues?
For the last 7-8 years.
Joined the wait-list. Can't wait.
AI already saved me from an unnecessary surgery by recommending various modern-medicine (not alternative-medicine) options, which ended up being effective.
Between genetics; blood, stool, and urine tests; scans (ultrasound, MRI, x-ray, etc.); medical history... Doctors don't have time for a patient with non-trivial or non-obvious issues. AI has the time.
Doctors should be expected to use these tools as part of general competency. Someone I knew had herpes zoster on her face and the doctor had no idea and gave her diabetes medication. A five-second inference with Gemini by me, uploading only the picture (after disabling apps activity to protect her privacy), and it said what it was. It's deplorable. And there's no way for this doctor to lose their job. They can keep being worse than Gemini Flash and making money.
There's "Ask ChatGPT" overlay at the bottom of the page, so I asked "how risky is giving my medical data to OpenAI?" Interestingly ChatGPT advised caution ;) In short it said: 1. Safer than standard AI chats, 2. Not as safe as regulated healthcare systems (it reminded that OpenAI is not regulated and does not follow e.g. HIPAA), 3. Still involves inherent cloud risks.
I am amazed at the false dichotomy (ChatGPT vs. doctors) discussed in the comments. Let's look at it from the perspective of how complex software is developed by a team of software engineers with AI assistance. There are GitHub, Slack discussions, pull requests, reviews, agents, etc...
Why can't a patient's health be seen this way?
Version controlled source code -> medical records, health data
Slack channels -> discussion forum dedicated to one patient's health: among human doctors (specialists), AI agents and the patient.
In my opinion we are in the stone age compared to the above.
I think you vastly overestimate the number of orgs using “agents” at all in software development, let alone as an active part of the review process for code, and ESPECIALLY the number who consider such bots equally valuable contributors to humans.
They are tools, sometimes useful tools in particular domains and on particular teams, but your comment reads like one that assumes they are already universally agreed upon, and thus that the health industry has a trusted example it could follow by watching our industry. I firmly disagree with that premise.
Maybe. But I am working with these agents and I see the sophisticated patterns of LLMs, ground-truth data, and human experts working together. It could be much more effective than 'patient asking ChatGPT' vs. 'general doctor one-shotting a problem without AI assistance and enough context'.
You put the word 'agent' in quotes as if I were using it as a marketing buzzword. No: an agent is an LLM in a loop, with memory, file-system access, etc., which is usually more effective than using an LLM as-is, especially if you orchestrate these agents and subagents well. In my opinion, using the word 'ChatGPT' (a specific user-facing LLM brand) in these discussions is much more buzzwordy than using the word 'agent'.
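For concreteness, here's a minimal sketch of what I mean (in Python). The call_llm stub, the tool names, and the JSON action format are all placeholders I made up, not any real vendor's API:

    import json
    from pathlib import Path

    def call_llm(messages: list[dict]) -> str:
        # Stub: swap in a real chat-completion call here.
        return json.dumps({"action": "done", "result": "no-op stub"})

    # The "etc." part: tools the loop lets the model invoke.
    TOOLS = {
        "read_file": lambda path: Path(path).read_text(),
        "write_file": lambda path, text: Path(path).write_text(text),
    }

    def run_agent(task: str, max_steps: int = 10) -> str:
        messages = [{"role": "user", "content": task}]  # the memory
        for _ in range(max_steps):
            reply = json.loads(call_llm(messages))
            if reply["action"] == "done":
                return reply["result"]
            # Dispatch the requested tool, feed the observation back in.
            observation = TOOLS[reply["action"]](*reply.get("args", []))
            messages.append({"role": "tool", "content": str(observation)})
        return "step budget exhausted"

That loop-plus-tools structure, not any particular brand, is what 'agent' refers to.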
All the Americans here arguing why this is a good thing, how your system is so flawed, etc. remember that this will be accessible to people in countries with good, free healthcare.
This is going to be the alternative to going to a doctor that is 10 minutes by car away, that is entirely and completely free, and who knows me, my history, and has a couple degrees. People are going to choose asking ChatGPT instead of their local doctor who is not only cheaper(!!!) but also actually educated.
People saying that this is good because the US system specifically is so messed up and useless are missing that the US makes up ~5% of the world's population. Yet you think a medical tool made for the issues of 5% of the population will be AMAZING and LIFE SAVING, rather than harmful, for the other 95%? Get a grip.
Not to mention shitty doctors, which exist everywhere, likely using this instead of their own brains. Great work guys.
I suspect the rationale at OpenAI at the moment is "If we don't do it, someone else will!", which I last heard in an interview with someone who produces and sells fentanyl.
>> This is going to be the alternative to going to a doctor that is 10 minutes by car away, that is entirely and completely free, and who knows me, my history, and has a couple degrees.
Well then I suppose they'd have no need or motivation to use it, right?
Was anyone able to get past the 404 page to actually join the waitlist or try this?
I hit the 404 and seems like most folks on X did too, e.g. replies to Greg Brockman's announcement here: https://x.com/gdb/status/2008984705723719884
Argh, of course right after I posted this, the page finally worked when I tried one more time. Ignore the above; the page seems to work now (it points to a waitlist).
Fun fact: there's a saying for this in German: 'Vorführeffekt', or 'demonstration effect'.
https://en.wiktionary.org/wiki/Vorf%C3%BChreffekt
This reads like desperation to get a headline more than an actual product, especially when half the links don't even work
I heard the links are lazily auto vibe coded when enough people click on them /s
I made something like that https://github.com/lionkor/ai-httpd
Love it!
Handily appearing next to this story on the front page of HN:
"Health care data breach affects over 600k patients"
https://news.ycombinator.com/item?id=46528353
The old ChatGPT models scanning the NIH PubMed repositories with proper prompting (e.g. "…backed by randomized controlled trial data") were an amazing healthcare tool. The stripped-down, cheaper versions today are junk and I've had to start relying on Grok :-( I'm not convinced OpenAI can make this work.
Most doctors are just people who had a strong ability to pass tests and finish medical school.
Most of the ones I've worked with aren't passionate about their specialty or their patients, and their neglect and mistakes show it.
> Most doctors are just people who had a strong ability to pass tests and finish medical school.
This is a tautology. “Most doctors are just (doing lots of work there) people who had the ability to meet the prerequisites for…becoming a doctor.”
Are doctors fallible? Of course. Is there room for improvement with regard to LLM’s? Hopefully. Does that mean there’s reason to spread general distrust in doctors? Fuck no. Until now they were the only chance you had at getting health care.
"Join the Waitlist” button -> 404.
:o/
It looks like it's fixed now but had the same initial experience.
Vibe coded…. /s
Zero mention of actual compliance standards or HIPAA on the launch page of a product that is supposed to interconnect with medical records and other health apps! No thanks...
Is it cancer?
Oh no wait, you’re right it’s heart disease!
Oh it’s not heart disease? It’s probably cancer
Rinse and repeat
https://giphy.com/gifs/the-office-mrw-d10dMmzqCYqQ0
Not a chance.
AI apocalyptic hegemony
Expectations: I, Robot
Reality: human extinction after ChatGPT kills everyone with hallucinated medical advice, and lives alone...
I mean, that's Darwin in action, right?
High risk, high reward? Or is it just the level of regulation companies can expect in the year 2026 that makes them unafraid to take this path?
My guess is that it is just ChatGPT tuned to give the advice that creates the lowest possible liability for OpenAI.
Get ready to learn about the food pyramid, folks.
AI in health - makes sense.
OpenAI in health - I'm reticent.
As someone who pays for ChatGPT and Claude, and uses them EVERY DAY... I'm still not sure how I feel about these consumer apps having access to all my health data. OpenAI doesn't have the best track record on data safety.
Sure, OpenAI's business side has SOC 2/ISO 27001/HIPAA compliance, but does the consumer side? In the past their certifications have been very clearly "this is only for the business platform". And yes, I know regular consumers don't know what SOC 2 is other than a pair of socks that made it out of the dryer... but still. It's a little scary when getting into very personal/private health data.
Gattaca is supposed to be a warning, not a prediction. Then again neither was Idiocracy, yet here we are.
ChatGPT arguably saved my father's life two weeks ago. He was in a rehab center after breaking his hip and his condition suddenly deteriorated.
While waiting for the ambulance at the rehab center, I plugged in all his health data from his MyChart and described the symptoms. It accurately predicted (in its top two possibilities) a C. diff infection.
Fast forward two days: the ER had prescribed general antibiotics. I pushed the doctors to check for C. diff and sure enough he tested positive for it, and they got him on the right antibiotics for it.
I think it was just in time, as he ended up going to the ICU before he got better.
Maybe they would have tested for C. diff anyway, but it definitely made me trust ChatGPT. Throughout his stay, after every single update in his MyChart, I copied and pasted the PDF into the long-running thread about his health.
I think ChatGPT Health, being able to import this automatically and directly, will be a huge game changer. Health and wellness is probably my number one use case for AI.
My dad is getting discharged tomorrow (to a different rehab center, thankfully)
recommending people eat horse dewormer as a service here we go
What could possibly go wrong?
I'd rather not talk to a commercial LLM about personal (health) details. Guess how they will (or already do) try to make these completely overhyped chatbots profitable: OpenAI could sell relevant personal health data to insurance companies, or maybe to the HR department of your next job. Just saying...
Isn't it illegal to provide health advice without a license?
The endless parade of health influencers says no.
Was it approved by the FDA as a Medical Device?
Is FDA approval worth anything to consumers anymore? The way things are run today, OpenAI could buy any stamp of approval through money and appeasement.
I was thinking more in terms of compliance: is it legal to sell medical tech without FDA certification?
This is without getting into physician licenses, if we consider the product to be giving physician-style advice.
Regulatory oversight of medical tech is important. But it's not because of the law, it's because it's a matter of life and death. The legality aspect becomes less interesting when there's no functioning regulatory framework that protects the people.
Which is to say, OpenAI getting approval wouldn't make this any better if that approval isn't actually worth the paper it's written on.
> ChatGPT Health is not yet available in the UK or EU
Ah yes. Because in the EU you cannot actually steal people's data.
This is going to kill people
Yeah, gotta say I'm not enthusiastic about handing over any health data to OpenAI. I'd be more likely to trust Google or maybe even Anthropic with this data and that's saying something.
At first I was reading this like 'oh boy here we go, a marketing ploy by ChatGPT when Gemini 3 does the same thing better', but the integration with data streams and specialized memory is interesting.
One thing I've noticed in healthcare is for the rich it is preventative but for everyone else it is reactive. For the rich everything is an option (homeopathics/alternatives), for everyone else it is straight to generic pharma drugs.
AI has the potential to bring these to the masses and I think for those who care, it will bring a concierge style experience.
I would not trust a company with no path to profitability with my medical health records, because they are more likely to do something unethical and against my interests, like selling insights about me to other companies, out of desperation for new revenue streams.
This is going to be fun to watch :)
Really? Because I was thinking it was going to be tragic.
It could be interesting, the medical industry is fine with everyone being deficient in everything. And then people miraculously get ill?
The tinfoil hat seems to be too tight around your head
You are right, what was I even thinking... I just asked ChatGPT if eating fat will cause obesity and it went all in on counting calories. The standards are hilariously low. Lots of doctors use Google, and we both know Google doesn't work. It will probably tell you humans are made from Froot Loops, chips, cola, and quarter pounders - but do count those calories or it gets too big!
Maybe they can train it on Japanese text.
Brave new world we live in...
Since there are a lot of positive comments at the top (not surprising given the astroturfing on HN), please watch this video from ChubbyEmu, a real doctor, about a case study from someone who self-diagnosed using AI: https://www.youtube.com/watch?v=yftBiNu0ZNU
For every positive experience there are many more that are negative, if not life-threatening or outright deadly.
> For every positive experience there are many more that are negative
On what basis do you make this assertion?
Another nail in the coffin for apps that depend on AI APIs, because the AI companies themselves are building products on top of their own APIs (unless you can make the UX significantly better). UX now seems like the prime motivator when building apps.
LLMs themselves are the coffin for most apps, because by its very nature, AI subsumes products.
UX is not going to be a prime motivator, because the product itself is the very thing that stands between the user and the thing they want. UX-wise, for most software, it's better for users to have all these products reduced to tool calls for AI agents, accessible via a single interface.
The very concept of a product is itself limiting users to the interactions allowed by the product vendor[0]; meanwhile, using products as tools for AI agents allows them to be combined in the ways users need[1].
--
[0] - Something that, thanks to the move to the web and the switch in data-exchange model from "saving files" to "sharing documents", became the way for SaaS businesses to make money by taking user data hostage - a raison d'être for many products. AI integration threatens that.
[1] - And vendors would very much like users to not be able to. There's going to be some interesting fights here, as general-purpose AI tools are an existential threat to most of the software industry itself.
Depends on the product itself. For example, I use an LLM for tracking calorie data by telling it what I ate or providing a picture of the food, and it does a web search to find those details. The problem is it doesn't actually remember past meals, so I wrapped this API call in a simple app that just tracks the calorie numbers like a traditional app.
Just having an LLM is not the right UX for the vast majority of apps.
Right. But what if you dropped the human-facing UI, and instead exposed the backend (i.e. a database + CRUD API + heavy domain flavoring) to the LLM as a tool? Suddenly you not only get more reliable recognition (you're more likely to eat something you've eaten before than something completely new), but the LLM can also use this data to inform answers on other topics (e.g. diet recommendations, restaurant recommendations), or do arbitrary analytics on demand, leveraging other tools at its disposal (Python, JS, SQL, and Excel being the most obvious ones), etc. Suddenly the LLM would be more useful at maintaining shopping lists and cross-referencing them with deals in local grocery stores, which actually subsumes several classes of apps people use. And so on.
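As a rough sketch of that shape (the function names and the in-memory "database" here are illustrative, not any particular framework):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Meal:
        day: date
        name: str
        kcal: int

    MEALS: list[Meal] = []  # stand-in for the app's real database

    def log_meal(name: str, kcal: int) -> str:
        MEALS.append(Meal(date.today(), name, kcal))
        return f"logged {name} ({kcal} kcal)"

    def remaining_today(budget: int = 2000) -> int:
        spent = sum(m.kcal for m in MEALS if m.day == date.today())
        return budget - spent

    # What an LLM runtime would see: a name, a description, a callable.
    TOOLS = {
        "log_meal": (log_meal, "Record a meal and its calories."),
        "remaining_today": (remaining_today, "Calories left in today's budget."),
    }

    log_meal("oatmeal", 350)
    print(remaining_today())  # 1650 - enough for the model to answer "what can I eat?"

Same data, no app UI: the model calls remaining_today() and combines the result with whatever else it knows.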
> Just having an LLM is not the right UX for the vast majority of apps.
I'd argue it is, as most things people do in software don't need to be hands-on. Intuition pump: if you can imagine asking someone else - a spouse, a friend, an assistant - to use some app to do something for you, instead of using the app yourself, then turning that app into a set of tools for an LLM would almost certainly improve UX.
But I agree it's not fully universal. If e.g. you want to browse the history of your meals, then having to ask an LLM for it is inferior to tapping a button and seeing some charts. My perspective is that tool for LLM > app when you have some specific goal you can express in words, and thus could delegate; conversely, directly operating an app is better when your goal is unclear or hard to put in words, and you just need to "interact with the medium" to achieve it.
Your first paragraph describes features I already have slated to work on, as I also ran into the same things, i.e. if I have 500 calories left for the day, what can I eat within that limit? But I'm not sure why I need to ditch the UI entirely; my app would show the foods as a scrollable list, and you click on one to get more info. I suppose that is sort of replicating the LLM UI in a way, since it also produces lists of items, but apps with interactive UX over just typing still feel natural to most.
A solution could be: can the AI generate the UI on the fly? That's the premise of generative UI, which has been floating around, even on HN. Of course the issue with it is that every user will get a different UI, maybe even within the same session. Imagine the placement of a button changing every time you use an app. And thus we are back to the original concept: a UX-driven app that uses AI and LLMs as informational tools that can access other resources.
Yep, building a wrapper around an LLM API is never going to produce a sustainable business; there's absolutely zero moat. You need more, preferably physical-world elements or something else that increases the switching costs.
Absolutely not.
These discussions pit ChatGPT against doctors — but we can combine them! Doctors can use it to improve their diagnostics and treatment options.
And patients can use it to improve communication with their doctors (which, given the short duration of an appointment can make a big difference.)
>>Designed with privacy and security at the core ...Health is built as a dedicated space with added protections for sensitive health information and easy-to-use controls.
Good words at a high level, but it would really help to have some detail about the "dedicated space with added protections".
How isolated is the space? What are the added protections? How are they implemented? What are the ways our info could leak? And many more questions.
I wish I didn't have to be so skeptical of something that should be a great good, bringing more health info to more people, but the leadership of this industry really has "gone to the dark side".
I've already been using ChatGPT to evaluate my blood results and give me some solutions for my diet and workouts. It's been great so far without any special model.
America can't afford public healthcare and now can't afford RAM so OpenAI can offer a bad version of healthcare.
To be more precise: America could easily afford healthcare, but American elites regularly accept pathetically tiny bribes to pretend that universal healthcare is impossible. They also accept pathetically tiny bribes to allow companies like OpenAI to do whatever they please without meaningful legal consequence, so OpenAI is able to keep getting away with shipping products that cannot work as advertised and will kill people.
This is great. I've been using AI for health optimization (exercise routine, diet, etc.) based on my biomarkers for a while now.
Once ChatGPT recommended a simple solution to a chronic health issue: a probiotic with a specific bacterial strain. And while I had used probiotics before, apparently they all contain different strains of bacteria. The one ChatGPT recommended really worked.
"Medical attention is all you need."
Such a dystopian nightmare we live in now. The US is cutting our actual healthcare services to subsidize this shit so billionaires can become trillionaires while the rest of us suffer.
Source?
LLMs have actually not been that great so far on the preventative side / risk prediction. They are very good if you already have the disease, but if you are merely trending toward one, they were chill about it. The better way would be to first calculate the risks via deterministic algorithms and then do the differential diagnosis - i.e., what a specialized doc would do. This is an example of an online tool that works like that: https://www.longevity-tools.com/liver-function-interpreter
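For a concrete sketch of that flow, using the real FIB-4 liver-fibrosis index as the deterministic part (the prompt wiring is my own illustration, not how the linked tool actually works):

    import math

    def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
             platelets_10e9_l: float) -> float:
        # FIB-4 = (age * AST) / (platelets * sqrt(ALT))
        return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

    score = fib4(age_years=55, ast_u_l=40, alt_u_l=35, platelets_10e9_l=180)
    # Commonly cited cutoffs: <1.3 low risk, >2.67 high risk of advanced fibrosis.
    prompt = (f"The patient's FIB-4 index is {score:.2f}. "
              "Discuss what this suggests and which follow-up tests to consider.")
    # `prompt` is what you'd then hand to the LLM for the differential step.

The point is that the number comes from a fixed formula, so the LLM can't be "chill" about it.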
I strained my groin/abs a few weeks ago and asked ChatGPT to adjust my training plan to work around the problem. One of its recommendations was planks, which is exactly the exercise that injured me.
My cleaning lady's daughter had trouble with her ear. ChatGPT suggested injecting some oil into it. She did and it became a huge problem, so that she had to go to the hospital.
I'm sure ChatGPT can be great, but take it with a huge grain of salt.
This is one of the main dividing lines w.r.t. LLM usage and dangers: not blindly believing what it tells you, and finding hard sources before acting.
For some people this is obvious, so much so that they wouldn't even mention it, while others have seen only the hype and none of the horror stories.