19 comments

  • solaris2007 a few seconds ago

      "Don't believe everything you read online".
  • PeterHolzwarth 27 minutes ago

    I don't yet see how this case is any different from trusting stuff you see on the web in general. What's unique about the ChatGPT angle that is notably different from any number of forums, dark-net forums, reddit etc? I don't mean that there isn't potentially something unique here, but my initial thought is that this is a case of "an unfortunate kid typed questions into a web browser, and got horrible advice."

    This seems like a web problem, not a ChatGPT issue specifically.

    I feel that some may respond that ChatGPT/LLMs available for chat on the web are specifically worse by virtue of expressing things with a degree of highly inaccurate authority. But again, I feel this represents the web in general, not uniquely ChatGPT/LLMs.

    Is there an angle here I am not picking up on, do you think?

      falkensmaize 10 minutes ago

      AI companies are actively marketing their products as highly intelligent superhuman assistants that are on the cusp of replacing humans in every field of knowledge work, including medicine. People who have not read deeply into how LLMs work do not typically understand that this is not true, and is merely marketing.

      So when ChatGPT gives you a confident, highly personalized answer to your question and speaks directly to you as a medical professional would, that is going to carry far more weight and authority to uninformed people than a Reddit comment or a blog post.

      stvltvs 11 minutes ago

      Those other technologies didn't come with hype about superintelligence that causes people to put too much trust in it.

      xyzzy123 26 minutes ago

      The difference is that OpenAI has much deeper pockets.

      I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs".

        PeterHolzwarth 24 minutes ago

        To sue, do you mean? I don't quite understand what you intend to convey. Reddit has moderately deep pockets. A random forum related to drugs doesn't.

          xyzzy123 21 minutes ago

          Random forums aren't worth suing. Legally, Reddit is not treated as responsible for content that users post, under Section 230; i.e., this battle has already been fought.

          On the other hand, if I post bad advice on my own website and someone follows it and is harmed, I can be found liable.

          OpenAI _might plausibly_ be responsible for certain outputs.

            PeterHolzwarth 16 minutes ago

            Ah, I see you added an edit of "I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs"."

            I thought perhaps that's what you meant. A somewhat mercenary take, and maybe not applicable to this case. On the other hand, given that the legal territory is up for grabs, as you note, I'm sure there will be instances of this tactical approach in future lawsuits.

  • themafia 37 minutes ago

    The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear.

    Ask any model why something is bad, then separately ask why the same thing is good. These tools aren't fit for any purpose other than regurgitating stale reddit conversations.

      PeterHolzwarth 20 minutes ago

      >"The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear."

      I get what you mean in principle, but the problem I'm struggling with is that this just sounds like the web in general. The kid hits up a subreddit or some obscure forum, and similarly gets group appeasement or what they want to hear from people who are self selected for the forum for being all-in on the topic and Want To Believe, so to speak.

      What's the actual difference, in that sense, between that forum or subreddit and an LLM, do you feel?

      <edit> And let me add that I don't mean this argumentatively. I'm trying to square the idea of ChatGPT, in this case, being fundamentally different from going to a forum full of fans of the topic who are also completely biased and likely full of very poor knowledge.

        andsoitis 15 minutes ago

        > What's the actual difference, in that sense, between that forum or subreddit, and an LLM do you feel?

        In a forum, it is the actual people who post who are responsible for sharing the recommendation.

        In a chatbot, it is the owner (e.g. OpenAI).

        But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.

          falkensmaize 6 minutes ago

          Nah, OpenAI can’t have it both ways. If they’re going to assert that their model is intelligent and is capable of replacing human work and authority they can’t also claim that it (and they) don’t have to take the same responsibility a human would for giving dangerous advice and incitement.

  • datsci_est_2015 40 minutes ago

    This brings to mind some of the “darker” subreddits that circle around drug abuse. I’m sure there are some terrible stories about young people going down tragic paths due to information they found on those subreddits, or even worse, encouragement. There’s even the commonly-discussed account that (allegedly) documented their first experiences with heroin, and then the hole of despair they fell into shortly afterwards due to addiction.

    But the question here is one of liability. Is Reddit liable for the content available on its website, if that content encourages young impressionable people to abuse drugs irresponsibly? Is ChatGPT liable for the content available through its web interface? Is anyone liable for anything anymore in a post-AI world?

      ggm 35 minutes ago

      This is a useful question to ask in the context of carriers having specific defence. Also, publishers in times past had specific obligations. Common carrier and safe harbour laws.

      I have heard it said that many online systems repudiate any obligation to act, lest they be required to act, and thus acquire both cost and risk when their enforcement of editorial standards fails: that which they permit, they will be liable for.

  • returnInfinity 8 minutes ago

    Sam and Dario: "Society can tolerate a few deaths to AI"

  • dfajgljsldkjag an hour ago

    The guardrails clearly failed here because the model was trying to be helpful instead of safe. We know that these systems hallucinate facts but regular users have no idea. This is a huge liability issue that needs to be fixed immediately.

  • NewJazz an hour ago

    Took a while to figure out what the OD was of, but it was a combination of alcohol, kratom (or a stronger kratom-like drug), and Xanax.

      loeg 35 minutes ago

      7-O is to kratom roughly as fentanyl is to opium, FWIW. It's much, much more potent. That stuff should be banned.

      That said, he claims to have taken 15g of "kratom" -- that has to be the regular stuff, not 7-O -- and that's still a huge dose even of the regular stuff. That plus a 0.125 BAC and benzos... is a lot.

      dfajgljsldkjag 43 minutes ago

      The article mentions 7-OH, also known as "Feel Free", which shockingly hasn't been banned and is sold without checks at many stores. There are quite a few YouTube videos about addiction to it, and it sounds awful.

      https://www.youtube.com/watch?v=TLObpcBR2yw