AB316: No AI Scapegoating Allowed

6 points | by forthwall 2 hours ago

4 comments

  • ares623 38 minutes ago

    It's a good thing OpenAI has _two_ CEOs. It's like having two kidneys: when a CEO needs to be held accountable, there's a spare available.

  • WCSTombs 38 minutes ago

    I think it's just saying that AIs are treated like inanimate objects, and thus not something that liability can apply to. Here's an analogy that I think illustrates the effect of the law, if I've understood it: let's say I drive my car into a house and damage it, and the owner of the house sues me. Now, it's not a given that I'm personally liable for the damages, since it's possible for a car to malfunction and go out of control through no fault of the driver. However, if I walk into court and say that the car itself should be held liable and responsible for the damages, I'm probably going to have a bad day. Similarly, I shouldn't be able to claim that an AI is responsible for some damages, since you can't frickin' sue an AI, can you?

    The article goes on to ponder who is liable, then: the developer of the AI, the user, or someone in between? It's a reasonable question to ask, but it really isn't apropos to the law in question at all. That question isn't even about AI, since you can replace the AI with any software developed by a third party. In fact, the question isn't about software either, since you can replace the software with any third-party component, even something physical. So I would expect that whatever legal mechanisms exist to assign liability in those situations would also apply generally to AI models being incorporated into other systems.

    Since people are asking whether this law is needed or useful at all: I would say the law is either completely redundant or very much needed. I'm not a lawyer, so I don't know which of the two it is, but I suspect it's the latter. I would be surprised if, a few years from now, we haven't seen someone try to escape legal liability by pointing their finger at an AI system they claim was autonomously making the decisions that caused some harm.

  • SilverElfin 2 hours ago

    I don’t understand the point of the law. AI tech is inherently unpredictable. Users know this. I don’t see how creating this liability keeps AI-based products viable.

      WCSTombs 23 minutes ago

      And I think most people would agree that an inherently unpredictable component has no place in a safety-critical system, or anywhere the potential liability would be huge. AI-based products can still be viable for the exact same reason that an ocean of shitty, bug-riddled software is commercially viable today: there are many potential applications where absolute correctness is not a hard requirement for success.