  • chrisjj 2 hours ago

    Why do you call this a failure?

    This is "AI" parroting humans who made authorised commitments.

    If you don't want commitments out, don't feed them in.

      bhaviav100 26 minutes ago

      I don’t call it a failure of the AI. I agree it’s doing exactly what it was trained to do.

      The failure is architectural: once AI is allowed to draft at scale, “don’t feed it commitments” stops being a reliable control. Those patterns exist everywhere in historical data and live context.

      At that point the question isn’t training; it’s where you draw the enforcement boundary for irreversible outcomes.

      That’s the layer I’m testing.
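
      To make that concrete, here’s a minimal sketch of the boundary I mean. Every name in it (Draft, gate, the regex patterns, the send/escalate hooks) is invented for illustration, and the detection is deliberately naive; a real system would want a classifier or policy engine. The structural point is only that the check sits between the draft and the send, not inside the training data.

        import re
        from dataclasses import dataclass

        # Illustrative patterns for commitment language. Deliberately naive:
        # the point is where the check lives, not how well it detects.
        COMMITMENT_PATTERNS = [
            re.compile(r"\brefund\b", re.IGNORECASE),
            re.compile(r"\b\d+\s*%\s*(?:off|discount)\b", re.IGNORECASE),
            re.compile(r"\bwe (?:will|guarantee|promise)\b", re.IGNORECASE),
        ]

        @dataclass
        class Draft:
            customer_id: str
            text: str

        def gate(draft, send, escalate):
            """AI can draft anything; drafts that look like commitments
            never reach the customer without going through escalate()."""
            if any(p.search(draft.text) for p in COMMITMENT_PATTERNS):
                escalate(draft)  # e.g. a human review queue
            else:
                send(draft)      # low-risk, reversible: pass through

        # Example: this draft is escalated, never sent.
        gate(Draft("c42", "We will refund the full amount today."),
             send=lambda d: print("sent:", d.text),
             escalate=lambda d: print("escalated:", d.text))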

  • SilverElfin 2 hours ago

    Good idea. I think companies are implementing all this complex stuff on their own today. But many probably also just train staff tightly on what kinds of refunds or discounts they can give, and manage it by sampling a fraction of chat logs. It’s low-tech, but it probably works well enough to reduce the cost of mistakes.

      bhaviav100 2 hours ago

      That’s true today, and it works as long as humans are the primary actors.

      The break happens when AI drafts at scale. Training and log sampling are soft controls, and sampling only catches mistakes after the fact: by the time a bad commitment is found, the customer’s expectation already exists.

      This just moves enforcement of irreversible actions from social controls to a hard system boundary, roughly like the sketch below.
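
      A minimal sketch of that hard boundary, with invented names throughout (issue_refund, ApprovalRecord, MissingApproval are all hypothetical): the irreversible action itself refuses to execute without an approval record, no matter who or what drafted the message.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass(frozen=True)
        class ApprovalRecord:
            approver_id: str  # a human, or a policy engine with its own audit trail
            order_id: str

        class MissingApproval(Exception):
            pass

        def issue_refund(order_id: str, amount_cents: int,
                         approval: Optional[ApprovalRecord]) -> None:
            """Hard boundary: the irreversible action refuses to run
            without an approval record naming this exact order."""
            if approval is None or approval.order_id != order_id:
                raise MissingApproval(f"refund on {order_id} needs approval")
            # Stub for the real charge reversal.
            print(f"refunded {amount_cents} cents on {order_id}")

        # A drafted promise can't trigger this; only an explicit approval can.
        issue_refund("o-1001", 2500, ApprovalRecord("agent-7", "o-1001"))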

      Curious if you’ve seen teams hit that inflection point yet.