3 comments

  • al_borland 2 hours ago

    I’ve just been ignoring my boss every time he says something about how we should leverage AI. What we’re building doesn’t need it and can’t tolerate hallucinations. They just want to be able to brag up the chain that AI is being used, which is the wrong reason to use it.

    If I were forced to use it, I'd probably be writing pretty extensive guardrails (outside of the AI) to make sure it isn't going off the rails and that the results make sense. I'm doing that anyway with all user input, so I guess I'd be treating all LLM-generated text as user input and assuming it's unreliable.
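
    Roughly the pattern I mean, sketched in Python (the schema, whitelist, and bounds are made up for illustration; the point is that nothing from the model gets used until it passes deterministic checks):

        import json

        ALLOWED_CATEGORIES = {"billing", "shipping", "other"}  # hypothetical whitelist

        def validate_llm_reply(raw: str) -> dict:
            """Treat model output like form input: parse, then reject anything off-spec."""
            try:
                data = json.loads(raw)
            except json.JSONDecodeError as exc:
                raise ValueError(f"not valid JSON: {exc}") from exc
            if not isinstance(data, dict):
                raise ValueError("expected a JSON object")

            category = data.get("category")
            if category not in ALLOWED_CATEGORIES:
                raise ValueError(f"category {category!r} not in whitelist")

            summary = data.get("summary", "")
            if not isinstance(summary, str) or not (1 <= len(summary) <= 500):
                raise ValueError("summary missing or out of bounds")

            # Only whitelisted, bounded fields survive to the rest of the system.
            return {"category": category, "summary": summary.strip()}

        # A hallucinated category gets rejected exactly like bad user input would.
        try:
            validate_llm_reply('{"category": "refunds", "summary": "customer wants a refund"}')
        except ValueError as err:
            print("rejected:", err)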

      kundan_s__r 2 hours ago

      That’s a very sane stance. Treating LLM output as untrusted input is probably the correct default when correctness matters.

      The worst failures I've seen come from teams that half-trust the model: enough to automate with it, but not enough to do without heavy guardrails. Putting the checks outside the model keeps the system understandable and deterministic.

      Ignoring AI unless it can be safely boxed isn’t anti-AI — it’s good engineering.
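
      One way to read "safely boxed", as a sketch (the function and limits are hypothetical): the deterministic path is the contract, and model output is accepted only if it passes the checks, so a bad or missing model can never break the system.

          def summarize(ticket_text: str, llm=None) -> str:
              """Deterministic baseline; the LLM is an optional, checked enhancement."""
              fallback = ticket_text[:200]  # always-available deterministic result
              if llm is None:
                  return fallback
              try:
                  candidate = llm(ticket_text)  # untrusted output
                  if isinstance(candidate, str) and 0 < len(candidate) <= 200:
                      return candidate  # used only because it passed the bounds check
              except Exception:
                  pass  # any model failure is non-fatal; fall through to the baseline
              return fallback

          # Behaves identically with or without a model wired in.
          print(summarize("Order #123 arrived damaged, customer wants a replacement."))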

  • stephenr 43 minutes ago

    I've found that I can use a very similar approach to the one I've used when handling the risks associated with blockchain, cryptocurrencies, "web scale" infrastructure, and of course the chupacabra.