25 comments

  • Negitivefrags 11 minutes ago

    At my company I just tell people “You have to stand behind your work”

    And in practice that means that I won’t take “The AI did it” as an excuse. You have to stand behind the work you did even if you used AI to help.

    I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much for whatever that is worth.

      darth_avocado 3 minutes ago

      > At my company I just tell people “You have to stand behind your work”

      Since when has that not been the bare minimum? Even before AI existed, and even if you didn't work in programming at all, you had to do that as a bare minimum. Even if you use a toaster and your company guidelines suggest you toast every sandwich for 20 seconds, if following every step as per training results in a lump of charcoal for bread, you can't serve it up to the customer. At the end of the day, you make the sandwich, you're responsible for making it correctly.

      Using AI as a scapegoat for sloppy and lazy work needs to be unacceptable.

        Negitivefrags a minute ago

        Of course it’s the minimum standard, and it’s obvious if you view AI as a tool that a human uses.

        But some people view it as a separate entity that writes code for you. And if you view AI like that, then “The AI did it” becomes an excuse that they use.

      bitwize 9 minutes ago

      The smartest and most sensible response.

      I'm dreading the day the hammer falls and there will be AI-use metrics implemented for all developers at my job.

        locusofself 7 minutes ago

        It's already happened at some very big tech companies

  • jdlyga a few seconds ago

    As a developer, you're not only responsible for contributing code, but for verifying that it works. I've seen this practice put in place on other teams, not just with LLMs, but with devs who contribute bugfixes without understanding the problem.

  • scuff3d 20 minutes ago

    It's depressing this has to be spelled out. You'd think people would be smart enough not to harass maintainers with shit they don't understand.

      ActionHank 6 minutes ago

      People who are smart enough to think that far ahead are also smart enough not to fall into the “AI can do all jobs perfectly all the time and just needs my divine guidance” trap.

      doctorpangloss 3 minutes ago

      on the flip side, the inability for LLVM to take contributions - whatever that means, I don't know what the best system is - leads to all sorts of problems in the ecosystem: slowdowns in Triton features, problems with Rust, etc. The problems they're experiencing predated LLMs, as long as you count "people don't perceive contributions as being easy or dealt with in a timely manner" as a problem.

  • jonas21 2 minutes ago

    The title should be changed to: "LLVM AI tool policy: human in the loop".

    (at the moment it is "We don't need more contributors who aren't programmers to contribute code," which is a quote from a reply on the thread).

    The HN guidelines state "please use the original title, unless it is misleading or linkbait; don't editorialize."

  • looneysquash 11 minutes ago

    Looks like a good policy to me.

    One thing I didn't like was the copy/paste response for violations.

    It makes sense to have one. It's just that the text they propose uses what I'd call insider terms, and also terms that sort of put down the contributor.

    And while that might be appropriate at the next level of escalation, the first level stock text should be easier for the outside contributor to understand, and should better explain the next steps for the contributor to take.

  • whatever1 2 hours ago

    The number of code writers increased exponentially overnight. The number of reviewers is constant (slightly reduced due to layoffs).

      rvz an hour ago

      And so did the slop.

  • vjay15 12 minutes ago

    It is insane that this is happening in one of the most essential pieces of software. This is a much needed step to stem the rise of slop contributions. It's more work for the maintainer to review all this mess.

  • hsuduebc2 29 minutes ago

    Contributors should never find themselves in the position of saying “I don’t know, an LLM did it”

    I would never have thought that someone could actually write this.

      jfreds 11 minutes ago

      I get this at work, frequently.

      “Oh, cursor wrote that.”

      If it made it into your pull request, YOU wrote it, and it'll be part of your performance review. Cursor doesn't have a performance review. Simple as

        lokar 2 minutes ago

        I could see this coming when I quit. I would not have been able to resist insisting that people doing that be fired.

        hsuduebc2 5 minutes ago

        Yea, this is just lazy. If you don't know what it does and how then you shouldn't submit it at all.

      clayhacks 19 minutes ago

      I’ve seen a bunch of my colleagues say this when I ask about the code they’ve submitted for review. Incredibly frustrating, but likely to become more common

  • EdwardDiego an hour ago

    Good policy.

  • 29athrowaway 20 minutes ago

    Then the vibe coder will ask an LLM to answer questions about the contribution.

  • mmsc 5 minutes ago

    This AI usage is like a turbo-charger for the Dunning–Kruger effect, and we will see these policies crop up more and more, as technical people become more and more harassed and burnt out by AI slop.

    I also recently wrote a similar policy[0] for my fork of a codebase[1]. I had to write it because the original developer took the AI pill and started committing totally broken code that was full of bugs, then doubled down when asked about it [2].

    On an analysis level, in a recent post[3], I commented that "Non-coders using AI to program are effectively non-technical people, equipped with the over-confidence of technical people. Proper training would turn those people into coders that are technical people. Traditional training techniques and material cannot work, as they are targeted and created with technical people in mind."

    But what's more, we're also seeing programmers use AI to create slop. They're effectively technical people equipped with their initial over-confidence, now inflated further by a sense of effortless capability. Before AI, developers were (sometimes) forced to pause, investigate, and understand; now it's easier and more natural to simply assume they grasp far more than they actually do, because @grok told them it's true.

    [0]: https://gixy.io/contributing/#ai-llm-tooling-usage-policy

    [1]: https://github.com/MegaManSec/gixyng

    [2]: https://joshua.hu/gixy-ng-new-version-gixy-updated-checks#qu...

    [3]: https://joshua.hu/ai-slop-story-nginx-leaking-dns-chatgpt#fi...

  • zeroonetwothree 19 minutes ago

    I only wish my workplace had the same policy. I’m so tired of reviewing slop where the submitter has no idea what it’s even for.

  • jfreds 20 minutes ago

    > automated review tools that publish comments without human review are not allowed

    This seems like a curious choice. At my company we have both Gemini and Cursor (I'm not sure which model is under the hood on that one) review agents available. Both frequently raise legitimate points. I'm sure they're abusable, I just haven't seen it.

      bandrami 7 minutes ago

      An LLM is a plausibility engine. That can't be the final step of any workflow.