6 comments

  • jackfranklyn a day ago

    The time savings with AI coding tools aren't linear - they're heavily context-dependent.

    For me, the biggest wins come from tasks where I know exactly what I want but would normally spend time on boilerplate or syntax lookup. Wiring up API endpoints, writing test scaffolding, formatting data transformations - stuff that's tedious but not intellectually hard.

    Where it gets weird is when I'm exploring a problem space. The AI confidently suggests solutions that look plausible but have subtle issues. Debugging those can eat up all the time saved and then some, because you're now debugging code you didn't fully write or understand.

    The mental overhead of constantly evaluating AI output - is this correct? is this the approach I'd have taken? - is real and under-discussed. Sometimes it's faster to just write the thing myself than to prompt, review, correct, re-prompt.

      BinaryIgor a day ago

      Exactly this; for some tasks, it can speed you up dramatically, 5–10x; with others, it actually makes you slower.

      And yes, very often writing a prompt + verifying the results and possibly modifying them and/or following up takes longer than just writing the code from scratch, manually ;)

      damnitbuilds a day ago

      I concur.

      Just using AI to write boilerplate with a simple "Do what I did for this, for these" request is like what I hoped The Future would be, 5 years ago. It's great!

      But get over-ambitious with your requests and you get over-complex almost-solutions that, indeed, take up all your time to fix. I find this takes all the fun out of development - you have to restart from scratch something you had "almost" finished a day ago.

      But when you get to know the AI's limits, it is definitely a time-saver.

      Hmm, I think I will trademark "Over-complex almost-solutions".

        chrisjj a day ago

        > But get over-ambitious with your requests

        Have you first asked the bot if the request is overambitious? :)

        > But when you get to know the AI's limits

        That would be when you know whether its output can be trusted, right? And there's the problem. Software is way beyond the point where the product can be adequately proved. We rely on process control. And stochastic parroting does not cut it.

          damnitbuilds a day ago

          I suspect the AI-Dunning-Kruger effect would come into play.

  • chrisjj a day ago

    The bigger surprise here is that the test sample of experienced software developers "expected AI to expedite their work and increase productivity". Perhaps a more scientifically selected sample would have yielded a more scientifically valid result.