Why AI hasn't changed everything (yet)

2 points | by riz1 2 hours ago

3 comments

  • proc0 2 hours ago

    The main problem that I'm seeing is that software design is underappreciated and underestimated. To the extent there is AI hype it is driven by this blind spot. Software isn't just a bunch of text. Software is logical structures that form moving parts that interlock and function based on a ton of requirements and specs of the target hardware.

    So far AI has shown it cannot understand this layer of software. There are studies of how LLMs derive their answers to technical questions, and it is not based on first principles or logical reasoning, but rather on sparse representations derived from training data. As a result, they can answer extremely difficult questions that are well represented in the training data but fail miserably on the simplest kinds of questions, e.g. adding two ten-digit numbers.

    This is what the article is talking about with small teams on new projects being more productive. Chances are these small teams have small enough problems, and a lot more flexibility, to produce software that is experimental and doesn't work that well.

    I am also not surprised the hype exists. The software industry does not value software design, and instead optimizes codebases so they can scale by adding an army of coders who produce a ton of duplicate logic and unnecessary complexity. This goes hand-in-hand with how LLMs work, so the transition is seamless.

      riz1 an hour ago

      I mostly agree with you, especially on software design being underappreciated. A lot of what slows teams down today isn’t typing code, it’s reasoning about systems that have accreted over time. I am thinking about implicit contracts, historical decisions, and constraints that live more in people’s heads than in the code itself.

      Where I’d push back slightly is on framing this primarily as an LLM limitation. I don’t expect models to reason from first principles about entire systems, and I don’t think that’s what’s missing right now. The bigger gap I see is that we haven’t externalised design knowledge in a way that’s actionable.

      We still rely on humans to reconstruct intent, boundaries, and "how work flows" every time they touch a part of the system. That reconstruction cost dominates, regardless of whether a human or an AI is writing the code.

      I also don’t think small teams move faster because they’re shipping lower-quality or more experimental software (though that can be true). They move faster because the design surface is smaller and the work routing is clear. In large systems, the problem isn’t that AI can’t design; it’s that neither humans nor AI are given the right abstractions to work with.

      Until we fix that, AI will mostly amplify what already exists: good flow in small systems, and friction in large ones.

  • riz1 2 hours ago

    I've been thinking about why AI seems to accelerate some teams dramatically while leaving others mostly unchanged. This post is an attempt to articulate what I think is missing: not better tools, but better routing of work, context, and ownership. Curious how this resonates (or doesn't) with others.