4 comments

  • thinkingkong an hour ago

    Well… if you look at pure functions without any state, that's a whole class of computing you can refer to. The problem is that it's not efficient to calculate state from arguments for everything. We end up saving to disk, writing packets over the network, etc. In a purely theoretical environment you could avoid state, but the real world imposes constraints that you need to operate within or between.

    Additionally, depending on how deep down you go, there's state stored somewhere to calculate against. Values are stored in some kind of register, and they're passed into operations with a target register as an additional argument.

      SpicyG an hour ago

      I agree, and I think this is where the distinction matters. I'm not claiming that state disappears, or that computation can be purely stateless all the way down. There is always state somewhere: registers, buffers, disks, networks. The question is where authority lives and whether correctness depends on reconstructing history.

      The inefficiency you point out is real: recomputing everything from arguments is often worse than persisting state. That's why the pattern I'm aiming at isn't "no state," but no implicit, negotiated state. State can exist, be large, and even be shared, but it should be explicit, bounded, and verifiable, not something the system has to infer or reconcile in order to proceed.

      At the lowest levels, yes, registers hold values and operations mutate targets. But those mutations are local, immediate, and enforced by hardware invariants. Problems tend to appear higher up, when systems start treating historical state as narrative, as something to reason about, merge, or explain, rather than as input with strict admissibility rules.

      So I see this less as a theoretical purity claim and more as a placement problem: push state to places where enforcement is cheap and local, and keep it out of places where it turns into coordination and recovery logic.
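      A minimal sketch of what I mean by "state as input with strict admissibility rules" (the function and its rules are hypothetical, just for illustration): the operation acts only on the state it is handed, validates it up front, and never consults or reconstructs any history.

      ```python
      def transfer(balance: int, amount: int) -> int:
          """Debit `amount` from `balance`. All state arrives as arguments."""
          # Admissibility rules: reject inadmissible input instead of
          # trying to reason about how the state came to be.
          if balance < 0 or amount <= 0:
              raise ValueError("inadmissible state or amount")
          if amount > balance:
              raise ValueError("insufficient funds")
          return balance - amount

      new_balance = transfer(100, 30)  # returns 70; no hidden memory involved
      ```

      Enforcement here is cheap and local: a bad input is rejected at the boundary, and there is nothing for the system to negotiate or recover.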

  • PaulHoule an hour ago

    It depends on what kind of system you're talking about.

    If you have no memory, that memory can't get corrupted.

    If the memory is carried by the request the memory can't get desynchronized with the request.

    You can use cryptographic techniques to prevent tampering and even reuse of states, though reuse can be a feature instead of a bug. Sometimes the state is too big to pass around like a football, but even then you can access it with a key and merge it in, in a disciplined way.
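    A toy sketch of the tamper-prevention part, assuming an HMAC over a request-carried state blob (the secret and field names are made up for illustration): the server can hand state to the client and later verify it without holding any per-request memory.

    ```python
    import base64
    import hashlib
    import hmac
    import json

    SECRET = b"demo-key"  # hypothetical server-side secret

    def seal(state: dict) -> str:
        """Serialize state and append an HMAC tag so the client can carry it."""
        payload = base64.urlsafe_b64encode(json.dumps(state, sort_keys=True).encode())
        tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
        return (payload + b"." + tag).decode()

    def unseal(token: str) -> dict:
        """Verify the HMAC before trusting the carried state; reject tampering."""
        payload, tag = token.encode().rsplit(b".", 1)
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("state token was tampered with")
        return json.loads(base64.urlsafe_b64decode(payload))

    token = seal({"cart": ["book"], "step": 2})
    assert unseal(token) == {"cart": ["book"], "step": 2}
    ```

    Replay protection would need something extra (a nonce or expiry inside the signed payload); this only covers integrity.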

      SpicyG an hour ago

      I agree, and I think you've named the core constraint cleanly. The distinction I'm trying to draw isn't "no memory ever," but no implicit memory required for correctness. If there's no memory, there's nothing to corrupt. If memory is carried by the request, it can't desynchronize from the request. That's really the invariant I care about.

      I also agree that cryptographic techniques make this tractable in practice. Signed tokens, capabilities, idempotency keys, and replay protection let you move state to the edge while keeping the core enforcement logic stateless. In that model, reuse can be a feature rather than a bug, as long as it's explicit and verifiable.

      Where I've seen things break down is when state is large or shared and gets merged implicitly. As you say, sometimes you can't pass it around like a football, but even then, accessing it by key and merging it in a disciplined, bounded way preserves the same principle: the system shouldn't need to remember in order to act correctly.

      So for me it's less "stateless vs stateful" and more "enforced state vs negotiated state." Once the system starts negotiating with history, entropy creeps in very fast.
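      For the idempotency-key part, a minimal sketch (an in-memory dict standing in for a real store, names hypothetical) of reuse as an explicit, verifiable feature: replaying a request with the same key returns the original result instead of repeating the side effect.

      ```python
      # idempotency key -> stored response; a real system would persist this
      _processed: dict = {}

      def handle(idempotency_key: str, request_body: str) -> str:
          """Process a request at most once per idempotency key."""
          if idempotency_key in _processed:
              # Explicit reuse: same key, same answer, no second side effect.
              return _processed[idempotency_key]
          result = f"charged:{request_body}"  # stand-in for the real side effect
          _processed[idempotency_key] = result
          return result

      first = handle("key-1", "order-42")
      again = handle("key-1", "order-42")
      assert first == again  # the replay is safe because it is explicit
      ```

      The system never has to reconcile history here; the key makes the "have I seen this?" question a cheap, local lookup rather than a negotiation.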