16 comments

  • michalsustr 4 minutes ago

    Very interesting! I suggest following up on this on a Rust core devs forum, as there might be a higher concentration of people capable of giving feedback.

  • andyjohnson0 19 hours ago

    I once worked for about a decade with a body of server-side C code that was written like this. Almost every data structure was either statically allocated at startup or on the stack. I inherited the codebase and kept the original style, once I'd got my head around it.

    Positives were that it made the code very easy to reason about, and my impression was that it made it reliable - ownership of data was mostly obvious, and it was hard to (for example) mistakenly use a data structure after it had been free'd. Memory usage under load was very predictable.

    Downsides were that data structures (such as string buffers) had to be sized for the max use-case, and code changes had to be hammered into a basically hierarchical data model. It was also hard to incorporate third-party library code - leading to it having its own http and smtp handling, which wasn't great. Some of that might be a consequence of the choice of base language though.

      stevendgarcia 18 hours ago

      This is a really helpful data point, thanks for sharing it.

      What you're describing aligns pretty closely with the behavior I'm trying to achieve—predictable ownership, clear memory lifetimes, and fewer “how did this get freed?” bugs. The downsides you mentioned (like sizing buffers for the worst case, being stuck with a rigid hierarchy, and friction with third-party libraries) are exactly the areas I'm aiming to address.

      The difference with what I'm cooking up is: by using lexical scopes with cheap arenas, we can preserve most of that reasoning without the rigid static tree structure. Scopes are flexible and explicit, and you can nest, retry, and promote memory between them without hard-coding everything upfront.

      That said, I don't think it completely resolves the ecosystem issues you ran into. If anything, it just makes the boundaries clearer.

      If you don't mind me asking, did you run into any specific pain points with refactors that were difficult because of the memory model, or was it more of a cultural constraint?

      Also, did this experience influence how you built things afterwards? Where did you land in terms of language/stack?

        stevendgarcia 18 hours ago

        Your comment got my wheels turning, so... a quick follow-up.

        Since you lived with this for such a long stretch, I'd love your gut reaction to the specific escape hatches I'm building in to avoid the rigidity trap:

        1. Arenas grow, not fixed:

        Unlike stack frames, the arenas in my model can expand dynamically. So it's not "size for worst case"—it's "grow as needed, free all at once when scope ends." A request handler that processes 10 items or 10,000 items uses the same code; the arena just grows.
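
        To make "grow as needed, free all at once" concrete, here's a minimal Rust sketch of one way a chunk-based arena could work (the names `Arena` and `alloc` and the chunk sizes are invented for illustration, not the actual design):

        ```rust
        // Chunk-based arena: grows by appending chunks; everything is freed
        // at once when the arena is dropped at end of scope.
        struct Arena {
            chunks: Vec<Vec<u8>>, // each chunk is one block of raw bytes
            chunk_size: usize,
            used: usize, // bytes used in the current (last) chunk
        }

        impl Arena {
            fn new(chunk_size: usize) -> Self {
                Arena { chunks: vec![Vec::with_capacity(chunk_size)], chunk_size, used: 0 }
            }

            // Reserve n bytes; append a fresh chunk when the current one is full.
            fn alloc(&mut self, n: usize) -> &mut [u8] {
                if self.used + n > self.chunk_size {
                    self.chunks.push(Vec::with_capacity(self.chunk_size.max(n)));
                    self.used = 0;
                }
                let chunk = self.chunks.last_mut().unwrap();
                let start = self.used;
                chunk.resize(start + n, 0);
                self.used += n;
                &mut chunk[start..start + n]
            }
        }

        fn main() {
            let mut arena = Arena::new(64);
            for _ in 0..10 {
                let buf = arena.alloc(32); // same code whether 10 or 10,000 items
                buf[0] = 1;
            }
            // 10 x 32-byte allocations in 64-byte chunks -> 5 chunks,
            // all released together when `arena` goes out of scope.
            assert_eq!(arena.chunks.len(), 5);
        }
        ```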

        2. Handles for non-hierarchical references:

        When data genuinely needs to outlive its lexical scope or be shared across the hierarchy, you get a generational handle:

            let handle = app_cache.store(expensive_result)
            // handle can be passed around, stored, retrieved later
            // data lives in app scope, not request scope
        
        The handle includes a generation counter, so if the underlying scope dies, dereferencing returns None instead of use-after-free.
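
        A minimal Rust sketch of how such a generational-handle store could look (the names `Store`, `Handle`, and `free` are invented for illustration):

        ```rust
        // A handle is an index plus the generation it was issued under.
        #[derive(Clone, Copy)]
        struct Handle {
            index: usize,
            generation: u32,
        }

        struct Slot<T> {
            generation: u32,
            value: Option<T>, // None means the slot has been freed
        }

        struct Store<T> {
            slots: Vec<Slot<T>>,
        }

        impl<T> Store<T> {
            fn new() -> Self { Store { slots: Vec::new() } }

            fn store(&mut self, value: T) -> Handle {
                // Reuse a freed slot if one exists, bumping its generation.
                if let Some(i) = self.slots.iter().position(|s| s.value.is_none()) {
                    self.slots[i].generation += 1;
                    self.slots[i].value = Some(value);
                    Handle { index: i, generation: self.slots[i].generation }
                } else {
                    self.slots.push(Slot { generation: 0, value: Some(value) });
                    Handle { index: self.slots.len() - 1, generation: 0 }
                }
            }

            // A stale handle (generation mismatch) yields None, never a dangling value.
            fn get(&self, h: Handle) -> Option<&T> {
                self.slots.get(h.index)
                    .filter(|s| s.generation == h.generation)
                    .and_then(|s| s.value.as_ref())
            }

            fn free(&mut self, h: Handle) {
                if let Some(s) = self.slots.get_mut(h.index) {
                    if s.generation == h.generation { s.value = None; }
                }
            }
        }

        fn main() {
            let mut cache = Store::new();
            let h = cache.store("expensive result");
            assert_eq!(cache.get(h), Some(&"expensive result"));
            cache.free(h); // the owning scope dies
            let h2 = cache.store("new data"); // slot reused, generation bumped
            assert_eq!(cache.get(h), None); // stale handle: None, not use-after-free
            assert!(cache.get(h2).is_some());
        }
        ```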

        3. Explicit clone for escape:

        If you need to return data from an inner scope to an outer one, you say `clone()` and it copies to the caller's arena. Not automatic, but not forbidden either.
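
        A tiny Rust sketch of the escape-by-clone idea (the arena here is just a `Vec<String>` so the example stays short; treat it as illustration only):

        ```rust
        // Escape by explicit clone: an inner scope's value is copied into the
        // caller's arena before the inner arena is dropped.
        struct Arena { items: Vec<String> }

        impl Arena {
            fn new() -> Self { Arena { items: Vec::new() } }
            // Copy the data into this arena and return its slot index.
            fn alloc(&mut self, s: &str) -> usize {
                self.items.push(s.to_string());
                self.items.len() - 1
            }
        }

        fn main() {
            let mut outer = Arena::new();
            let escaped = {
                let mut inner = Arena::new();
                let idx = inner.alloc("built in inner scope");
                // Explicit clone into the caller's arena; nothing escapes implicitly.
                outer.alloc(&inner.items[idx])
                // `inner`, and everything still in it, is dropped here.
            };
            assert_eq!(outer.items[escaped], "built in inner scope");
        }
        ```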

        4. The hierarchy matches server reality:

            App (config, pools, caches)
            └── Worker (thread-local state)
                └── Task (single request)
                    └── Frame (loop iteration)
        
        
        For request/response workloads, this isn't an artificial constraint—it's how the work actually flows. The memory model just makes it explicit.

        Where I think it still gets awkward:

        * Graph structures with cycles (need handles, less ergonomic than GC)

        * FFI with libraries expecting malloc/free (planning an `unmanaged` escape hatch)

        * Long-running mutations without periodic scope resets (working on incremental reclamation)

        Do you think this might address the pain you experienced, or am I missing something? Particularly curious whether the handle mechanism would have helped with the cases where you had to hammer code into the hierarchy.

          andyjohnson0 17 hours ago

          There is a lot in what you describe that goes substantially beyond what I had in that codebase. It was basically a set of idioms with some helper code for a few common functions. Having an opinionated, predefined hierarchy is a good approach - there were concepts similar to your app/worker/task in the codebase I dealt with, although the equivalent of worker and task were both (kind of, it's been a few years) situated below app.

          In the code I mentioned, a lot of use was made of multi-level arrays of structs, with functions being passed a pointer to a root data structure and one or more array indexes. This made function argument validation somewhat better than just checking for null pointers, as array sizes were mostly stored in their containing struct or were constant. I don't know if that corresponds to your 'handle' concept, but I suspect you're doing something more general-purpose.
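
          Roughly, in Rust-ish terms (invented names, just to illustrate the pattern):

          ```rust
          // Functions receive the root structure plus array indexes; bounds are
          // validated against sizes carried by the containing collections.
          struct Conn { id: u32 }
          struct Worker { conns: Vec<Conn> }
          struct Root { workers: Vec<Worker> }

          fn conn_id(root: &Root, worker_ix: usize, conn_ix: usize) -> Option<u32> {
              // Index validation is more informative than a bare null-pointer check.
              root.workers.get(worker_ix)?.conns.get(conn_ix).map(|c| c.id)
          }

          fn main() {
              let root = Root { workers: vec![Worker { conns: vec![Conn { id: 7 }] }] };
              assert_eq!(conn_id(&root, 0, 0), Some(7));
              assert_eq!(conn_id(&root, 0, 5), None); // out-of-range index is caught
          }
          ```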

          There were simple reader/writer functions for DTOs (which were mostly stored in arrays) but no idea of an ORM.

          Escaping using clone seems sound. The ability to expand scopes seems (if I understand it) powerful, but perhaps makes reasoning about the dynamic behaviour of the code harder. Having some kind of observability around this may help.

          Refactoring wasn't a huge problem. The codebase was basically a statically-linked monolith, so dependencies were simplified. I think that having an explicit way to indicate architecture boundaries might be useful.

          Overall, I suspect that if there are limitations with your approach then it may be that, while it simplifies 80-90% of a problem, the remainder is hard to fit into the architectural framework. Dogfooding some production-level applications should help.

          Good luck. What you're doing is fascinating, and I hope you'll update HN with your progress.

      chrisjj 16 hours ago

      > it was hard to (for example) mistakenly use a data structure after it had been free'd

      Hard? Why not impossible?

  • tacostakohashi 17 hours ago

    I'm not sure this needs to be its own language.

    In C/C++, this can be done by just not using malloc() or new.

    You can get an awfully long way in C with only stack variables (or even no variables, functional style). You can get a little bit further with variable length arrays, and alloca() added to the mix.

    With C++, you have the choice of stack, or raw new/delete, or unique_ptr, or shared_ptr / reference counting. I think this "multi-paradigm" approach works pretty well, but of course it's complicated, and lots of people mess it up.

    I think, with well-designed C/C++, 90+% of things can be on the stack, and dynamic allocation can be very much the exception.

    I've been switching back and forth across C/C++/Java for the past few months. The more I think about it, the more ridiculous/pathological the Java approach seems: every object dynamically allocated, with no way to create an object anywhere but the heap.

    I think the main problem is kind of a human one, that people see/learn about dynamic allocation/shared_ptr etc. and it becomes a hammer and everything looks like a nail, and they forget the prospect of just using stack variables, or more generally doing the simplest thing that will work.

    Maybe some kind of language where doing dumb things is an error would be good. e.g., in C++ if you do new and delete in the same scope, it's an error because it could have been a stack variable, just like unreachable code is an error in Java.

      stevendgarcia 17 hours ago

      Great feedback!

      You’re absolutely right — C and C++ give you the primitives to do this manually. If every developer followed the “stack first, heap only when necessary” discipline, and carefully used unique_ptr or avoided new/delete when possible, you could achieve much of the same safety and determinism.

      The difference I’m aiming for is that these constraints aren’t optional — they’re baked into the language and compiler. You don’t rely on every developer making the right choice; instead, the structure of the code itself enforces ownership and lifetime rules.

      So in your terms, instead of “doing dumb things is an error,” it’s structurally impossible to do dumb things in the first place. The language doesn’t just punish mistakes with foot-guns, it makes the safe path the only path.

      This also opens up other possibilities that are really awkward in C/C++, like structured concurrency with deterministic memory cleanup, restartable scopes, and safe parallel allocations, without relying on GC or heavy reference counting.

      I’d be curious: if C++ had a compiler that made stack-first allocation the default and forbade escapes unless explicit, would that solve most of the problems you’ve experienced, or are there still edge cases that would require a fundamentally different runtime model?

        tacostakohashi 16 hours ago

        As far as I'm concerned, stack-first allocation _is_ the default. It's true that the default exists in my head rather than in a compiler, though.

        Maybe think about whether what you propose could exist as a compiler warning, or static analysis tool. Or, if you want to create your own language, go for it, that's cool too.

        For my purposes... the choice of paradigms, compilers, and platforms with C++, and the ability to work on decades of existing code, outweighs the benefits of "improved" languages, but that's just me.

  • eimrine 19 hours ago

    J has some of this approach but it has been made mostly for math so it is not optimized for CRUDs.

      stevendgarcia 18 hours ago

      That’s an interesting comparison.

      I agree J aligns philosophically (values over references), and you're right that it feels more optimized for pure mathematical work rather than managing long-lived, mutable state in concurrent services. What I’m exploring is whether a model like this can provide similar benefits in CRUD-heavy systems without needing GC or manual memory management.

      If you’ve seen J used effectively in that space, I’d love to hear more about it.

        eimrine 15 hours ago

        No, I haven't, but note that this scripting language has no GC at all, so the theoretical part of your message is possible. It would just take a Kenneth Iverson-level programmer to continue the work in the array-programming paradigm.

        It may not be efficient at all for rich types and structs, because a stack language is the earliest approach, and its advantages come from the ability to build a single-pass compiler. If your requirements don't fit the single-pass approach, you're going to have a really hard time guessing what needs to be recycled, and when.

  • chrisjj 20 hours ago

    Great work! I look forward to the responses.

      stevendgarcia 18 hours ago

      Thanks! Appreciate the feedback. There are a couple of replies here that sparked some interesting angles—looking forward to diving deeper into those and seeing where the discussion goes.