8 comments

  • delaminator 2 days ago

    I use a hook to dump the entire session on compress.

    It saves all the input chat, all the output chat, which tools were used, and what they were used on.

    https://github.com/lawless-m/Devolver
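For anyone wanting to try the same trick, the hook registration in Claude Code's settings looks roughly like this. This is a sketch, not Devolver's actual config: PreCompact is Claude Code's hook event name, but the devolver-dump command and URL here are illustrative placeholders; see the repo for the real wiring.

```json
{
  "hooks": {
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "devolver-dump --post-to http://devlog-receiver/ingest"
          }
        ]
      }
    ]
  }
}
```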

    I use about five different computers. Everything gets logged to one of them via devlog-receiver, which serves a web page where you can search through all of your sessions across all of your machines. I use DuckDB full-text search.
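In case it's useful to anyone, DuckDB's fts extension boils that search down to a couple of statements. This is a sketch only; the sessions(id, content) table is a stand-in for whatever schema the receiver actually writes.

```sql
-- one-time: load the extension and build a BM25 index over the session text
INSTALL fts;
LOAD fts;
PRAGMA create_fts_index('sessions', 'id', 'content');

-- query: rank all logged sessions against a search phrase
SELECT id, fts_main_sessions.match_bm25(id, 'security trade-off') AS score
FROM sessions
WHERE score IS NOT NULL
ORDER BY score DESC;
```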

    Sure, I don't have an MCP part. So that bit's different.

      tad-hq 2 days ago

      That's a clever approach for centralization. The hook method is solid for logging.

      The key difference is you're doing full-text search on raw conversations. With my MCP approach, Claude gets both the raw history and AI-generated summaries.

      So when I ask "In project X, what security trade-off did we make on feature Y?", Claude reads the conversation summary, understands it, and answers immediately, rather than sifting through keyword matches.

      The MCP piece unlocks agent reasoning over your entire history, not just text retrieval. Haiku Analysis allows faster, more holistic understanding.

      Different tools for different use cases!

        delaminator 2 days ago

        The summaries are a good idea.

        Tbh I’m just seeing where it goes. I did the “dump the conversation” part as stage 1, then added “ingest them centrally” … ok, what next? “ok, search them”.

        I haven’t had to actually use it yet. But it is interesting to see which projects got the most prompts, which used the most tokens.

        It was all prompted (no pun) because I wanted to show a non-programming colleague how the whole “build by prompting” thing works but more than just typing a couple of demo prompts.

          tad-hq 2 days ago

          I went through the same stages: started with "dump everything," then "search it," and recently landed on "let the agent read it for me." I started this as a private project about two months ago, very simply, and it just grew more capable as time went on.

          Your token analysis feature sounds useful for tracking usage patterns and workflow efficiency; I've thought about it before. A lot of the direction agentic coding is going in is optimizing tool-call usage with proper context engineering, so I definitely see the value there.

            delaminator a day ago

            Oh, it counts words, not tokens, sadly.

                Project Prompts Tools Files Words In Words Out Last Activity
                Blizzard 272 2069 59 19.9k 22.0k 2026-01-05 14:54
                    RIVSPROD01 272 2069 59 19.9k 22.0k 2026-01-05 14:54
                StinkySpy 106 0 0 3.1k 19.5k 2026-01-05 13:19
                    roob 106 0 0 3.1k 19.5k 2026-01-05 13:19
                Devolver 57 162 11 5.8k 4.7k 2026-01-02 17:10
                    RIVSPROD01 21 162 11 3.6k 1.6k 2026-01-02 16:17
                    RIVMIS01 19 0 0 943 2.5k 2026-01-02 16:08
                    roob 17 0 0 1.3k 547 2026-01-02 17:10
                ONI-StorageTooltipMod 13 0 0 267 1.1k 2026-01-05 14:53
                    roob 13 0 0 267 1.1k 2026-01-05 14:53
                Robocyril 12 0 0 696 2.0k 2026-01-02 15:13
                    RIVMIS01 12 0 0 696 2.0k 2026-01-02 15:13
                Declotter 11 0 0 661 1.7k 2026-01-02 15:01
                    RIVMIS01 11 0 0 661 1.7k 2026-01-02 15:01
            
            471 prompts, 2231 tool calls, 30k words in, 50k words out

            Tool call counting was broken, which is why some are zero.

              tad-hq 17 hours ago

              If you look at the JSONL structure of the sessions inside your .claude/projects directory, you should be able to find the token usage you're looking for. It's saved directly at the end of every tool call.
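Something like this rough sketch can pull those totals out. The message.usage field names (input_tokens / output_tokens) are my assumption based on the Anthropic API response shape, so verify them against your own session files before trusting the numbers.

```python
import json
from collections import Counter
from pathlib import Path

def sum_token_usage(projects_dir: str) -> Counter:
    """Total token usage per project from Claude Code session JSONL files.

    Assumes each logged assistant turn carries a message.usage dict with
    input_tokens / output_tokens, as in Anthropic API responses; adjust
    the field names if your Claude Code version stores them differently.
    """
    totals = Counter()
    for session in Path(projects_dir).glob("*/*.jsonl"):
        project = session.parent.name
        for line in session.read_text().splitlines():
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines
            message = record.get("message")
            usage = message.get("usage", {}) if isinstance(message, dict) else {}
            totals[f"{project}/in"] += usage.get("input_tokens", 0)
            totals[f"{project}/out"] += usage.get("output_tokens", 0)
    return totals
```

Point it at the projects directory and you get per-project in/out token counts that could slot straight into a table like the one above.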
