I use a hook to dump the entire session on compaction.
It saves all the input chat, all the output chat, which tools were used, and what they were used on.
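For anyone curious what that hook looks like in practice, here's a minimal sketch. It assumes the hook receives a JSON event on stdin with `transcript_path` and `session_id` fields (check your Claude Code hooks docs for the exact field names in your version), and the `devlog` destination directory is made up:

```python
# Hedged sketch of a session-dump hook: copy the session transcript
# into a local archive directory, keyed by session id.
# Field names ("transcript_path", "session_id") are assumptions about
# the hook's stdin JSON -- verify against your own hook events.
import json
import shutil
import sys
from pathlib import Path

LOG_DIR = Path.home() / "devlog"  # hypothetical destination

def archive_session(event: dict, log_dir: Path = LOG_DIR) -> Path:
    """Copy the session's .jsonl transcript into log_dir."""
    log_dir.mkdir(parents=True, exist_ok=True)
    src = Path(event["transcript_path"])
    dest = log_dir / f"{event['session_id']}.jsonl"
    shutil.copy(src, dest)
    return dest

# When invoked as a hook, read the event from stdin:
if __name__ == "__main__" and "--hook" in sys.argv:
    archive_session(json.load(sys.stdin))
```

From there, shipping the copied file to a central receiver is just an HTTP POST or rsync away.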
https://github.com/lawless-m/Devolver
I use about five different computers. It all gets logged to one of them running devlog-receiver,
which serves a web page where you can search through all of your sessions across all of your machines. I use DuckDB full-text search.
Sure, I don't have an MCP part. So that bit's different.
That's a clever approach for centralization. The hook method is solid for logging.
The key difference is you're doing full-text search on raw conversations. With my MCP approach, Claude gets both the raw history and AI-generated summaries.
So when I ask "In project X, what security trade-off did we make on feature Y?", Claude reads the conversation summary, understands it, and tells me immediately, rather than sorting through keyword matches.
The MCP piece unlocks agent reasoning over your entire history, not just text retrieval. Using Haiku for the analysis allows faster, more holistic understanding.
Different tools for different use cases!
The summaries are a good idea.
Tbh I’m just seeing where it goes. I did the “dump the conversation” part as stage 1, added “ingest them centrally” … ok what next “ok, search them”.
I haven’t had to actually use it yet. But it is interesting to see which projects got the most prompts, which used the most tokens.
It was all prompted (no pun) because I wanted to show a non-programming colleague how the whole “build by prompting” thing works but more than just typing a couple of demo prompts.
I went through the same stages—started with "dump everything," then "search it," and recently landed on "let the agent read it for me." I started this as a private project about two months ago, very simply, and it just kept growing as time went on.
Your token-analysis feature sounds useful for tracking usage patterns and workflow efficiency; I've thought about it before. A lot of the direction agentic coding is heading is optimizing tool-call usage with proper context engineering, so I definitely see the value there.
Oh, it counts words, not tokens, sadly.
471 prompts, 2231 tool calls, 30k words in, 50k words out. Tool-call counting was broken, which is why some are zero.
If you look at the .jsonl structure of the sessions inside your .claude/projects directory, you should be able to find the token usage you're looking for. It's saved directly at the end of every tool call.
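Summing that up is a short script. A hedged sketch — it assumes entries in the session .jsonl carry a `message.usage` object with `input_tokens` / `output_tokens` fields, which matches the structure described above but may vary between versions:

```python
# Hedged sketch: total token usage for one Claude Code session .jsonl.
# Assumes assistant entries contain message.usage with input_tokens /
# output_tokens -- field names are an assumption, verify on your files.
import json
from pathlib import Path

def session_token_totals(path: Path) -> dict:
    """Sum input/output tokens across every entry in a session file."""
    totals = {"input_tokens": 0, "output_tokens": 0}
    with path.open() as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            msg = entry.get("message")
            usage = msg.get("usage") if isinstance(msg, dict) else None
            if usage:
                for key in totals:
                    totals[key] += usage.get(key, 0)
    return totals
```

Run it over every file in `.claude/projects` and you get per-project token counts instead of word counts.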
Ah, thanks. I'll add it in.