I think a key detail that becomes obvious in hindsight is that you needed a lot more structured logic back when LLMs were less capable. Now that these models are post-trained on exactly this kind of minimal-tool-breadth, high-depth use (ultimately converging on just an agent using a computer), you can strip out much of the scaffolding.
This is essentially what the Claude Code and Codex teams have been preaching, right?
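Concretely, the loop that replaces the scaffolding can be tiny. Here's a minimal sketch of the "one broad tool" style: everything named here (`call_model`, the message shape, the fake model) is a hypothetical stand-in, not any real API; the point is just that the control flow collapses to model -> tool -> model.

```python
import subprocess

def call_model(messages, tools):
    # Hypothetical stand-in for a post-trained model: here it fakes running
    # one shell command and then declaring the task done.
    if any(m["role"] == "tool" for m in messages):
        return {"content": "done", "tool_call": None}
    return {"content": None, "tool_call": {"command": "echo hello"}}

def run_shell(command):
    # The single broad tool: a shell, instead of many narrow hand-routed tools.
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

def minimal_agent(task):
    # The entire "scaffold": loop until the model stops asking for the tool.
    messages = [{"role": "user", "content": task}]
    while True:
        reply = call_model(messages, tools=["shell"])
        if reply["tool_call"] is None:
            return reply["content"]  # model decided it is finished
        output = run_shell(reply["tool_call"]["command"])
        messages.append({"role": "tool", "content": output})

print(minimal_agent("say hello"))
```

The planners, step routers, and retry logic that older agents hand-coded all move inside the model; the outer program just shuttles tool output back in.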