1 comment

  • doomlaser 2 hours ago

    Posting this in light of Sergey Brin’s comments at Stanford last week about Google under-investing after publishing the Transformer paper: not just in scaling compute, but in actually turning that invention into first-class LLM products.

    I revisited a 2015 radio interview I did (months before OpenAI existed) where I tried to reason about AI as “high-level algebra.” I didn’t have today’s vocabulary, but the underlying intuition—intelligence as inference over math and incentives—ends up looking surprisingly close to how LLMs actually work.

    Curious which parts people think aged well and which turned out clearly wrong.