The Q, K, V Matrices

140 points | by yashsngh a day ago

57 comments

  • roadside_picnic 12 hours ago

    I will beat loudly on the "Attention is a reinvention of Kernel Smoothing" drum until it is common knowledge. It looks like Cosma Shalizi's fantastic website is down for now, so here's an archive link to his essential reading on this topic [0].

    If you're interested in machine learning at all and not very strong on kernel methods, I highly recommend taking a deep dive. Such a huge amount of ML can be framed through the lens of kernel methods (and things like Gaussian Processes will become much easier to understand).

    0. https://web.archive.org/web/20250820184917/http://bactra.org...

      libraryofbabel 11 hours ago

      This is really useful, thanks. In my other (top-level) comment, I mentioned some vague dissatisfactions around how in explanations of attention the Q, K, V matrices always seem to be pulled out of a hat after being motivated in a hand-wavy metaphorical way. The kernel methods treatment looks much more mathematically general and clean - although for that reason maybe less approachable without a math background. But as a recovering applied mathematician ultimately I much prefer a "here is a general form, now let's make some clear assumptions to make it specific" to a "here's some random matrices you have to combine in a particular way by murky analogy to human attention and databases."

      I'll make a note to read up on kernels some more. Do you have any other reading recommendations for doing that?

      lugu an hour ago

      I don't understand what motivates the need for w1 and w2, except if we accept the premise that we are doing attention in the query and key spaces... which is not the thesis of the author. What am I missing?

      Surprisingly, reading this piece helped me better understand the query, key metaphor.

      lambdaone 4 hours ago

      The archive link above is broken: this is an earlier archived copy of that page with content intact:

      https://web.archive.org/web/20230713101725/http://bactra.org...

      mbeex 3 hours ago

      Site is still fine (but is and was always http-only):

      http://bactra.org/notebooks/nn-attention-and-transformers.ht...

      aquafox 6 hours ago

      Oh wow, I wish I could give more than one upvote for this reference!

      D-Machine 10 hours ago

      Yes, this needs to be linked more, you are doing a great service.

      MontyCarloHall 10 hours ago

      It's utterly baffling to me that there hasn't been more SOTA machine learning research on Gaussian processes with the kernels inferred via deep learning. It seems a lot more flexible than the primitive, rigid dot product attention that has come to dominate every aspect of modern AI.

        D-Machine 10 hours ago

        I think this mostly comes down to (multi-headed) scaled dot-product attention just being very easy to parallelize on GPUs. You can then make up for the (relative) lack of expressivity / flexibility by just stacking layers.

          MontyCarloHall 10 hours ago

          A neural-GP could probably be trained with the same parallelization efficiency via consistent discretization of the input space. I think their absence owes more to the fact that discrete data (namely, text) has dominated AI applications. I imagine that neural-GPs could be extremely useful for scale-free interpolation of continuous data (e.g. images), or other non-autoregressive generative models (scale-free diffusion?)

            D-Machine 9 hours ago

            Right, I think there are plenty of other approaches that surely scale just as easily or better. It's like you said, the (early) dominance of text data just artificially narrowed the approaches tried.

        AlexCoventry 7 hours ago

        Doesn't involve Gaussians, but:

        The Free Transformer: https://arxiv.org/abs/2510.17558

        Abstract: We propose an extension of the decoder Transformer that conditions its generative process on random latent variables which are learned without supervision thanks to a variational procedure. Experimental evaluations show that allowing such a conditioning translates into substantial improvements on downstream tasks.

        imtringued 5 hours ago

        The Q, K, V matrices form neural networks at runtime; that's the entire point.

      esafak 11 hours ago

      (How) do you find that framing enlightening?

      somethingsome 11 hours ago

      Hey, can I contact you somehow?

  • hackpert 12 minutes ago

    These metaphorical database analogies bug me, and from what it seems like, a lot of other people in comments! So far some of the most reasonable explanations I have found that take training dynamics into account are from Lenka Zdeborova's lab (albeit in toy, linear attention settings but it's easy to see why they generalize to practical ones). For instance, this is a lovely paper: https://arxiv.org/abs/2509.24914

  • libraryofbabel 12 hours ago

    This is ok (could use some diagrams!), but I don't think anyone coming to this for the first time will be able to use it to really teach themselves the LLM attention mechanism. It's a hard topic and requires two or three book chapters at least if you really want to start grokking it!

    For anyone serious about coming to grips with this stuff, I would strongly recommend Sebastian Raschka's excellent book Build a Large Language Model (From Scratch), which I just finished reading. It's approachable and also detailed.

    As an aside, does anyone else find the whole "database lookup" motivation for QKV kind of confusing? (in the article, "Query (Q): What am I looking for? Key (K): What do I contain? Value (V): What information do I actually hold?"). I've never really got it and I just switched to thinking of QKV as a way to construct a fairly general series of linear algebra transformations on the input of a sequence of token embedding vectors x that is quadratic in x and ensures that every token can relate to every other token in the NxN attention matrix. After all, the actual contents and "meaning" of QKV are very opaque: the weights that are used to construct them are learned during training. Furthermore, there is a lot of symmetry between Q and K in the algebra, which gets broken only by the causal mask. Or do people find this motivation useful and meaningful in some deeper way? What am I missing?

    [edit: on this last question, the article on "Attention is just Kernel Smoothing" that roadside_picnic posted below looks really interesting in terms of giving a clean generalized mathematical approach to this, and also affirms that I'm not completely off the mark by being a bit suspicious about the whole hand-wavy "database lookup" Queries/Keys/Values interpretation]

      D-Machine 10 hours ago

      > I've never really got it and I just switched to thinking of QKV as a way to construct a fairly general series of linear algebra transformations on the input of a sequence of token embedding vectors x that is quadratic in x and ensures that every token can relate to every other token in the NxN attention matrix.

      That's because what you say here is the correct understanding. The lookup thing is nonsense.

      The terms "Query" and "Value" are largely arbitrary and meaningless in practice, look at how to implement this in PyTorch and you'll see these are just weight matrices that implement a projection of sorts, and self-attention is always just self_attention(x, x, x) or self_attention(x, x, y) in some cases (e.g. cross-attention), where x and y are are outputs from previous layers.

      Plus with different forms of attention, e.g. merged attention, and the research into why / how attention mechanisms might actually be working, the whole "they are motivated by key-value stores" thing starts to look really bogus. Really it is that the attention layer allows for modeling correlations/similarities and/or multiplicative interactions among a dimension-reduced representation. EDIT: Or, as you say, it can be regarded as kernel smoothing.

        libraryofbabel 10 hours ago

        Thanks! Good to know I’m not missing something here. And yeah, it’s always just seemed to me better to frame it as: let’s find a mathematical structure to relate every embedding vector in a sequence to every other vector, and let’s throw in a bunch of linear projections so that there are lots of parameters to learn during training to make the relationship structure model things from language, concepts, code, whatever.

        I’ll have to read up on merged attention, I haven’t got that far yet!

          D-Machine 10 hours ago

          The main takeaway is that "attention" is a much broader concept generally, so worrying too much about the "scaled dot-product attention" of transformers deeply limits your understanding of what kinds of things really matter in general.

          A paper I found particularly useful on this was generalizing even farther to note the importance of multiplicative interactions more generally in deep learning (https://openreview.net/pdf?id=rylnK6VtDH).

          EDIT: Also, this paper I was looking for dramatically generalizes the notion of attention in a way I found to be quite helpful: https://arxiv.org/pdf/2111.07624

      ianand 6 hours ago

      I'm not a fan of the database lookup analogy either.

      The analogy I prefer when teaching attention is celestial mechanics. Tokens are like planets in (latent) space. The attention mechanism is like a kind of "gravity" where each token is influencing each other, pushing and pulling each other around in (latent) space to refine their meaning. But instead of "distance" and "mass", this gravity is proportional to semantic inter-relatedness and instead of physical space this is occurring in a latent space.

      https://www.youtube.com/watch?v=ZuiJjkbX0Og&t=3569s

      mnicky 12 hours ago

      IIRC, isn't the symmetry between Q and K also broken by the direction of the softmax? I mean, row- vs. column-wise application yields a different interpretation.

        ebonnafoux 5 hours ago

        Yes, but in practice, if you compute K = X.wk, Q = X.wq and then K.tQ, you do three matrix multiplications. Wouldn't it be faster to compute W = wk.twq beforehand and then just X.W.tX, which is only two matrix multiplications? Is there something I am missing?

          yorwba an hour ago

          Most models have a per-head dimension much smaller than the input dimension, so it's faster to multiply by the small wq and wk individually than to multiply by the large matrix W. Also, if you use rotary positional embeddings, the RoPE matrices need to be sandwiched in the middle and they're different for every token, so you could no longer premultiply just once.
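
          As a rough illustration (made-up, roughly LLM-shaped sizes, and ignoring RoPE entirely), the two routes give the same scores but very different amounts of work:

            import torch

            torch.manual_seed(0)
            n, d_model, d_head = 1024, 4096, 128                 # illustrative sizes

            x  = torch.randn(n, d_model)
            wq = torch.randn(d_model, d_head)
            wk = torch.randn(d_model, d_head)

            # Route 1: project down to the small head dimension, then compare.
            scores_1 = (x @ wq) @ (x @ wk).T

            # Route 2: fold both projections into one huge d_model x d_model matrix up front.
            w = wq @ wk.T
            scores_2 = (x @ w) @ x.T

            # Same answer up to float32 noise...
            print(((scores_1 - scores_2).abs().max() / scores_1.abs().max()).item())

            # ...but very different cost per forward pass (multiply counts):
            mults_1 = 2 * n * d_model * d_head + n * n * d_head  # ~1.2e9
            mults_2 = n * d_model * d_model + n * n * d_model    # ~2.1e10
            print(mults_1, mults_2)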

        libraryofbabel 12 hours ago

        Oh yes! That's probably more important, in fact.

          mnicky 5 hours ago

          Well, I think this is also an answer to your question about the intuition.

          If the asymmetry of K and Q stems from the direction of the softmax application, it must also be the reason for the names of the matrices :)

          And if you think about it, it makes sense that for each Query, the weights over all of the Keys sum to 1 and not vice versa.

          So this is my only intuition for the K and Q names.

          (It may or may not be similar to the whole "db lookup thing"... I just don't use that one.)
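
          Concretely (toy numbers, single head), the two softmax directions really are different normalizations:

            import torch
            import torch.nn.functional as F

            torch.manual_seed(0)
            scores = torch.randn(4, 4)                 # Q @ K.T for a 4-token toy sequence

            row_wise = F.softmax(scores, dim=-1)       # the usual choice: normalize over keys
            col_wise = F.softmax(scores, dim=-2)       # the other direction: normalize over queries

            print(row_wise.sum(dim=-1))                # each query's weights over the keys sum to 1
            print(col_wise.sum(dim=-2))                # each key's weights over the queries sum to 1
            print(torch.allclose(row_wise, col_wise))  # False: the asymmetry is real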

      andoando 10 hours ago

      I find it really confusing as well. The analogy implies we have something like Q[K] = V.

      For one, I have no idea how this relates to the mathematical operations of calculating attention scores, applying softmax, and then doing a dot product with the V matrix.

      Second, just conceptually, I don't understand how this relates to "a word looks up how relevant it is to another word." So if you have "The cat eats his soup", "his" queries how important it is to "cat". So is V just the numerical result of the significance, like 0.99?

      I don't think I'm very stupid, but after seeing dozens of these, I am starting to wonder if anyone actually understands this conceptually

        empiricus 3 hours ago

        Not sure how helpful it is, but: words or concepts are represented as high-dimensional vectors. At a high level, we could say each dimension is another concept like "dog"-ness or "complexity" or "color"-ness. The "a word looks up how relevant it is to another word" part is basically just relevance = distance = vector dot product, and the dot product can be distorted, i.e. "some directions are more important" for one purpose or another (the Q/K/V matrices distort the dot product). Softmax is just a form of normalization (everything sums to 1 = a proper probability). The whole shebang works only because all the pieces can be learned by gradient descent, otherwise it would be impossible to implement.
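
        A toy numeric example of the "distorted dot product" part (made-up 3-dimensional "concept" vectors, nothing like real embeddings):

          import torch

          # pretend dims are cat-ness, soup-ness, mat-ness (purely made up)
          cat = torch.tensor([1.0, 0.1, 0.2])
          mat = torch.tensor([0.1, 0.0, 1.0])
          his = torch.tensor([0.9, 0.3, 0.2])

          print(his @ cat, his @ mat)           # plain dot product: "cat" looks more relevant

          w = torch.diag(torch.tensor([0.1, 0.1, 5.0]))  # a distortion favoring the third direction
          print(his @ w @ cat, his @ w @ mat)   # now "mat" wins: some directions matter more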

      p1esk 11 hours ago

      The way I think about QKV projections: Q defines sensitivity of token i features when computing similarity of this token to all other tokens. K defines visibility of token j features when it’s selected by all other tokens. V defines what features are important when doing weighted sum of all tokens.

        D-Machine 10 hours ago

        Don't get caught up in interpreting QKV, it is a waste of time, since completely different attention formulations (e.g. merged attention [1]) still give you the similarities / multiplicative interactions, but may even work better [2]. EDIT: Oh and attention is much more broad than scaled dot-product attention [3].

        [1] https://www.emergentmind.com/topics/merged-attention

        [2] https://blog.google/innovation-and-ai/technology/developers-...

        [3] https://arxiv.org/abs/2111.07624

          p1esk 9 hours ago

          I glanced at these links and it seems that all these attention variants still use QKV projections.

          Do you see any issues with my interpretation of them?

            D-Machine 9 hours ago

            Read the third link / review paper, it is not at all the case that all attention is based on QKV projections.

            Your terms "sensitivity", "visibility", and "important" are too vague and lack any clear mathematical meaning, so IMO add nothing to any understanding. "Important" also seems factually wrong, given these layers are stacked, so later weights and operations can in fact inflate / reverse things. Deriving e.g. feature importances from self-attention layers remains a highly disputed area (e.g. [1] vs [2], for just the tip of the iceberg).

            You are also assuming that the importance of attention is the highly-specific QKV structure and projection, but there is very little reason to believe that based on the third review link I shared. Or, if you'd like another example of why not to focus so much on scaled dot-product attention, see that it is just a subset of a broader category of multiplicative interactions (https://openreview.net/pdf?id=rylnK6VtDH).

            [1] Attention is not Explanation - https://arxiv.org/abs/1902.10186

            [2] Attention is not not Explanation - https://arxiv.org/abs/1908.04626

              p1esk 7 hours ago

              1. The two papers you linked are about importance of attention weights, not QKV projections. This is orthogonal to our discussion.

              2. I don't see how the transformations done in one attention block can be reversed in the next block (or in the FFN network immediately after the first block): can you please explain?

              3. All state of the art open source LLMs (DeepSeek, Qwen, Kimi, etc) still use all three QKV projections, and largely the same original attention algorithm with some efficiency tweaks (grouped query, MLA, etc) which are done strictly to make the models faster/lighter, not smarter.

              4. When GPT2 came out, I myself tried to remove various ops from attention blocks, and evaluated the impact. Among other things I tried removing individual projections (using unmodified input vectors instead), and in all three cases I observed quality degradation (when training from scratch).

              5. The terms "sensitivity", "visibility", and "important" all attempt to describe feature importance when performing pattern matching. I use these terms in the same sense as importance of features matched by convolutional layer kernels, which scan the input image and match patterns.

                D-Machine 6 hours ago

                1. I do not think it is orthogonal, but, regardless, there is plenty of research trying to get explainability out of all aspects of scaled dot-product attention layers (weights, QKV projections, activations, other aspects), and trying to explain deep models generally via sort of bottom-up mechanistic approaches. I think it can be clearly argued this does not give us much and is probably a waste of time (see e.g. https://ai-frontiers.org/articles/the-misguided-quest-for-me...). I think this is especially clear when you have evidence (in research, at least) that other mechanisms and layers can produce highly similar results.

                2. I didn't say the transformations can be reversed, I said if you interpret anything as an importance (e.g. a magnitude), that can be inflated / reversed by whatever weights are learned by later layers. Negative values and/or weights make this even more annoying / complicated.

                3. Not sure how this is relevant, but, yes, any reasons for caring about QKV and scaled dot-product attention specifics are mostly related to performance and/or current popular leading models. But there is nothing fundamentally important about scaled dot-product attention, it most likely just happens to be something that was settled on prematurely because it works quite well and is easy to parallelize. Or, if you like the kernel smoothing explanation also mentioned in this thread, scaled dot-product self-attention implements something very similar to a particularly simple and nice form of kernel smoothing.

                4. Yup, removing ops from scaled dot-product attention blocks is going to dramatically reduce expressivity, because there really aren't many ops there to remove. But there is enough work on low-rank attention, linear attentions, and sparse attentions to show you can remove a lot of expressivity and still do quite well. And, of course, the huge number of other helpful types of attention I linked before give gains in some cases too. You should be skeptical about any really simple or clear story about what is going on here. In particular, there is no clear reason why a small hypernetwork couldn't be used to approximate something more general than scaled dot-product attention, except that this is obviously going to be more expensive, and in practice you can probably get the same approximate flexibility by stacking simpler attention layers.

                5. I still find that doesn't give me any clear mathematical meaning.

                I suspect our learning goals are at odds. If you want to focus solely on the very specific kind of attention used in the popular transformer models today, perhaps because you are interested in optimizations or distillation or something, then by all means try to come up with special intuitions about Q, K, and V, if you think that will help here. But those intuitions will likely not translate well to future and existing modifications and improvements to attention layers, in transformers or otherwise. You will be better served learning about attention broadly and developing intuitions based on that.

                Others have mentioned the kernel smoothing interpretation, and I think multiplicative interactions are the clearer deeper generalization of what is really important and valuable here. Also, the useful intuitions in DL have been less about e.g. "feature importances" and "sensitivity" and such, but tend to come more from linear algebra and calculus, and tend to involve things like matrix conditioning and regularization / smoothing and Lipschitz constants and the like. In particular, the softmax in self-attention is probably not doing what people typically say it does (https://arxiv.org/html/2410.18613v1), and the real point is that all these attention layers are trained in an end-to-end fashion where all layers are interdependent on each other to varying complicated degrees. Focusing on very specific interpretations ("Q is this, K is that"), especially where these interpretations are sort of vaguely metaphorical, like yours, is not likely to result in much deep understanding, in my opinion.

                  psb217 2 hours ago

                  Per your point 4, some current hyped work is pushing hard in this direction [1, 2, 3]. The basic idea is to think of attention as a way of implementing an associative memory. Variants like SDPA or gated linear attention can then be derived as methods for optimizing this memory online such that a particular query will return a particular value. Different attention variants correspond to different ways of defining how the memory produces a value in response to a query, and how we measure how well the produced value matches the desired value.

                  Some of the attention-like ops proposed in this new work are most simply described as implementing the associative memory with a hypernetwork that maps keys to values with weights that are optimized at test time to minimize value retrieval error. Like you suggest, designing these hypernetworks to permit efficient implementations is tricky.

                  It's a more constrained interpretation of attention than you're advocating for, since it follows the "attention as associative memory" perspective, but the general idea of test-time optimization could be applied to other mechanisms for letting information interact non-linearly across arbitrary nodes in the compute graph.

                  [1] https://arxiv.org/abs/2501.00663

                  [2] https://arxiv.org/abs/2504.13173

                  [3] https://arxiv.org/abs/2505.23735

      ebbi 12 hours ago

      Does that book require some sort of technical prerequisite to understand?

        libraryofbabel 12 hours ago

        It helps if you have some basic linear algebra, for sure - matrices, vectors, etc. That's probably the most important thing. You don't need to know pytorch, which is introduced in the book as needed and in an appendix. If you want to really understand the chapters on pre-training and fine-tuning you'll need to know a bit of machine learning (like a basic grasp of loss functions and gradient descent and backpropagation - it's sort of explained in the book but I don't think I'd have understood it much without having trained basic neural networks before), but that is not required so much for the earlier chapters on the architecture, e.g. how the attention mechanism works with Q, K, V as discussed in this article.

        The best part about it is seeing the code built up for the GPT-2 architecture in basic pytorch, and then loading in the real GPT-2 weights and they actually work! So it's great for learning but also quite realistic. It's LLM architecture from a few years ago (to keep it approachable), but Sebastian has some great more advanced material on modern LLM architectures (which aren't that different) on his website and in the github repo: e.g. he has a whole article on implementing the Qwen3 architecture from scratch.

          kouteiheika 7 hours ago

          > modern LLM architectures (which aren't that different) on his website and in the github repo: e.g. he has a whole article on implementing the Qwen3 architecture from scratch.

          This might be underselling it a little bit. The difference between GPT2 and Qwen3 is maybe, I don't know, ~20 lines of code difference if you write it well? The biggest difference is probably RoPE (which can be tricky to wrap your head around); the rest is pretty minor.

            libraryofbabel 7 hours ago

            There’s Grouped Query Attention as well, a different activation function, and a bunch of not very interesting norms stuff. But yeah, you’re right - still very similar overall.

          ebbi 12 hours ago

          Thank you! Might get the book to see what I can learn from it, and see what gaps I have to research and learn more. Appreciate the detailed response.

            libraryofbabel 11 hours ago

            Sure! I don't think the linear algebra pre-req is that hard if you do need to learn it, there's tons of material online to practice on and it's really just basic "apply this matrix to this vector" stuff. Most of what would be in even an undergrad intro to linear algebra course (inverting a matrix, determinants, whatever) is totally unnecessary.

  • storus an hour ago

    QKV attention is just a probabilistic lookup table where QKV allow adjusting dimensions of input/output to fit into your NN block. If your Q perfectly matches some known K (from training) then you get the exact V otherwise you get some linear combination of all Vs weighted by the attention.
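
    Roughly, with toy orthonormal keys and a sharp temperature (so the exact-match case is visible):

      import torch
      import torch.nn.functional as F

      K = torch.eye(3)                      # three "known" keys, orthonormal for the demo
      V = torch.tensor([[1., 0.],           # the value stored under each key
                        [0., 1.],
                        [5., 5.]])

      q_exact = torch.tensor([0., 0., 1.])  # matches the third key exactly
      q_mixed = torch.tensor([0., .7, .7])  # partially matches keys 2 and 3

      for q in (q_exact, q_mixed):
          w = F.softmax(10.0 * (q @ K.T), dim=-1)  # attention weights over the keys
          print(w @ V)                             # ~[5, 5] for the exact match, a blend otherwise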

  • MontyCarloHall 10 hours ago

    The confusing thing about attention in this article (and the famous "Attention is all you need" paper it's derived from) is the heavy focus on self-attention. In self-attention, Q/K/V are all derived from the same input tokens, so it's confusing to distinguish their respective purposes.

    I find attention much easier to understand in the original attention paper [0], which focuses on cross-attention for machine translation. In translation, the input sentence to be translated is tokenized into vectors {x_1...x_n}. The translated sentence is autoregressively generated into tokens {y_1...y_m}. To generate y_j, the model computes a similarity score of the previously generated token y_{j-1} against every x_i via the dot product s_{i,j} = x_i*K*y_{j-1}, transformed by the Key matrix. These are then softmaxed to create a weight vector a_j = softmax_i(s_{i,j}). The weighted average of X = [x_1|...|x_n] is taken with respect to a_j and transformed by the Value matrix, i.e. c_j = V*X*a_j. c_j is then passed to additional network layers to generate the output token y_j.

    tl;dr: given the previous output token, compute its similarity to each input token (via K). Use those similarity scores to compute a weighted average across all input tokens, and use that weighted average to generate the next output token (via V).

    Note that in this paper, the Query matrix is not explicitly used. It can be thought of as a token preprocessor: rather than computing s_{i,j} = x_i*K*y_{j-1}, each x_i is first linearly transformed by some matrix Q. Because this paper used an RNN (specifically, an LSTM) to encode the tokens, such transformations on the input tokens are implicit in each LSTM module.

    [0] https://arxiv.org/pdf/1508.04025 (predates "Attention is all you need" by 3 years)
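
    In code, a single decoding step with that notation looks roughly like this (random toy tensors, no RNN encoder, so the Q transform is folded away exactly as described above):

      import torch
      import torch.nn.functional as F

      torch.manual_seed(0)
      n, d = 6, 8                  # n input tokens, embedding dimension d

      X = torch.randn(d, n)        # X = [x_1 | ... | x_n], columns are input tokens
      y_prev = torch.randn(d)      # the previously generated output token y_{j-1}
      K = torch.randn(d, d)        # Key transform
      V = torch.randn(d, d)        # Value transform

      s = X.T @ K @ y_prev         # s_{i,j} = x_i * K * y_{j-1}, one score per input token
      a = F.softmax(s, dim=0)      # a_j = softmax_i(s_{i,j})
      c = V @ X @ a                # c_j = V * X * a_j, the context vector passed onward

      print(c.shape)               # torch.Size([8])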

      D-Machine 10 hours ago

      Very much this, cross attention and the x, y notation makes the similarity / covariance matrix far more clear and intuitive.

      Also forget the terms "query", "key" and "value", or vague analogies to key-value stores, that is IMO a largely false analogy, and certainly not a helpful way to understand what is happening.

        MontyCarloHall 10 hours ago

        100% agreed. Attention finally clicked for me when I realized "wait, it's just a transformed, weighted dot product and has nothing to do with key/value lookups." I would have gotten this a lot faster had they called the key matrix \Sigma.

  • CephalopodMD 9 hours ago

    I think of it more from an information retrieval (i.e. search) perspective.

    Imagine the input text as though it were the whole internet and each page is just 1 token. Your job is to build a neural-network Google results page for that mini internet of tokens.

    In traditional search, we are given a search query, and we want to find web pages via an intermediate search results page with 10 blue links. Basically, when we're Googling something, we want to know "What web pages are relevant to this given search query?", and then given those links we ask "what do those web pages actually say?" and click on the links to answer our question. In this case, the "Query" is obviously the user search query, the "Key" is one of the ten blue links (usually the title of the page), and the "Value" is the content of the web page that link goes to.

    In the attention mechanism, we are given a token and we want to find its meaning when contextualized with other tokens. Basically, we are first trying to answer the question "which other tokens are relevant to this token?", and then given the answer to that we ask "what is the meaning of the original token given these other relevant tokens?" The "Query" is a given token in the input text, the "Key" is another token in the input text, and the "Value" is the final meaning of the original token with that other token in context (in the form of an embedding). For a given token, you can imagine it is as though the attention mechanism "clicked the 10 blue links" of the other most relevant tokens in the input and combined them in some way to figure out the meaning of the original query token (and also you might imagine we ran such a query in parallel for every token in the input text at the same time).

    So the self attention mechanism is basically google search but instead of a user query, it's a token in the input, instead of a blue link, it's another token, and instead of a web page, it's meaning.

      D-Machine 8 hours ago

      Read through my comments and those of others in this thread: the way you are thinking here is metaphorical and so disconnected from the actual math as to be unhelpful. It is not the case that you can gain a meaningful understanding of deep networks by metaphor. You actually need to learn some very basic linear algebra.

      Heck, attention layers never even see tokens. Even the first self-attention layer sees positional embeddings, but all subsequent attention layers are just seeing complicated embeddings that are a mish-mash of the previous layers' embeddings.

  • enjeyw 8 hours ago

    One of the big problems with Attention Mechanisms is that the Query needs to look over every single key, which for long contexts becomes very expensive.

    A little side project I've been working on is to train a model that sits on top of the LLM, looks at each key, and determines whether it will still be needed after a certain lifespan, evicting it once that lifespan has expired. Still working on it, but my first-pass test reduced the number of keys by 90%!

    https://github.com/enjeyw/smartkv
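
    The general shape of the idea, very roughly (not the actual code in the repo, just a sketch with a random stand-in for the learned "still needed?" predictor):

      import torch

      torch.manual_seed(0)
      n_cached, d = 100, 64

      keys   = torch.randn(n_cached, d)  # cached K entries
      values = torch.randn(n_cached, d)  # cached V entries
      ages   = torch.arange(n_cached)    # steps since each entry was written

      keep_score = torch.rand(n_cached)  # stand-in for the learned predictor
      lifespan, threshold = 20, 0.5

      evict = (ages >= lifespan) & (keep_score < threshold)
      keys, values, ages = keys[~evict], values[~evict], ages[~evict]

      print(f"kept {keys.shape[0]} of {n_cached} cached keys")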

  • BrokenCogs 9 hours ago

    "When we read a sentence like “The cat sat on the mat because it was comfortable,” our brain automatically knows that “it” refers to “the mat” and not “the cat.” "

    Am I the only one who thinks it's not obvious the "it" refers to the mat? The cat could be sitting on the mat because the cat is comfortable

      mapontosevenths 8 hours ago

      Why would the cat being comfortable make it sit on a mat?

      Many sentences require you to have some knowledge of the world to process. In this case, you need to have the knowledge that "being comfortable dictates where you sit" doesn't happen nearly as often as "where you sit dictates your comfort."

      Even for humans NLP is probabilistic, which is why we still often get it wrong. Or at least I know that I do.

        D-Machine 7 hours ago

        Ah, but cats won't just comfortably sit on a mat if they feel there is danger. They will only sit on a mat if they feel comfortable! Absent larger context, the sentence is in fact ambiguous (though I agree your reading is the most natural and obvious one).

          pests 7 hours ago

          But do we usually describe cats as comfortable, as in their feelings? We might say he IS comfortable, or he feels comfort, but for something to be "comfortable", that implies it gives comfort to others. I can see a cat being comfortable to a human, in that a cat gives comfort to a human. But I wouldn't say "The cat is comfortable, therefore he laid on a mat." It's almost a garden-path sentence; I would expect "The cat is comfortable, that's why I let him lay on me".

            D-Machine 7 hours ago

            In literary and casual contexts, absolutely (though we'd probably say "he/she" instead of "it" here). As I said, "it" referring to the mat is the most natural and obvious reading, but other ones are perfectly logical and sound, if less likely/common.

            Although the sentence is itself a bit awkward and strange on its own, and really needs context. In fact, this is because the sentence is generated as a short example to make a point about attention and tokens, and is not really something someone would utter naturally in isolation.

            I mostly just wanted to playfully comment that original GP / top-level comment had a valid point about the ambiguity!

      yuretz 5 hours ago

      I think "it" refers to the process of sitting on the mat.

  • sp1982 11 hours ago

    Nice, I tried to write up a simpler explanation for LLMs a few days back too @ https://kaamvaam.com/machine-learning-ai/llm-attention-expla... One thing that stumped me for a bit is the need for the V matrix.

  • villgax 5 hours ago

    The LLM smell is now an Oxford comma