AI Skeptic to AI Pragmatist

3 points | by mooreds 20 hours ago

1 comment

  • nis0s 18 hours ago

    Maybe I am wrong in how I think about this, but if giving the same prompt to a model produces similar (though not identical) outcomes, then the final result falls within a set of expected outcomes: the constituent words or phrases retain the same meaning across the different possible constructions.

    You can define a function to determine whether a number is prime in any number of ways. If your prompt produces an outcome with that intent, and that intent alone, then your prompt, and by extension the model, tend toward producing outcomes for that specific intent, which seems to me like a deterministic process as far as retaining the “gist” of the result goes.
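
    To make the prime example concrete, here is a minimal, purely illustrative Python sketch (the function names are my own): two different constructions of the same intent that agree on every input.

        # Two different constructions of the same intent: test whether n is prime.

        def is_prime_trial_division(n: int) -> bool:
            """Primality by trial division up to sqrt(n)."""
            if n < 2:
                return False
            i = 2
            while i * i <= n:
                if n % i == 0:
                    return False
                i += 1
            return True

        def is_prime_by_divisor_count(n: int) -> bool:
            """Primality by counting divisors (slower, but same intent)."""
            return n >= 2 and sum(1 for d in range(1, n + 1) if n % d == 0) == 2

        # Different wording, same "gist": both agree on every input checked.
        assert all(is_prime_trial_division(k) == is_prime_by_divisor_count(k)
                   for k in range(1000))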

    The same applies to natural language outcomes. If I ask about a medicine and how it works, the model can construct the response in any number of ways, but the feature space of that response constrains the outcome to retain the same “gist” across all of its variations.

    It’s the same as asking the same question to ten different people. They’ll each have their own way of answering you, but the gist of the answer will remain consistent across respondents.