1 comment

  • adamzwasserman 2 hours ago

    This has direct implications for my axiomatic prompting research (https://osf.io/pcx2d). We found that providing LLMs with explicit classification axioms improved LEDGAR accuracy by 10 to 22 percentage points across 7 of 8 models. But the axioms add a substantial number of tokens to the prompt.

    The question I can't currently answer: how much of the benefit comes from the semantic content of the axioms versus the repetition/emphasis effect this paper identifies?

    I'm running an ablation study with a critical condition: shuffled axioms (same tokens, randomized order). If the shuffled condition matches the structured axioms, the content doesn't matter. If the structured axioms win, semantic structure genuinely helps beyond repetition. A sketch of the two conditions follows.
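
    For concreteness, here's a minimal sketch of how the two conditions could be constructed. The axiom strings, clause text, and helper names are placeholders for illustration, not the actual OSF materials; the real axiom set lives in the linked project.

    ```python
    import random

    # Placeholder axioms, not the real LEDGAR set from https://osf.io/pcx2d.
    AXIOMS = [
        "Every clause belongs to exactly one LEDGAR label.",
        "Prefer the most specific applicable label.",
        "A definitions section is labeled 'definitions' even if it mentions other topics.",
    ]

    def build_prompt(clause: str, axioms: list[str]) -> str:
        """Prepend classification axioms to the task instruction."""
        axiom_block = "\n".join(f"- {a}" for a in axioms)
        return (
            "Classify the following contract clause.\n"
            f"Axioms:\n{axiom_block}\n\n"
            f"Clause: {clause}\nLabel:"
        )

    def shuffled_condition(axioms: list[str], seed: int) -> list[str]:
        """Same tokens, randomized order: shuffle words across all axioms,
        then re-chunk into pseudo-axioms of the original lengths so the
        prompt shape (and token count) matches the structured condition."""
        rng = random.Random(seed)
        tokens = [t for a in axioms for t in a.split()]
        rng.shuffle(tokens)
        out, i = [], 0
        for a in axioms:
            n = len(a.split())
            out.append(" ".join(tokens[i:i + n]))
            i += n
        return out

    # Usage: identical prompt builder, two axiom orderings.
    clause = "The Borrower shall maintain insurance on all material assets."
    structured = build_prompt(clause, AXIOMS)
    scrambled = build_prompt(clause, shuffled_condition(AXIOMS, seed=0))
    ```

    Holding the prompt template and token count fixed means any accuracy gap between the two conditions can be attributed to axiom content rather than prompt length or the repetition/emphasis effect.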

    I'll add this experiment to the parent project on OSF. Curious whether others working on knowledge injection techniques have similar confounds to untangle.