1 comment

  • helro 2 hours ago

    I’ve been experimenting with voice-to-code workflows and realized that the benefit isn't actually speed—it’s the reduction of what I call the 'synthesis tax.'

    When we type, we are forced to translate raw intent into structured prose in real-time. We self-edit, delete, and reorganize as we go. This 'lossy compression' often causes us to omit the very details (edge cases, the 'why,' uncertainty) that help an LLM produce better code.

    I wrote this to explore how pairing high-bandwidth voice (~150 WPM) with 'meta-prompting' (using a secondary prompt to map speech to codebase context/file tags) creates a much higher-signal input than typing ever could.
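
    To make the 'meta-prompting' step concrete, here's a rough sketch of the shape I mean: a secondary prompt that takes the raw transcript plus a file listing and rewrites it into a tagged, structured request before it ever hits the coding model. The `transcribe_audio`/`call_llm` helpers and the file-tag format are placeholders for whatever stack you use, not a specific API.

    ```python
    # Sketch of the 'meta-prompting' step described above.
    # NOTE: transcribe_audio / call_llm are stand-ins for your own
    # speech-to-text and LLM client; the file-tag convention is made up.
    from pathlib import Path

    META_PROMPT = """You are a prompt rewriter. Below is a raw, rambling voice
    transcript describing a coding task, followed by a list of files in the repo.
    Rewrite the transcript into a structured request for a coding model:
    - keep every edge case, constraint, and 'why' the speaker mentioned
    - tag each requirement with the repo files it likely touches, e.g. [src/auth.py]
    - mark anything the speaker sounded unsure about as OPEN QUESTION

    Transcript:
    {transcript}

    Repo files:
    {file_list}
    """

    def list_repo_files(root: str, limit: int = 200) -> str:
        """Collect relative source paths so the model can emit file tags."""
        paths = [str(p.relative_to(root)) for p in Path(root).rglob("*.py")]
        return "\n".join(paths[:limit])

    def build_structured_prompt(transcript: str, repo_root: str) -> str:
        """Secondary 'meta' prompt: map loose speech onto codebase context."""
        return META_PROMPT.format(
            transcript=transcript,
            file_list=list_repo_files(repo_root),
        )

    # Usage (stubs assumed):
    #   raw = transcribe_audio("ramble.wav")                 # ~150 WPM of unedited intent
    #   structured = call_llm(build_structured_prompt(raw, "."))
    #   answer = call_llm(structured)                        # cleaned prompt goes to the coding model
    ```

    The point of the intermediate pass is that the rambling stays lossless: nothing gets self-edited away before the model has a chance to see it.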

    Curious if others have found that 'rambling' into a model actually yields better architectural results than a carefully typed one-liner.