I have been playing with Vibecodeprompts for a bit and what stood out to me is not the prompts themselves, but the framing.
Most “prompt libraries” assume the problem is wording. As if better adjectives or clever roleplay magically produce reliable systems. That has never matched my experience. The real failure mode is drift, inconsistency, and lack of shared structure once things scale beyond a single chat window.
Vibecodeprompts seems to implicitly accept that prompting is closer to infra than copywriting.
The prompts are opinionated. They encode assumptions about roles, constraints, iteration loops, and failure handling. You can disagree with those assumptions, but at least they are explicit. That alone is refreshing in a space where most tools pretend neutrality while smuggling in defaults.
What I found useful was not copying prompts verbatim, but studying how they are composed. You can see patterns emerge. Clear system boundaries. Explicit reasoning budgets. Separation between intent, process, and output. Guardrails that are boring but effective.
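As a rough sketch of what that separation can look like, and nothing more than a sketch, the field names below are my own, not something taken from Vibecodeprompts:

    // A hypothetical shape for a reusable prompt: intent, process,
    // output contract and guardrails kept as separate, reviewable fields.
    interface PromptSpec {
      intent: string;          // the outcome we actually want
      process: string[];       // steps the model should follow
      outputContract: string;  // what a valid answer must look like
      guardrails: string[];    // boring but effective constraints
    }

    const reviewPrompt: PromptSpec = {
      intent: "Summarise this diff and flag risky changes.",
      process: [
        "List the files touched and why.",
        "Call out anything affecting auth, money, or data deletion.",
      ],
      outputContract: "Sections in this order: Summary, Risks, Questions.",
      guardrails: [
        "If unsure, say so instead of guessing.",
        "Never mention files that are not in the diff.",
      ],
    };

    // Rendering the spec into a prompt string is a small, testable function.
    const render = (p: PromptSpec): string =>
      [p.intent, ...p.process, p.outputContract, ...p.guardrails].join("\n");

The specific shape does not matter; what matters is that each part can be diffed, reviewed, and versioned like any other config.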
In other words, this is less “here is a magic prompt” and more “here is a way to think about working with models as unreliable collaborators”.
That also explains why this probably will not appeal to everyone. If you want instant magic, this is not it. You still have to think. You still have to adapt things to your domain. But if you are building anything persistent, reusable, or shared with other people, that effort feels unavoidable anyway.
Curious how others here think about this. Do you treat prompts as disposable glue, or as something closer to code that deserves structure, review, and iteration over time?
Seriously? When the same prompt to the same LLM on a different day can give different results seemingly at random?
That only matters if the system you're using requires a specific input to achieve the desired outcome. For example, I can write a prompt for Claude Code to 'write a tic tac toe game in React' and it will give me a working tic tac toe game that's written in React. If I repeat the prompt 100 times I'll get 100 different outputs, but I'll only get one outcome: a working game.
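To be concrete about "one outcome": the check lives at the level of behaviour, not the generated source. A rough sketch, where checkWinner is a hypothetical function exposed by whichever implementation you kept:

    // Outcome-level test: we don't care how the generated code is written,
    // only that it behaves like a working tic tac toe game.
    import { describe, it, expect } from "vitest";
    // Hypothetical module from the generated game.
    import { checkWinner } from "./ticTacToe";

    describe("tic tac toe outcome", () => {
      it("declares X the winner on a completed top row", () => {
        const board = [
          "X", "X", "X",
          "O", "O", null,
          null, null, null,
        ];
        expect(checkWinner(board)).toBe("X");
      });

      it("reports no winner on an empty board", () => {
        expect(checkWinner(Array(9).fill(null))).toBe(null);
      });
    });

Any of the 100 different outputs either passes this or it doesn't, and that is the only question that matters here.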
For systems where the outcome matters but the exact output doesn't, prompts will work as a proxy for the code they generate.
Although, all that said, very few systems work this way. Almost all software systems are too fragile to actually be used like that right now. A fairly basic React component is one of the few examples where it could apply.
Except prompts and LLMs are not predictable, and an experienced programmer is. Ditto for true classical AI Lisps with constraint-based solvers, whether under Common Lisp or under custom Lisps such as Zenlisp, where everything is built on a few axioms:
https://t3x.org/zsp/index.html
With LLMs you will often lack predictability, if there is any at all. More than once I have had to correct them over trivial errors in Tcl, and they often lack cohesion between different answers.
That was solved even in virtual machines for text adventures such as the ZMachine, where a clear relation between the objects is defined from the start, so a playable world emerges from a few rules plus the objects themselves rather than being scripted piece by piece. When you define attributes for objects in a text adventure, the language maps 1:1 to the virtual machine and behaves in a predictable way.
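As a rough analogy (plain TypeScript here, not Inform6 or Z-code, just to sketch the idea): a couple of attributes plus one shared rule already give predictable behaviour for every object, with no per-object special casing.

    // Toy world model: behaviour follows from a few attributes and one rule,
    // not from hand-written cases per object, so the outcome is always predictable.
    interface Thing {
      name: string;
      openable: boolean;
      open: boolean;
    }

    const lamp: Thing = { name: "brass lamp", openable: false, open: false };
    const mailbox: Thing = { name: "small mailbox", openable: true, open: false };

    // One uniform rule for the "open" action, applied to any object.
    function doOpen(t: Thing): string {
      if (!t.openable) return `You can't open the ${t.name}.`;
      if (t.open) return `The ${t.name} is already open.`;
      t.open = true;
      return `You open the ${t.name}.`;
    }

    console.log(doOpen(lamp));    // You can't open the brass lamp.
    console.log(doOpen(mailbox)); // You open the small mailbox.

That is the kind of inspectable, 1:1 mapping I mean.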
You don't need a 600-page ANSI C standard plus POSIX and glibc, and 3000+ pages of AMD64/i386 ISA manuals, in order to predict basic behaviour. It's all right there.
Can LLMs get this? No, by design. They are like huge word predictors with eidetic memory. They might be somewhat good at interpolating, but they are useless at extrapolating.
They don't understand semantics. OTOH, the Inform6 language targeting the ZMachine interpreter has objects with implicit behaviour in their syntax, plus a basic parser for the player's actions. That adds a bit of context generated from the relations between the objects.
The rest is just decorated descriptions from the programmer, where the in-game response can change once you drop certain objects and the like.
Cosmetic changes in the end, because internally it maps to an action that is indistinguishable from the vanilla output of the Inform6 English library. And Gen-Zers don't understand it when older people tell them that no LLM will come close to a game designed by a programmer, be it in Inform6 or Inform7, because an LLM will often mix up the named input, the named output, and the implicit named object.