Martin Haberfellner

LLMs don't follow instructions. They resonate with fields.
What do you resonate with?
01 — The Question
A video. Someone explaining LLM workflows with clarity and depth. Concepts well-named, patterns well-described. But one assumption was taken for granted: the model does what you tell it.
That assumption felt wrong. Not as a technical objection — as an intuition. If the model doesn't truly understand intent, how does instruction-following happen at all? Why does it work? And more importantly: why does it sometimes not?
"If it can't understand me, why does it follow me at all?"
One afternoon. That question led somewhere nobody had pointed yet. Not a refinement of existing prompt engineering. A different foundation entirely.
02 — The Discovery
The model doesn't read a prompt and execute it. It enters a semantic space — not metaphorically, but mechanically: a field of activation patterns, tensions, and forces, the way a physical field shapes what moves through it. That space shapes every response that follows. User input doesn't drive the model. It creates interference in the field. The model responds to the interference from within the field.
This is not a choice between instructions and fields. Fields always emerge. An instruction-based prompt creates one too — accidental, noisy, unstable. The instructions are just the debris the field is built from. It often works. But it drifts. It breaks under unexpected input. It costs more than it should.
Accidental field
Emerges from instructions.
Noisy: every word is potential interference.
Unstable under unexpected input.
More rules, more problems.

Designed field
Built from values and principles.
Dense: every word earns its place.
Stable: handles what you didn't anticipate.
No more duct tape.
The difference is not whether a field exists. It's whether it was designed or happened by accident. Designed fields hold. Accidental fields eventually break.
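To make the contrast concrete, here is a minimal sketch of the two styles as system prompts. Both prompt texts are hypothetical illustrations written for this comparison, not prompts produced by the method itself.

```python
# Minimal sketch: the same coaching assistant, written two ways.
# Both prompt texts below are hypothetical illustrations.

# Accidental field: behaviour specified rule by rule. Each rule is a patch,
# and input that falls between the rules destabilises the whole thing.
INSTRUCTION_PROMPT = """\
You are a helpful coach.
1. Always ask a question before giving advice.
2. Never give more than three suggestions.
3. If the user is upset, apologise before answering.
4. Do not lecture. Do not use jargon.
"""

# Designed field: values, tensions, and a centre of gravity. No step list;
# the same stance covers cases no rule anticipated.
FIELD_PROMPT = """\
You are a coach whose centre of gravity is the other person's own insight.
Curiosity pulls harder than advice. Pressure is met with space, not speed.
What they discover for themselves outweighs anything you could tell them.
"""

if __name__ == "__main__":
    print("Accidental field:\n" + INSTRUCTION_PROMPT)
    print("Designed field:\n" + FIELD_PROMPT)
```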
03 — The Compiler
Natural language is built for human action. It thinks in steps, causes, intentions. To design a semantic field, you need something that thinks in dynamics — not steps. Not "do X then Y." But: what forces are at work? What should amplify? What should resist? Where is the centre of gravity?
That thinking doesn't come naturally. So the thinking gets compiled. Intent (values, principles, desired behaviour) goes in. A semantically charged field definition comes out. The compiler bridges how humans think and how fields work.
"Every word in a prompt is introduced noise. The compiler removes the noise. What remains is pure semantic charge."
A compiled prompt looks unusual. It may seem redundant. It may look inefficient. That is by design. Intentional redundancy is semantic reinforcement — the same tone on multiple frequencies, making the field more stable, not louder.
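As a rough sketch of what such a compile step could look like, assuming intent arrives as plain lists of values and principles (the Intent and compile_field names are invented here, not part of the method): each value is rendered twice, in two registers, which is the intentional redundancy described above.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """Hypothetical input shape: the intent that goes into the compiler."""
    values: list[str]
    principles: list[str]
    behaviour: str

def compile_field(intent: Intent) -> str:
    """Sketch of the compile step: intent in, field definition out.

    Rendering each value twice is the intentional redundancy described
    above: the same tone restated on a second frequency.
    """
    lines = [f"Centre of gravity: {intent.behaviour}."]
    for value in intent.values:
        lines.append(f"What amplifies: {value}.")
        lines.append(f"When in doubt, lean toward {value.lower()}.")  # reinforcement
    for principle in intent.principles:
        lines.append(f"What resists: anything that works against '{principle}'.")
    return "\n".join(lines)

print(compile_field(Intent(
    values=["Curiosity", "Patience"],
    principles=["the user's own insight comes first"],
    behaviour="a coaching conversation that holds space",
)))
```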
When a prompt is semantically saturated, adding more concepts changes nothing. That's not a limitation. That's the signal that the field is complete.
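That claim suggests a crude test: add one more concept and measure whether the model's behaviour actually moves. The sketch below assumes any respond callable that maps a prompt to a model response; the bag-of-words similarity is only a stand-in to keep the example self-contained.

```python
import math
from collections import Counter
from typing import Callable

def bow(text: str) -> Counter:
    # Stand-in similarity basis; a real check would use an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def saturation(respond: Callable[[str], str], prompt: str, concept: str) -> float:
    """Compare responses with and without one added concept.

    A similarity near 1.0 means the addition changed nothing, i.e. the
    field is saturated in the sense described above.
    """
    before = respond(prompt)
    after = respond(prompt + "\n" + concept)
    return cosine(bow(before), bow(after))
```

In practice, respond would wrap whatever chat endpoint you use; the only signal that matters is whether the output stops moving.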
04 — Method
The capability is already there. Every LLM has absorbed how interviews work. How coaching works. How a Socratic conversation feels. How to hold space. How to push back gently. How to recognize when someone is overwhelmed.
It's in the training data — everywhere, implicit, deep. You don't teach it. You prime it.
Behaviour Priming — how it works
A primed prompt works across LLMs not because it was calibrated for each one, but because it activates something already present in all of them. The field doesn't teach it. It unlocks it.
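As an illustration (the wording below is invented, not a prompt from the method), a primed prompt names patterns the model has already absorbed instead of spelling them out rule by rule:

```python
# Hypothetical priming prompt: it names absorbed patterns rather than
# specifying behaviour step by step.
PRIMED = (
    "This is a coaching conversation. You hold space, ask before you advise, "
    "and notice when the other person is overwhelmed. You already know how "
    "this register feels; stay in it."
)

# Because it activates patterns shared across training data rather than
# model-specific quirks, the same string travels unchanged between models.
# `model_call` is a placeholder for any chat endpoint.
def run(model_call, user_message: str) -> str:
    return model_call(system=PRIMED, user=user_message)
```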
This is not a framework. It's not a methodology to adopt. It's a shift in how you see the thing. Once you see it, the prompts write themselves differently. The results hold differently. The prompts become stable.
The method exists. It has a name.