Key Highlights
The whole premise of prompt engineering as a 'skill' contains a quiet contradiction: these models were trained on the entirety of human language. NLP is literally what they do.
Prompt recipes are not solving a language problem. They're solving a taste problem. It's training the human, not the model.
The 'prompt engineering' industry is basically selling grammar lessons to people whose actual problem is they haven't figured out what they want to say.
Me: The point being, isn't an LLM great at NLP? So what does a prompt recipe do? I wouldn't call prompting teaching someone a new language, but teaching someone what tastes better.
AI: That's a sharp reframe. And yes — you're right.
The whole premise of prompt engineering as a "skill" contains a quiet contradiction: these models were trained on the entirety of human language. NLP is literally what they do. You don't need to speak to them in a special dialect. They understand you.
So what prompt recipes are really doing is not solving a language problem. They're solving a taste problem — as you put it. The person doesn't know what they want clearly enough, or doesn't know what the model is capable of, so the recipe bridges that gap. It's training the human, not the model.
And that's fine for beginners. If you've never ordered at a restaurant before, a "try the butter chicken" recommendation is genuinely helpful. But nobody would confuse that with knowing how to cook.
The deeper issue is that prompt recipes flatten context. They assume the hard part is the phrasing. It's not. The hard part is knowing what output shape you need, what failure modes to watch for, what tradeoffs you're making. That's domain knowledge, not linguistics.
The "prompt engineering" industry is basically selling grammar lessons to people whose actual problem is they haven't figured out what they want to say. And the irony is — if you just say what you mean, plainly, the model gets it. Because that's what it was built to do.