Stop Prompting. Start Briefing.
"Thinking" ChatGPT class models are landing in more hands, yet results still vary. The difference is not magic; it is context.
The short version
Few word prompts were cute. Now they burn cycles. As models get better at reasoning, they reward you for tightening the brief. Give them context, and they stop drafting fluff and start producing decision-ready work.
What "context" actually means
“People associate prompts with short task descriptions you’d give an LLM in your day to day use. When in every industrial strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step. Science because doing this right involves task descriptions and explanations, few shot examples, RAG, related (possibly multimodal) data, tools, state and history, compacting […] Doing this well is highly non trivial. And art because of the guiding intuition around LLM psychology of people spirits. […]” — Andrej Karpathy.
By context, I do not mean paragraphs of lore. I mean the minimum scaffolding that lets a model deliver the thing you need, the way you need it.
Intent: The decision you are trying to make or the job to be done.
Audience: Who it is for and where it will land (board, customers, sales, internal Slack).
Scope: In: What must be covered.
Scope: Out: What must be excluded.
Deliverable: Type, format, length, and channel (one-pager, 500 words, deck outline, email).
Constraints: Tone or voice, quality bar, resources or timebox, tools it may assume.
Evidence: Sources, attachments, examples to mirror (short excerpts or bullets, not just titles).
Acceptance: Success criteria and reviewers ("passes PM and CEO in one read", "no superlatives").
Risks and assumptions: What is uncertain, what not to contradict.
Timeline: Deadline, milestones, and timezone.
Privacy: What to redact and any consent constraints.
Write this as a tight brief, not a novel. Two screens, tops.
Why now
Here is the pattern I observe in my own work: the more I refine the context, the fewer back-and-forths I need, and the more "ready to use" the initial output becomes. This is not a technical claim. It is an operator's observation. When your prompt encodes intent, constraints, and acceptance criteria, you get less speculation and more judgment.
Founders are allergic to latency. Every extra turn is another meeting that slips. Context front-loads the thinking a human would otherwise ask you for. It is the difference between a junior "having a go" and a senior bringing a proposal with options, trade-offs, and a clear recommendation.
Treat the model like a sharp operator who missed the last meeting. If you give them the room temperature, they will guess. If you give them the recipe and the diners' allergies, they will plate dinner.
Three founder grade takeaways
From tasking to deciding
Rich context turns "do it" into "decide it".
You do not need ten drafts. You need a recommended path with risks and a rationale. Ask for a decision, define what "green" looks like, and require the trade-offs. That pushes the model past description into choice. You still apply judgment, but you start from an 80 per cent proposal, not a blank page.
Scope beats fluff
Say what is in, what is out, and how you will judge success.
Most generic output is on us. When the scope is unclear, the model fills it in. When the scope is clear, the output trims itself. Write "include X, Y, Z" and "exclude A, B". Add acceptance criteria like "one page, plain English, no acronyms without a gloss". The voice tightens fast.
Evidence over vibes
Give the model ground truth to stand on.
If you want your voice, show your voice. If you want positioning, give it the three facts it cannot contradict. Paste short quotes from interviews. Include a two-line style sample. The model still writes, but now it is anchored to reality rather than a vibe you hoped it would infer.
Before and after: two founder use cases
Example 1: Product launch announcement
Before
"Write a launch post for our new feature."
After
Goal: Publish a product launch post that gets existing customers to try "Routes" within 7 days.
Audience and where: Existing power users on email and LinkedIn. Secondary audience is industry analysts.
Positioning:
- "Routes" = faster, safer handoffs for ops teams.
- Emphasise reliability and fewer manual steps.
- Do not claim new categories or market leadership.
Must include facts:
- Beta with 42 customers over 3 months. Median handoff time cut from 12 min to 4 min.
- Available today on Pro and Enterprise. EU rollout on 1 October.
Non goals:
- No forced migration messaging. No roadmap beyond the next release.
Deliverable:
- Substack post of 600 to 700 words.
- LinkedIn summary of 120 words.
Tone and voice: Confident, plain English, British spelling, no superlatives, no fluff.
Evidence to anchor: Two short customer anecdotes about speed and fewer retries. Anonymised.
Acceptance criteria:
- Provide 3 headline options and 3 email subject lines.
- Include one 5 bullet section: What is new, Why it matters, Who it is for, How to try, What is next.
- Include a 3 question FAQ.
- No jargon without a one line gloss.
- Reads cleanly in one pass for PM and CEO.
Example 2: From customer interviews to a Jobs to be Done brief
Before
"Summarise 5 interviews and create a jobs to be done brief."
After
Intent: Inform a go or no go on automated invoice matching in Q4.
Stakeholders: PM owner. CEO final decision. Finance lead reviewer.
Deliverable:
- One page JTBD brief with a simple table.
- 3 core JTBD statements in first person, each with trigger and context.
- Pains and Gains per job in bullets.
- Opportunity size as a range with assumptions.
- Clear recommendation: Go, No Go, or Explore. Include top 3 risks.
Evidence: Use the 5 interview notes. Paraphrase. Include at least 1 must cite quote per job.
Constraints: Plain English. British spelling. Maximum 450 words. Include the table header: Job, Pains, Gains.
Acceptance criteria:
- Recommendation stated in the first 2 lines with rationale.
- Unknowns listed with a 2 week validation plan.
- Any contradictions in interviews flagged explicitly.
- Table present and readable without scrolling on desktop.
Risks and assumptions: Customers differ in ERP maturity. Do not assume OCR quality beyond what is in notes.
How to set context without slowing down
Use a template. The first minute you spend writing intent, scope, and acceptance criteria saves ten minutes of ping pong later.
Bias to constraints. If something would cause you to reject the work, say it upfront. "No superlatives", "one screen", "one chart, no colours".
Attach truth. Include the key metrics, a product doc, or three bullets from sales calls. Do not make the model guess.
Request options, not volume. Ask for two or three strong alternatives with trade-offs, not five fluffy variations.
Keep it human. Specify tone and audience like you would brief a senior operator. The model will match the register you set.
Common failure modes and fixes
The answer is generic. Fix: Add scope in or scope out and one acceptance criterion.
It argues with facts. Fix: Paste the facts and mark them "cannot contradict".
It is too long. Fix: Set a length cap and a target channel.
It is polite but useless. Fix: Ask for a recommendation with risks and a next step you could actually take tomorrow.
It is missing your voice. Fix: Paste a short sample of your writing and say "mirror this cadence".
Bonus: a tool I use
If you want a faster way to capture context without writing from scratch, I've built Your Personal Context Engineer, a simple helper that guides you through the brief and outputs a clean specification you can paste into any model. Use it here:
https://chatgpt.com/g/g-6898371603b08191b9d49a655b2eee95-your-personal-context-engineer
Copy-paste context template
# Copy or Edit
Goal:
Audience & stakeholders:
Scope: In:
Scope: Out:
Deliverable: # type, format, length, channel
Constraints: # tone or voice, quality bar, resources or timebox, tools
Evidence: # sources, attachments, examples
Acceptance: # criteria, reviewers
Risks & assumptions:
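If you keep the template in a file, the assembly step can be automated. Here is a minimal sketch, assuming field names that mirror the template above; `build_brief` is an illustrative helper, not a published library. It produces a paste-ready brief and flags any field you left empty:

```python
# Illustrative sketch: assemble the context template into one brief
# and report the fields left blank. Field names mirror the template above.

TEMPLATE_FIELDS = [
    "Goal",
    "Audience & stakeholders",
    "Scope: In",
    "Scope: Out",
    "Deliverable",
    "Constraints",
    "Evidence",
    "Acceptance",
    "Risks & assumptions",
]

def build_brief(values: dict) -> tuple:
    """Return (brief_text, missing_fields) for a dict of filled fields."""
    lines, missing = [], []
    for field in TEMPLATE_FIELDS:
        value = values.get(field, "").strip()
        if not value:
            missing.append(field)
        lines.append(f"{field}: {value}")
    return "\n".join(lines), missing
```

Calling `build_brief({"Goal": "Launch post for Routes"})` returns the rendered brief plus the eight fields you still owe the model, which is a cheap forcing function before you hit send.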
Few word prompts are over. Context is the new leverage.
Interesting to see this formalized - I've ended up in a similar place through trial and error.
I've borrowed patterns from software dev workflows: a repo with Markdown files for recurring context (company guidelines, product specs, tech decisions, user personas). It avoids the copy-paste repetition.
My current workflow has two phases:
BRD prep - gathering requirements, constraints, acceptance criteria
Execution - generating the actual deliverable with full context
Both phases run through Claude Code. The repo structure lets me reference context files directly.
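The repo pattern above can be sketched in a few lines of Python. This is a rough illustration under assumed names (a `context/` directory of Markdown files), not the commenter's actual setup:

```python
from pathlib import Path

# Sketch of the repo pattern: recurring context lives in Markdown files
# (directory and file names are illustrative) and gets concatenated
# ahead of the task description before it is sent to a model.
CONTEXT_DIR = Path("context")  # e.g. guidelines.md, personas.md

def assemble_prompt(task: str) -> str:
    """Prepend every context file, in name order, to the task."""
    sections = [
        f"## {md.stem}\n{md.read_text()}"
        for md in sorted(CONTEXT_DIR.glob("*.md"))
    ]
    return "\n\n".join(sections + [f"## task\n{task}"])
```

Usage: `assemble_prompt("Draft the launch post")` returns one string with all recurring context up front, which is essentially what referencing the files through a coding agent does for you.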
The main benefit: outputs that need minimal editing. Still requires review and adjustment.
Worth noting: this setup would make our corp infosec team break out in hives. But hey, what they don't know won't hurt them...
Fully agree, the clear shift is from prompt engineering to context engineering. Kinda interesting to see OpenAI catching up with its new Prompt Optimizer in the developer platform (https://platform.openai.com/chat), though Anthropic has had a similar feature in its console for years.