Interesting to see this formalized - I've ended up in a similar place through trial and error.
I've borrowed patterns from software-dev workflows: a repo with Markdown files for recurring context - company guidelines, product specs, tech decisions, user personas. It saves re-pasting the same background into every prompt.
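A minimal sketch of what such a context repo could look like - the directory and file names here are purely illustrative, not the actual layout:

```shell
# Illustrative context repo: one folder per category of recurring context.
mkdir -p context-repo/guidelines context-repo/specs context-repo/decisions context-repo/personas

# Each MD file holds one reusable chunk of background (contents are made up).
printf '# Company guidelines\n- Tone: concise and direct\n' > context-repo/guidelines/company.md
printf '# Persona: admin user\n- Manages org settings and billing\n' > context-repo/personas/admin.md

# Quick check of the structure.
ls context-repo
```

Keeping each category in its own folder makes it easy to pull in only the files a given task needs.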
My current workflow has two phases:

1. **BRD prep** - gathering requirements, constraints, and acceptance criteria into a business requirements document
2. **Execution** - generating the actual deliverable with full context

Both phases run through Claude Code; the repo structure lets me reference context files directly.
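The execution phase can be sketched roughly like this - bundling the repo's context files into a single prompt before the run. All paths, file contents, and the task wording are hypothetical, just to show the shape of it:

```shell
# Illustrative setup: a couple of context files (contents are made up).
mkdir -p ctx/specs ctx/decisions
echo '# Product spec' > ctx/specs/product.md
echo '# Tech decisions' > ctx/decisions/stack.md

# Concatenate the recurring context, then append the task for this run.
cat ctx/specs/*.md ctx/decisions/*.md > bundle.md
printf '\n## Task\nGenerate the deliverable using the context above.\n' >> bundle.md

# bundle.md is now the full-context prompt to hand to the tool.
wc -l < bundle.md
```

In practice the same effect can be had by pointing the tool at the files directly instead of concatenating, but the bundle makes it obvious exactly what context a given run saw.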
The main benefit is outputs that need minimal editing - though they still require review and adjustment.
Worth noting: this setup would make our corp infosec team break out in hives. But hey, what they don't know won't hurt them...
Fully agree - the clear shift is from prompt engineering to context engineering. Kinda interesting to see OpenAI catching up with its new Prompt Optimizer in the developer platform (https://platform.openai.com/chat), though Anthropic has had a similar feature in its console for years.