August 8, 2025
Chain Prompting
AI Automation
Most people ask AI for too much at once.
Then they’re surprised when it gives them mush.
You need to be chain prompting.
Like any valuable, complicated task, the work should be broken into smaller, realistically achievable moves. Yesterday I commented on a post from Peter Yang about this, and a few folks pinged me, so here’s a quick, real example of what we’re doing at LipFlix.
The Problem
Customer discovery calls are full of gold, but asking an LLM to “summarize the call and give me all the insights and action items” is how you get oatmeal. With today’s models, a single mega-prompt tends to blur facts, mix up speakers, and flatten nuance.
So we chain it.
The Simple Chain We Use (LipFlix)
We run each call transcript through three small, specific prompts—each with a single job, Mom Test–style (i.e., focus on what people actually did, not what they might do).
1) What they made & why
Extract the exact videos they created.
Capture the motivation in their own words.
2) Where they shared it & reactions
List channels (TikTok, IG, text to a friend, etc.).
Pull reactions/metrics (comments, likes, “my sister cried,” etc.).
3) Friction & pricing
Any issues during creation (technical or UX).
Feelings at each painful moment.
Pricing feedback and objections.
Then we run a short synthesis step that stitches those three outputs into one compact brief with: Opportunities, UX papercuts, Objections, and Follow-ups. That summary gets posted to Slack for the team to digest.
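If you’d rather see it as code than as a no-code flow, here’s a minimal sketch of the same chain in Python using the OpenAI SDK. The model name, prompt wording, and file path are placeholders, not our exact setup:

```python
# pip install openai
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def run_step(instructions: str, transcript: str) -> dict:
    """One link in the chain: a single, narrow extraction job, JSON out."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)

transcript = open("call_transcript.txt").read()  # placeholder path

# Three small prompts, each with a single job (wording is illustrative).
made = run_step(
    "Extract the exact videos the customer created and their motivation, "
    "quoted in their own words. Return JSON.", transcript)
shared = run_step(
    "List every channel where they shared the video and every reaction or "
    "metric mentioned. Return JSON.", transcript)
friction = run_step(
    "Extract creation friction (technical or UX), feelings at each painful "
    "moment, and pricing feedback. Return JSON.", transcript)

# Synthesis step: stitch the three outputs into one compact brief.
brief = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Merge these into a brief with Opportunities, UX papercuts, "
                   "Objections, and Follow-ups. Keep direct quotes.\n\n"
                   + json.dumps({"made": made, "shared": shared, "friction": friction}),
    }],
).choices[0].message.content
print(brief)
```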
It’s a tiny chain—but it’s incredibly useful.
Why This Beats a One-Shot Prompt
Sharper extraction. Each step cares about one thing, so the model doesn’t wander.
Traceability. You can keep citations/quotes per step, which helps prevent “vibes-based” summaries.
Easier iteration. If “reactions” are weak, tune Step 2 without touching the rest.
Better evals. On our own data, the chained version consistently produced more accurate facts and clearer next steps than the single-call “do-it-all” prompt. (Yes, we ran lightweight evals; a sketch of the idea follows.)
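The harness itself is out of scope for this post, but the idea is simple to sketch: hand-label the facts each call should surface, then score each pipeline on how many it recovers. `run_one_shot` and `run_chain` below are hypothetical stand-ins for the two pipelines:

```python
# Hypothetical lightweight eval: score summaries against hand-labeled facts.
labeled_calls = [
    {
        "transcript_id": "call_014",
        "expected_facts": [
            "made a birthday video for her sister",
            "shared it over text",
            "got confused by the export button",
        ],
    },
    # ...more hand-labeled calls
]

def recall_score(summary: str, expected_facts: list[str]) -> float:
    """Fraction of expected facts present in the summary. Crude substring
    matching; swapping in a judge-model check is a common upgrade."""
    hits = sum(1 for fact in expected_facts if fact.lower() in summary.lower())
    return hits / len(expected_facts)

for call in labeled_calls:
    one_shot = run_one_shot(call["transcript_id"])  # hypothetical: single mega-prompt
    chained = run_chain(call["transcript_id"])      # hypothetical: 3-step chain + synth
    print(call["transcript_id"],
          f"one-shot: {recall_score(one_shot, call['expected_facts']):.2f}",
          f"chained: {recall_score(chained, call['expected_facts']):.2f}")
```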
How We Automate It (Lindy.AI)
Trigger: New transcript file lands in our folder.
Step A: Prompt 1 → What they made & why (JSON out; example below).
Step B: Prompt 2 → Where & reactions (JSON out).
Step C: Prompt 3 → Friction & pricing (JSON out).
Step D: Synthesizer → merges the three JSON blobs into a tight brief with bullets and quotes.
Step E: Post to Slack (#insights) and drop a copy in our notes.
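For context, “JSON out” just means each step returns a small structured blob the synthesizer can merge mechanically. Field names here are illustrative, not our actual schema; Step A’s output looks something like:

```json
{
  "videos_created": [
    {
      "what": "birthday montage for her sister",
      "motivation_quote": "I wanted her to feel like the whole family was there."
    }
  ]
}
```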
Total setup time: not long. Value: big.
Copy-Paste Starters
Use these as-is or tweak to your voice. Keep inputs small and explicit.
Prompt 1 — What they created & why
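A sketch reconstructed from the step described above; the wording is illustrative, and `{{transcript}}` is whatever placeholder your tool uses:

```
You are analyzing a customer discovery call transcript.
Your only job: identify the exact videos this customer created, and why.
For each video, extract:
- what it was (subject, occasion, style)
- the motivation, quoted in the customer's own words
Stick to what they actually did, not what they say they might do.
Return JSON: {"videos": [{"what": "...", "motivation_quote": "..."}]}

Transcript:
{{transcript}}
```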
Prompt 2 — Where & reactions
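Same shape, narrowed to distribution (again, a sketch):

```
You are analyzing a customer discovery call transcript.
Your only job: find where the customer shared their video and how people reacted.
Extract:
- every channel mentioned (TikTok, IG, text to a friend, etc.)
- every reaction or metric, quoted verbatim (comments, likes, "my sister cried", etc.)
Ignore everything unrelated to sharing and reactions.
Return JSON: {"shares": [{"channel": "...", "reactions": ["..."]}]}

Transcript:
{{transcript}}
```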
Prompt 3 — Friction & pricing
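And the friction/pricing pass (illustrative wording):

```
You are analyzing a customer discovery call transcript.
Your only job: friction and pricing. Extract:
- every issue during creation (technical or UX), with a direct quote
- how the customer felt at each painful moment
- any pricing feedback or objections, in their own words
Return JSON:
{"friction": [{"issue": "...", "quote": "...", "feeling": "..."}],
 "pricing": [{"feedback": "...", "quote": "..."}]}

Transcript:
{{transcript}}
```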
Synthesizer — Actionable brief
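Finally, the synthesizer takes the three JSON blobs instead of the raw transcript (placeholders are whatever your tool injects):

```
You will receive three JSON blobs extracted from the same customer call:
(1) what they made & why, (2) where they shared & reactions, (3) friction & pricing.
Merge them into one compact brief with four sections:
Opportunities, UX papercuts, Objections, Follow-ups.
- Use bullets; keep it tight
- Preserve direct quotes exactly; never paraphrase a quote
- Flag anything the blobs contradict each other on

JSON blobs:
{{step_1_json}}
{{step_2_json}}
{{step_3_json}}
```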
A Few Tips
Keep steps tiny. If a step has “and,” it’s probably two steps.
Return structured output. JSON forces clarity and simplifies the synth step.
Preserve quotes. They’re gold for roadmap debates.
Tune one link at a time. Don’t refactor the whole chain when one step is noisy.
The Ask
I promise you: with today’s models, one LLM call to handle all of the above produces lower-quality output. Chain prompting wins.
Are you also learning chain prompting or building frameworks like this? What’s working for you? 🔗
Subscribe to our newsletter
Want more like this straight to your inbox? Want occasional free videos and PDFs and fun stuff? If so, sign up below. If not, I won't be mad, just disappointed... 😅