1 YouTube Video → 21 Posts: The Faceless Repurposing Pipeline (Descript + Make.com)

Friday tactical newsletter — 1,200 words. Read time: 5 minutes.

I shipped a 12-minute faceless YouTube video on Wednesday afternoon. By Wednesday evening, that single video had been chopped into 3 vertical Shorts, 5 LinkedIn carousels, 7 Twitter posts (4 standalones plus one full thread, counted per tweet), 4 Pinterest pins, and 2 newsletter snippets — 21 derivative pieces. Total hands-on time after the master edit was done: 28 minutes.

I’m telling you this because the question I get most from solo creators is some version of “how do you keep up with all those platforms without a team?” The answer is unsexy: you don’t keep up with them. You publish once, then let a Make.com scenario fan the artifacts out. The trick is the artifact that sits in the middle — the Descript transcript — and the scenario routing decisions you make around it.

Here’s the exact pipeline. No theory. Steal it.

The 5-step repurposing pipeline (and why each step matters)

Step 1 — Cut the master video in Descript with chapter markers, not just an edit

This is where 90% of solo creators leave money on the table. They edit the video, hit export, and move on. You want one extra pass: open the Descript Composition Sidebar and drop chapter markers at every self-contained idea in your video. A good rule of thumb: if you can imagine someone screenshotting that 30-second segment and posting it as a standalone, mark it.

For a 12-minute video I aim for 8-10 chapter markers. Each one becomes a candidate Short, a candidate Twitter post, and a candidate carousel slide group. Without these markers, the rest of the pipeline has nothing to grip onto. With them, every downstream step becomes a deterministic lookup instead of a creative decision.

If you haven’t picked an editor yet, I broke down the trade-off in Descript vs CapCut for faceless YouTube. Short version: CapCut is faster for cuts, but Descript wins this specific workflow because the transcript is the timeline.

Step 2 — Export the transcript as JSON, not as a .srt or .txt

Most creators export their transcript as a subtitle file or plain text. Wrong format for automation. In Descript, export to JSON with timestamps. You now have a structured object: every word has a start time, end time, and speaker label. Chapter markers carry over with their boundaries.

That JSON is the only artifact Make.com needs to do everything else. Push it to a Google Drive folder watched by your scenario, or — better — POST it directly to a Make.com webhook from the Descript shortcuts panel. The latter shaves 60 seconds off every cycle.
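Before the webhook fires, it's worth sanity-checking that the chapter markers actually carried over. Here's a minimal Python sketch assuming a simplified export shape — the field names are my placeholders, not Descript's documented schema, so inspect your own export first:

```python
# Hypothetical shape of a Descript transcript export. Field names are
# placeholders; inspect your own JSON export before wiring this up.
sample = {
    "chapters": [
        {"title": "Why repurposing fails", "start_ms": 0, "end_ms": 42000},
        {"title": "The transcript is the timeline", "start_ms": 42000, "end_ms": 95000},
    ],
    "words": [
        {"text": "Most", "start_ms": 120, "end_ms": 380, "speaker": "S1"},
        {"text": "creators", "start_ms": 380, "end_ms": 790, "speaker": "S1"},
    ],
}

def validate_export(doc):
    """Fail fast if chapter markers didn't carry over: the entire
    fanout is downstream of these boundaries."""
    if not doc.get("chapters"):
        raise ValueError("no chapter markers in export; re-export from Descript")
    for ch in doc["chapters"]:
        if ch["start_ms"] >= ch["end_ms"]:
            raise ValueError(f"bad boundaries in chapter: {ch['title']}")
    return len(doc["chapters"])

print(validate_export(sample))  # count of candidate chapter markers
```

From here, `requests.post(webhook_url, json=doc)` is all it takes to hand the object to the Make.com webhook trigger (with `webhook_url` being whatever address your scenario's trigger gives you).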

Step 3 — Build the Make.com fanout scenario (8 modules)

Here’s the scenario that does the work. I’ll describe it module by module so you can rebuild it in 20 minutes:

  1. Webhook trigger — receives the Descript JSON.
  2. Iterator — splits the JSON by chapter markers. Each iteration = one self-contained idea + its timecodes.
  3. OpenAI module #1 (router) — given the chapter title and transcript chunk, classifies which platforms it fits: Short, Tweet, LinkedIn carousel, Pinterest pin, newsletter pull-quote, or “skip.” This is the only place I let the model make a judgment call. Keep the prompt under 200 tokens.
  4. Router — splits the flow based on the classification. Each branch is a separate transformation.
  5. OpenAI module #2 (rewriter) — for each platform branch, rewrites the chunk in that platform’s voice. Twitter wants compression. LinkedIn wants a hook + payoff structure. Pinterest wants a benefit-driven title. Use few-shot examples in the prompt — three examples beat a 1,000-word style guide.
  6. Descript API call (clip generator) — for Short candidates only, calls Descript’s clip endpoint with the timecode boundaries to render a vertical 9:16 export. This module is in beta as of April 2026 — if it errors, fall back to a Drive folder and clip manually in the Descript UI.
  7. Buffer / Publer / Make’s native modules — pushes drafts to each platform’s queue. Drafts, not auto-publishes. You always want a human eye on the final post for at least the first 90 days.
  8. Notion logger — writes a row per derivative artifact: source video, platform, scheduled time, draft URL. This is your single source of truth when something doesn’t post correctly.
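If the module chain feels abstract, the first two hops are easy to sketch in plain Python. The field names below are my placeholders for the Descript JSON, and `route` is a deterministic stand-in for OpenAI module #1 — in the live scenario the model makes the call, but a length-based rule shows the shape of the decision:

```python
# Plain-Python sketch of the Iterator (module 2) plus a stand-in for
# the router classification (module 3). Field names are placeholders.
doc = {
    "chapters": [
        {"title": "Why repurposing fails", "start_ms": 0, "end_ms": 52000},
        {"title": "Export as JSON", "start_ms": 52000, "end_ms": 64000},
    ],
    "words": [
        {"text": "Most", "start_ms": 120, "end_ms": 380},
        {"text": "creators", "start_ms": 380, "end_ms": 790},
        {"text": "export", "start_ms": 52100, "end_ms": 52600},
    ],
}

def split_by_chapters(doc):
    """Module 2: one bundle per chapter marker, carrying the title,
    the timecode boundaries, and the transcript chunk inside them."""
    return [
        {
            "title": ch["title"],
            "start_ms": ch["start_ms"],
            "end_ms": ch["end_ms"],
            "text": " ".join(
                w["text"] for w in doc["words"]
                if ch["start_ms"] <= w["start_ms"] < ch["end_ms"]
            ),
        }
        for ch in doc["chapters"]
    ]

def route(bundle):
    """Deterministic stand-in for OpenAI module #1. The real router is
    a model judgment call; this version only checks segment length."""
    seconds = (bundle["end_ms"] - bundle["start_ms"]) / 1000
    if seconds < 10:
        return ["skip"]
    platforms = ["tweet"]
    if seconds <= 60:
        platforms.append("short")     # fits a vertical clip
    if seconds >= 45:
        platforms.append("carousel")  # enough material for slides
    return platforms

for bundle in split_by_chapters(doc):
    print(bundle["title"], "->", route(bundle))
```

The point of the stand-in: everything downstream of module 3 is a mechanical transformation, which is why the router is the only place a judgment call belongs.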

If you want a refresher on the underlying Make.com pattern this scenario uses (Router + Filter + Error Handler), I broke it down in the Make.com trick that triples your productivity.

Step 4 — Voice the Shorts using your existing TTS routing

For the vertical Shorts that come out of Descript, you have two options: reuse the original video’s audio (free, fastest) or generate a punchier, shorter narration with TTS.

I do the latter for hooks specifically — the first 3 seconds of every Short. ElevenLabs for the hook, original audio for the rest. That hook re-record buys me roughly 18-22% better completion rates compared to lifting the original audio verbatim. If you’re not using TTS at all yet, the cost ladder I worked out in ElevenLabs vs Google TTS for faceless creators shows where the breakpoints are.
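The splice itself is mechanical: re-recorded hook first, then the original audio minus the stretch the hook replaces. A minimal sketch using Python's stdlib `wave` module — the file paths are hypothetical, and it assumes the ElevenLabs hook export and the original clip share the same channel count, sample width, and sample rate:

```python
import wave

def splice(hook_path, body_path, out_path, body_skip_s=3.0):
    """Prepend the re-recorded hook, then append the original audio
    minus its first `body_skip_s` seconds (the stretch the hook
    replaces). Assumes both WAV files share the same format."""
    with wave.open(hook_path, "rb") as hook:
        params = hook.getparams()
        hook_frames = hook.readframes(hook.getnframes())
    with wave.open(body_path, "rb") as body:
        # nchannels, sampwidth, framerate must match to concatenate raw frames
        if body.getparams()[:3] != params[:3]:
            raise ValueError("hook and body audio formats differ")
        skip = int(body_skip_s * body.getframerate())
        body.setpos(skip)
        body_frames = body.readframes(body.getnframes() - skip)
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        out.writeframes(hook_frames + body_frames)
```

In practice you'd call something like `splice("hook_elevenlabs.wav", "short_original.wav", "short_final.wav")` per Short; if your clips come out as AAC/MP4 audio rather than WAV, swap in ffmpeg or pydub for the same cut-and-concatenate move.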

Step 5 — Lock down credentials before you scale

Boring but load-bearing. The minute you wire up a Make.com scenario that touches Descript + OpenAI + Buffer + Notion + ElevenLabs + Google Drive + a posting tool, you have 7 API keys floating around. The day you bring on a VA to review the drafts, that count doubles.

Use a password manager with shared vaults so you can rotate keys without rebuilding the whole scenario. NordPass is what I use for this — the team plan lets a VA see only the buffer/scheduling vault while the Make.com keys stay in your owner-only vault. (That’s an affiliate link; it costs you nothing extra and supports the newsletter.)

The numbers I’m actually getting

Three months into running this on the StackCraft test channel, here’s what one video → 21 derivatives looks like in practice:

  • Total upstream effort: 6-8 hours to research, script, record, and edit the master video.
  • Repurposing time per video: 28-35 minutes of human review (down from ~4 hours when I did it manually).
  • Reach lift: 3.4x more total impressions across all platforms vs. publishing the YouTube video alone.
  • Best-performing artifact: the Twitter standalones, not the thread. Threads got more replies; standalones got more click-throughs to the YouTube video.

That last point matters: if your goal is YouTube subscribers, lean into standalones. If your goal is authority and inbound DMs, run the thread.

The one thing that breaks this pipeline

Bad chapter markers. If your markers don’t sit at clean idea boundaries, the AI router in Step 3 mis-classifies, the rewriter (module 5) produces bland Frankenstein copy, and you end up rewriting everything by hand anyway. Spend the extra 4 minutes in Descript getting the markers right. The whole automation is downstream of that one decision.

Quick FAQ

Do I need Descript Pro for this? Hobbyist tier ($16/mo) is enough for the JSON export and chapter markers. You only need Pro if your videos exceed 10 hours/month total.

Can I swap CapCut for Descript? Not for this exact workflow. CapCut doesn’t expose transcript JSON the same way. You’d need a separate Whisper transcription step, which adds 90 seconds per video and a cost line item.
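If you take the CapCut + Whisper route anyway, the extra step is normalizing Whisper's word timings (reported in seconds) into the millisecond-based per-word shape the rest of the scenario keys off. A sketch — the target field names are my own placeholders, and the commented API call is roughly what the OpenAI SDK request looks like:

```python
# Whisper via OpenAI's API returns word timings in seconds when asked
# for word-level granularity. The call looks roughly like:
#   client.audio.transcriptions.create(
#       model="whisper-1",
#       file=open("video.mp4", "rb"),
#       response_format="verbose_json",
#       timestamp_granularities=["word"],
#   )
def whisper_words_to_pipeline(words, speaker="S1"):
    """Convert Whisper word timings (seconds) to the millisecond-based
    shape used in the fanout scenario. Target field names are
    placeholders; match them to whatever your scenario expects."""
    return [
        {
            "text": w["word"],
            "start_ms": int(w["start"] * 1000),
            "end_ms": int(w["end"] * 1000),
            "speaker": speaker,  # Whisper doesn't diarize; assume one voice
        }
        for w in words
    ]

sample = [{"word": "Most", "start": 0.12, "end": 0.38}]
print(whisper_words_to_pipeline(sample))
```

You'd still need to add chapter boundaries by hand (or by a second model pass), which is exactly the step Descript gives you for free.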

What about Instagram Reels? Reels are the easiest add-on — they take the same vertical clips as YouTube Shorts. Just clone the Buffer module in your scenario and route Shorts to both queues.

Next Monday

Monday’s pillar drops a deeper look at SEO content strategy for solopreneurs — how to go from zero to topical authority in 6 months without a budget for backlinks. If you’re not on the Substack yet, subscribe here — Friday tactics like this one and Monday pillars are sent to email subscribers first.

See you Monday.

— Sébastien
StackCraft.ai

