Session 2.9: Turn your work into a PowerPoint deck

About 60 minutes. Open a Claude Code session in ~/ai-training and hand it this guide:

Read the file at /Users/<you>/ai-training/week-9-guide.md (or wherever you saved it) and walk me through Session 2.9.
I've completed Sessions 2.1 through 2.8.

Posture: public, synthetic, or personal data only. Today’s content comes from work you’ve already done in this curriculum — the public-source memo from 2.3 and the FRED figure from 2.4. Nothing internal, nothing client-facing.


Practice task

By the end of this session you will have, in ~/ai-training/decks/<topic-slug>/:

  1. deck-spec.md — the analytical spec for the deck: who’s the audience, what decision are they making, what’s the one-sentence takeaway, what slides earn their slot.
  2. slides.json — a structured representation of the deck (title, bullets, asset paths, speaker notes per slide).
  3. make_deck.py — a Python script that turns slides.json into a real .pptx.
  4. deck.pptx — the deliverable.
  5. New ## Deck preferences section in ~/ai-training/MEMORY.md capturing the visual style you picked.

The result: a 5-slide deck made from the memo and figure you already produced earlier in the curriculum. The integration is the point — Sessions 2.3, 2.4, and 2.7 all feed into today.

A production version of this is a pptx skill — a Python wrapper around python-pptx with house defaults frozen in: a fixed font, white background, a single accent color, two figures per slide if the section has two figures and one if not. House style is iterated across multiple decks until stable, then frozen as defaults so every new deck starts at the right look. Charts on slides come from PNGs produced by 2.4-style scripts; the deck builder doesn’t generate charts, it places them.

The split matters: the deck builder is generic; the visual style is the user’s own; the charts come from the user’s own figure scripts. Three loosely-coupled pieces. Replacing any one is cheap.


Why generative slides

Most people make decks the same way: open PowerPoint, click “New Slide,” type, drag things around, repeat for an hour. The output is fine for one deck. The problem is the second deck — every visual decision is re-made from scratch, and the two decks don’t match.

Generative slides treat the deck as structured data: content lives in slides.json, style lives in the build script, and the .pptx is a build artifact you regenerate on demand.

Two consequences fall out:

  1. Iterate on content separately from layout. Change a bullet, re-render — layout stays right. Change a layout choice, re-render every deck — they all update.
  2. Reuse assets. The figure from exhibits/<your-topic-slug>/figure-v3.png is one path reference in slides.json. The 4:3 aspect ratio you picked for individual figures in 2.4 pays off here: two 4:3 figures fit side-by-side on a 16:9 slide canvas. Today’s deck cashes in that decision.
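
The aspect-ratio arithmetic behind point 2 is worth checking once. A quick sketch (the 0.7" margin matches the layout used later in this session; the 0.4" gutter is an illustrative assumption):

```python
# Two 4:3 figures side-by-side on a 16:9 canvas: do the widths work out?
CANVAS_W = 13.333                        # 16:9 slide width in inches (height is 7.5)
MARGIN = 0.7                             # left/right margin, matching the deck script
GUTTER = 0.4                             # assumed gap between the two figures

avail = CANVAS_W - 2 * MARGIN - GUTTER   # width available for the pair of figures
fig_w = avail / 2                        # each figure's width
fig_h = fig_w * 3 / 4                    # each figure's height at 4:3

print(f'each figure: {fig_w:.2f}" wide x {fig_h:.2f}" tall')
```

Each figure lands well under the 7.5" slide height, leaving room for a title above the pair — which is why the 4:3 choice in 2.4 pays off here.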

The deliverable is still a .pptx — the audience consumes it as a normal slide deck. The discipline is invisible to them and obvious to you.


Step 1: Set up python-pptx (3 minutes)

pip3 install --user python-pptx
python3 -c "from pptx import Presentation; print('ok')"

You should already have pandas and matplotlib from 2.4. If you’ve moved between machines, re-verify:

python3 -c "import pandas, matplotlib, pptx; print('ok')"

Step 2: Hand Claude this guide

From inside ~/ai-training, start claude and paste:

Read the file at /Users/<you>/ai-training/week-9-guide.md (or wherever you saved it) and walk me through the Reference Sections.
python-pptx is installed. I have memos/<slug>/memo.docx from Session 2.3 and exhibits/<your-topic-slug>/figure-v3.png from Session 2.4.

Claude takes over.


Reference Sections

Claude: walk the user through the deck-spec → slides.json → make_deck.py → pptx pipeline. Then voice-critic the speaker notes. Peer-tone. Two rules:

  1. Don’t generate charts inside the deck script. Charts are produced by the user’s existing figure scripts (Session 2.4 pattern); the deck script places them as PNGs.
  2. Voice-critic the speaker notes specifically. The notes will be read aloud — they need to sound like the user, not like generic AI presenter prose.

Design: this session integrates 2.3 (memo content), 2.4 (figures), 2.5 (preferences in MEMORY.md), 2.7 (voice-critic). It also previews 2.10’s MCP-composition pattern by treating the deck as a coordination artifact across three earlier outputs.


Confirm the setup

Before any deck work:

  1. The user has the 2.3 memo at memos/<slug>/memo.docx (or drafts/v5-fact-checked.md from 2.7).
  2. The user has the 2.4 figure at exhibits/unemployment-rate/figure-v3.png (or whatever they named it).
  3. python-pptx imports cleanly.

If the user used a different running domain in 2.3 and 2.4, that’s fine — substitute. The integration is what matters, not the specific topic.


Step A — Write deck-spec.md (10 minutes)

Same shape as 2.3’s memo-spec.md. The deck answers a different question than the memo, even when the source material is the same.

Create ~/ai-training/decks/<topic-slug>/deck-spec.md:

# Deck spec — <slug>

## Audience
Who's in the room. What do they know already. What decision are they
making. (More specific than the memo's audience — usually a smaller
group, often time-constrained.)

## The takeaway (one sentence)
The single thing the audience should leave knowing. If they remember
nothing else, they remember this.

## Slide budget
Five slides. (Or seven, if the topic genuinely needs them.)

## Slide-by-slide
1. Title slide. <Topic>. <Date>. <Author>.
2. Executive summary. The takeaway, plus 2-3 supporting bullets.
3. Chart: <which exhibit>.
4. Findings. 3 bullets, each citing back to the chart or memo.
5. Next steps / open questions. 2-3 bullets.

## Assets
- ../../exhibits/unemployment-rate/figure-v3.png

## Notes constraints
- Speaker notes are spoken aloud. Voice-check them at the end.
- No marketing language. No rhetorical questions.

Claude: build this with the user. The “five slides” budget is a constraint, not a default — push back if the user wants 12. A 12-slide deck with five real slides and seven filler is worse than a tight 5.


Step B — Write slides.json (10 minutes)

The structured form. In decks/<slug>/slides.json:

{
  "title": "U.S. Unemployment Rate — A Synthesis",
  "subtitle": "From the FOMC March 2026 working paper",
  "author": "<the user>",
  "date": "2026-05-13",
  "slides": [
    {
      "type": "title",
      "title": "U.S. Unemployment Rate — A Synthesis",
      "subtitle": "From the FOMC March 2026 working paper",
      "notes": "Quick intro: this is a synthesis of one public source. Five slides, ten minutes. Open question at the end."
    },
    {
      "type": "executive_summary",
      "title": "The takeaway",
      "bullets": [
        "Unemployment is back to pre-2020 levels but trends differently across sectors.",
        "The recovery's composition matters more than its level.",
        "Three open questions a careful reviewer would still raise."
      ],
      "notes": "Walk through one bullet per beat. Don't read the slide; expand each bullet."
    },
    {
      "type": "chart",
      "title": "U.S. unemployment rate, last 10 years",
      "image": "../../exhibits/unemployment-rate/figure-v3.png",
      "caption": "Source: FRED series UNRATE; recession shading from NBER.",
      "notes": "Point at the COVID spike. Note recovery shape. Pause for questions."
    },
    {
      "type": "findings",
      "title": "What stands out",
      "bullets": [
        "The headline rate hides sector divergence.",
        "Labor force participation lags the headline rate by ~6 months.",
        "Recession-era job losses concentrated in services; recovery led by goods."
      ],
      "notes": "Cite the memo by section for each bullet."
    },
    {
      "type": "next",
      "title": "Open questions",
      "bullets": [
        "How robust is the sector divergence to recent revisions?",
        "What does the regional breakdown say about the same trend?",
        "When the next downturn comes, which composition repeats?"
      ],
      "notes": "End with the third question. Don't summarize — leave the room with that question."
    }
  ]
}

Claude: build this with the user using their actual memo and figure. Keep slides concise — bullets are 6–10 words; speaker notes are 2–3 sentences each. Avoid the temptation to put everything in bullets and nothing in notes.
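
Before rendering, a stdlib-only sanity check on slides.json catches missing keys and broken asset paths early. A sketch — the required-key sets are assumptions inferred from the slide types above:

```python
import json
from pathlib import Path

REQUIRED = {"title"}                       # every slide needs a title
BY_TYPE = {
    "chart": {"image"},                    # chart slides also need an image path
    "executive_summary": {"bullets"},
    "findings": {"bullets"},
    "next": {"bullets"},
}

def check_slides(path):
    """Return a list of problems found in a slides.json file."""
    deck_dir = Path(path).parent
    spec = json.loads(Path(path).read_text())
    problems = []
    for i, slide in enumerate(spec.get("slides", []), 1):
        needed = REQUIRED | BY_TYPE.get(slide.get("type", ""), set())
        for key in needed - slide.keys():
            problems.append(f"slide {i}: missing '{key}'")
        # image paths are relative to the deck directory, like in make_deck.py
        if "image" in slide and not (deck_dir / slide["image"]).exists():
            problems.append(f"slide {i}: image not found: {slide['image']}")
    return problems
```

Run it before make_deck.py; an empty list means the deck is safe to render.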


Step C — Write make_deck.py (15 minutes)

The script in decks/<slug>/make_deck.py:

#!/usr/bin/env python3
"""Render slides.json into a .pptx with frozen house style."""

import json
from pathlib import Path

from pptx import Presentation
from pptx.util import Inches, Pt
from pptx.dml.color import RGBColor

HERE = Path(__file__).parent
SPEC = json.loads((HERE / "slides.json").read_text())

ACCENT = RGBColor(0x03, 0x66, 0xD6)   # blue
BODY = RGBColor(0x1A, 0x1A, 0x1A)
FONT = "Helvetica Neue"
SLIDE_W, SLIDE_H = Inches(13.333), Inches(7.5)  # 16:9


def set_text(shape, text, size, color, bold=False):
    tf = shape.text_frame
    tf.text = text
    p = tf.paragraphs[0]
    for run in p.runs:
        run.font.name = FONT
        run.font.size = Pt(size)
        run.font.color.rgb = color
        run.font.bold = bold


def add_title_slide(prs, slide):
    s = prs.slides.add_slide(prs.slide_layouts[6])  # blank
    title = s.shapes.add_textbox(Inches(0.7), Inches(2.5), Inches(11.9), Inches(1.5))
    set_text(title, slide["title"], 40, ACCENT, bold=True)
    sub = s.shapes.add_textbox(Inches(0.7), Inches(4.0), Inches(11.9), Inches(0.7))
    set_text(sub, slide["subtitle"], 22, BODY)
    foot = s.shapes.add_textbox(Inches(0.7), Inches(6.7), Inches(11.9), Inches(0.5))
    set_text(foot, f"{SPEC['author']}  /  {SPEC['date']}", 14, BODY)
    s.notes_slide.notes_text_frame.text = slide.get("notes", "")


def add_bullet_slide(prs, slide):
    s = prs.slides.add_slide(prs.slide_layouts[6])
    title = s.shapes.add_textbox(Inches(0.7), Inches(0.5), Inches(11.9), Inches(1.0))
    set_text(title, slide["title"], 28, ACCENT, bold=True)
    body = s.shapes.add_textbox(Inches(0.7), Inches(1.7), Inches(11.9), Inches(5.5))
    tf = body.text_frame
    tf.word_wrap = True
    for i, b in enumerate(slide["bullets"]):
        p = tf.paragraphs[0] if i == 0 else tf.add_paragraph()
        p.text = b
        for run in p.runs:
            run.font.name = FONT
            run.font.size = Pt(20)
            run.font.color.rgb = BODY
        p.space_after = Pt(12)
    s.notes_slide.notes_text_frame.text = slide.get("notes", "")


def add_chart_slide(prs, slide):
    s = prs.slides.add_slide(prs.slide_layouts[6])
    title = s.shapes.add_textbox(Inches(0.7), Inches(0.5), Inches(11.9), Inches(0.8))
    set_text(title, slide["title"], 24, ACCENT, bold=True)
    img_path = (HERE / slide["image"]).resolve()
    s.shapes.add_picture(str(img_path), Inches(1.7), Inches(1.6), height=Inches(5.0))
    cap = s.shapes.add_textbox(Inches(0.7), Inches(6.8), Inches(11.9), Inches(0.5))
    set_text(cap, slide.get("caption", ""), 12, BODY)
    s.notes_slide.notes_text_frame.text = slide.get("notes", "")


def main():
    prs = Presentation()
    prs.slide_width = SLIDE_W
    prs.slide_height = SLIDE_H
    for slide in SPEC["slides"]:
        if slide["type"] == "title":
            add_title_slide(prs, slide)
        elif slide["type"] == "chart":
            add_chart_slide(prs, slide)
        else:
            add_bullet_slide(prs, slide)
    out = HERE / "deck.pptx"
    prs.save(out)
    print(f"Wrote {out}")


if __name__ == "__main__":
    main()

Run:

python3 decks/<slug>/make_deck.py
open decks/<slug>/deck.pptx

The deck opens in PowerPoint (or Keynote, or LibreOffice Impress). Read it together. The structure should be: title, exec summary, chart, findings, next steps, with speaker notes on each slide. Iterate on content in the JSON; iterate on style in the script.

Claude: keep the script edits and the JSON edits separate. JSON for content, script for style. Never put content in the script.
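
A .pptx is a zip archive of XML parts, so a stdlib-only smoke check can confirm the render produced the expected slide count without opening PowerPoint. A sketch, relying on the standard ppt/slides/ layout of the OOXML package:

```python
import zipfile

def count_slides(pptx_path):
    """Count slide parts inside a .pptx (which is a zip of XML files)."""
    with zipfile.ZipFile(pptx_path) as z:
        return sum(
            1
            for name in z.namelist()
            # slide parts live at ppt/slides/slideN.xml; this skips
            # ppt/slides/_rels/ and ppt/slideLayouts/
            if name.startswith("ppt/slides/slide") and name.endswith(".xml")
        )
```

count_slides on the generated deck.pptx should equal the length of the "slides" array in slides.json.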


Step D — Voice-critic the speaker notes (5 minutes)

The notes are spoken aloud. They need to sound like the user.

/voice-check decks/<slug>/slides.json

The /voice-check from 2.7 reads against voice-profile.md. It will flag where the speaker notes drift to generic-AI presenter prose. Apply accepted flags:

Revise decks/<slug>/slides.json — only the "notes" fields — per these
voice-check flags: [list]. Then re-run make_deck.py to regenerate
deck.pptx with the voiced notes.

The speaker notes in the regenerated deck.pptx now sound like the user. When the user delivers the deck, the notes read clean when spoken aloud.


Step E — Capture deck preferences in MEMORY.md (5 minutes)

Add to ~/ai-training/MEMORY.md:

## Deck preferences

- Aspect ratio: 16:9 (slide width 13.333", height 7.5")
- Font: Helvetica Neue (sans-serif, modern)
- Accent color: #0366D6 (blue) for titles + decorative
- Body color: #1A1A1A (near-black) for prose
- Title slide: 40pt accent title, 22pt body subtitle
- Bullet slides: 28pt title, 20pt body, ~12pt space between bullets
- Chart slides: 24pt title, 5" image height, 12pt caption
- Slide budget: 5 slides default; push back hard before exceeding 7
- Speaker notes: voice-checked before any presentation
- Charts: produced by figure scripts in exhibits/, never inside make_deck.py
- Source pattern: deck.pptx generated from slides.json by make_deck.py;
  iterate JSON for content, script for style

Tell the user: “Future you reads this section before opening the next deck. The visual decisions are pre-made; you focus on what the deck says, not what it looks like.”


Micro-skills introduced

Name these out loud:

  1. Spec-first decks: deck-spec.md decides what earns a slide before any slide exists.
  2. Content/style separation: slides.json carries content, make_deck.py carries style, and neither leaks into the other.
  3. Assets by reference: figures are relative paths into exhibits/, never copies.
  4. Voice-checked speaker notes: anything read aloud gets the 2.7 /voice-check pass before delivery.


Wrapping up Session 2.9

Three things to try this week:

  1. Make a second deck on a different topic. Reuse make_deck.py from decks/unemployment-rate/. Notice that the visual style stays consistent automatically — that’s the unlock.
  2. Edit slides.json in place to tweak content. Add a slide, change a bullet. Re-run make_deck.py. The deck updates in seconds.
  3. Critique an old deck. Open a deck you made in PowerPoint manually 6+ months ago. Could you reverse-engineer it into slides.json? If yes, do it — generative pattern in retrospect. If no, that’s a sign the original deck was over-decorated.
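
The edit-and-re-render loop in item 2 can itself be scripted. A minimal sketch that appends a bullet to one slide in place (the slide index and bullet text in the usage comment are illustrative):

```python
import json
from pathlib import Path

def add_bullet(slides_json, slide_index, bullet):
    """Append a bullet to one slide in slides.json, preserving everything else."""
    path = Path(slides_json)
    spec = json.loads(path.read_text())
    spec["slides"][slide_index].setdefault("bullets", []).append(bullet)
    path.write_text(json.dumps(spec, indent=2))

# add_bullet("decks/<slug>/slides.json", 3, "New finding from this week's data")
# ...then re-run make_deck.py to regenerate deck.pptx
```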

Feedback

The user submits feedback at https://docs.google.com/forms/d/e/1FAIpQLSd5Fx3F4QexpSG2ZKEyRAEH2BiRCIkWoN0hMdaLc02IogqaCw/viewform.

Claude: paste the URL into chat. The form mirrors the questions below. Collect answers conversationally first, then have the user click through and submit.

  1. On a 1–5 scale, how useful did this session feel?
  2. The structured-data deck pattern — does it feel like the right level of abstraction, or did the JSON ceremony feel heavy?
  3. The voice-check pass on speaker notes — did it catch real things, or feel cosmetic?
  4. House-style-frozen-in-script — does this match how you’d want to make decks going forward, or do you prefer iterating in PowerPoint directly?
  5. Did the integration with the 2.3 memo and 2.4 figure feel real, or did the deck still feel like a separate artifact?
  6. What confused you most this session?
  7. Anything you want covered in Session 2.10 that you didn’t see here?

Tell the user: “Your instructor uses these to tailor next week’s session.”


Good to know

python-pptx covers most decks. It doesn’t do animations, transitions, or fancy charts. That’s fine; speakers who rely on animations are usually compensating for content that doesn’t stand on its own.

16:9 is the default canvas for projection; 4:3 is right for individual figures, since two 4:3 figures fit side-by-side on a 16:9 slide. Match the canvas to the projector's aspect ratio.

Resist auto-generated decks. The temptation: “summarize my memo into a 5-slide deck.” Result: bullets that paraphrase the memo without saying anything. Always write deck-spec.md first; let the spec drive the slides, not the memo.

Reuse asset paths, don’t copy. The deck references ../../exhibits/.... Copying the PNG into decks/<slug>/ doubles disk usage and creates drift between the copies. Symlinks if you must, but plain relative references are cleaner.

Speaker notes age well. Six months later, you’ll re-present the same deck and have forgotten what you meant to say on slide 3. Voice-checked notes mean the re-presentation feels current and yours.