Caelan's Domain

/review - Quality Review

Created: March 27, 2026 | Modified: March 27, 2026

This is Part 8 of a 10-part series on cAgents. Previous: /debug - Systematic Debugging | Next: Sessions - Under the Hood


You've built the thing. Debugged the hard parts. Optimized what needed optimizing. Now the question is: is it actually good? Not "does it work?" - that's a lower bar. Good means secure, accessible, consistent, maintainable. The kind of quality that holds up when someone else looks at it, or when you come back to it six months later.

/review answers that question. It spawns a team of specialist reviewers in parallel - security, accessibility, code quality, SEO, brand consistency, whatever the task calls for - and returns a consolidated report of findings. Each specialist focuses on their domain and knows what to look for. You get a comprehensive audit without having to context-switch between a dozen different concerns yourself.


When to Use This

/review fits anywhere you need external eyes on finished or near-finished work:

  • Before merging a significant pull request or going live with a new feature
  • After a large refactor, to catch what the diff didn't surface
  • When auditing an area you're not confident in - security, accessibility, performance
  • For content: checking brand voice consistency, accuracy, and tone across a whole library
  • As the final gate before a launch, deployment, or publication

The simplest pairing: use /run to build it, use /review to check it.

Use /optimize instead when you already know there's a performance problem and need before/after metrics. /review will tell you about performance issues; /optimize will fix them with measurement.

Use /debug instead when something is broken and root cause is unclear. /review assumes the thing works - it's looking for quality issues, not bugs.


How It Works

When you run /review, cAgents reads your project context and determines which specialist reviewers are relevant for the scope. It then spawns them in parallel, each running an independent audit of the same material from their specific lens.

Typical reviewer specialists include:

  • Security reviewer - authentication, authorization, input validation, dependency vulnerabilities, secrets exposure
  • Code quality reviewer - readability, maintainability, test coverage, anti-patterns
  • Performance reviewer - load times, bundle size, inefficient queries, caching
  • Accessibility reviewer - WCAG compliance, keyboard navigation, screen reader compatibility, color contrast
  • SEO reviewer - metadata, headings, structured data, crawlability
  • Brand/tone reviewer - voice consistency, style guide adherence, accuracy (for content)

The reviewers work concurrently. When they finish, their findings are consolidated into a single prioritized report - critical issues first, then high, medium, and low. If you pass --auto-fix, cAgents will attempt to automatically resolve findings it's confident about, and flag the rest for manual attention.

The reviewers spawned depend on what you're reviewing. /review on a web project surfaces security and accessibility specialists. /review on a content library surfaces tone and accuracy specialists. You can also specify reviewers explicitly if you want to narrow the scope.
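Conceptually, the spawn-and-consolidate flow looks like the sketch below. This is an illustration of the pattern, not cAgents' actual implementation: the `Finding` shape and the two stub reviewers are hypothetical.

```typescript
type Severity = "critical" | "high" | "medium" | "low";

interface Finding {
  reviewer: string;
  severity: Severity;
  message: string;
}

const rank: Record<Severity, number> = { critical: 0, high: 1, medium: 2, low: 3 };

// Stub reviewers: each audits the same material from its own lens.
// Real reviewers would inspect files; these return canned findings.
async function securityReview(files: string[]): Promise<Finding[]> {
  return [{ reviewer: "security", severity: "critical", message: "Missing CSP header" }];
}

async function accessibilityReview(files: string[]): Promise<Finding[]> {
  return [{ reviewer: "accessibility", severity: "high", message: "Images missing alt text" }];
}

// Run every reviewer concurrently, then merge into one prioritized report.
async function review(files: string[]): Promise<Finding[]> {
  const perReviewer = await Promise.all([
    securityReview(files),
    accessibilityReview(files),
  ]);
  return perReviewer.flat().sort((a, b) => rank[a.severity] - rank[b.severity]);
}
```

Critical findings sort first, which is what makes the consolidated report scannable top to bottom.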

Walkthrough -- Quality Auditing a Website
The site is built, launched, optimized, and debugged. The contact form works. The pages load fast. But "it works" isn't the same as "it's ready." Before calling the project done, it's time for a full quality audit across every dimension.

/review Run a comprehensive quality audit of the portfolio site - security, accessibility, SEO, and code quality

cAgents spins up four specialist reviewers simultaneously. While you're reading this, they're scanning:

Spawning reviewers in parallel:
  → security-engineer       [scanning authentication, headers, dependencies]
  → accessibility-checker   [scanning WCAG compliance, keyboard nav, contrast]
  → seo-specialist          [scanning metadata, headings, structured data]
  → code-reviewer           [scanning maintainability, test coverage, patterns]

A few minutes later, the consolidated report:

/review Report - Portfolio Site Audit
Generated: 2026-03-28
Reviewers: security-engineer, accessibility-checker, seo-specialist, code-reviewer

═══════════════════════════════════════
 CRITICAL  (1)
═══════════════════════════════════════

[SECURITY] Missing Content-Security-Policy header
  Location: next.config.ts - HTTP response headers
  Risk: XSS attacks can execute arbitrary scripts in the browser context.
  Fix: Add CSP header to next.config.ts headers() configuration.

═══════════════════════════════════════
 HIGH  (3)
═══════════════════════════════════════

[ACCESSIBILITY] 4 images missing alt text
  Location: components/Portfolio.tsx, lines 34, 51, 67, 89
  Impact: Screen readers skip images entirely; WCAG 2.1 AA failure.
  Fix: Add descriptive alt attributes to all <img> elements.

[SEO] Open Graph tags missing on 3 pages
  Location: app/about/page.tsx, app/portfolio/page.tsx, app/blog/page.tsx
  Impact: Social shares show no preview image or description.
  Fix: Add <meta property="og:*"> tags via generateMetadata().

[SECURITY] npm dependency with known CVE
  Package: axios@1.3.4 (CVE-2023-45857, CVSS 6.5)
  Fix: Upgrade to axios@1.6.0 or later.

═══════════════════════════════════════
 MEDIUM  (4)
═══════════════════════════════════════

[ACCESSIBILITY] Focus ring removed via CSS
  Location: app/globals.css, line 12 - outline: none on *:focus
  Impact: Keyboard users cannot see focused element.

[CODE QUALITY] 3 components lack error boundaries
  Location: components/ContactForm.tsx, components/Portfolio.tsx,
            components/BlogListing.tsx
  Impact: Unhandled render errors crash the full page.

[SEO] Page titles not unique across blog posts
  Location: app/blog/[slug]/page.tsx - generateMetadata()
  Impact: Duplicate titles hurt search ranking.

[SECURITY] Cookie set without HttpOnly flag
  Location: lib/session.ts, line 44
  Impact: Cookie accessible to JavaScript; session hijacking risk.

═══════════════════════════════════════
 LOW  (5)
═══════════════════════════════════════

[CODE QUALITY] 5 unused imports across components
[ACCESSIBILITY] 2 form labels not associated with inputs
[SEO] robots.txt missing Sitemap directive
[CODE QUALITY] Console.log left in ContactForm.tsx
[ACCESSIBILITY] Insufficient color contrast on footer text (3.8:1, minimum 4.5:1)

═══════════════════════════════════════
Summary: 1 critical, 3 high, 4 medium, 5 low
Auto-fixable with --auto-fix: 7 of 13
═══════════════════════════════════════

Most of these are quick fixes - the kind of thing that's easy to miss when you're focused on building. The CSP header and the vulnerable dependency are the ones that matter before going live. Everything else is polish.
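For reference, the critical CSP finding maps to a small change in Next.js config. A minimal sketch: the directive values here are placeholders you'd tune to your site's actual script, style, and image sources.

```typescript
// next.config.ts - sketch of a CSP fix via the headers() configuration.
// Directive values are illustrative; tighten or extend them per site.
const nextConfig = {
  async headers() {
    return [
      {
        // Apply the policy to every route.
        source: "/(.*)",
        headers: [
          {
            key: "Content-Security-Policy",
            value: "default-src 'self'; script-src 'self'; img-src 'self' data:",
          },
        ],
      },
    ];
  },
};

export default nextConfig;
```

Ship a policy like this behind a `Content-Security-Policy-Report-Only` header first if you're unsure what it will break.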

Pass --auto-fix to let cAgents handle the seven auto-fixable items:

/review --auto-fix Run a comprehensive quality audit of the portfolio site - security, accessibility, SEO, and code quality

The remaining six go on a short list for manual review. Fifteen minutes of focused work, and the site is production-ready in a way "it works on my machine" never quite is.
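The Open Graph finding follows the same pattern: each page exports metadata via the App Router's `generateMetadata` convention. Sketch only - the titles, descriptions, and image path below are invented stand-ins for the real pages' copy.

```typescript
// app/about/page.tsx (excerpt) - the return value follows Next.js's
// Metadata shape; all strings and paths here are placeholders.
export function generateMetadata() {
  return {
    title: "About | Portfolio",
    description: "Background, selected work, and contact details.",
    openGraph: {
      title: "About | Portfolio",
      description: "Background, selected work, and contact details.",
      // og:image drives the social share preview card.
      images: [{ url: "/og/about.png", width: 1200, height: 630 }],
    },
  };
}
```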


Walkthrough -- Auditing a Content Library
All twelve articles are published. Traffic is growing. Before scaling the content program - adding more writers, increasing cadence, expanding to new channels - run a full audit to make sure the foundation is solid.

/review Audit all published content for brand consistency, tone, accuracy, and quality across the full article library

Three specialist reviewers pick up the work:

Spawning reviewers in parallel:
  → brand-manager          [scanning voice, style, terminology consistency]
  → copy-editor            [scanning grammar, structure, accuracy, clarity]
  → seo-specialist         [scanning keyword usage, metadata, internal linking]

The consolidated report surfaces patterns that are hard to catch one article at a time:

/review Report - Content Library Audit
Generated: 2026-03-28
Articles reviewed: 12
Reviewers: brand-manager, copy-editor, seo-specialist

═══════════════════════════════════════
 HIGH  (2)
═══════════════════════════════════════

[BRAND] Inconsistent product name capitalization
  Instances: "cloudStorage", "Cloud storage", "Cloud Storage" used
  interchangeably across 7 articles.
  Fix: Standardize to "Cloud Storage" (title case) per style guide.

[BRAND] Two distinct tones in use - technical and casual
  Articles 1–6: formal, third-person, passive voice.
  Articles 7–12: conversational, second-person, active voice.
  Impact: Readers experience a jarring shift halfway through the library.
  Fix: Audit articles 1–6 for voice consistency; update to match 7–12 style.

═══════════════════════════════════════
 MEDIUM  (4)
═══════════════════════════════════════

[COPY] 3 articles contain outdated statistics (pre-2024)
  Locations: "Introduction to Cloud Architecture" (2022 data),
             "Security Basics" (2021 benchmark), "Cost Comparison" (2023 pricing)
  Fix: Update statistics with current sources.

[SEO] Internal linking is one-directional
  Newer articles link to older ones, but older articles don't link forward.
  Impact: Readers who start with older content don't discover newer articles.
  Fix: Add forward links in articles 1–6.

[COPY] Inconsistent call-to-action phrasing
  "Learn more", "Read more", "Find out more", "Discover more" all used.
  Fix: Standardize CTA phrasing.

[BRAND] Competitor mentioned by name in 2 articles
  Current policy: refer to competitors generically.
  Fix: Replace specific names with "competing solutions" or equivalent.

═══════════════════════════════════════
 LOW  (3)
═══════════════════════════════════════

[COPY] 4 articles exceed recommended reading time (>10 min) without a TL;DR
[SEO] 5 articles missing meta descriptions
[COPY] Heading hierarchy inconsistent in 3 articles (h4 used before h3)

═══════════════════════════════════════
Summary: 0 critical, 2 high, 4 medium, 3 low
Auto-fixable with --auto-fix: 4 of 9
═══════════════════════════════════════

The tone split is the most important finding - two different writers apparently worked in different styles, and it shows. That's the kind of issue that's invisible when you're reading one article at a time but obvious when you read the library as a whole. The auto-fix handles the formatting and metadata issues; the tone and accuracy findings go to the team for manual review.


Walkthrough -- Auditing a Full Curriculum for Equity and Rigor
The semester is half over. Maya's project-based US History course is working -- students are more engaged, the project submissions are stronger than the old test scores ever were. Before sharing the curriculum with her department and committing to it permanently, she wants a thorough review of all six units. Not just "is the content good?" but "does this work for every student in the room?"

/review Audit the complete US History PBL curriculum - all 6 units - for
standards alignment, assessment rigor, differentiation quality, and
accessibility of materials for students with IEPs and English language learners.

Four specialist reviewers pick up the work:

Spawning reviewers in parallel:
  → curriculum-designer    [scanning standards alignment across all 6 units]
  → quality-manager        [scanning rubric rigor and assessment consistency]
  → accessibility-checker  [scanning materials for IEP and ELL accessibility]
  → sensitivity-reader     [scanning for equity assumptions and access barriers]

The consolidated report:

/review Report - US History PBL Curriculum Audit
Generated: 2026-03-28
Units reviewed: 6 (90 lesson plans, 24 source documents, 6 rubrics)
Reviewers: curriculum-designer, quality-manager, accessibility-checker,
           sensitivity-reader

═══════════════════════════════════════
 CRITICAL  (1)
═══════════════════════════════════════

[EQUITY] Unit 5 project requires home internet access
  Location: Unit 5 (Industrialization) - project brief, research phase
  Impact: 6 students on Maya's roster don't have reliable home internet.
  The project as designed excludes them from the research phase entirely.
  Fix: Add an in-school alternative path using library computers and
  printed source packets. Restructure research days to front-load
  internet-dependent work during class time.

═══════════════════════════════════════
 HIGH  (2)
═══════════════════════════════════════

[ASSESSMENT] Rubric language degrades in later units
  Location: Units 4-6 rubrics vs. Units 1-3 rubrics
  Pattern: Units 1-3 use specific, measurable criteria ("identifies
  3 specific causes and explains each with textual evidence"). Units
  4-6 use vague language ("demonstrates understanding of key themes").
  Impact: Students can't self-assess against vague criteria. Grading
  becomes subjective. Parents can't understand scores.
  Fix: Revise Units 4-6 rubrics to match the specificity standard
  established in Units 1-3.

[STANDARDS] Unit 6 standard coverage gap
  Location: Unit 6 (Progressive Era) - lesson plans 11-15
  Impact: Ohio standard H.CIT.8 (civic participation) is listed in
  the alignment matrix but no lesson activity directly addresses it.
  Fix: Add a civic action comparison activity to lesson plan 13.

═══════════════════════════════════════
 MEDIUM  (3)
═══════════════════════════════════════

[ACCESSIBILITY] 2 lesson plans reference YouTube without alternatives
  Videos lack transcripts or captioning for hearing-impaired students.

[ACCESSIBILITY] Source documents in Units 3-4 exceed Lexile range
  Reading level is 1180L; 4 students have IEPs specifying max 950L.
  Provide annotated or scaffolded versions.

[ASSESSMENT] Peer review rubric missing from Units 2 and 5
  Students evaluate each other's drafts but have no criteria to use.

═══════════════════════════════════════
 LOW  (2)
═══════════════════════════════════════

[FORMAT] Inconsistent header formatting across unit documents
[FORMAT] Parent letters use two different date formats

═══════════════════════════════════════
Summary: 1 critical, 2 high, 3 medium, 2 low
Auto-fixable with --auto-fix: 3 of 8
═══════════════════════════════════════

The critical finding is the one that matters. Maya designed the project on her home laptop, where internet access is invisible -- it's just there. She never thought about it as a requirement because for her it isn't one. Six students in her class would have hit a wall in week 13 with no way to complete the research phase. /review caught the assumption she didn't know she was making.

The rubric degradation is the second most important finding. Maya wrote Units 1-3 carefully over summer break. Units 4-6 were written in September while school was already in session, and the quality dropped. Now she has a specific standard to match them to.


Walkthrough -- Pre-Lease-Renewal Audit of Two Restaurant Locations
David's Midtown lease is up for renewal. The landlord wants a 12% increase. Five months ago, the Decatur location opened and it's been doing well -- maybe too well relative to Midtown. Before signing a 3-year commitment at higher rent, David wants a clear picture of both locations: what's working, what's not, and whether the numbers support the renewal.

/review Audit both restaurant locations - operational efficiency, financial
health, menu performance, and customer satisfaction signals. Midtown (6 years
operating) and Decatur (5 months operating). I need to decide whether to
renew the Midtown lease at a 12% rent increase.

Four reviewers span both locations:

Spawning reviewers in parallel:
  → finance-manager        [scanning revenue, margins, and cost structure]
  → operations-manager     [scanning staffing, throughput, and waste]
  → marketing-analyst      [scanning customer review sentiment and loyalty]
  → performance-analyst    [scanning per-item menu performance at both sites]

The consolidated report:

/review Report - Dual-Location Restaurant Audit
Generated: 2026-03-28
Locations: Midtown (est. 2020) and Decatur (est. 2025)
Reviewers: finance-manager, operations-manager, marketing-analyst,
           performance-analyst

═══════════════════════════════════════
 CRITICAL  (1)
═══════════════════════════════════════

[FINANCIAL] Midtown per-seat revenue declined 8% year-over-year
  Decatur per-seat revenue is already 15% above Midtown's current.
  Cause: Midtown's lunch crowd (office workers) has thinned
  post-remote-work. The neighborhood's foot traffic pattern has
  shifted permanently.
  Impact: At the proposed 12% rent increase, Midtown's break-even
  moves from 68% occupancy to 76%. Current average: 71%.
  Framework: Renewal is viable only if lunch revenue recovers or
  dinner volume grows 18% within 12 months.

═══════════════════════════════════════
 HIGH  (2)
═══════════════════════════════════════

[MENU] 5 Midtown items account for 62% of food waste, 8% of revenue
  Items: pepper soup (weekday only), ofada rice, dodo platter,
  fried plantain chips, chin chin basket
  Pattern: These are legacy items David keeps for personal and
  cultural reasons. Regulars don't order them. New customers
  don't know what they are.
  Note: Removing them is a business decision, not a quality one.
  Consider rotating them as weekend specials instead of daily menu.

[OPERATIONS] Decatur kitchen producing at 94% capacity during Friday
  and Saturday dinner service
  Impact: No room for growth without process changes. Ticket times
  already averaging 22 minutes (target: 18).
  Fix: Increase prep batch sizes for top 4 sellers; shift one
  prep cook's start time 2 hours earlier on Fri/Sat.

═══════════════════════════════════════
 MEDIUM  (2)
═══════════════════════════════════════

[SENTIMENT] Customer loyalty signals diverge between locations
  Decatur reviews: "authentic," "worth the drive," "felt like family"
  Midtown reviews: "convenient," "solid lunch option," "quick"
  Analysis: "Convenient" is a weak loyalty driver. Customers who
  describe you as convenient will leave when something more
  convenient opens. "Authentic" sticks.

[OPERATIONS] Staff overtime 23% higher at Midtown than Decatur
  Despite lower revenue. Indicates scheduling inefficiency, not
  demand-driven overtime.

═══════════════════════════════════════
 LOW  (1)
═══════════════════════════════════════

[MARKETING] Neither location has updated Google Business photos
  in 14 months

═══════════════════════════════════════
Summary: 1 critical, 2 high, 2 medium, 1 low
Lease renewal decision framework included in full report appendix
═══════════════════════════════════════

David asked for an operational audit. The review gave him a strategic question: is Midtown the future, or is Decatur? The numbers don't tell him what to do -- that depends on what the restaurant means to him, what the Midtown neighborhood means to his regulars, whether he wants to fight for a recovery or double down on what's already working. But the numbers are clear now, and the decision is his to make with open eyes instead of gut feel.


Walkthrough -- Post-Launch Game Audit
One month after launch. Meridian has stabilized at 74% positive reviews on Steam -- decent for an indie narrative puzzle game, but not great. Sadie has been reading reviews one at a time, responding to bug reports, and trying to figure out what to prioritize. She needs the aggregate picture: what patterns are hiding in the noise?

/review Comprehensive post-launch audit of Meridian: game quality
(player feedback patterns), store presence (page effectiveness,
screenshot quality, description), and community health (Discord
activity, Steam forum sentiment, review response patterns).

Three reviewers divide the work:

Spawning reviewers in parallel:
  → game-designer          [analyzing player feedback patterns and difficulty curve]
  → marketing-analyst      [scanning store page conversion and mobile experience]
  → community-manager      [scanning Discord activity and Steam forum health]

The consolidated report:

/review Report - Meridian Post-Launch Audit
Generated: 2026-03-28
Review period: First 30 days post-launch
Reviewers: game-designer, marketing-analyst, community-manager

═══════════════════════════════════════
 HIGH  (2)
═══════════════════════════════════════

[GAME QUALITY] 34% of negative reviews cite the same puzzle
  Location: Chapter 3 - "The Clock Tower"
  Pattern: Difficulty spike breaks an otherwise smooth curve.
  Players either quit at the clock tower or push through and
  love the remaining 4 chapters. 58% of refunds occur within
  30 minutes of reaching this puzzle (consistent with the
  refund spike /debug diagnosed last week).
  Options: (a) add a contextual hint after 3 failed attempts,
  (b) add an optional skip with narrative summary, or
  (c) reduce the puzzle's combinatorial space from 5 to 3 elements.
  Recommendation: Option (a) -- preserves the puzzle's satisfaction
  while removing the frustration cliff.

[COMMUNITY] Discord is a broadcast channel, not a community
  Members: 400. Regular posters (3+ messages/week): 12.
  Pattern: Sadie posts announcements. A few people react.
  No conversations between members. No user-generated content.
  Fix: Weekly puzzle discussion threads, fan art features,
  developer Q&A sessions (monthly), and a #theories channel
  to encourage speculation about the story's unanswered questions.

═══════════════════════════════════════
 MEDIUM  (2)
═══════════════════════════════════════

[STORE PRESENCE] Steam page underperforms on mobile browsers
  Screenshots: text-heavy puzzles are unreadable at mobile
  resolution. Description: paragraphs are too long for mobile
  scroll behavior; most mobile visitors don't reach the feature list.
  Fix: Add 2 screenshots that emphasize visual atmosphere over
  puzzle text. Break description into shorter paragraphs with
  bold leads.

[GAME QUALITY] Positive reviews underrepresent the story
  67% of positive reviews mention puzzles. Only 23% mention the
  narrative, which is the game's actual differentiator. The store
  page emphasizes puzzles too -- the game's marketing doesn't
  match its strongest asset.

═══════════════════════════════════════
 LOW  (2)
═══════════════════════════════════════

[COMMUNITY] Steam forum has 3 unanswered bug reports (>7 days old)
[STORE PRESENCE] "Overwhelmingly Positive" threshold (95%) is
  unreachable at current trajectory; "Very Positive" (80%) is
  achievable with the clock tower fix alone

═══════════════════════════════════════
Summary: 0 critical, 2 high, 2 medium, 2 low
Priority recommendation: Fix the clock tower puzzle first.
It drives refunds, negative reviews, and player dropout. Fixing
it improves the review score, reduces refunds, and lets more
players experience the story -- which is the game's best asset.
═══════════════════════════════════════

Reading reviews one at a time, Sadie knew the clock tower puzzle was a problem. She didn't know it was the problem -- that a third of her negative reviews traced back to one difficulty spike, and that the refund pattern /debug diagnosed last week pointed to the same 30-minute window. The aggregate view changes the clock tower from "something to fix eventually" to "the single highest-leverage change for the game's future."


Walkthrough -- Post-Event Vendor and Satisfaction Audit
The Todos Santos wedding happened. The couple is on their honeymoon. Rosa got the text every planner waits for: "It was perfect." But "perfect" is an emotion, not a report. Rosa wants the structured version -- vendor performance against contracts, budget actuals versus plan, timeline adherence, and what guests actually said -- both for her own records and to inform the next destination wedding she coordinates.

/review Post-event audit of the Todos Santos wedding. Review: vendor
performance against contracts (all 8 vendors), budget actuals vs. planned
allocations, timeline adherence (planned vs. actual), and client feedback
themes from the couple's debrief notes and guest comments.

Four reviewers assess different dimensions of the event:

Spawning reviewers in parallel:
  → procurement-specialist  [scanning 8 vendor contracts against deliverables]
  → finance-manager         [scanning budget actuals vs. planned allocations]
  → planning-analyst        [scanning timeline adherence and buffer usage]
  → marketing-analyst       [scanning guest feedback themes and satisfaction]

The consolidated report:

/review Report - Todos Santos Wedding Post-Event Audit
Generated: 2026-03-28
Event date: 2026-03-15
Vendors reviewed: 8
Reviewers: procurement-specialist, finance-manager, planning-analyst,
           marketing-analyst

═══════════════════════════════════════
 HIGH  (2)
═══════════════════════════════════════

[VENDOR] Caterer delivered 78 meals, not contracted 83
  Contract: 83 plated meals including 11 dietary restriction plates
  Delivered: 78 meals; 5 dietary restriction plates missing
  Rosa comped affected guests from the late-night snack station
  (guests didn't notice), but the caterer owes a credit of ~$435.
  Action: Invoice the credit before final payment clears.

[VENDOR] Photographer overtime clause not invoiced correctly
  Contract: 6-hour coverage, overtime at $275/hour
  Actual: 6 hours 45 minutes (went 45 min over, which is normal)
  The overtime clause exists but the photographer didn't invoice it.
  Action: Either negotiate a goodwill credit on the base invoice
  or update future contracts to include a 1-hour buffer in the
  base package.

═══════════════════════════════════════
 MEDIUM  (2)
═══════════════════════════════════════

[TIMELINE] Ceremony 12 min late, dinner service 8 min late
  Ceremony delay: hair/makeup ran 15 min over; absorbed by the
  30-min buffer before processional.
  Dinner delay: kitchen timing slip; absorbed by cocktail hour
  extension.
  Guest impact: None. Buffers worked exactly as designed.
  Future note: Hair/makeup consistently runs over across Rosa's
  events (4 of last 6). Consider building 45-min buffer instead
  of 30.

[BUDGET] Final spend: $63,840 vs. $65,000 planned (-1.8%)
  Under budget in: florals (-$1,400, seasonal substitution saved
  money), transportation (-$320, fewer airport shuttles needed).
  Over budget in: late-night snack station (+$580, comping the
  missing dietary meals).

═══════════════════════════════════════
 POSITIVE FINDINGS
═══════════════════════════════════════

[SATISFACTION] Bilingual ceremony mentioned by 14 of 38 guests
  Pattern: Guests who mentioned the bilingual ceremony used
  words like "moved," "inclusive," "beautiful gesture." This
  wasn't a logistical feature -- it was an emotional highlight.
  Marketing value: Rosa should feature bilingual ceremonies
  prominently in her destination wedding portfolio.

[SATISFACTION] Couple's debrief: highest marks for "felt like
  we didn't have to worry about anything"
  This is the planner's core value proposition. The invisible
  logistics work -- the comped meals, the absorbed delays, the
  vendor coordination -- produced exactly the feeling it should.

═══════════════════════════════════════
Summary: 0 critical, 2 high, 2 medium, 2 positive
Recoverable credits: ~$435 (caterer) + potential photographer
  goodwill adjustment
═══════════════════════════════════════

The caterer credit and the photographer clause are money on the table that Rosa would have missed without the structured review. But the bilingual ceremony finding is the one with long-term value. Rosa knew it was a nice touch. She didn't know it was a differentiator -- that 37% of guests mentioned it unprompted. That's not a feature to offer when asked. That's a feature to lead with.


Walkthrough -- Season 1 Content and Sustainability Review
Six episodes of Forgotten Picket Lines are done. Jordan's first season. The numbers are modest -- 340 average downloads per episode, growing slowly. The show has been reviewed by two podcast blogs, both positive. Jordan is tired. Before planning season 2, they need an honest assessment: is the show good enough to keep doing, and can the workflow survive another season?

/review Full season review of Forgotten Picket Lines (6 episodes). Assess:
narrative quality and consistency, production quality (audio levels, pacing,
music usage), audience growth trajectory and engagement patterns, and
sustainability of the current production workflow for season 2.

Four reviewers assess different dimensions of the season:

Spawning reviewers in parallel:
  → narrative-designer      [scanning story structure, pacing, and hooks]
  → music-producer          [scanning audio levels, mixing, and production]
  → marketing-analyst       [scanning audience growth and engagement patterns]
  → operations-manager      [scanning production hours and sustainability]

The consolidated report:

/review Report - Forgotten Picket Lines Season 1
Generated: 2026-03-28
Episodes reviewed: 6 (4 narrative + 2 sidebar)
Reviewers: narrative-designer, music-producer, marketing-analyst,
           operations-manager

═══════════════════════════════════════
 HIGH  (2)
═══════════════════════════════════════

[NARRATIVE] Episodes 1 and 4 significantly outperform the rest
  Completion rate: Ep 1 (82%), Ep 4 (79%) vs. Ep 2 (61%),
  Ep 3 (58%), Ep 5 (64%), Ep 6 (63%)
  Pattern: Episodes 1 and 4 open with a personal or emotional hook
  in the first 90 seconds. The others open with historical context.
  "In 1937, textile workers in Gastonia..." loses listeners.
  "Marie Wiggins was 19 and seven months pregnant when she picked
  up a picket sign for the first time" keeps them.
  Fix: Restructure all episode openings to lead with a human
  moment. Move historical context to minute 3-4 after the hook
  has landed.

[SUSTAINABILITY] Jordan is averaging 14.5 hours/week, not 12
  Planned: 12 hrs/week (narrative episodes: 10 hrs, sidebars: 5 hrs)
  Actual: 14.5 hrs/week average over the season
  Breakdown: narrative episodes averaging 11.2 hrs (close to plan),
  sidebar episodes averaging 8 hrs (60% over plan)
  Trajectory: At this pace, burnout risk is high by season 2
  episode 4. Jordan already has a full-time production job.
  Options:
    (a) Extend production pipeline by 1 week per episode
    (b) Hire a part-time editor to reclaim 3 hrs/week (~$200/mo)
    (c) Reduce sidebar episode production standard (less scoring,
        simpler structure, conversational instead of scripted)
  Recommendation: Option (c) first -- it's free and addresses the
  root cause. Sidebars are over-produced relative to their purpose.

═══════════════════════════════════════
 MEDIUM  (3)
═══════════════════════════════════════

[PRODUCTION] Audio levels inconsistent between narration and
  interview segments
  Interviews average 3 dB quieter than narration. Listeners
  adjust volume mid-episode or stop listening in noisy environments.
  Fix: Apply LUFS-targeted loudness normalization pass to all
  interview segments (-16 LUFS for narration, match interviews).

[AUDIENCE] Growth is linear, not exponential
  Downloads per episode: 280 → 310 → 320 → 340 → 350 → 360
  Pattern: Steady but not compounding. New listeners aren't
  bringing more new listeners. No viral or referral mechanism.
  Fix: Add a "share this episode" CTA after the emotional peak
  (not at the end, when listeners have already moved on).

[NARRATIVE] Music cues are strongest in episodes 1 and 4
  (same episodes with the best hooks). In other episodes, music
  enters late or drops out during emotional moments. Consistent
  scoring would lift the weaker episodes.

═══════════════════════════════════════
 LOW  (1)
═══════════════════════════════════════

[PRODUCTION] Episode descriptions vary in length from 45 to 310
  words with no consistent structure

═══════════════════════════════════════
Summary: 0 critical, 2 high, 3 medium, 1 low

Season 1 verdict: The show is good. Episodes 1 and 4 prove
Jordan can produce work that holds an audience. The gap between
the strong episodes and the rest is fixable -- it's a structural
pattern (opening hooks), not a talent gap. The sustainability
finding is the one that determines whether season 2 happens at all.
═══════════════════════════════════════

Jordan already knew the show was tiring. They didn't know exactly where the hours were going -- or that the sidebar episodes, the ones designed to be low-effort, were the main culprit. The sustainability reviewer did the accounting Jordan had been avoiding: at the current pace, the math doesn't work for another 6 episodes on top of a full-time job. That's not a quality finding. It's a viability one.

The narrative finding -- that emotional hooks predict completion rates -- is the editorial insight that makes season 2 better, not just possible. Jordan doesn't need to write better content. They need to put the best content first.
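If you're running the loudness pass yourself, ffmpeg's loudnorm filter is one common way to hit a LUFS target. Hedged example: the filenames are placeholders, and a two-pass loudnorm run (measure, then normalize with the measured values) is more accurate than this single pass.

```shell
# Normalize an interview segment to -16 LUFS integrated loudness,
# with a -1.5 dBTP true-peak ceiling. Single-pass; filenames are placeholders.
ffmpeg -i interview_raw.wav -af loudnorm=I=-16:TP=-1.5:LRA=11 interview_normalized.wav
```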


That's all seven commands. Every thread that started with a blank page in /designer -- the curriculum that needed structure, the restaurant that needed a second-location strategy, the game that needed a launch plan, the wedding that needed coordination, the podcast that needed a format -- has now been designed, built, scaled, launched, optimized, debugged, and reviewed. Seven different people. Seven kinds of work. One toolset.

The next two articles go behind the curtain -- how cAgents tracks and coordinates all this work under the hood. Part 9: Sessions explains the directory structure that records every pipeline run, and Part 10: Hooks covers the event system that makes real-time coordination possible.


Key Flags

  • --auto-fix - Automatically resolve findings that reviewers are confident about; flag the rest for manual attention
  • --focus <area> - Focus the review on a specific area (e.g., security, accessibility, performance)
  • --severity <level> - Only report findings at or above a threshold: critical, high, medium, low
  • --format <type> - Output format for the report: default, json, markdown
  • --dry-run - Show which reviewers would be spawned and what they'd check, without running the review
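The flags compose. A narrowed, scriptable invocation might look like this - the prompt text is illustrative:

```shell
# Security-only pass, high-severity findings and up, machine-readable output
/review --focus security --severity high --format json Audit the checkout flow

# Preview which reviewers a full audit would spawn, without running it
/review --dry-run Run a comprehensive quality audit of the portfolio site
```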

Tips & Gotchas

Chain /review with /run for a fast fix loop. When /review surfaces auto-fixable findings, pass --auto-fix to resolve them. For the manual findings, hand them to /run one by one: /run Fix the missing alt text on portfolio images per the review report. This keeps the quality loop tight without you having to touch individual files directly.

Review early, not just at the end. Running /review after a major feature addition - not just before launch - catches issues while they're still isolated. A security finding in a single new component is a 10-minute fix. The same finding discovered after six more months of work built on top of it is a project.

--auto-fix applies changes directly. Commit your current work before running /review --auto-fix. The auto-fixer is confident about formatting, metadata, and dependency upgrades, but "confident" isn't the same as "always right." Review the diff before pushing, especially for security-related fixes.

A clean report isn't a guarantee. /review catches what it knows to look for. It doesn't replace manual security testing for high-stakes systems, accessibility testing with real assistive technology users, or code review by someone who understands your specific domain. Use it to catch the systematic and the obvious - and layer human review on top for anything critical.

Series navigation: Previous: /debug - Systematic Debugging | Next: Sessions - Under the Hood