Caelan's Domain

Part 7 — Scale: Automation, Sales, and the Complete System

Created: April 17, 2026 | Modified: April 19, 2026

Cowork Features
Introduced: Scheduled Tasks | Revisited: Projects, CLAUDE.md, Memory, Rules, Skills, Agents

This is Chapter 7 of a 7-chapter course on building your AI VP of Marketing with Claude Cowork. Previous: Chapter 6 — Measurement


Standing Meetings and a Promotion

Your VP is running the marketing function. Briefs appear when you ask for them. Voice checks fire on every draft. The Campaign Strategist turns a topic into a plan, the Repurposer turns a plan into channels, the Distribution agent turns channels into a calendar. Measurement tells you what worked last week.

You are still showing up to start every one of those jobs.

This chapter changes two things at once. First, you put the recurring work on the calendar — Scheduled Tasks handle content production, competitive monitoring, and performance reporting whether your laptop is open or closed. Second, you test whether the stack you built for marketing generalizes. You extend the VP into sales support, using the same Rules → Skills → Agents approach. If the playbook extends into a domain it wasn't designed for, the system is real. Not a demo. Then you step back, name the pattern, and audit the complete system against what you set out to build.

By the end of this chapter, your VP stops being a freshly hired marketing specialist and becomes what it should be — a complete executive whose job description you can rewrite whenever the business needs it.


Section 1 — Automation: Running on Autopilot

For six chapters, you have been walking into your VP's office and handing them work. You open Cowork, type a prompt or invoke a skill, review the output, and move on. The system works. But it only works when you show up.

Scheduled Tasks change that. A Scheduled Task is a prompt that runs on a schedule you define — daily, weekly, monthly — without you being present. Cowork opens the project, runs the task, and saves the output for you to review later. Your VP does the work whether you are at your desk or not.

Think of it this way. Until now, you have been managing your VP through drop-in meetings. You walk over, hand them something, wait for the result. Scheduled Tasks are standing meetings on their calendar. Monday at 8 AM, generate this week's content briefs. Friday at 5 PM, compile the weekly performance report. First of the month, run a competitive scan. The work happens on schedule because you put it on the calendar, not because you remembered to ask.

This is the payoff for the infrastructure you built across Chapters 1 through 6. Your CLAUDE.md gives your VP context. Your Rules constrain quality. Your Skills produce consistent outputs. Your Agents handle complex, multi-step work. Scheduled Tasks take all of that and put it on autopilot.

Creating a Scheduled Task

Open your Cowork project. In the left sidebar, click Scheduled Tasks. You will see an empty list — no tasks have been scheduled yet.

Click New Task. Cowork shows you a form with four fields:

Name. A label for the task. Pick something you will recognize in a list. "Weekly Content Briefs" or "Friday Performance Report" works.

Schedule. How often the task runs. Cowork gives you presets — daily, weekly, monthly — and a custom option where you pick specific days and times.

Prompt. The instructions your VP follows when the task fires. It can invoke Skills, reference Agents, or give direct instructions. Everything your VP can do in a conversation, it can do in a scheduled task.

Notifications. Whether Cowork notifies you when the task completes. Turn this on.

Fill in those four fields and click Save. The task appears in your Scheduled Tasks list with its next run time displayed.

Under the hood — Scheduled Tasks
Scheduled Tasks are Cowork-managed automations that run a prompt in your Project on a cadence.

  • What file. The Scheduled Tasks list is stored inside your Project and surfaced in the Cowork UI.
  • When written. Entries save when you click Save in the editor, and when Cowork records each run.
  • What format. UI-managed records — a prompt, a schedule, and a run history stored per task.
  • How to inspect. Open the Scheduled Tasks list in the Cowork sidebar, then click a task for history.
  • How to undo. Open the task in the list and pause, edit, or delete it before the next run.

Gotcha. A Scheduled Task runs with the same project context as a manual chat. Your CLAUDE.md, your Rules, your Skills, your Agents, and your Memory all load into every scheduled run. If your laptop is closed at the scheduled time, the task queues and runs when Cowork next starts. A confusing output is never a scheduling problem; it is a prompt problem, and you fix it the same way you fix any prompt.

That is the entire mechanism. You write a prompt, set a schedule, and Cowork handles the rest. The complexity is not in the tool. It is in choosing what to automate and writing prompts that produce useful output without you in the loop.
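The schedule presets reduce to simple next-run arithmetic. As a rough illustration only — Cowork manages scheduling internally, and this is not its implementation — here is how a "weekly, Monday, 7:00 AM" preset resolves to a concrete next run time:

```python
from datetime import datetime, timedelta

def next_weekly_run(now: datetime, weekday: int, hour: int, minute: int = 0) -> datetime:
    """Next occurrence of the given weekday/time strictly after `now`.
    weekday uses Python's convention: Monday = 0 ... Sunday = 6."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    candidate += timedelta(days=(weekday - now.weekday()) % 7)
    if candidate <= now:  # this week's slot already passed
        candidate += timedelta(days=7)
    return candidate

# From a Tuesday afternoon, the next "Monday 7:00 AM" run is six days out.
print(next_weekly_run(datetime(2026, 4, 21, 15, 30), weekday=0, hour=7))
# 2026-04-27 07:00:00
```

The point of the sketch: "next run time" is fully determined by the schedule you saved, which is why the task fires whether or not you remember it.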

Weekly Content Generation

Your content pipeline from Chapter 5 — The Pipeline takes a topic and turns it into distribution-ready content across multiple channels. You built it to run manually. Now you put it on a schedule.

Name: Weekly Content Pipeline
Schedule: Weekly, Monday, 7:00 AM
Prompt:

Run the weekly content production cycle.

Step 1: Review the marketing goals and content calendar in CLAUDE.md. Identify
2-3 topics that align with this month's priorities and have not been covered in
the last 60 days.

Step 2: For each topic, run the content-brief skill to produce a structured brief.

Step 3: Run the brand-voice-checker skill against each brief to verify tone and
language alignment.

Step 4: Pass the approved briefs to the Campaign Strategist agent. Produce a
campaign plan for each brief that includes channel selection, timing, and
format recommendations.

Step 5: Compile everything into a single document with this structure:
- Executive summary (3-4 sentences on what was produced and why)
- Brief 1: topic, brief, voice check results, campaign plan
- Brief 2: topic, brief, voice check results, campaign plan
- Brief 3: topic, brief, voice check results, campaign plan (if applicable)
- Recommended priority order with rationale

Flag any brief where the voice checker found issues. Do not auto-correct --
leave the original and the checker's notes so I can decide.
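Step 1's "not been covered in the last 60 days" check is a plain recency filter. A minimal Python sketch, with made-up topics and dates — in the real task the VP reads the content calendar in CLAUDE.md, not a dict:

```python
from datetime import date, timedelta

# Hypothetical coverage history and candidate topics.
covered = {"quarterly taxes": date(2026, 3, 30), "invoice templates": date(2026, 1, 12)}
candidates = ["quarterly taxes", "invoice templates", "expense tracking"]

def fresh_topics(candidates, covered, today, window_days=60):
    """Keep only topics not covered within the last `window_days` days."""
    cutoff = today - timedelta(days=window_days)
    return [t for t in candidates if covered.get(t, date.min) < cutoff]

print(fresh_topics(candidates, covered, today=date(2026, 4, 20)))
# ['invoice templates', 'expense tracking']
```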

Save the task. Monday morning, before you open your laptop, your VP has already selected topics, generated briefs, checked them for voice, and built campaign plans. The output sits in the task history waiting for you.

Your review takes fifteen minutes. You scan the topics, check that they align with what you know about the business right now, read the voice checker's flags if any exist, and approve or adjust the priority order. Then you hand the approved briefs to the Content Repurposer — either manually or through another scheduled task later in the week.

Start with one scheduled content run per week. If your VP consistently picks good topics and the briefs need minimal edits, add a second run mid-week. Scaling up is easy. Scaling back from a flood of unreviewed content is not.

Competitive Monitoring

In Chapter 3 — Skills, you watched your VP run a competitor analysis. That was a one-time exercise. Markets do not stay still. A positioning matrix from two months ago is a historical document, not a strategic tool.

Name: Monthly Competitive Scan
Schedule: Monthly, 1st of the month, 8:00 AM
Prompt:

Run a competitive monitoring scan.

Using the competitor list in CLAUDE.md (or the most recent competitive analysis
in this project's history), research each competitor for changes in the last 30 days:

1. Pricing changes -- new tiers, price increases or decreases, new free offerings
2. Messaging shifts -- new taglines, repositioned value propositions, new claims
3. Product updates -- new features, discontinued products, major releases
4. Channel activity -- new marketing channels, increased activity on existing
   channels, notable campaigns
5. Content themes -- what topics they are publishing about, any patterns or
   shifts in focus

For each competitor, produce:
- A 2-3 sentence summary of what changed (or "No significant changes detected")
- A relevance rating: HIGH (requires a response from us), MEDIUM (worth noting),
  or LOW (informational only)

End with:
- A "Competitive Landscape Summary" paragraph covering the overall direction
- Any positioning opportunities that opened up since the last scan
- Recommended actions, if any, with specific next steps

If a competitor made a HIGH-relevance change, flag it at the top of the report.

Most months the output is routine — minor changes, a few MEDIUM flags, no action required. You scan it in five minutes and file it. That is fine. The value of competitive monitoring is not in any single report. It is in the pattern that emerges over three, six, twelve months. You spot a competitor gradually shifting upmarket. You notice another one doubling down on a channel you abandoned. You catch a pricing change before your customers ask you about it.

When a HIGH flag appears, you pay attention. A competitor just launched a product that overlaps with your core offering. A rival dropped their price by 30%. Someone is running a campaign directly targeting your audience segment. These are the moments where the automated scan earns its keep. You did not have to remember to check. Your VP checked for you.
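The HIGH/MEDIUM/LOW triage the prompt asks for amounts to a sort with an action prefix. A sketch with invented competitor data — the scheduled task produces this in prose, not as structured records:

```python
# Hypothetical scan findings.
findings = [
    {"competitor": "LedgerLite", "relevance": "LOW",    "summary": "Minor blog redesign."},
    {"competitor": "BookBot",    "relevance": "HIGH",   "summary": "Dropped prices 30%."},
    {"competitor": "TaxNest",    "relevance": "MEDIUM", "summary": "New webinar series."},
]

RANK = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}
triaged = sorted(findings, key=lambda f: RANK[f["relevance"]])  # HIGH flags rise to the top

for f in triaged:
    prefix = "ACTION NEEDED -- " if f["relevance"] == "HIGH" else ""
    print(f"{prefix}[{f['relevance']}] {f['competitor']}: {f['summary']}")
```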

Automated Reporting

You built a measurement framework in Chapter 6 — Measurement. Until now, you have been running reports manually — a Friday afternoon task that happens when you remember, and does not happen when you are busy. Which means it does not happen on the weeks you need it most.

Name: Weekly Performance Report
Schedule: Weekly, Friday, 4:00 PM
Prompt:

Compile the weekly marketing performance report.

Using the metrics framework from our measurement setup, produce a report
covering this week's performance:

1. HEADLINE METRICS — this week's value, last week's value, week-over-week
   change, flag any metric that moved more than 15%
2. CHANNEL PERFORMANCE — activity, engagement, conversion per channel
3. CONTENT PERFORMANCE — traffic and engagement per piece, best performer
4. FLAGS AND RECOMMENDATIONS — significant movers with likely cause, one
   specific recommendation for next week, any pipeline adjustments

Keep the report under 500 words. Lead with what changed, not with what
stayed the same. If nothing significant moved, say so in two sentences
and skip the detailed breakdown.

The value shows up on the weeks something moves. Traffic spiked because a post got shared. Email open rates dropped because the subject line format changed. A channel that was performing well went quiet. Your VP does not just show you the numbers — it flags the change, offers a likely explanation, and recommends a specific action. You still make the decision. But you make it with the analysis already done.
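The 15% flag rule in the report prompt is simple arithmetic. A Python sketch with hypothetical metric names and values:

```python
def wow_change(this_week: float, last_week: float) -> float:
    """Week-over-week change as a fraction of last week's value."""
    if last_week == 0:
        return float("inf") if this_week else 0.0
    return (this_week - last_week) / last_week

# Invented numbers: (this week, last week).
metrics = {"site_visits": (1400, 1200), "email_opens": (310, 402), "signups": (41, 39)}

for name, (cur, prev) in metrics.items():
    change = wow_change(cur, prev)
    flag = "  <-- flag (moved more than 15%)" if abs(change) > 0.15 else ""
    print(f"{name}: {cur} vs {prev} ({change:+.0%}){flag}")
```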

Create a second reporting task for monthly summaries: month-over-month trends, best-performing channel and content piece with data, topics that resonated vs. underperformed, progress toward quarterly goals from CLAUDE.md, three specific recommendations for next month. The monthly summary connects individual week performance to your quarterly goals. It tells you whether you are on track, falling behind, or ahead of plan. And it recommends adjustments — not vague "try harder" advice, but specific changes tied to data.

Review Cadence

You now have three scheduled tasks running. Your VP handles the work. But automation does not mean abdication. You are still the boss. A scheduled task that nobody reviews is not automation — it is waste.

Weekly — 15 minutes, Monday or Tuesday. Open Monday's content pipeline output. Scan topics, check the voice checker flags, approve or adjust the priority order. Check Friday's performance report. If something moved, decide whether it needs a response this week.

Monthly — 30 minutes, first week of the month. Read the competitive scan and decide on responses to HIGH flags. Read the monthly performance summary. Compare progress against your quarterly goals. Adjust CLAUDE.md if priorities have shifted. Changes to CLAUDE.md automatically flow into every scheduled task on its next run.

Quarterly — 60 minutes. Step back. Are the scheduled tasks producing useful output? Are you actually reading the reports? Has your business changed enough that the prompts need updating? Rewrite prompts, add new tasks, retire old ones, adjust schedules. Automation runs the same prompt every time, and your business does not stay the same.

What happens when you stop reviewing
A scheduled task that runs without review for more than a month starts producing stale output. Your VP picks topics based on outdated priorities. The performance report flags changes you already know about. The competitive scan tracks competitors who no longer matter. Fifteen minutes a week prevents this. Skip it for a month and you spend an hour cleaning up. Skip it for a quarter and you are better off deleting the tasks and starting over.

Section 2 — Sales: The Pattern Generalizes

Your marketing pipeline now runs on its own cadence. Content briefs appear Monday morning. Performance reports land Friday afternoon. Competitive intelligence arrives on the first of every month. That is the end of the marketing build.

It is also the beginning of something more interesting. The approach you used to build the marketing system — context, constraints, tools, pipeline, measurement, automation — was never really about marketing. It was a system-design approach that happened to produce a marketing VP. Other functions in your business have the same shape: recurring tasks, quality standards, multi-step workflows, outcomes worth measuring. The question is whether the same approach extends.

If the playbook extends into a domain it wasn't designed for, the system is real. Not a demo. This section is the proof. You extend your VP into sales support, using the same Rules → Skills → Agents approach. Marketing generates interest. Sales converts interest into revenue. They share an audience, a voice, and a set of content assets. Connecting them is not a stretch — it is the obvious next step.

This is a worked example, not a feature expansion. Every move in this section has a direct analog in the marketing build you already completed: sales-voice rules mirror brand-voice rules, the Lead Qualifier mirrors the Campaign Strategist's structured-decision shape, the handoff boundary mirrors the pipeline wiring from Chapter 5. If you catch yourself thinking "I've done this before" — good. That is the point. The retrospective in Section 3 will be earned precisely because the pattern you're about to apply is the same one you already know.

You are reusing the whole stack
The sales extension does not build a new stack. It adds files to the stack you already own. New rule files land in .claude/rules/ beside your brand-voice and process-rules entries. The Lead Qualifier lands in .claude/skills/ beside the content-brief, voice-checker, and social-media skills. Any future sales agent lands in .claude/agents/ beside the campaign-strategist, content-repurposer, and distribution files. Cowork Memory continues to carry your strategic context across turns, and Scheduled Tasks in the Cowork sidebar can fire the sales workflow on a cadence once it is proven. Every surface from the marketing build is a reuse target here — no new primitive is introduced.

Sales-Specific Rules

Your existing rules govern marketing output. Sales content shares the same brand identity, but the register shifts. Marketing educates. Sales converts.

Marketing voice says: "Freelancers who categorize transactions weekly spend 20 minutes on tax prep instead of 8 hours." That is informative. It builds awareness.

Sales voice says: "Your tax season last year took 8 hours. With Tideway, next quarter's prep takes 20 minutes. Here is how to start." That is direct. It addresses a specific person, names the outcome, and pushes toward a decision.

Both voices belong to the same brand. But they serve different moments in the buyer's journey. Your rules need to encode that difference.

Create .claude/rules/sales-voice.md with three sections:

  • Tone adjustments. Lead with the prospect's situation, use second person aggressively, name specific outcomes with numbers, and create urgency through deadlines and consequences rather than hype.
  • Sales-specific vocabulary. Use: proposal, ROI, implementation timeline, investment, onboarding, pilot, trial. Avoid: "circle back," "touch base," "no-brainer," "honestly," "cheap," "best-in-class."
  • Voice in Action examples that show the same brand speaking in two registers.

Create .claude/rules/sales-process.md for workflow:

  • Qualification first. Never produce sales collateral for an unqualified lead.
  • Proposal structure. Prospect's situation → recommended solution → expected outcomes with numbers → investment and timeline → one specific next step.
  • Follow-up cadence. Same-day initial contact, 3 days after the proposal, 7 and 14 days for no-response, then close.
  • Content reuse. Sales collateral references and links to published marketing content.

Generate both files in one /skill-creator session or with a direct prompt that asks Cowork to review your existing rules and CLAUDE.md first, then create the two new files so they complement, not contradict, the marketing playbook.

Read the Voice in Action examples carefully after Cowork generates them. The marketing and sales registers should feel like the same person talking in two different contexts — a conference presentation versus a one-on-one meeting. If the sales examples sound like they belong to a different company, the register shift is too aggressive. Dial it back.

Sales Collateral from Marketing Assets

Your marketing pipeline already produces content. Sales collateral takes that same material and reshapes it for a buyer who already knows they have a problem and is evaluating solutions. The information is not new. The framing is.

A blog post titled "Why Freelancers Should Automate Their Bookkeeping" educates a broad audience. A one-pager titled "How Tideway Saves Freelance Designers 12 Hours a Month" targets a specific prospect meeting. Same underlying data. Different job.

The easy path: /skill-creator. By this point in the course, /skill-creator is the default route for every new Skill. Open a new conversation, type /skill-creator, and paste a prompt asking for a Sales Collateral Generator that accepts one marketing asset as input, extracts core claims and data points, produces one of three outputs (one-pager, proposal section, or pitch talking points) based on what the user specifies, applies the sales-voice rules, and flags claims that need prospect-specific customization. Include a "Customize Before Sending" section that names 2-3 fields to personalize per prospect.

Test it with a real asset — the Tideway tax prep blog post generating a one-pager targeted at a freelance web developer. The output carries a prospect-facing headline, three benefit blocks tied to numbers, and a CTA tailored to the prospect's situation. The "Customize Before Sending" section flags the generic stats so you replace them with the prospect's actual reported numbers before sending.

Sales collateral with uncustomized placeholder numbers is worse than no collateral. A prospect who sees "clients save 12 hours per month" when they only spend 4 hours on bookkeeping will not trust the rest of your numbers. Always customize the flagged fields before sending.

Lead Qualification Skill

Marketing generates leads. But a lead is just a name until you know whether they can buy, want to buy, and are ready to buy. Qualification separates prospects worth your time from contacts who are not ready yet.

The BANT framework gives you four criteria: Budget, Authority, Need, Timeline. Your CLAUDE.md already describes your ideal customer. This skill applies that profile to actual lead information and produces a structured assessment.

Run /skill-creator again and build a Lead Qualifier that:

  • Accepts lead information: a form submission, a pasted email, or meeting notes.
  • Evaluates the lead against the four criteria on a 1-3 scale.
  • Produces a total score (4-12) with a classification: 10-12 Hot, 7-9 Warm, 4-6 Cool.
  • Explains each score in one sentence, citing specific evidence from the lead data.
  • Recommends a concrete next action. Not "follow up," but a specific send-this-asset-and-ask-this-question action.
  • Flags missing information with the question that would fill the gap.

Test it on a sample inbound submission — Sarah Chen, solo freelance graphic designer who found your tax prep blog post and spent a weekend searching receipts last April. The output scores Need high, flags Timeline as a positive signal, notes Budget needs more information, and scores Authority clear. The recommended action is concrete: send the freelancer one-pager, ask about her monthly revenue range, emphasize that starting now means next April is a 20-minute task.

The Lead Qualifier works best when your CLAUDE.md includes a clear ideal customer profile. If your CLAUDE.md says "we serve freelancers earning $75K-$250K," the skill can evaluate budget signals against that range. If your CLAUDE.md just says "we serve freelancers," the skill has nothing specific to score against. Invest five minutes updating your CLAUDE.md with concrete qualification criteria before building this skill.

This is the pattern in miniature. Rules define the criteria. The Skill applies them consistently to every lead. You get the same structured evaluation whether you are qualifying one lead at 8 AM or twenty at midnight. The judgment is codified, not improvised.
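The scoring rubric the skill encodes can be pictured in a few lines of Python. The numbers here are hand-assigned for illustration; in Cowork the skill derives them with judgment from the lead text:

```python
def classify_lead(budget: int, authority: int, need: int, timeline: int) -> tuple[int, str]:
    """Each BANT criterion scores 1-3, so totals run 4-12."""
    for score in (budget, authority, need, timeline):
        assert 1 <= score <= 3, "BANT scores are on a 1-3 scale"
    total = budget + authority + need + timeline
    label = "Hot" if total >= 10 else "Warm" if total >= 7 else "Cool"
    return total, label

# Hand-assigned scores for a lead like Sarah Chen: strong need and timeline,
# clear authority, budget unknown and scored conservatively.
print(classify_lead(budget=1, authority=3, need=3, timeline=3))  # (10, 'Hot')
```

The classification thresholds come straight from the skill spec above; only the example scores are invented.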

The Handoff

You now have marketing tools that generate leads and sales tools that qualify and convert them. The gap between those two systems is the handoff — the moment a lead moves from "someone who read our blog post" to "someone a salesperson should call."

Get this wrong and leads fall through. Marketing produces interest that nobody follows up on. Sales chases contacts who are not ready. Both sides blame the other.

The handoff is not a tool. It is a set of rules that define when a lead crosses the line and what information travels with them.

Create .claude/rules/handoff-rules.md. Define:

  • Triggers. A Lead Qualifier score of 7+, a direct request for pricing, a demo, or a proposal, or 3+ content engagements in 7 days.
  • Information that travels with the lead. BANT scores with evidence, content engagement history, original source, suggested first action, and direct quotes from the inquiry.
  • What sales must not do. Contradict published marketing messaging, abandon brand voice, or make claims marketing has not published.
  • What marketing must not do. Hand off unqualified leads to fill a quota, or stop nurturing after handoff.
  • A feedback loop. Record which content and collateral closed each deal and feed that back into Chapter 6's measurement framework.

This file does not create a new tool. It creates a boundary. Your marketing pipeline from Chapter 5 and your sales skills operate on either side of it, and the handoff rules ensure nothing gets lost in between. The feedback loop closes the circuit back into the measurement framework from Chapter 6, so the two systems learn from each other over time.
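The trigger half of the handoff rules reads naturally as a predicate. A Python sketch with hypothetical field names — the real rules live in prose inside the rule file, not in code:

```python
from datetime import datetime, timedelta

def ready_for_handoff(score: int, asked_for_pricing_or_demo: bool,
                      engagements: list[datetime], now: datetime) -> bool:
    """True when any of the three handoff triggers fires."""
    recent = [t for t in engagements if now - t <= timedelta(days=7)]
    return (score >= 7                    # Lead Qualifier score of 7+
            or asked_for_pricing_or_demo  # direct pricing/demo/proposal request
            or len(recent) >= 3)          # 3+ content engagements in 7 days

now = datetime(2026, 4, 19)
touches = [now - timedelta(days=d) for d in (1, 2, 10)]  # only two inside the window
print(ready_for_handoff(6, False, touches, now))  # False -- no trigger fires
print(ready_for_handoff(9, False, touches, now))  # True -- score trigger
```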

The feedback loop at the bottom is where marketing and sales stop being separate functions and start being one system. When you know that prospects who read the tax prep blog post close at twice the rate of prospects who came through LinkedIn ads, your marketing pipeline prioritizes more blog content. When you know that proposals mentioning "20 minutes versus 8 hours" close better than proposals mentioning "automated categorization," your sales collateral skill emphasizes the time comparison. Each side teaches the other.

You just did something more important than adding sales support. You applied the approach to a function you had not built for before — and it worked. That is the most valuable skill in this course: looking at any repeatable business function and saying "I know how to systematize this." The playbook extended into a domain it wasn't designed for. The system is real. Not a demo.

Now name what you just did.


Section 3 — Capstone: The Pattern and the Complete System

Everything you built for marketing follows a single pattern. Everything you just built for sales follows the same pattern. The sales section earned the right to this retrospective — without that second application, the "pattern" would be a claim. You made it a demonstration.

The Pattern

Six steps:

1. Identify the need. What recurring task takes too much time, produces inconsistent results, or falls through the cracks? For marketing, it was content production.

2. Write rules. Codify the standards in .claude/rules/. Rules turn subjective quality ("does this sound like us?") into checkable criteria.

3. Build a skill or agent. Structured input/output tasks become skills. Multi-step decision-making tasks become agents. The Content Brief Generator is a skill. The Campaign Strategist is an agent.

4. Wire into the pipeline. Each tool's output is the next tool's input. Brief feeds the voice checker, voice checker feeds the strategist, strategist feeds the repurposer.

5. Measure. Track whether the tool improves outcomes. The Metrics Analyzer pulls your data and tells you whether the pipeline is working.

6. Automate. Schedule recurring execution so the work happens without you initiating it. You review and approve, but you do not have to remember to start.

Here is how that pattern played out across the course:

Need: consistent content production
  → Rules: brand voice + content standards (Chapter 2)
    → Skill: Content Brief Generator (Chapter 3)
      → Agent: Campaign Strategist (Chapter 4)
        → Pipeline: Brief → Voice → Plan → Content (Chapter 5)
          → Measure: Marketing Metrics Analyzer (Chapter 6)
            → Automate: Scheduled Tasks (Chapter 7)

You already applied this pattern a second time in Section 2 above. Same six steps, different function.

The pattern works for anything. A quick sketch across three more domains:

Customer Support. Need: inconsistent ticket responses and escalation. Rules: response tone and escalation criteria. Skill: Ticket Response Drafter. Agent: FAQ Updater that reviews resolved tickets weekly. Pipeline: new ticket → draft → human review → send; resolved tickets feed the FAQ Updater. Measure: response time, CSAT, escalation rate. Automate: FAQ Updater every Friday; weekly tone-deviation report.

Operations. Need: ad-hoc vendor evaluations. Rules: scoring categories with minimum thresholds. Skill: Vendor Scorecard Generator. Agent: Market Scanner that researches alternatives and flags higher-scoring options. Pipeline: quarterly scan → scorecard per alternative → ranked comparison. Measure: savings, evaluation time, SLA performance. Automate: Market Scanner quarterly.

Hiring. Need: inconsistent candidate screening. Rules: role requirements and scoring criteria. Skill: Resume Screener with advance/hold/pass output. Agent: Interview Question Generator that tailors questions to the candidate's actual resume. Pipeline: job posting → screening → custom questions for advancers. Measure: time-to-hire, screening consistency, hiring-manager satisfaction. Automate: daily during active hiring; weekly pipeline summary.

In each example, the same six steps produce a system tailored to that function. The tools are different. The rules are different. The pattern is identical.

That pattern is the real product of this course. Your AI VP of Marketing is one application. You can build an AI Director of Customer Support, an AI Operations Manager, or an AI Recruiting Coordinator using the same framework. The skills transfer because the architecture transfers.

Where You Could Have Stopped

Off-ramps in this course
  • End of Chapter 1: hired your VP, set up accountability. CLAUDE.md holds your business context, Memory carries learned decisions, the review framework keeps you in control.
  • End of Chapter 2: add Rules. Your VP follows your standards on any task. Enough for many small businesses — a disciplined drafting partner with your voice encoded.
  • End of Chapter 5: Skills, Agents, and the Pipeline are running. Type a topic, walk through four stages, get distribution-ready content across channels. This is where the course stops feeling like "a chatbot with rules" and starts feeling like a department.
  • End of Chapter 6: add Measurement. The VP learns from results. Reports tell you what moved and why. The loop closes back in: findings feed Memory, Memory feeds the next campaign plan.
  • End of Chapter 7 (here): full system, automated, extended beyond marketing. The pattern is named. You can build an AI Director of anything.

The Complete System

You started with an empty Cowork project and a prompt that said "interview me about my business." Seven chapters later, you have a marketing system that produces strategy, creates content, checks brand consistency, distributes across channels, measures results, runs on a schedule, and handles sales collateral. That is not a chatbot. That is an operating department.

The complete system as a flow:

CLAUDE.md (business context)
  + Memory (learned decisions)
    + Rules (standards and constraints)
      → Skills (structured tools)
        → Agents (autonomous workers)
          → Pipeline (connected workflow)
            → Scheduled Tasks (automation)
              → Memory (results feed back in)

Notice the loop. Memory feeds back into the system. When the Metrics Analyzer reports that email open rates dropped, that finding enters Memory. The next time the Campaign Strategist builds a plan, it accounts for that data. The system does not just execute — it accumulates knowledge.

Trace the journey back through the waypoints. You started in Chapter 1 — The Hire with an empty project and a CLAUDE.md. You wrote standards into Rules in Chapter 2 — The Playbook. You built Skills in Chapter 3 and Agents in Chapter 4, then wired them into a working choreography in Chapter 5 — The Pipeline. You gave the system numbers to judge itself by in Chapter 6 — Measurement. Each of those waypoints was an off-ramp where a reasonable person could have stopped and kept a working system. You did not stop. That is why the loop closes here.

Your Final Architecture

Every file on disk, every surface Cowork manages for you. You can point at any node below and know which chapter wrote it.

your-cowork-project/                  (FINAL)
├── CLAUDE.md
└── .claude/
    ├── rules/
    │   ├── brand-voice.md
    │   ├── process-rules.md
    │   ├── sales-voice.md
    │   ├── sales-process.md
    │   └── handoff-rules.md
    ├── skills/
    │   ├── content-brief/
    │   ├── voice-checker/
    │   ├── social-media/
    │   ├── sales-collateral/
    │   └── lead-qualifier/
    └── agents/
        ├── campaign-strategist.md
        ├── content-repurposer.md
        └── distribution.md
+ Cowork Memory (external)
+ Scheduled Tasks (surfaced in Cowork UI)
| File | Introduced in | Last touched in | Purpose |
|---|---|---|---|
| CLAUDE.md | Chapter 1 | Chapter 1 | Persistent business context Cowork reads on every turn. |
| .claude/rules/brand-voice.md | Chapter 2 | Chapter 2 | Brand voice and language standards. |
| .claude/rules/process-rules.md | Chapter 2 | Chapter 2 | Review workflow, approval gates, process standards. |
| .claude/rules/sales-voice.md | Chapter 7 | Chapter 7 | Sales register that builds on brand voice. |
| .claude/rules/sales-process.md | Chapter 7 | Chapter 7 | Qualification, proposal structure, follow-up cadence. |
| .claude/rules/handoff-rules.md | Chapter 7 | Chapter 7 | Marketing-to-sales boundary with feedback loop. |
| .claude/skills/content-brief/ | Chapter 3 | Chapter 3 | Content Brief Generator. |
| .claude/skills/voice-checker/ | Chapter 3 | Chapter 3 | Brand Voice Checker. |
| .claude/skills/social-media/ | Chapter 5 | Chapter 5 | Social Media Post Creator. |
| .claude/skills/sales-collateral/ | Chapter 7 | Chapter 7 | Sales Collateral Generator. |
| .claude/skills/lead-qualifier/ | Chapter 7 | Chapter 7 | BANT Lead Qualifier. |
| .claude/agents/campaign-strategist.md | Chapter 4 | Chapter 4 | Briefs into multi-channel campaign plans. |
| .claude/agents/content-repurposer.md | Chapter 4 | Chapter 4 | One piece into every format a campaign needs. |
| .claude/agents/distribution.md | Chapter 5 | Chapter 5 | Channel Distribution Planner. |
| Cowork Memory | Chapter 1 | Chapter 6 | Decisions, test results, audience insights retained by Cowork. |
| Scheduled Tasks | Chapter 7 | Chapter 7 | Recurring jobs surfaced in the Cowork UI. |

Point at any file in the tree above, read the row that names it, and know exactly which chapter wrote it. When a behavior surprises you, open that file — the method works because every piece of the system is visible, named, and dated.
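If you want to check the tree against your actual disk, a quick script can do it. This is a sketch, not part of the course's prompts; it assumes each skill directory contains a SKILL.md (the tree shows only the directories), and it takes your project root as an optional argument:

```shell
# Audit the final architecture: report which expected files exist on disk.
# Assumption: each skill folder holds a SKILL.md, per the Chapter 7 file list.
root="${1:-.}"
missing=0
for f in \
  CLAUDE.md \
  .claude/rules/process-rules.md \
  .claude/rules/sales-voice.md \
  .claude/rules/sales-process.md \
  .claude/rules/handoff-rules.md \
  .claude/skills/content-brief/SKILL.md \
  .claude/skills/voice-checker/SKILL.md \
  .claude/skills/social-media/SKILL.md \
  .claude/skills/sales-collateral/SKILL.md \
  .claude/skills/lead-qualifier/SKILL.md \
  .claude/agents/campaign-strategist.md \
  .claude/agents/content-repurposer.md \
  .claude/agents/distribution.md
do
  if [ -f "$root/$f" ]; then
    echo "OK       $f"
  else
    echo "MISSING  $f"
    missing=$((missing + 1))
  fi
done
echo "$missing of 13 expected files missing"
```

Run it from your project root (or pass the root as the first argument); anything it flags as MISSING points you at the chapter that should have written it.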

Audit Your Own VP

Before you close this course, run the capstone audit. The final step in this chapter's prompts sidebar produces .claude/memory/vp-capstone-audit-{date}.md — a concrete artifact that audits your VP against the seven chapters and records what you built, what you skipped, and what needs another pass. That file is the closer: the honest measurement of whether you built a department or just read about one.
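If you want to scaffold that artifact by hand before running the sidebar prompt, here is a minimal sketch. The ISO date filling the {date} placeholder and the per-chapter headings are assumptions; the course's own prompt defines the real structure:

```shell
# Scaffold the capstone audit file with one section per chapter.
# Assumption: {date} in the filename is an ISO date (YYYY-MM-DD).
day=$(date +%Y-%m-%d)
mkdir -p .claude/memory
audit=".claude/memory/vp-capstone-audit-$day.md"
{
  echo "# VP Capstone Audit ($day)"
  for ch in 1 2 3 4 5 6 7; do
    printf '\n## Chapter %s\n- Built:\n- Skipped:\n- Needs another pass:\n' "$ch"
  done
} > "$audit"
echo "wrote $audit"
```

Fill in the three bullets per chapter yourself, or hand the scaffold to your VP and let the sidebar prompt complete it.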


What just changed

You added three Scheduled Tasks to the Cowork sidebar — weekly content pipeline, monthly competitive scan, weekly performance report — plus a monthly summary. You extended the Project with .claude/rules/sales-voice.md, .claude/rules/sales-process.md, .claude/rules/handoff-rules.md, .claude/skills/sales-collateral/SKILL.md, and .claude/skills/lead-qualifier/SKILL.md. The tree above shows the final architecture. The table above names every file. The pattern named in Section 3 is the approach that produced all of it.


What's Next

You have a complete system. The course ends, but your work does not.

Connectors. Right now, your VP works inside Cowork. Future Cowork updates will add connectors for external tools — Google Drive, Slack, Chrome. When those ship, your VP will pull data from your analytics dashboard, push content drafts to your CMS, and send reports to your Slack channel without you copying and pasting between windows. The architecture you built is ready for that. Your skills and agents already produce structured output — connectors just change where that output lands.

Scaling to new functions. You proved the extension pattern works when you added sales support in Section 2. Pick your next function. Customer support, operations, hiring, finance reporting — whatever takes the most time or produces the most inconsistent results. Follow the six-step pattern. You already know how.

Customization. Your system works year-round, but your business probably has seasonal patterns. Build seasonal campaign templates — a holiday promotion template, a back-to-school template, a year-end review template. Add industry-specific rules that account for compliance requirements, terminology standards, or audience expectations unique to your field.
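A seasonal template can live as one more skill next to the five you already have. Here is a sketch of what its SKILL.md might look like — the name, description wording, and steps are all hypothetical, but the frontmatter-plus-instructions shape matches the skills you built in Chapter 3:

```markdown
---
name: holiday-promotion
description: Plans a holiday promotion campaign using the brand voice rules
  and the campaign plan structure. Use when the user asks for a seasonal
  or holiday campaign.
---

# Holiday Promotion Template

1. Read CLAUDE.md and the files in .claude/rules/ for voice and process standards.
2. Ask for the promotion window, the offer, and the hero product.
3. Produce a campaign plan in the Campaign Strategist's format, then hand
   the plan to the Content Repurposer for channel drafts.
```

Swap the seasonal specifics (back-to-school window, year-end recap structure) and the same shell of a skill covers every template you listed.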

Share what you build. If you build something useful for a different function — a support system, an operations dashboard, a hiring pipeline — share it. The pattern is universal, but the implementations are specific to each business. The more examples exist, the easier it gets for everyone.

Seven chapters ago, your VP walked into an empty office with a blank whiteboard. You hired them. You wrote their handbook. You posted their policies on the wall, handed them procedures, and introduced them to the rest of the team. You watched them take on campaigns, run a pipeline, learn from results, put their own work on the calendar, and earn a second portfolio in sales. They are not a new hire anymore. They are a complete executive whose job description you can rewrite whenever the business needs it.

Thank you for building this with us. Your AI VP is ready.


This is Chapter 7 of 7 — the final chapter of the Your AI VP of Marketing course. Previous: Chapter 6 — Measurement