Caelan's Domain

Measuring What Matters: Tracking Your AI VP's Impact

Created: April 16, 2026 | Modified: April 16, 2026

Cowork Features Used: Skills, Memory

This is Part 13 of a 16-part series on building your AI VP of Marketing with Claude Cowork. Previous: Agent: Channel Distribution Planner | Next: Running on Autopilot


Quick Start
This article works best with a functioning pipeline (Articles 1-10) but can be followed with any marketing data.

Starter config
Sample marketing metrics CSV covering website traffic, email stats, social engagement, leads by channel, and revenue by channel.

Measure Before You Automate

You have a working content pipeline. Skills generate briefs and check brand voice. Agents plan campaigns and repurpose content across channels. The distribution planner from Article 12 schedules where everything goes. The temptation right now is to automate it -- set the whole machine on a timer and let it run while you focus on the business. That is Article 14. Do not skip to it yet.

Automating without measurement is like doubling your ad spend without checking which ads convert. You will produce more content, faster, across more channels. But you will have no idea whether any of it is working. You will not know which channel brings leads, which content format gets shared, or whether your email list is growing or bleeding subscribers. Speed without direction is just expensive noise.

This article gives you direction. You will define the metrics that matter for your business, build a skill that analyzes them, store your baseline numbers in Memory, and set up a reporting cadence that tells you when to celebrate, when to adjust, and when to kill something that is not working. Once you know what success looks like, automating becomes a force multiplier instead of a gamble.


Define Your KPIs

KPIs -- key performance indicators -- are the numbers that tell you whether your marketing is doing its job. The phrase sounds corporate, but the concept is simple: pick the numbers that connect marketing activity to business outcomes, and watch them over time.

Here are the metrics that matter for most small businesses running content marketing. You do not need all of them. Pick the three to five that match the goals you set in Article 1.

Website sessions by source. How many people visit your site, and where do they come from? The four sources that matter: organic search (Google found you), social (someone clicked a link on Instagram or LinkedIn), direct (someone typed your URL), and referral (another website linked to you). This tells you which channels actually drive traffic. If you are posting on three social platforms but 80% of your traffic comes from organic search, that is a signal about where your effort should go.

Email open rates and click-through rates. Open rate tells you whether your subject lines work. Click-through rate tells you whether your email content compels action. Industry averages hover around 20-25% open rate and 2-3% click-through rate for small businesses, but your own trend matters more than the benchmark. A 32% open rate dropping to 28% over three months is more useful information than knowing the industry average.

Social media engagement. Likes, comments, shares, and follower growth -- broken out by platform. Follower count alone means nothing. A thousand followers at a 1% engagement rate generate fewer interactions than three hundred followers at 5%. Track engagement rate (interactions divided by followers) rather than raw follower counts, and watch the trend month over month.
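The arithmetic behind that comparison is worth making concrete. A minimal sketch -- the follower counts and rates are the illustrative numbers from above, not real account data:

```python
def interactions(followers: int, engagement_rate_pct: float) -> float:
    """Expected interactions per post: followers x engagement rate."""
    return followers * engagement_rate_pct / 100  # rate given as a percentage

big_quiet = interactions(1000, 1.0)   # large but disengaged audience
small_loud = interactions(300, 5.0)   # small but engaged audience
print(big_quiet, small_loud)          # the smaller audience produces more
```

The larger account yields 10 interactions per post; the smaller one yields 15. That is why the engagement rate, not the follower count, is the number to track.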

Leads by channel. Where do your actual leads come from? Not website visitors -- people who filled out a form, booked a call, subscribed to a trial, or emailed you asking about pricing. This is the metric that separates marketing activity from marketing results. If LinkedIn generates three times more leads than Instagram despite half the posting frequency, that changes your entire distribution strategy.

Revenue by channel. If you can trace a sale back to a marketing channel, this is the metric that ends all debates. Not every business can track this cleanly -- if your sales cycle is six months and involves five touchpoints, attribution gets messy. But if you sell online, use discount codes, or ask "how did you hear about us?" on intake forms, you have enough data to make this useful.

Look at the marketing goals in your CLAUDE.md from Article 1. If your top goal is growing your email list, email metrics and lead sources are your primary KPIs. If your goal is driving website traffic from search, organic sessions and keyword rankings matter most. If you are trying to build community on social media, engagement rates are your north star.

Pick three to five. Write them down. You will tell your VP exactly what to track in the next section.


Build a Metrics Analysis Skill

You have built skills before -- in Article 5 you created the Content Brief Generator by hand, and in Articles 6 and 11 you used /skill-creator to speed things up. This time, you are building a skill that does something different from content creation. It reads performance data and tells you what the numbers mean.

Open your Cowork project and invoke /skill-creator. When it asks what the skill should do, paste this:

Build a skill called "Marketing Metrics Analyzer" that:
- Accepts marketing performance data (CSV, pasted table, or text summary)
- Compares current metrics to baseline values stored in Memory
- Identifies trends (improving, declining, flat) for each metric
- Highlights the top 3 wins and top 3 concerns
- Recommends specific actions based on the data
- Outputs a structured report with sections for each channel
- Flags any metric that has dropped 20% or more from baseline
- Notes any metric that has improved 20% or more from baseline

/skill-creator will ask you follow-up questions. Here are the answers you will likely need:

What inputs does the skill need? Marketing performance data in any format -- CSV, a pasted table, or a plain text summary. The skill should handle whatever the user gives it.

Should it reference anything from Memory? Yes. Baseline metrics stored in Memory. The skill compares current data to the baseline to identify trends.

What format should the output take? A structured report with these sections: Executive Summary (3-4 sentences), Channel Performance (one subsection per channel), Top 3 Wins, Top 3 Concerns, Recommended Actions, and a Data Quality note flagging any missing or suspicious numbers.

How should it handle missing data? Note the gap and analyze what it has. Do not refuse to produce a report because one metric is missing.

Review the generated skill, adjust anything that does not match your needs, and save it.

Under the hood
Here is what /skill-creator generates (or something close to it). If you prefer to build the skill manually, paste this into a new skill called metrics-analyzer:

Analyze the marketing performance data provided and produce a structured
performance report.

INPUTS
- Marketing data: CSV, table, or text summary (provided by user)
- Baseline metrics: retrieved from Memory

ANALYSIS PROCESS
1. Parse the provided data and identify all available metrics
2. Retrieve baseline values from Memory
3. Calculate period-over-period changes for each metric
4. Compare current values to baseline
5. Identify trends: improving (3+ months of growth), declining (3+ months
   of decline), or flat
6. Flag any metric that has moved 20%+ from baseline in either direction

OUTPUT FORMAT

## Marketing Performance Report — [Date Range]

### Executive Summary
3-4 sentences covering overall trajectory, biggest win, biggest concern,
and one recommended priority action.

### Channel Performance
One subsection per channel (Website, Email, Social, Leads, Revenue).
For each channel:
- Current metrics vs. baseline
- Trend direction and magnitude
- One-sentence assessment

### Top 3 Wins
The three metrics showing the strongest positive movement. For each:
what improved, by how much, and what likely caused it.

### Top 3 Concerns
The three metrics showing the most worrying trajectory. For each:
what declined or stalled, by how much, and what might be causing it.

### Recommended Actions
3-5 specific, prioritized actions based on the data. Each action should
name the channel, the metric it targets, and the expected impact.
"Post more on social media" is not specific enough. "Increase LinkedIn
posting from 2x/week to 4x/week to test whether engagement rate
scales with frequency" is.

### Data Quality Notes
Flag any missing metrics, suspicious numbers (e.g., a 500% spike that
might be a tracking error), or gaps in the data that limit the analysis.

RULES
- Compare to baseline from Memory. If no baseline exists, note this and
  analyze the data on its own merits.
- Be direct about what is working and what is not. Do not soften bad news.
- Every recommended action must tie to a specific metric and channel.
- If a metric has dropped 20%+ from baseline, flag it prominently.
- If a metric has improved 20%+ from baseline, call it out as a win.

Baseline Your Current State

A trend requires at least two data points. Before the metrics analyzer can tell you whether things are improving, it needs to know where you started. That starting point is your baseline, and you are going to store it in Memory so your VP can reference it in every future analysis.

Gather your current numbers. If you have real data from Google Analytics, your email platform, and your social media dashboards, export the last three months. If you are following along with the tutorial and want to test the skill first, use this sample data for a fictional business:

month,website_sessions_organic,website_sessions_social,website_sessions_direct,email_subscribers,email_open_rate,email_click_rate,instagram_followers,instagram_engagement_rate,linkedin_followers,linkedin_engagement_rate,leads_organic,leads_social,leads_referral,revenue_marketing_attributed
2026-01,1240,380,520,1850,34.2,4.1,2100,3.2,890,4.8,12,4,8,4200
2026-02,1310,420,490,1920,32.8,3.9,2180,3.0,920,5.1,14,5,7,4800
2026-03,1450,510,530,2010,35.1,4.5,2290,3.4,960,4.6,18,7,9,5600

This is a small business doing reasonably well. Organic traffic is growing. Social sessions are climbing. Email open rates are healthy but the click-through rate is inconsistent. Instagram followers are up but engagement is mediocre. LinkedIn has fewer followers but higher engagement. Leads are trending upward overall. Revenue attributed to marketing is growing month over month.

Not perfect numbers. Real numbers never are. The mix of strong and weak channels is what makes this data useful for testing.
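If you want to sanity-check the sample before handing it to your VP, the core of what the analyzer does is simple percentage math. Here is a minimal sketch in Python -- the column names match the sample CSV above (trimmed to two columns for brevity), and the 20% threshold mirrors the skill's flag rule:

```python
import csv
import io

# Two columns from the sample CSV above, for illustration.
SAMPLE = """month,website_sessions_organic,leads_organic
2026-01,1240,12
2026-02,1310,14
2026-03,1450,18
"""

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
first, last = rows[0], rows[-1]
for col in ("website_sessions_organic", "leads_organic"):
    change = pct_change(float(first[col]), float(last[col]))
    flag = " <-- 20%+ move" if abs(change) >= 20 else ""
    print(f"{col}: {change:+.1f}% since {first['month']}{flag}")
```

Run against the sample, organic sessions are up about 17% (below the flag threshold) while organic leads are up 50% (flagged). The skill does the same comparison, just against the baseline stored in Memory instead of the first row.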

Paste the CSV into your Cowork project along with this prompt:

Store this as our marketing baseline in Memory. These are our Q1 2026
metrics. Use the March values as the primary baseline for future
comparisons, and note the January-March trend for each metric.

[paste your CSV here]

After storing, confirm what you saved and highlight any metrics that
already show a clear trend direction.

Your VP will parse the data, store the baseline values in Memory, and give you a summary of what it sees. The March numbers become your reference point. The three-month window gives trend context.

Bring your own data
The sample CSV works for the tutorial, but for real insights, export data from Google Analytics, your email platform (Mailchimp, Constant Contact), and social media native insights. Your VP will analyze whatever you give it.

From this point forward, every time you run the metrics analyzer skill, your VP pulls these baseline values from Memory and compares new data against them. You do not need to re-paste the baseline. It persists.


The Reporting Framework

Data without rhythm becomes noise. You check your numbers once, get a snapshot, and then forget about it until something breaks. A reporting framework turns measurement from an event into a habit.

Three cadences cover everything you need.

Weekly: The Pulse Check

Once a week, spend five minutes reviewing the basics. You are looking for anomalies, not trends. Did traffic drop off a cliff on Wednesday? Did an email blast produce zero clicks? Is there a social post that got ten times the normal engagement?

Prompt your VP with something like:

Here is this week's data: [paste quick numbers or a screenshot summary]

Give me a 3-sentence pulse check. Anything unusual? Anything I should
act on this week?

The weekly check is not an analysis. It is a smoke detector. You are scanning for fires, not evaluating strategy. If nothing unusual happened, the answer is "steady week, no action needed" and you move on.

Monthly: The Channel Review

Once a month, run the full metrics analyzer skill with your latest data. This is where you compare current performance to baseline, look at trends, and decide whether to adjust anything.

Run the metrics-analyzer skill with this data:

[paste your monthly CSV or data summary]

Compare against our stored baseline. What improved? What declined?
What do you recommend we change?

The monthly report is your primary decision-making tool. It tells you where to double down, where to experiment, and where to cut your losses. If LinkedIn leads are up 40% from baseline while Instagram engagement is flat, that is a resource allocation decision staring you in the face.

Quarterly: The Strategy Review

Every three months, step back from the channel-level data and ask bigger questions. Are the goals from Article 1 being met? Has the market shifted? Should you add a channel, drop one, or change your content mix?

Review our last three monthly reports and our original marketing goals
from CLAUDE.md. Are we on track? What should change for next quarter?

Produce a quarterly strategy brief with:
- Progress against each goal (on track / behind / ahead)
- One thing to stop doing
- One thing to start doing
- One thing to keep doing

The quarterly review is where your VP earns its keep as a strategist, not just an analyst. It connects the data back to your business goals and recommends changes to the plan, not just the tactics.

What triggers action between cadences? Three scenarios should prompt an immediate review, regardless of the schedule:

  • Any metric drops 20% or more from baseline in a single period. Something broke and you need to find out what.
  • A channel consistently underperforms for three consecutive months. Flat is fine for a month. Flat for a quarter means the channel is not working.
  • A channel suddenly outperforms expectations. A metric jumping 30% above baseline is not just good news -- it is a signal to investigate what caused it and do more of that.
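Those three triggers can be expressed as a simple rule check. A hedged sketch -- the function name, the 30% jump threshold from the third bullet, and the ±5% "flat" band are illustrative choices, not part of any Cowork API:

```python
def review_triggers(baseline: float, history: list[float]) -> list[str]:
    """Return reasons to review a metric now, given its baseline and its
    most recent monthly values (oldest first)."""
    reasons = []
    current = history[-1]
    change = (current - baseline) / baseline * 100
    if change <= -20:
        reasons.append(f"dropped {abs(change):.0f}% from baseline")
    if change >= 30:
        reasons.append(f"jumped {change:.0f}% above baseline -- investigate the cause")
    # "Flat for a quarter": three straight months within an assumed +/-5% band.
    if len(history) >= 3 and all(abs(v - baseline) / baseline < 0.05 for v in history[-3:]):
        reasons.append("flat for three consecutive months")
    return reasons

print(review_triggers(100.0, [98.0, 101.0, 99.0]))  # flat quarter -> review
print(review_triggers(100.0, [95.0, 90.0, 75.0]))   # 25% drop -> review
```

In practice your VP applies these rules in prose rather than code, but writing them down once forces you to pick concrete thresholds instead of reviewing "when things feel off."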

Off-Ramp 3: What You Have Built

What you have built: A complete content marketing pipeline with measurement -- you know what is working, what is not, and where to focus. This is a professional-grade marketing system.

What is ahead: Articles 14-16 automate recurring work, extend into sales, and teach you to expand the system yourself. But your pipeline is already producing measurable results.

This is a real stopping point, and it is a good one. Look at what you built across thirteen articles. Your VP knows your business, your audience, and your voice. It follows brand standards and process rules on every task. It generates briefs, plans campaigns, repurposes content across channels, and distributes it on a schedule. And now it measures the results.

Most small businesses do not have this. Most small businesses post on social media because they feel like they should, send emails when they remember, and check their analytics once a quarter when someone asks "how is marketing going?" You have a system. You have data. You have a VP that tells you what the data means and what to do about it.

If you stop here, you are running a more measured, more intentional marketing operation than companies with ten times your budget and a full-time marketing team that never looks at the numbers. The pipeline produces. The measurement tells you whether the production matters.


What is Next

Your pipeline works. You can measure it. The logical next step is to stop initiating every task by hand.

Right now, you run the pipeline when you sit down and open Cowork. You paste in data when you remember to check the numbers. You trigger the distribution planner when you have time. Every piece works, but it all depends on you remembering to start it.

Article 14 changes that. You will learn Scheduled Tasks, a Cowork feature that triggers recurring work on a cadence you define. Your VP produces content drafts every Monday, runs competitor scans every two weeks, and generates monthly performance reports -- all without you opening the project and typing a prompt. The system you built becomes the system that runs itself, with you reviewing output instead of initiating it.


This is Part 13 of 16 in the Your AI VP of Marketing series. Next: Running on Autopilot: Automating Recurring Marketing Work