Marketing Performance Review for General Contracting
Marketing Performance Review turns raw marketing and Business Development data into useful insights and decisions. It defines KPIs, pulls data from core systems, analyzes trends by channel and segment, and structures recurring review meetings. The process connects marketing activity to pipeline and revenue so budget and effort can be adjusted based on facts, not gut feel. When the process is followed, the company has a clear view of what marketing is delivering and where to focus next.
Define marketing performance KPIs and reporting structure
Step 1: Review business and marketing objectives
Look at the current business plan and annual/quarterly marketing objectives. Note which outcomes leadership cares about most (e.g., qualified opportunities, win rate, sector mix, inbound RFQs). This ensures KPIs will be tied to actual goals, not just activity counts.
Step 2: List potential marketing and Business Development metrics
Draft a list of possible metrics across the funnel: website leads, form fills, event leads, referrals, meetings set, opportunities created, hit rates, revenue by lead source, and cost per lead for major channels. Capture more than you’ll ultimately use.
Step 3: Select a focused set of KPIs
From the long list, choose a small set (typically 8–15) of KPIs that best reflect marketing’s contribution and can realistically be measured with available data. Include at least one metric for volume, quality, conversion, and outcomes (e.g., awarded work).
Step 4: Define reporting frequency and audience
Decide how often each KPI will be reported (monthly, quarterly) and who should see which level of detail (marketing team, Business Development leadership, executives). Document which reviews will be tactical and which will be high-level.
Step 5: Document KPI definitions and reporting structure
Write a one- to two-page “Marketing KPI & Reporting Guide” that defines each metric, data source, calculation method, and review cadence. Store it in the shared marketing folder and share it with marketing, Business Development, and leadership.
Step 6: Revisit KPIs annually or when strategy shifts
Set a reminder to review KPI definitions at least once a year or after major strategy changes. Adjust metrics or add/remove KPIs so reporting stays aligned with current business priorities.
Set up and maintain marketing performance dashboards
Step 1: Choose tools for dashboards or reports
Decide whether you will use built-in CRM dashboards, BI tools (if available), or structured spreadsheets. Pick an approach that the team can maintain without specialized skills and that leadership can easily access.
Step 2: Design high-level and detailed views
Plan at least two levels of reporting: a high-level view for executives (key KPIs, trends) and a more detailed view for marketing/Business Development (channel-level metrics, campaign performance). Sketch what each view should show before building.
Step 3: Connect dashboards to core data sources
Where possible, connect dashboards directly to CRM, web analytics, and email tools so data updates automatically. If connections aren’t available, design simple manual data import steps that can be repeated consistently.
Step 4: Build initial dashboard layouts
Create the first version of dashboards with charts and tables for the selected KPIs. Keep layouts clean and readable; avoid cluttering with every metric you could track. Label axes and legends clearly so non-analysts can interpret them.
Step 5: Test dashboards with sample periods
Populate dashboards with data from a recent period (e.g., last quarter) and review them with the marketing team. Check for calculation errors, confusing visuals, or missing context. Adjust layout and formulas as needed.
Step 6: Document maintenance steps and schedule
Write down how and when dashboards should be updated (e.g., “update monthly by the 5th,” “pull CRM report X, GA report Y”). Assign ownership for maintaining each view so it doesn’t fall between roles.
Collect and validate data from core systems
Step 1: List all required data sources and reports
From your KPI guide, list each system (CRM, Google Analytics or equivalent, email platform, event tools, call tracking) and the specific reports you need from each. Include timeframes and filters (e.g., last quarter, only qualified opportunities).
Step 2: Pull data for the review period
Export or run reports from each system covering the review window (monthly or quarterly). Use consistent date ranges across tools so comparisons are meaningful. Save raw exports in a dated “Performance Data” folder.
Step 3: Check data completeness and anomalies
Scan exports for obvious issues: empty columns, unexpected zeroes, sudden spikes or drops that don’t match reality, or missing months. If something looks off, confirm with system admins or users before proceeding.
Step 4: Reconcile basic counts across systems
For key funnels, compare counts across tools where they should align (e.g., number of form submissions in web analytics vs. number of new web leads in CRM). Note and investigate discrepancies that could signal tracking or process issues.
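As an illustration, this reconciliation can be sketched in a short script. The report names, monthly counts, and the 5% tolerance below are all hypothetical; adjust them to your actual exports and accuracy expectations.

```python
# Hypothetical monthly lead counts from two systems that should roughly agree.
web_analytics_leads = {"2026-01": 42, "2026-02": 38, "2026-03": 51}
crm_web_leads = {"2026-01": 40, "2026-02": 38, "2026-03": 44}

def reconcile(source_a, source_b, tolerance=0.05):
    """Flag periods where two systems' counts diverge by more than the tolerance."""
    discrepancies = {}
    for month in sorted(set(source_a) | set(source_b)):
        a, b = source_a.get(month, 0), source_b.get(month, 0)
        if max(a, b) and abs(a - b) / max(a, b) > tolerance:
            discrepancies[month] = (a, b)
    return discrepancies

# March diverges by ~14%, well past the 5% tolerance, so it gets flagged
# for investigation (a tracking gap, a process delay, or a definition mismatch).
print(reconcile(web_analytics_leads, crm_web_leads))
```

Small divergences are normal (bots, manual entry lag); the point of the tolerance is to separate noise from discrepancies worth chasing down.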
Step 5: Standardize formats and key fields
Clean data as needed: standardize date formats, normalize lead source labels, and align stage names where possible. Make light adjustments in a working spreadsheet so you can reliably link datasets when needed.
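A minimal sketch of this cleanup step, assuming US-style dates in the export and a hand-maintained mapping of raw lead-source labels to standard names (both the mapping entries and the field names are illustrative):

```python
import datetime

# Hypothetical mapping from raw labels (lowercased) to standardized names.
SOURCE_MAP = {
    "web site": "Website", "website": "Website", "web": "Website",
    "referral": "Referral",
    "tradeshow": "Event", "event": "Event",
}

def clean_record(record):
    """Normalize the date to ISO 8601 and the lead-source label on one exported row."""
    cleaned = dict(record)
    # Parse a US-style MM/DD/YYYY date into ISO format (YYYY-MM-DD).
    cleaned["date"] = datetime.datetime.strptime(
        record["date"], "%m/%d/%Y"
    ).date().isoformat()
    raw = record["lead_source"].strip().lower()
    # Unmapped labels pass through (trimmed) so nothing is silently dropped.
    cleaned["lead_source"] = SOURCE_MAP.get(raw, record["lead_source"].strip())
    return cleaned
```

Keeping the mapping in one place means new label variants found during review can be added once and applied to every future export.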
Step 6: Archive raw and cleaned data files
Save both the raw exports and any cleaned/combined files in a structured folder for that period. This allows you to revisit or debug past reports without re-pulling everything.
Resolve key data quality issues and standardize reporting periods
Step 1: Identify recurring data quality problems
Review recent reporting cycles for patterns: missing lead sources, inconsistent campaign names, delays in updating opportunity stages, or duplicate records. List the top issues that impact accuracy most.
Step 2: Define standard reporting periods and cut-off dates
Agree on how you will define reporting windows (calendar month, quarter) and at what cut-off date you will “freeze” data for reporting (e.g., include all updates made by the 3rd business day of the new month). Document these rules.
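The cut-off rule can be made unambiguous with a small helper. This sketch implements the "Nth business day after period end" example from above; the weekend-only definition of non-business days is an assumption (it ignores holidays):

```python
import datetime

def freeze_date(period_end, business_days=3):
    """Return the data cut-off: the Nth business day (Mon-Fri) after the period ends.

    Note: this simple version does not account for public holidays.
    """
    day = period_end
    counted = 0
    while counted < business_days:
        day += datetime.timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            counted += 1
    return day

# January 2026 ends on a Saturday, so the 3rd business day lands on Wed, Feb 4.
print(freeze_date(datetime.date(2026, 1, 31)))
```

Publishing the computed freeze date alongside each report removes arguments about whether late CRM updates "count" for the period.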
Step 3: Create simple data-cleanup routines
For the main issues (e.g., missing lead sources, “Other” overuse), define quick routines such as weekly views in CRM that show records needing updates. Assign someone to clean or chase owners for corrections before the reporting deadline.
Step 4: Align naming conventions for sources and campaigns
Work with marketing ops and Business Development to enforce consistent naming for campaigns and lead sources. Clean up obvious inconsistencies in current-period data so metrics group correctly by channel or campaign.
Step 5: Document how to handle unavoidable gaps
Write guidelines for how to note and handle unavoidable data gaps (e.g., legacy records without source, one-off tracking failures). Decide whether to exclude certain data from analysis or to clearly flag it in footnotes.
Step 6: Incorporate fixes into upstream processes
Where possible, push fixes into the processes that create data (lead source setup, campaign setup, CRM usage training) so fewer issues occur in future periods. Update those SOPs as needed to reflect new standards.
Analyze performance by channel, campaign, and segment
Step 1: Group data by lead source and channel
Using your cleaned data, group leads, opportunities, and wins by lead source/channel (e.g., website, referral, event, email campaign, bid platform). Calculate basic metrics such as volume, conversion to opportunity, and wins by channel.
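The grouping described here can be done in a spreadsheet pivot table or a few lines of code. The sketch below assumes cleaned records with hypothetical field names (`source`, `opportunity`, `won`):

```python
from collections import defaultdict

# Hypothetical cleaned lead records for the review period.
leads = [
    {"source": "Website", "opportunity": True, "won": False},
    {"source": "Website", "opportunity": False, "won": False},
    {"source": "Referral", "opportunity": True, "won": True},
    {"source": "Referral", "opportunity": True, "won": False},
    {"source": "Event", "opportunity": False, "won": False},
]

def channel_metrics(records):
    """Summarize volume, opportunity conversion, and wins per lead source."""
    stats = defaultdict(lambda: {"leads": 0, "opps": 0, "wins": 0})
    for r in records:
        s = stats[r["source"]]
        s["leads"] += 1
        s["opps"] += r["opportunity"]
        s["wins"] += r["won"]
    for s in stats.values():
        s["opp_rate"] = round(s["opps"] / s["leads"], 2)
    return dict(stats)

print(channel_metrics(leads))
```

Whatever the tool, the output should make both volume and quality visible side by side, so a high-volume, low-conversion channel is not mistaken for a top performer.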
Step 2: Evaluate major campaigns run in the period
Identify campaigns executed during the review window and pull their key performance metrics: sends, opens, clicks, responses, meetings, and opportunities influenced or created. Compare results to their stated objectives where available.
Step 3: Slice performance by sector and client type
Where data allows, break out results by sector (healthcare, industrial, interiors) and client type (existing vs. new, owner vs. architect-introduced). Note which combinations produce the most promising leads and wins.
Step 4: Look for outliers and trends
Identify channels or campaigns that significantly over- or under-perform the average. Note whether patterns are consistent with previous periods or represent new shifts. Pay attention to both volume and quality, not just one or the other.
Step 5: Summarize key findings in plain language
Write a short summary of what you’ve learned, such as “Referrals and existing clients drive most awarded work,” or “Trade show X delivered many leads but low conversion.” Use simple statements that can be quickly understood in review meetings.
Step 6: Prepare visual summaries for review
Create a handful of clear charts or tables that show performance by channel, campaign, and segment. Keep visuals simple and annotated so reviewers can quickly grasp the story without reading spreadsheets line-by-line.
Evaluate marketing contribution to pipeline and revenue
Step 1: Define what counts as “marketing-influenced”
Agree with Business Development on criteria for marketing-influenced opportunities (e.g., sourced by marketing channel, nurtured by marketing content, or significantly supported in pursuit). Write down these rules so they’re applied consistently.
Step 2: Pull opportunity and revenue data by source
From the CRM, export opportunities and awards for the review period with fields for amount, sector, stage, and lead source. Include both new wins and significant expansions or repeat projects where the origin can be identified.
Step 3: Group pipeline and awards by origin
Summarize opportunities and awarded revenue by lead source/channel and, where relevant, by campaign. Calculate the share of total pipeline and awards that is directly or indirectly driven by marketing vs. strictly “relationship-only” deals.
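The share calculation itself is simple once origins are classified. In this sketch, the revenue figures and the set of channels counted as "marketing" are illustrative; the classification rules should come from the criteria agreed in Step 1:

```python
# Hypothetical awarded revenue by origin for the review period.
awards_by_origin = {
    "Website": 1_200_000,
    "Event": 800_000,
    "Referral": 2_500_000,
    "Relationship-only": 5_500_000,
}

# Which origins count as marketing-driven is a policy decision, not a formula.
MARKETING_ORIGINS = {"Website", "Event", "Referral"}

def marketing_share(awards, marketing_origins):
    """Return the fraction of total awarded revenue from marketing-driven origins."""
    total = sum(awards.values())
    marketing = sum(v for k, v in awards.items() if k in marketing_origins)
    return round(marketing / total, 3)

print(marketing_share(awards_by_origin, MARKETING_ORIGINS))  # 4.5M of 10M
```

Tracking this one number over several periods gives leadership a quick read on whether marketing's contribution is growing, stable, or shrinking.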
Step 4: Compare to previous periods and targets
Look at how marketing-originated and influenced pipeline has trended over the last few periods. Compare to any targets set in your annual or quarterly plans. Note whether marketing’s share is growing, stable, or shrinking.
Step 5: Identify high-value sources and weak contributors
Highlight which sources and efforts generate the most high-quality pipeline and awarded work, not just leads. Also identify sources that produce high volume but low conversion or misfit opportunities, which may need rethinking.
Step 6: Capture implications for budget and focus
Summarize what the numbers suggest for future marketing investment (e.g., “Events A and B justify repeat,” “Channel X should be reduced,” “Referrals deserve additional support”). These notes will feed into decisions during review meetings.
Prepare marketing performance reports and summaries
Step 1: Define report formats for each audience
Decide on the format and length for executive, Business Development, and marketing team reports (e.g., 1–2 slide executive summary, 5–10 slide detailed deck, or a structured written memo). Match detail level to what each audience will actually read.
Step 2: Draft a top-level executive summary
Write a brief overview that answers three questions: What happened? Why? What should we do next? Include 3–5 key bullets with supporting charts or numbers for executives who only have a few minutes.
Step 3: Develop detailed sections for the marketing team
In the more detailed report, include breakdowns by channel, campaign, sector, and lead source, along with commentary on patterns and anomalies. Highlight both successes and problems that need attention.
Step 4: Include context and comparisons
Where possible, show comparisons to previous periods and to targets, not just standalone numbers. Note any external factors (e.g., seasonality, major project wins or losses) that help explain shifts in performance.
Step 5: Review reports for clarity and accuracy
Proofread for errors, confusing charts, or jargon. Have at least one other team member sense-check the story and numbers, especially any claims about cause and effect.
Step 6: Save and distribute reports before review meetings
Store final reports in a shared “Performance Reports” folder with clear file names (e.g., “2026Q1_Mkt_Performance_ExecSummary”). Send them to participants ahead of the review meeting so they can skim before the discussion.
Run recurring marketing performance review meetings
Step 1: Schedule regular review meetings
Set a recurring cadence (e.g., monthly tactical review for the marketing team and quarterly strategic review with Business Development and leadership). Put these meetings on calendars well in advance to ensure attendance.
Step 2: Prepare and share agenda in advance
Create a concise agenda that maps to the reports: KPI overview, channel performance, pipeline contribution, major wins/losses, issues, and decisions. Share the agenda and reports at least a day before the meeting.
Step 3: Facilitate a focused walk-through of key metrics
During the meeting, briefly walk through top-level metrics and highlight the most important trends. Avoid reading every number; focus on where things changed significantly or where results differ from expectations.
Step 4: Encourage discussion and root-cause thinking
Invite Business Development and operations perspectives on why certain channels or campaigns performed as they did. Ask “what might explain this?” rather than jumping to conclusions. Capture explanations that are supported by both data and field experience.
Step 5: Drive toward specific decisions
As themes emerge, steer conversation toward concrete decisions: activities to stop, channels to invest in, campaigns to repeat or retire, and process improvements to pursue. Clarify which decisions are final vs. which need more investigation.
Step 6: Capture actions, owners, and deadlines
Before the meeting ends, summarize agreed actions, assign owners, and set deadlines. Record these items in a shared action log that will be referenced in future meetings to track follow-through.
Document decisions, action items, and experiments
Step 1: Create or update a performance action log
Maintain a simple log (spreadsheet or project tool) with columns for date, decision/action, owner, due date, related KPI, and status. This becomes the single source of truth for what came out of reviews.
Step 2: Log all agreed changes and tests
Right after each review, enter all decisions and experiments into the log: e.g., “pause sponsorship X,” “run follow-up campaign for sector Y,” “test new landing page layout,” “tighten lead qualification rules.”
Step 3: Define expected impact and measurement
For each action or experiment, note which KPIs or metrics you expect to move and how you will measure success (e.g., improvement in conversion rate, cost per lead, or meetings booked). This turns vague intentions into testable changes.
Step 4: Share the updated log with stakeholders
Send the updated action log to marketing, Business Development, and any other involved stakeholders. Highlight new items added and items carried over from previous periods.
Step 5: Integrate actions into operational plans
Ensure that actions that affect other processes (campaign planning, website updates, lead handling) are added as tasks in those workflows. Assign project IDs or tags so you can link results back to specific changes.
Step 6: Review and refine the log structure periodically
After a few cycles, adjust the log’s structure to make it easier to use (e.g., grouping by theme, adding priority fields). Keep it simple enough that it is actually maintained, not abandoned.
Track follow-through on actions and update future plans
Step 1: Check status of action items before each review
Before the next performance review meeting, go through the action log and update statuses based on progress: Not Started, In Progress, Completed, or Blocked. Reach out to owners for quick status updates where needed.
Step 2: Highlight overdue or blocked items
Identify items that are past due or stuck and note them for discussion. Consider whether they are still important, need to be re-scoped, or should be dropped in favor of higher-impact work.
Step 3: Assess impact of completed actions
For actions that have been implemented long enough to show effects, look at relevant KPIs and campaign results. Note whether expected improvements occurred, were mixed, or didn’t materialize.
Step 4: Incorporate learnings into upcoming plans
Use what you’ve learned to adjust quarterly marketing plans, channel budgets, and campaign ideas. For example, scale up experiments that worked and discontinue approaches that repeatedly underperform.
Step 5: Update documentation and SOPs where needed
If performance review findings lead to lasting changes in how you operate (e.g., new lead handling rules, different campaign standards), update relevant SOPs and training materials so the new way becomes standard practice.
Step 6: Communicate key changes to the broader team
Share a short summary of major changes and lessons with the wider marketing/Business Development team and, when relevant, operations. This keeps everyone aware of how performance data is shaping the company’s marketing approach.