A content audit is the systematic review of every published piece on a website against three benchmarks: its current performance, its potential performance, and the program's strategic priorities. The output is a segmentation of the content library into four action buckets (refresh, retire, merge, and maintain) with a prioritized work plan for the next 90 days of optimization. The audit produces decisions, not just data.
Most B2B SaaS audits do not. They run for two weeks, tag twelve hundred URLs, and the spreadsheet sits in a Notion page while the team gets pulled into next quarter's production calendar. By month six the data is stale and the team starts over. This is the operator framework for running a content audit that produces named decisions with named owners on a 90-day execution clock: six data sources, four-bucket segmentation, scored refresh prioritization, and the cadence that keeps the library compounding rather than decaying.
01 / What a content audit actually is
The distinction between input and output matters. A spreadsheet of every URL with traffic, ranking, and refdomain data is the input to an audit. An audit produces a list of 30 to 80 specific pieces with named action assignments, named owners, and named completion dates. Programs that confuse the input for the output produce documents that sit unused while the underlying problem (a library accumulating dead weight faster than it accumulates new authority) keeps compounding.
What the audit covers
A complete B2B SaaS content audit examines six dimensions of every published piece. Ranking performance over a 12-month trajectory. Traffic performance with source attribution. Engagement signals (scroll depth, bounce, time on page) from existing visitors. Pipeline contribution traced through the CRM. Content accuracy (broken links, outdated stats, deprecated product references). Technical health (Core Web Vitals, structured data, internal linking density).
The dimensions are non-substitutable. Auditing only on traffic misses pieces with low traffic but high conversion. Auditing only on rankings misses pieces that rank but no longer convert. Auditing only on engagement misses pieces with strong engagement but stagnant pipeline contribution. The full six-dimension picture is what produces defensible action decisions.
02 / Why most B2B SaaS audits produce spreadsheets nobody acts on
Three structural failure patterns explain why most audits never convert to action. Each is correctable, but each requires explicit ownership of the failure mode before the fix lands. We see the same patterns every time we inherit a stalled program, and they map directly onto our complete B2B SaaS content marketing playbook.
Pattern 1: Auditing without segmenting. The team pulls data on 1,200 URLs, sorts by traffic, and stops there. No URL gets assigned to a bucket. Without segmentation, the team cannot prioritize. The spreadsheet becomes a reference document rather than a work plan.
Pattern 2: Segmenting without prioritizing. The team segments URLs into buckets but treats every URL in the "refresh" bucket as equally important. Without prioritization scoring, the team picks refreshes intuitively, which usually means picking the URLs that produce the most current traffic (already at ceiling) rather than the URLs with the most refresh upside (in positions 6 to 15 that could move to 1 to 5).
Pattern 3: Prioritizing without owner accountability. The team produces a ranked refresh list but does not assign named owners with named completion dates. The list sits while production work pulls writer time onto new pieces. By month two, the team has shipped four new pieces and zero refreshes, and the audit data is now stale.
The fix is structural: every audit ends with a 90-day execution plan that names 8 to 15 specific pieces, assigns each to a named owner, and sets weekly check-ins for completion accountability. Audits without this scaffolding produce theater. Audits with it produce ranking gains and pipeline recovery. This is the implementation layer beneath the parent framework on content optimization for B2B SaaS programs.
03 / The four-bucket segmentation in detail
Every published piece on a B2B SaaS site falls into one of four action buckets after the audit. The proportions cited below are typical for a mature library (2+ years of content production) and vary by program age.
Maintain (10 to 20 percent of mature libraries). The piece is performing at or near its potential. Rankings are stable in positions 1 to 5. Traffic is steady or growing. Pipeline contribution is measurable. The only work the piece needs in the current quarter is light maintenance: stat updates, broken link fixes, minor copy refreshes. No structural changes warranted.
Refresh (40 to 60 percent, the bulk of the library). The piece has clear improvement upside. Rankings have declined, traffic has declined, content has aged, or the conversion path is broken. The piece is fundamentally sound but needs work to recover or extend ranking. Refresh budget gets allocated based on prioritization scoring (covered in chapter 06).
Merge (5 to 15 percent in libraries with 4+ years of content). The piece overlaps significantly with another piece on the same site targeting the same query cluster. Both pieces dilute each other's ranking. The audit identifies the dominant piece, the loser gets 301 redirected to the dominant, and the dominant piece absorbs the relevant content from the redirect. The largest single-quarter ranking gains we see on mature engagements come from consolidation work, not new content production.
Retire (5 to 15 percent). The piece cannot be saved at acceptable investment. Traffic is zero or near-zero. Pipeline contribution is zero. The topic is no longer strategic. The piece is removed via 410 Gone (if no backlinks worth preserving) or redirected to a related surviving piece (if backlinks exist and the topical fit is reasonable).
The decision logic
Three variables determine which bucket a piece lands in: current ranking position, current pipeline contribution, and current topic relevance to the strategic priorities.
Pieces in positions 1 to 15 with any pipeline contribution and topic relevance go to refresh, unless they are already performing at potential in stable positions 1 to 5, in which case they stay in maintain. The base is strong; refresh recovers and extends ranking efficiently.
Pieces in positions 20+ with zero pipeline contribution but topic relevance go to refresh or retire depending on backlinks. With 10+ referring domains, refresh and consolidate. With no backlinks, retire.
Pieces with topic overlap to another piece on the same site go to merge. The audit names which piece becomes the dominant survivor. For the boundary case between refresh and retire, see the refresh-versus-retire decision framework, which formalizes the cutoff.
Pieces with no topic relevance to current strategy go to retire, even if traffic is high. Content that does not support current strategic priorities is a distraction that competes for both writer attention and reader attention.
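Expressed as code, the decision logic is a short cascade. A minimal sketch, assuming one record per URL with hypothetical field names; the position bands and the 10-referring-domain cutoff are the ones above, and the `at_potential` flag collapses the stable-rankings, steady-traffic, measurable-pipeline test from the maintain bucket:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UrlRecord:
    url: str
    position: int                 # current average ranking position
    pipeline_usd: float           # attributed pipeline, trailing 12 months
    referring_domains: int
    topic_relevant: bool          # supports current strategic priorities
    at_potential: bool            # stable positions 1-5, steady traffic, measurable pipeline
    overlaps_with: Optional[str] = None  # URL of an overlapping piece, if any

def assign_bucket(r: UrlRecord) -> str:
    """Cascade: relevance first, then overlap, then position, pipeline, backlinks."""
    if not r.topic_relevant:
        return "retire"    # even high-traffic pieces retire when off-strategy
    if r.overlaps_with:
        return "merge"     # the audit names the dominant survivor separately
    if r.at_potential:
        return "maintain"  # light upkeep only this quarter
    if r.position <= 15 and r.pipeline_usd > 0:
        return "refresh"   # strong base; refresh recovers ranking efficiently
    if r.referring_domains >= 10:
        return "refresh"   # backlinks are an asset worth consolidating
    return "retire"
```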
04 / The six data sources every audit pulls
The data layer determines what decisions are possible. The audit framework here pulls from six specific sources, each contributing data the others cannot replicate.
Google Search Console. Current rankings, ranking trajectory over 12 months, click-through rate by query, impression data. Pull via the GSC Performance report exported to a spreadsheet. The 12-month trajectory matters more than the current snapshot because decay shows up in trajectory before it shows up in snapshot.
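For teams that prefer the API to the UI export, a minimal sketch of the same pull via the Search Console API; the site URL is a placeholder, and the credentials object is whatever OAuth or service-account setup Google's Python client accepts:

```python
from datetime import date, timedelta
from googleapiclient.discovery import build  # pip install google-api-python-client

def pull_gsc_pages(credentials, site_url: str, months: int = 12, limit: int = 500):
    """Page-level clicks, impressions, CTR, and average position for the trailing window."""
    service = build("searchconsole", "v1", credentials=credentials)
    end = date.today()
    start = end - timedelta(days=months * 30)
    response = service.searchanalytics().query(
        siteUrl=site_url,
        body={
            "startDate": start.isoformat(),
            "endDate": end.isoformat(),
            "dimensions": ["page"],
            "rowLimit": limit,
        },
    ).execute()
    # each row: {"keys": [url], "clicks": ..., "impressions": ..., "ctr": ..., "position": ...}
    return response.get("rows", [])
```

Running the same query over two consecutive six-month windows and diffing the position column per URL surfaces the trajectory the single snapshot hides.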
GA4. Sessions by URL, traffic source breakdown, bounce rate, average engagement time, conversion to next-action events. Pull via the Reports section, filtered by Page Path. GA4 is where engagement decay shows up early (rising bounce rate, declining time on page) before ranking decay follows.
The CRM (HubSpot, Salesforce, or equivalent). Pipeline contribution attribution by content asset. Lead source data showing which pieces the buyer touched before converting. This is the data most programs cannot pull because the attribution infrastructure was never built. Programs without CRM attribution should build it first using the measurement framework that ties content marketing to pipeline; audits run without pipeline data make refresh decisions that sacrifice conversion.
Ahrefs or Semrush. Backlinks acquired, referring domains, organic keyword count, ranking distribution. Pull via the Site Explorer URL-level reports. Backlink data determines whether pieces with declining performance should be refreshed (backlinks are an asset) or 301-redirected to consolidate equity.
On-page audit (Screaming Frog or equivalent crawler). Broken outbound links, deprecated product references, outdated screenshots, schema markup gaps, structured data errors. Crawl the site quarterly to catch these patterns. Outdated references compound credibility damage even when ranking signals look healthy.
PageSpeed Insights. Core Web Vitals (LCP, INP, CLS, FCP, TTFB). Pull per URL for the top 200 traffic pages. Technical health affects both ranking and conversion. Pages with degraded CWV scores frequently underperform their ranking potential by 15 to 40 percent.
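A minimal sketch of the per-URL pull against the PageSpeed Insights v5 API; the metric key names in the response are Google's field-data names, so verify them against the current API reference rather than hardcoding from memory:

```python
import requests  # pip install requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def fetch_cwv(url: str, api_key: str) -> dict:
    """Return field-data Core Web Vitals percentiles (LCP, INP, CLS, etc.) for one URL."""
    resp = requests.get(
        PSI_ENDPOINT,
        params={"url": url, "strategy": "mobile", "key": api_key},
        timeout=60,
    )
    resp.raise_for_status()
    metrics = resp.json().get("loadingExperience", {}).get("metrics", {})
    return {name: m.get("percentile") for name, m in metrics.items()}

# batch over the top 200 traffic pages (rate-limit politely):
# cwv_by_url = {u: fetch_cwv(u, API_KEY) for u in top_200_urls}
```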
Why all six are necessary
Audits running on subsets miss specific failure modes. Audits run on traffic plus rankings alone miss pieces with strong rankings but failing conversion (high traffic, no pipeline). Audits run on engagement plus CRM data alone miss pieces with low traffic but high refresh upside. Audits without backlink data make merge-versus-retire decisions blindly. The full six-source picture is the operating minimum for defensible action decisions.
05 / The 90-day audit workflow
The audit runs across 12 weeks from data extraction to execution completion. The cadence prevents the audit-then-stall pattern that produces spreadsheets without action.
Week 1, data extraction. Pull all six data sources for the top 500 URLs by impression volume. Consolidate into a single working document (Airtable, Notion database, or Google Sheets; tool choice matters less than schema consistency). Each URL gets one row with all six dimensions populated.
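One way to enforce schema consistency regardless of tool choice is to define the row shape once, one field group per data source. A sketch with illustrative field names; rename to match what your exports actually produce:

```python
from dataclasses import dataclass

@dataclass
class AuditRow:
    """One row per URL; every field populated before segmentation starts."""
    url: str
    # Google Search Console
    avg_position: float
    position_12mo_delta: float
    clicks: int
    impressions: int
    # GA4
    sessions: int
    engagement_time_sec: float
    bounce_rate: float
    # CRM
    pipeline_usd: float
    # Ahrefs / Semrush
    referring_domains: int
    organic_keywords: int
    # Crawler (Screaming Frog or equivalent)
    broken_links: int
    outdated_refs: int
    # PageSpeed Insights
    lcp_ms: float
    inp_ms: float
    cls: float
    bucket: str = ""  # assigned in week 2
```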
Week 2, initial segmentation. Apply the four-bucket logic to every URL. Each URL gets assigned to maintain, refresh, merge, or retire based on the decision logic in chapter 03. The output of week 2 is a segmented library with bucket counts visible.
Week 3, refresh prioritization scoring. Apply the four-axis scoring system (chapter 06) to every URL in the refresh bucket. Rank by composite score. Identify the top 12 to 18 refresh candidates for the quarter.
Week 4, action plan finalization. For each of the top 12 to 18 refresh candidates, assign a named owner, a completion date, and a refresh brief (what specifically gets changed on the piece). For merge candidates, assign an owner for the consolidation work and a 301 redirect plan. For retire candidates, queue the removal work for the technical team.
Weeks 5 to 11, execution batches. Refresh work runs at 2 to 3 pieces per week. Merge work runs at 1 piece per week. Retire work batches at the end. Weekly check-ins surface blockers and adjust priorities.
Week 12, results measurement. Pull rankings, traffic, and pipeline data on the executed pieces. Compare against pre-audit baseline. The execution-to-impact window is typically 30 to 60 days post-completion, so week 12 measurement captures only the first signals; full impact lands by month 5 to 6.
The cadence is calibrated to B2B SaaS production rhythms. Programs trying to compress the timeline into 4 to 6 weeks produce hasty segmentation and poor refresh briefs. Programs stretching the timeline beyond 16 weeks lose accountability and execution drift creeps in.
06 / Refresh prioritization scoring
When the audit identifies 30 to 80 refresh candidates, the team has to choose which 12 to 18 to work on this quarter. A simple four-axis scoring model surfaces the highest-leverage candidates objectively.
| Axis | 1 point | 3 points | 5 points |
|---|---|---|---|
| Current ranking position | 16 to 30 | 6 to 15 | 1 to 5 |
| Pipeline contribution last 12 months | None or unmeasured | $5K to $50K attributed | $50K+ attributed |
| Topic strategic relevance | Tangential to current ICP | Adjacent to current ICP | Core to current ICP |
| Refresh effort | 12+ hours | 6 to 12 hours | Under 6 hours |
A piece scoring 16 or higher (out of 20) is a high-priority refresh. 12 to 15 is medium priority. Under 12, deprioritize for the current quarter.
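The model translates directly to code. A sketch using the bands from the table above; the only judgment call is the 12-hour effort boundary, resolved here to the 1-point band so it matches the worked example later in this chapter:

```python
def score_position(pos: int) -> int:
    return 5 if pos <= 5 else 3 if pos <= 15 else 1

def score_pipeline(usd: float) -> int:
    return 5 if usd >= 50_000 else 3 if usd >= 5_000 else 1

def score_relevance(level: str) -> int:
    return {"core": 5, "adjacent": 3, "tangential": 1}[level]

def score_effort(hours: float) -> int:
    return 5 if hours < 6 else 3 if hours < 12 else 1

def refresh_priority(pos: int, usd: float, level: str, hours: float):
    """Composite score out of 20, plus the priority tier it implies."""
    total = (score_position(pos) + score_pipeline(usd)
             + score_relevance(level) + score_effort(hours))
    tier = "high" if total >= 16 else "medium" if total >= 12 else "deprioritize"
    return total, tier
```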
Why this scoring beats traffic-based prioritization
The intuitive prioritization is "refresh the top 10 traffic pages." This systematically misses the highest-ROI refresh candidates. Pieces in positions 1 to 3 with high traffic are usually already at ceiling, so refresh effort produces marginal gains. Pieces in positions 8 to 15 with moderate traffic but high pipeline contribution are the highest-ROI candidates because they are close to position 1 to 3 ranking with proven conversion.
The four-axis scoring captures the right signals. Pieces close to top rankings (position score 3) with proven pipeline (pipeline score 3 to 5) and core strategic relevance (relevance score 5) score 11 to 13 before the effort axis is even added, which lands them in the medium-to-high priority range once effort is scored. The scoring math identifies these candidates objectively rather than relying on intuition.
The worked example
A B2B SaaS company auditing their content library identifies a piece ranking at position 9 for "b2b saas crm comparison". The piece has produced $32,000 in attributed pipeline over the past 12 months. The topic is core to the company's ICP. The refresh effort estimate is 8 hours.
Scoring: position 3 + pipeline 3 + relevance 5 + effort 3 = 14. Medium priority, near the top of the band. Worth refreshing this quarter.
Another piece ranks at position 2 for a tangential topic, drives 400 monthly visits, but has zero pipeline contribution and 12 hours of refresh effort estimated.
Scoring: position 5 + pipeline 1 + relevance 1 + effort 1 = 8. Deprioritize. The high ranking is irrelevant when the conversion math fails.
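Both examples reproduce through the scoring sketch above:

```python
print(refresh_priority(9, 32_000, "core", 8))    # -> (14, 'medium')
print(refresh_priority(2, 0, "tangential", 12))  # -> (8, 'deprioritize')
```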
07 / Common audit failure modes (and the fix for each)
Five failure modes consistently undermine audit quality across the B2B SaaS engagements we have inherited or audited.
Failure 1: Audit scope creep. The team starts with "audit the top 500 URLs" and ends with "audit every URL ever published." Scope creep stretches the timeline from 4 weeks to 12 weeks before action even starts. The fix is to constrain the initial scope tightly (top 200 to 500 URLs by impression volume) and run subsequent audits on different scope cuts.
Failure 2: Refreshing pieces that should have been retired. The team invests 8 hours refreshing a piece that ranks at position 28 with zero backlinks and zero pipeline contribution. The refresh moves the ranking to position 22. The investment did not earn its keep. The fix is to apply the four-axis scoring strictly and retire pieces failing on two or more axes.
Failure 3: Retiring pieces that should have been refreshed. The team removes a piece that ranked at position 18, looked weak on traffic, but had 25 referring domains. The 301 redirect goes to a vaguely related page that does not absorb the equity. The fix is to never retire without auditing the backlink profile first.
Failure 4: Mass refresh on identical templates. The team refreshes 12 pieces in two weeks using the same structural refresh template (add an FAQ, update the intro, swap the meta). The pieces improve marginally because the refresh applies the same generic improvements regardless of what each piece specifically needs. The fix is to write a specific refresh brief for each piece naming the specific changes warranted by the data.
Failure 5: Audit results without execution tracking. The audit produces the right segmentation and prioritization, but execution runs on hope rather than tracking. Pieces sit half-finished. The fix is the weekly check-in cadence and the named-owner-named-date scaffolding.
08 / Audit cadence and triggers
Full audits run every 6 months. The 6-month cadence works for most B2B SaaS programs because product changes, ICP shifts, and ranking volatility produce enough new data to justify re-segmentation, while annual-only audits accumulate execution backlogs the team cannot work through within the year.
Between full audits, three additional audit patterns keep the library responsive:
Monthly top-20 spot checks. Pull GSC and GA4 data on the top 20 traffic pages every month. Flag any showing 20 percent or more decline in either dimension. Refresh decisions for these pieces happen between full audits because the impact-per-hour math is high.
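A minimal sketch of the monthly flag, assuming current and prior-month numbers are already pulled per URL; the field names are illustrative:

```python
def declined(current: float, prior: float, threshold: float = 0.20) -> bool:
    """True on a 20%+ month-over-month drop; a zero prior month is no signal."""
    return prior > 0 and (prior - current) / prior >= threshold

def spot_check(pages: list) -> list:
    """URLs from the top-20 set declining 20%+ in clicks (GSC) or sessions (GA4)."""
    return [
        p["url"]
        for p in pages
        if declined(p["clicks"], p["clicks_prior"])
        or declined(p["sessions"], p["sessions_prior"])
    ]
```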
Quarterly cluster mini-audits. When the cluster posts under a pillar or sub-pillar reach 90 days post-launch, run a focused audit on that specific cluster. Surface early decay before the full library audit catches it.
Annual technical SEO drift audit. Once per year, pull Core Web Vitals and structured data audits across the entire site. Technical drift accumulates slowly and shows up in ranking declines that look like content problems but are actually infrastructure problems.
Triggers for ad-hoc audits
Three triggers warrant unscheduled audit work.
A Google core algorithm update affecting the brand's category. Pull traffic data before and after the update for affected URLs. Segment by which URLs gained and lost. The pattern usually reveals which content quality signals the update reweighted.
A major product launch or ICP shift. The library needs re-segmentation against the new strategic priorities. Pieces that scored high on relevance in the old strategy might score low in the new one.
A management change that questions content marketing's value. The new CMO or CFO needs evidence the program produces pipeline. A focused audit on the top 30 pipeline-contributing pieces produces the evidence faster than waiting for the next scheduled audit cycle.
09 / FAQ
What is a content audit?
A content audit is the systematic review of every published piece on a website against three benchmarks: current performance, potential performance, and the program's strategic priorities. The output is a segmentation of the content library into four action buckets (refresh, retire, merge, and maintain) with a prioritized work plan for the next 90 days of optimization.
How long does a B2B SaaS content audit take?
A full audit covering the top 500 URLs takes 4 weeks from data extraction to action plan finalization. Execution on the prioritized refresh candidates runs across the following 8 weeks, putting the full 90-day cycle at 12 weeks. Programs trying to compress the timeline below 4 weeks produce hasty segmentation; programs stretching beyond 16 weeks lose execution accountability.
How often should B2B SaaS companies audit their content?
Full audits every 6 months, with monthly spot checks on the top 20 traffic pages, quarterly mini-audits on new clusters at 90 days post-launch, and an annual technical SEO drift audit. The 6-month cadence balances thoroughness with execution capacity for most B2B SaaS programs.
Can AI tools run content audits?
AI tools can produce most of the data extraction and surface-level pattern detection quickly. AI tools cannot make the strategic decisions about refresh vs. retire vs. merge, because those decisions require context about the program's strategic priorities, the CRM attribution data, and the team's capacity to execute. The right workflow uses AI for data pull and pattern detection while humans own the segmentation and prioritization decisions.
What is the difference between a content audit and a content inventory?
A content inventory is a list of every URL on the site with basic metadata (publish date, author, word count). An audit takes the inventory as input and applies performance benchmarks plus action segmentation to produce a work plan. Inventories are data; audits are decisions. Most teams confuse the two and stop after producing the inventory.
Should small B2B SaaS sites audit their content?
Yes, with adjusted scope. A site with 30 to 80 published pieces can run the same framework on a compressed timeline (2 weeks instead of 4 for the data and segmentation work). The audit math still produces value at small library sizes because even a single piece moving from position 12 to position 4 produces meaningful traffic gains.
How do we measure whether a content audit succeeded?
Three metrics within 90 days of execution completion. Average ranking position improvement on the executed refresh pieces. Aggregate traffic change on the refresh pieces (typically 30 to 80 percent lift on pieces that scored 16 or higher). Pipeline contribution attributable to the refreshed pieces (measured through the content measurement framework). Programs that cannot measure these three are running audits without a feedback loop and should add the measurement infrastructure before the next audit cycle.
The audit framework above produces decisions, not spreadsheets. It runs on a 90-day clock with named owners and weekly check-ins. The strategic framework this implementation playbook applies, including the broader content optimization discipline, the cluster post relationships, and the integration with measurement and AI Search workflows, lives on the content optimization sub-pillar. The audit framework is one workstream within the broader optimization program.
Want this audit framework run on your content library?
30-minute call. We will audit your top 50 pages live on the call, surface the refresh upside hiding in plain sight, and tell you honestly whether a structured optimization workstream is the right next investment for your stage. The audit output is yours regardless of whether you engage us.
Average response time: under 4 business hours.
This is an implementation guide under content optimization. The strategic framework covering audit, refresh, retire, consolidate, and AI Search optimization lives on the parent sub-pillar.
Read the optimization sub-pillar →

Where this fits in the broader playbook
This piece sits inside the optimization discipline for B2B SaaS programs, which is one chapter of the discipline of B2B SaaS content marketing, mapped end to end. For deeper framing on the same discipline, see how we keep the content library compounding.
Closely related reading from the same sub-pillar
- the refresh vs retire framework — the refresh-vs-retire decision the audit produces.
- the SME interview process — SME interviews that surface audit-identified gaps.
- the pillar page format — the pillar format audits often recommend rebuilding into.
From other corners of the program
- the technical SEO discipline for B2B SaaS programs — the cross-cluster discipline that compounds this work.
- the full B2B SaaS SEO playbook — the upstream pillar this connects to.
When you're ready to put this into a program, get in touch or look at how engagements are scoped.



Usama Khan
