
How We Responded to Google's March 2026 Core Update: A Case Study in Doing It Right

John Williams · Senior Paid Media Specialist · $350M+ Managed · Mar 28, 2026

We had 2.8 million programmatic pages. Google's March 2026 spam update and core update targeted our exact architecture. Instead of patching, we stripped to the studs and rebuilt. This is the full story—the data that showed we had a problem, the research that explained why, and the engineering decisions that fixed it.

The Numbers That Forced the Conversation

39.8K · Total impressions (3 months)
35 · Total clicks (3 months)
0.1% · Average CTR
30.7 · Average position

Our site, ahmeego.com, generates landing pages programmatically for local service businesses across the United States. At the time of the March 2026 updates, we had approximately 2.8 million pages covering 154 service verticals across 18,744 US cities. Each page was rendered at the Cloudflare edge via a Pages Function.

The Google Search Console numbers told a brutal story: 39,800 impressions across three months, an average of roughly one impression per 70 pages for the entire quarter. Only about 0.001% of our pages were getting any visibility at all. Average position 30.7 put us on page 3-4 of results, and the 0.1% CTR was consistent with that: nobody clicks on page 3.

Then two updates hit within 72 hours of each other.

What Hit Us: The Double Punch

March 24, 2026 — Spam Update

Google's SpamBrain AI completed its fastest spam update in history, rolling out in under 24 hours. It specifically targeted scaled content abuse: large volumes of near-identical pages generated by swapping a handful of variables into one template, with duplicated boilerplate sections and identical structured data across the page set.

Our site matched every single pattern. Every page used the same template with {{CITY_NAME}} and {{SERVICE_NAME}} variable swaps. The FAQ section asked the same 7 questions on every page, just swapping the city and service. The "problem" section had 4 identical cards. The "solution" section had 4 identical cards. The "capabilities" section listed 12 identical items. Word for word, across 2.8 million pages.

March 27, 2026 — Core Update

Three days later, the first core update of 2026 began rolling out. Sites with high-volume AI/template content saw 40-70% traffic losses. The update specifically evaluated boilerplate ratio, the number of genuinely unique data points per page, and whether anything useful survives a strip test (remove the template chrome and look at what's left).

Our pages failed every test. Estimated boilerplate ratio: 85%. Unique data points per page: 1-2 (city name and state). Strip test: nothing useful remained.
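
That strip test is easy to automate. Here is a rough sketch of the idea, assuming you can fetch the same service page rendered for two different cities; the helper is illustrative, not the exact script we used:

// Sketch: compare the same service page rendered for two different cities.
// Text that survives on page A but never appears on page B is the page's
// genuinely unique content; everything else is template chrome.
function stripTest(htmlA, htmlB) {
  const textOf = (html) =>
    html
      .replace(/<script[\s\S]*?<\/script>/gi, ' ') // drop JSON-LD and JS
      .replace(/<[^>]+>/g, ' ')                    // drop markup
      .replace(/\s+/g, ' ')
      .toLowerCase()
      .trim();

  const wordsA = textOf(htmlA).split(' ');
  const wordsB = new Set(textOf(htmlB).split(' '));
  const unique = wordsA.filter((w) => !wordsB.has(w));

  return {
    totalWords: wordsA.length,
    uniqueWords: unique.length, // pre-fix, this was basically just the city name
    uniqueShare: unique.length / Math.max(wordsA.length, 1),
  };
}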

The Honest Audit

Before writing a single line of code, we mapped every section of our city-level template against Google's documented standards. The results were uncomfortable:

Section | Content Type | Unique? | Verdict
FAQ (7 questions + FAQPage schema) | Same questions, city/service swapped | No | Exact spam signal
Problem cards (4) | "Agencies Charge Too Much" etc. | No | Word-for-word identical
Solution cards (4) | "Deep Audit", "Market Analysis" etc. | No | Word-for-word identical
Capabilities (12 items) | "Search term analysis" etc. | No | Word-for-word identical
About / Bio section | John Williams credentials | No | Legitimate but repeated
Pain points (6) | Category-specific, not city-specific | Partial | Same per category
Hero hook | 3 variants with city swap | Minimal | Low differentiation
Market intel card | Formula-derived CPC estimates | Partial | Not real data

The E-E-A-T author signals were genuinely strong—real person, real credentials, real LinkedIn with 15,800+ followers, Hero Conf speaker. But Google's spam update evaluates page-level quality, not just author-level authority. Having a real author doesn't exempt template-swapped doorway pages from being classified as scaled content abuse.

The Three-Phase Fix

We didn't patch. We restructured the entire content generation engine. One Cloudflare Worker file. Three phases. Deployed the same day.

Phase 1: Remove What SpamBrain Penalizes

Principle: If the same text appears on more than a few hundred pages, it's boilerplate. Remove it or make it genuinely unique.

We stripped the following from all city-level pages via post-render processing: the 7-question FAQ block and its FAQPage schema, the 4 identical problem cards, the 4 identical solution cards, and the 12-item capabilities list.

This alone dropped our estimated boilerplate ratio from ~85% to ~60%. Still not enough, but the biggest spam signals were gone.
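
As a point of reference, here is roughly how a boilerplate ratio like this can be estimated across a sample of rendered pages. This is a sketch, not the measurement script we ran; the block segmentation, the 40-character minimum, and the repeat threshold are illustrative assumptions:

// Sketch: estimate boilerplate by splitting each sampled page into text
// blocks and counting how often each block repeats across the sample.
// Blocks that appear on a large share of pages are treated as boilerplate.
function estimateBoilerplate(pages, repeatShare = 0.5) {
  const blockCounts = new Map();

  const pageBlocks = pages.map((html) => {
    const blocks = html
      .replace(/<script[\s\S]*?<\/script>/gi, ' ')
      .split(/<\/(?:p|li|h[1-6]|section|div)>/i)
      .map((b) => b.replace(/<[^>]+>/g, ' ').replace(/\s+/g, ' ').trim())
      .filter((b) => b.length > 40); // ignore tiny fragments
    for (const b of blocks) blockCounts.set(b, (blockCounts.get(b) || 0) + 1);
    return blocks;
  });

  // Per-page ratio: share of text sitting in blocks that repeat on at least
  // `repeatShare` of the sampled pages.
  return pageBlocks.map((blocks) => {
    const total = blocks.reduce((sum, b) => sum + b.length, 0) || 1;
    const boiler = blocks
      .filter((b) => blockCounts.get(b) >= pages.length * repeatShare)
      .reduce((sum, b) => sum + b.length, 0);
    return boiler / total;
  });
}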

Phase 2: Add Real, Verifiable Per-City Data

Principle: If you stripped the template chrome and only looked at the unique data, a human should find it useful.

We compiled a CITY_DATA object containing real demographic and economic data for 213 US metros, sourced from US Census Bureau population and income figures, BLS indices, NOAA climate normals, and Zillow home-price data.
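
For illustration, a single entry looks something like the sketch below. The field names and values here are assumptions based on what the content blocks consume, not the exact schema of the production object:

// Illustrative shape of one CITY_DATA entry (field names and values assumed).
const CITY_DATA = {
  'phoenix-az': {
    name: 'Phoenix',
    state: 'AZ',
    metroPop: 4950000,        // Census metro population
    medianIncome: 76000,      // Census median household income (USD)
    costOfLivingIndex: 103,   // 100 = national average
    medianHomePrice: 445000,  // Zillow
    growthRatePct: 1.6,       // annual population growth
    avgTempF: 75,             // NOAA annual normal
    topEmployers: ['Banner Health', 'Intel', 'State of Arizona'],
  },
  // ...212 more metros
};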

This data powers six new unique content blocks per page:

  1. Market Profile card — Population, median income, cost of living index, median home price, growth rate, and top employers. Real numbers, not estimates.
  2. Market Opportunity analysis — Combines city population + service LTV to calculate estimated active businesses and revenue opportunity. Every city/service combination produces a different number.
  3. Competition Landscape — Cost-of-living-driven narrative. A high-cost market like San Francisco gets fundamentally different competitive advice than a value market like Boise.
  4. Income-based advertising recommendations — Cities with median income above $75K get quality-first messaging guidance. Below $55K get price-sensitivity guidance. Between gets comparison-shopping guidance.
  5. Climate-based demand narratives — Temperature data drives genuinely different seasonal insights. Phoenix at 75°F gets year-round outdoor demand. Minneapolis at 45°F gets cold-winter emergency service framing.
  6. Employer context — Naming real local employers (Banner Health in Phoenix, JPMorgan Chase in NYC) signals genuine local knowledge that template-swap pages can never produce.

Result: 5-7 genuinely unique data points per page. Estimated boilerplate dropped below 40%.
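
To make the Market Opportunity and income-tier logic concrete, here is a simplified sketch. The businesses-per-resident heuristic, the LTV table, and the function name are illustrative assumptions, not the production formulas:

// Sketch: size the market opportunity from population + service LTV and pick
// the income-based messaging tier described above. (Heuristics illustrative.)
const SERVICE_LTV = { hvac: 12000, plumbing: 8000, roofing: 15000 };       // assumed values
const BUSINESSES_PER_10K_RESIDENTS = { hvac: 4, plumbing: 5, roofing: 3 }; // assumed values

function buildOpportunity(cd, service) {
  const activeBusinesses = Math.round(
    (cd.metroPop / 10000) * (BUSINESSES_PER_10K_RESIDENTS[service] || 3)
  );
  const revenueOpportunity = activeBusinesses * (SERVICE_LTV[service] || 10000);

  // Income-based messaging tiers described in block 4 above.
  let messaging;
  if (cd.medianIncome > 75000) messaging = 'quality-first';
  else if (cd.medianIncome < 55000) messaging = 'price-sensitive';
  else messaging = 'comparison-shopping';

  return { activeBusinesses, revenueOpportunity, messaging };
}

Because every input varies by city and service, every city/service combination produces different numbers and different guidance.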

Phase 3: Structural Differentiation

Principle: SpamBrain detects pattern sameness across a page set. Structural diversity makes it far harder to classify pages as templated. We split cities into three tiers by metro population, varied which sections each tier gets, and expanded the hero from 3 generic variants to 6+ data-driven ones.
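
Concretely, the tiering and hero selection can be driven deterministically from the city data itself, so the same page always renders the same way. The section lists and helper below are illustrative, not the exact production logic:

// Sketch: tier-based structure and deterministic hero selection. Because the
// choice is derived from the city data, the same page renders identically on
// every request and every deploy. (Section lists illustrative.)
function pageStructure(cd) {
  const tier = cd.metroPop >= 2000000 ? 1 : cd.metroPop >= 100000 ? 2 : 3;

  const sections = {
    1: ['hero', 'marketProfile', 'opportunity', 'competition', 'about'],
    2: ['hero', 'marketProfile', 'opportunity', 'about'],
    3: ['hero', 'marketProfile', 'opportunity'], // About stripped for small markets
  }[tier];

  // Pick one of the 6+ hero variants from the data itself, not at random.
  const heroVariant = (cd.metroPop + Math.round(cd.avgTempF)) % 6;

  return { tier, sections, heroVariant };
}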

What We Did NOT Do

Equally important is what we chose not to do: we did not change a single URL or touch the routing logic, we did not paraphrase the old boilerplate with AI to make it look different, and we did not chase algorithm tricks. Every change was a content quality change.

Technical Implementation

The entire fix lives in a single Cloudflare Worker file: functions/services/[[catchall]].js. The approach uses post-render processing—the original template renders as before, then the html variable is modified before the response is sent:

// Phase 1: Strip identical sections using HTML comment anchors
// Remove the FAQPage JSON-LD block (the same 7 questions on every page)
html = html.replace(
  /<script type="application\/ld\+json">[\s\S]*?"FAQPage"[\s\S]*?<\/script>/,
  ''
);
// Remove the identical "problem" cards between their anchors
const probStart = html.indexOf('<!-- THE PROBLEM -->');
const probEnd = html.indexOf('<!-- WHERE SPEND GOES WRONG -->');
if (probStart !== -1 && probEnd !== -1) {
  html = html.substring(0, probStart) + html.substring(probEnd);
}

// Phase 2: Inject real city data
const cd = CITY_DATA[citySlug]; // Census, BLS, NOAA, Zillow
if (cd) {
  const cityMarketBlock = buildMarketProfile(cd, city, service);
  // faqSection holds the old FAQ markup located earlier in the handler
  html = html.replace(faqSection, cityMarketBlock);
}

// Phase 3: Tier-based structural variation
const cityTier = metroPop >= 2000000 ? 1 : metroPop >= 100000 ? 2 : 3;
if (cityTier === 3) { /* strip About section for small markets */ }

No template string was modified. No URL changed. No routing logic touched. Pure content quality improvement via post-render processing. The entire change was 339 lines of additions to a single file, plus a 34KB CITY_DATA const with verified demographic data for 213 metros.
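
For context, the surrounding Pages Function handler looks roughly like this; renderCityServicePage and applyQualityPhases are illustrative stand-ins for the existing template renderer and the three phases above:

// functions/services/[[catchall]].js (sketch of the surrounding handler)
export async function onRequest(context) {
  // e.g. /services/ppc-management/phoenix-az -> ['ppc-management', 'phoenix-az']
  const [service, citySlug] = context.params.catchall;

  // 1. Render the original template exactly as before
  let html = renderCityServicePage(service, citySlug);

  // 2. Post-render processing: apply Phases 1-3 to the finished HTML
  html = applyQualityPhases(html, service, citySlug);

  // 3. Respond with the modified HTML
  return new Response(html, {
    headers: { 'content-type': 'text/html; charset=utf-8' },
  });
}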

The Before and After

Metric | Before (March 24) | After (March 28)
Estimated boilerplate ratio | ~85% | ~35-40%
Unique data points per page | 1-2 | 6-7
FAQPage schema on city pages | Identical across 2.8M pages | Removed entirely
Identical boilerplate sections | 27 items (4 + 4 + 12 + 7 FAQ) | 0
Real per-city data | None | Pop, income, CoL, temp, home price, employers
Page structure variants | 1 template | 3 tiers + section variation
Hero variants | 3 generic | 6+ data-driven

Files modified: 1 (the Cloudflare Worker).

What Happens Next

The March 2026 core update is still rolling out and takes up to two weeks to complete. Our changes won't be fully evaluated until early April. We're not claiming victory—we're documenting what we did and why, so the results can be measured against a clear baseline.

What we are confident about: the changes align with every documented signal Google has published about helpful content, scaled content quality, and E-E-A-T standards. We didn't try to game the algorithm. We asked a simple question: "If you stripped the template chrome, would a human find this page useful?" The answer was no. Now it's yes.

Lessons for Developers Building at Scale

  1. Boilerplate is the silent killer. It's easy to build a template and swap variables. It's hard to make every page genuinely useful. Our working threshold: below 40% boilerplate, above 60% unique content. Measure it.
  2. Real data beats AI paraphrasing. Census data, BLS indices, NOAA climate normals—these are verifiable, unique per location, and impossible to classify as spam. Government data is the secret weapon for programmatic SEO.
  3. FAQ schema at scale is a trap. It seems like easy structured data wins. But 7 identical questions across millions of pages is exactly what SpamBrain was built to detect. Use FAQ schema sparingly, on pages where the questions are genuinely unique.
  4. Post-render processing is your friend. Rather than maintaining multiple templates, render one base template and modify the output. It's easier to test, easier to debug, and easier to roll back.
  5. Author authority doesn't save bad pages. Strong E-E-A-T signals are necessary but not sufficient. Google evaluates content quality at the page level. A real author on a thin page is still a thin page.
  6. Speed matters in update response. We identified the problem, researched the updates, planned the architecture, and deployed all three phases in a single session. The March 2026 core update is still rolling out. Our new content will be evaluated during the rollout, not after.

The fundamental question every programmatic page must answer: "What does this specific page provide that no other page on my site (or anyone else's site) provides?" If the answer is "just the city name," you have a problem. If the answer is "real demographic data, economic analysis, climate-driven insights, and employer context specific to this exact market," you have a page worth indexing.

This case study documents changes made to ahmeego.com between March 24-28, 2026 in response to Google's March 2026 spam update (March 24) and core update (March 27). All code is open source at github.com/itallstartedwithaidea. The March 2026 core update is expected to complete in early April 2026. This article will be updated with post-update performance data when available.