
Reporting on which headlines and descriptions actually perform best.

John Williams · Senior Paid Media Specialist · $350M+ Managed · May 6, 2026
Ad Copy & Creative

If you've ever tried to explain to a client which headlines are actually driving results inside a Responsive Search Ad, you already know the pain: Google's asset reporting gives you "Good," "Best," and "Low" labels, and almost nothing else. As practitioners, we're left reverse-engineering creative performance from a system that was deliberately built to obscure individual asset data — all in the name of automation. But that doesn't mean you're stuck guessing. With the right reporting structure, testing methodology, and a clear-eyed understanding of what Google's data can and can't tell you, you can absolutely build a compelling creative performance story for your clients.

Why RSA Asset Reporting Is Frustratingly Limited (By Design)

A common question in the r/PPC community is exactly this: a client wants to know which headlines and descriptions are performing best so the team can make smarter creative decisions. The problem? Google moved away from standard text ads — where you had clean, isolated A/B data — and replaced them with Responsive Search Ads that use machine learning to mix and match up to 15 headlines and 4 descriptions in real time.

The individual asset view inside Google Ads gives you a performance rating (Low, Good, Best) and an "impressions served" indicator, but it does not give you clicks, conversions, CTR, or conversion rate at the individual asset level. This is intentional. Google wants the algorithm to optimize, and exposing granular per-asset conversion data would encourage advertisers to manually override the system — which Google argues reduces performance.

Key Insight: Google's asset performance ratings are based on relative impression share compared to other assets in the same ad, not on conversion data. A headline rated "Best" simply appeared more often — it doesn't necessarily mean it drove more conversions.

Understanding this distinction is critical before you build any reporting framework. You're not working with clean experimental data. You're working with a system that tells you what the algorithm preferred, which is a proxy — not a direct measure — of business performance.

What Google's Asset Reporting Actually Tells You

Before diving into workarounds, let's get clear on what you can extract from native Google Ads reporting.

Asset Performance Ratings

Inside any RSA, click into the ad, then navigate to the "Assets" tab. You'll see each headline and description labeled "Best" (it earned the largest share of impressions relative to its peers), "Good" (it serves regularly but isn't a top earner), or "Low" (the algorithm is deprioritizing it). New or low-volume assets may show a "Learning" or "Pending" status until enough impressions accumulate.

Combinations Report

This is underused and genuinely valuable. Under the "Combinations" tab inside an RSA, Google shows you the top combinations of headlines and descriptions that appeared together, along with impression counts. This gives you a window into which message pairings the algorithm favors. It won't show conversion data, but if you see that a particular value proposition pairing (e.g., Headline 1 = price anchor + Headline 2 = urgency cue) consistently dominates impressions, that's directionally meaningful.

Best Practice: Export the Combinations report monthly and track which headline pairings earn the most impressions over time. Cross-reference those pairings with campaign-level conversion rate trends to build a circumstantial case for what creative themes are working.
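That monthly export routine is easy to automate once the Combinations report is in CSV form. A minimal Python sketch, assuming column names like headline_1, headline_2, and impressions (match these to the headers in your actual export):

```python
from collections import Counter

def top_combinations(rows, n=5):
    """Aggregate impression counts per headline pairing from a
    Combinations-report export. Column names are assumptions --
    adjust them to your actual CSV headers."""
    totals = Counter()
    for row in rows:
        pairing = (row["headline_1"], row["headline_2"])
        totals[pairing] += int(row["impressions"])
    return totals.most_common(n)

# Example rows as they might look after csv.DictReader on an export
rows = [
    {"headline_1": "Free Shipping Over $50", "headline_2": "Order by 2pm Today", "impressions": "1200"},
    {"headline_1": "Free Shipping Over $50", "headline_2": "Rated 4.8/5 by 10k Buyers", "impressions": "800"},
    {"headline_1": "Free Shipping Over $50", "headline_2": "Order by 2pm Today", "impressions": "400"},
]
print(top_combinations(rows, n=2))
# The price-anchor + urgency pairing leads with 1,600 combined impressions
```

Run this on each month's export and diff the top pairings month over month; a pairing that climbs the list is your directional signal.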

Segment by Ad Strength

Ad Strength (Poor → Excellent) correlates loosely with RSA performance, but don't confuse it with asset-level performance. Ad Strength rewards diversity of message and keyword coverage in your headlines — it's a quality signal, not a conversion signal.

The Three Practical Methods for Measuring RSA Creative Performance

Given the limitations of native reporting, here are the three methods I've used across accounts with anywhere from $5K to $500K+ in monthly spend.

Method 1: Pinning-Based A/B Testing

This is the most direct approach. By pinning a specific headline to Position 1, 2, or 3, you force Google to always show that headline in that slot. You can then create two identical RSAs within the same ad group — identical in every way except for the pinned headline — and run them head-to-head.

  1. Create RSA Variant A with Headline 1 pinned to Position 1 (e.g., "Free Shipping on Orders Over $50")
  2. Create RSA Variant B with Headline 1 pinned to Position 1 (e.g., "Same-Day Delivery Available")
  3. Keep all other headlines and descriptions identical across both ads
  4. Set ad rotation to "Do not optimize: Rotate ads indefinitely" — or accept that Google will tilt delivery toward the winner
  5. Run for a minimum of 2-4 weeks, targeting at least 300-500 clicks per variant before drawing conclusions
Common Mistake: Pinning every headline to remove Google's flexibility. If you pin all 3 positions, you've essentially created a standard text ad and you're no longer running a real RSA. Pin only the element you're testing; leave the rest flexible so Google can optimize around your test variable.

The tradeoff with pinning is that Google's own data suggests pinned RSAs underperform flexible RSAs by roughly 10-15% on CTR in aggregate. You're sacrificing some efficiency for measurement clarity. For high-stakes creative decisions — a major rebrand, a new value proposition, a pricing strategy change — that tradeoff is worth it. For routine optimization, it probably isn't.
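Once a pinned test has run its course, you can check whether the CTR gap between variants is statistically meaningful with a standard two-proportion z-test. A minimal Python sketch (the click and impression counts below are illustrative, not from a real account):

```python
import math

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test on CTR for a pinned RSA A/B test.
    Returns (z, two-sided p-value via the complementary error function)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# Variant A: 420 clicks / 10,000 impressions; Variant B: 355 / 10,000
z, p = two_proportion_z(420, 10000, 355, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative numbers the difference clears the conventional p < 0.05 bar; with the low click volumes typical of small ad groups, it usually won't, which is exactly why the 300-500 clicks-per-variant floor matters.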

Method 2: Ad Group Isolation Testing

For accounts where you have enough volume, you can test a specific headline theme by creating a dedicated ad group that isolates a message. This works especially well when testing entirely different creative angles (e.g., price-led vs. trust-led vs. urgency-led).

This isn't a perfectly controlled experiment — there will be auction variance — but at scale (I'd suggest at least 1,000 clicks per variant before analysis), you can draw reasonably confident directional conclusions.

Method 3: Leverage the "Low" Rating as a Culling Signal

If you don't have the volume or appetite for controlled experiments, a more pragmatic approach is to use the "Low" asset rating as a systematic culling mechanism. The rule is simple:

  1. Any headline or description that holds a "Low" rating for 30+ days gets replaced
  2. Document what you replaced and what you replaced it with
  3. Track whether the overall ad's CTR and conversion rate improves after the refresh
  4. Over time, you build a library of what themes tend to earn "Best" ratings vs. "Low" ratings in your specific vertical
Key Insight: The "Low" rating is the most actionable signal in RSA reporting — even if you can't measure individual asset conversions, a headline that the algorithm consistently deprioritizes is a headline worth replacing. Treat it as a negative signal and iterate from there.
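The 30-day rule above is simple to mechanize if you keep a running log of rating snapshots. A Python sketch, assuming a self-maintained (date, asset, rating) log format of your own design (Google doesn't export rating history natively):

```python
from datetime import date

def replacement_candidates(snapshots, min_days=30):
    """Flag assets whose most recent unbroken run of 'Low' snapshots
    spans min_days or more. `snapshots` is a list of
    (snapshot_date, asset_text, rating) tuples from your own log --
    the shape is an assumption, not a Google export format."""
    by_asset = {}
    for day, asset, rating in sorted(snapshots):
        by_asset.setdefault(asset, []).append((day, rating))
    flagged = []
    for asset, history in by_asset.items():
        run_start = None
        for day, rating in history:
            if rating == "Low":
                run_start = run_start or day  # start of the current Low streak
            else:
                run_start = None  # streak broken by a Good/Best snapshot
        if run_start and (history[-1][0] - run_start).days >= min_days:
            flagged.append(asset)
    return flagged

log = [
    (date(2026, 3, 1), "Shop Our Huge Selection", "Low"),
    (date(2026, 4, 1), "Shop Our Huge Selection", "Low"),
    (date(2026, 3, 1), "Free Returns for 90 Days", "Good"),
    (date(2026, 4, 1), "Free Returns for 90 Days", "Low"),
]
print(replacement_candidates(log))
# Only the headline that has held "Low" across 30+ days is flagged
```

The second headline just slipped to "Low" this month, so it stays on watch rather than being replaced immediately.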

Building a Client-Facing Creative Performance Report

This is where the rubber meets the road. Clients don't want to hear "Google won't tell us." They want a story about what's working and why. Here's a reporting framework that works in practice.

The Creative Scorecard Template

Metric | What It Measures | Where to Get It | Update Frequency
Asset Rating (Best/Good/Low) | Algorithm preference | RSA Asset View | Monthly
Top Impression-Earning Combinations | Preferred message pairings | RSA Combinations Tab | Monthly
Ad-Level CTR | Message resonance with searchers | Ads Report | Weekly
Ad-Level Conversion Rate | Post-click relevance & intent match | Ads Report | Weekly
Pinned Variant Performance (if testing) | Direct headline comparison | Ad-level segmentation | Per test cycle
Creative Refresh Cadence | Team's speed of iteration | Internal tracking | Quarterly review

Framing the Narrative for Clients

When presenting creative performance, help clients understand the difference between algorithmic preference (what Google shows) and business performance (what converts). A useful framing breaks the monthly report into four parts:

  1. What Google is showing: which assets and combinations the algorithm currently favors (ratings and impression share)
  2. What's driving results: ad-level CTR and conversion rate trends tied to the creative themes in market
  3. What we're retiring: "Low"-rated assets being replaced, and why
  4. What we're testing: active pinning or isolation tests and the question each one answers

This four-part structure gives clients visibility and confidence without overpromising granularity that the platform simply doesn't provide.

Creative Themes That Consistently Perform Across Verticals

As practitioners often discuss, there's a recurring debate about whether headline performance generalizes across accounts. The honest answer is: it depends heavily on vertical, intent, and audience temperature. But after managing spend across e-commerce, SaaS, lead gen, and local services accounts, a few creative patterns emerge consistently.

Headlines That Tend to Earn "Best" Ratings

  1. Specific price or offer anchors (e.g., "Free Shipping on Orders Over $50")
  2. Urgency cues tied to a real constraint (e.g., "Order by 2pm for Same-Day Delivery")
  3. Concrete social proof (review scores, customer counts)
  4. Clear guarantees and risk reversals

Headlines That Tend to Earn "Low" Ratings

  1. Generic brand slogans with no searcher-facing benefit
  2. Near-duplicates of other headlines in the same ad
  3. Vague superlatives ("Best Quality," "Great Service") with nothing specific behind them

Best Practice: Audit your RSAs for message diversity. Each of your 15 headlines should communicate a genuinely different reason to click — features, price, urgency, social proof, brand trust, guarantee, speed. If you have three headlines that all say "Free Shipping" in different ways, Google will rate at least two of them "Low" for redundancy.
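One way to audit for that redundancy is a quick word-overlap check between every pair of headlines. A rough Python sketch using Jaccard similarity (the 0.5 threshold is an arbitrary starting point to tune; Google's actual redundancy logic is not public):

```python
def redundant_pairs(headlines, threshold=0.5):
    """Flag headline pairs whose word sets overlap heavily -- a crude
    proxy for the redundancy Google penalizes with 'Low' ratings."""
    def tokens(h):
        return set(h.lower().split())
    pairs = []
    for i, a in enumerate(headlines):
        for b in headlines[i + 1:]:
            # Jaccard similarity: shared words / total distinct words
            overlap = len(tokens(a) & tokens(b)) / len(tokens(a) | tokens(b))
            if overlap >= threshold:
                pairs.append((a, b))
    return pairs

headlines = [
    "Free Shipping on All Orders",
    "All Orders Ship Free",
    "Rated 4.8/5 by 10,000+ Buyers",
]
print(redundant_pairs(headlines))
# Flags the two shipping headlines; the social-proof headline is distinct
```

Any flagged pair is a candidate to rewrite so the two headlines each communicate a different reason to click.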

When to Use Third-Party Tools vs. Native Reporting

Some practitioners turn to third-party tools like Optmyzr, Search Ads 360, or custom scripts to supplement Google's native creative reporting. Here's an honest take on where these add genuine value: bulk exports and time-series tracking of asset ratings across many accounts, alerts when an asset slips to "Low," and reporting templates that save hours at month-end. What no tool can do is surface the per-asset conversion data that Google doesn't expose; nothing gets around that limitation.

Common Mistake: Over-investing in third-party tooling before establishing a consistent native reporting process. The highest-value creative insight — replacing "Low" assets, tracking combinations, running pinning tests — costs nothing and only requires disciplined use of what's already in the Google Ads interface.

What to Do Next: Your Creative Reporting Action Plan

If you're starting from scratch or trying to build a more rigorous creative reporting process for a client, here's a concrete five-step plan:

  1. Audit all active RSAs for asset ratings this week. Flag any headline or description that has held a "Low" rating for 30+ days. These are your immediate replacement candidates. Document what you're replacing so you can track the before/after impact on ad-level CTR and conversion rate.
  2. Pull the Combinations report for every active RSA and save it to a spreadsheet. Note which headline pairings dominate impressions. Look for creative patterns — are price-led combos earning more impressions? Trust signals? Urgency cues? This is your first directional read on what the algorithm (and by extension, your audience) responds to.
  3. Set up a monthly creative export routine. Whether manual or via script, export asset ratings and combination data monthly. Without a time-series view, you're flying blind on creative momentum.
  4. Design one pinning-based A/B test for your highest-volume ad group. Identify the single biggest creative question your client has — price messaging vs. value messaging, urgency vs. trust, brand name in headline vs. not — and structure a clean pinned test to answer it. Give it 3-4 weeks minimum and target at least 300 clicks per variant.
  5. Build a simple creative performance brief for your client. Using the four-part narrative framework above (what Google shows, what drives results, what you're retiring, what you're testing), create a one-page monthly creative report. Clients who understand the process of creative iteration are far more patient with ambiguous data than clients who feel left in the dark.
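Step 3's monthly export routine only pays off if every snapshot lands in the same time-series file. A minimal Python sketch, assuming a self-chosen CSV format (the file name and columns are illustrative, not a Google format):

```python
import csv
from datetime import date
from pathlib import Path

def append_monthly_snapshot(log_path, assets):
    """Append this month's asset ratings to a running CSV log so
    rating changes can be tracked over time. `assets` is a list of
    (asset_text, rating) pairs pulled from your monthly export."""
    log = Path(log_path)
    is_new = not log.exists()
    with log.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            # Write the header only on first creation of the log
            writer.writerow(["snapshot_date", "asset_text", "rating"])
        today = date.today().isoformat()
        for asset_text, rating in assets:
            writer.writerow([today, asset_text, rating])

append_monthly_snapshot("asset_rating_log.csv", [
    ("Free Shipping Over $50", "Best"),
    ("Shop Our Huge Selection", "Low"),
])
```

Run once per month after pulling the asset view; the resulting log is exactly the input shape the 30-day culling check needs.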

RSA creative reporting will never be as clean as the old expanded text ad (ETA) days. But practitioners who build systematic processes around asset management, directional testing, and client communication will consistently outperform those who either surrender to the black box or try to fight it entirely. The answer, as always, is somewhere in the middle: use Google's automation intelligently while maintaining enough visibility to learn, iterate, and grow your creative library over time.

AI Disclosure: This article was generated with AI assistance based on a community discussion on Reddit r/PPC. Expert analysis and practitioner perspective by John Williams, Senior Paid Media Specialist with $350M+ in managed Google Ads spend. AI was used to draft and structure the content; all strategic recommendations reflect real campaign experience.