If you've ever tried to explain to a client which headlines are actually driving results inside a Responsive Search Ad, you already know the pain: Google's asset reporting gives you "Good," "Best," and "Low" labels, and almost nothing else. As practitioners, we're left reverse-engineering creative performance from a system that was deliberately built to obscure individual asset data — all in the name of automation. But that doesn't mean you're stuck guessing. With the right reporting structure, testing methodology, and a clear-eyed understanding of what Google's data can and can't tell you, you can absolutely build a compelling creative performance story for your clients.
Why RSA Asset Reporting Is Frustratingly Limited (By Design)
A common question in the r/PPC community is exactly this: a client wants to know which headlines and descriptions are performing best so the team can make smarter creative decisions. The problem? Google retired expanded text ads (ETAs), where you had clean, isolated A/B data, and replaced them with Responsive Search Ads that use machine learning to mix and match up to 15 headlines and 4 descriptions in real time.
The individual asset view inside Google Ads gives you a performance rating (Low, Good, Best) and an "impressions served" indicator, but it does not give you clicks, conversions, CTR, or conversion rate at the individual asset level. This is intentional. Google wants the algorithm to optimize, and exposing granular per-asset conversion data would encourage advertisers to manually override the system — which Google argues reduces performance.
Key Insight: Google's asset performance ratings are based on relative impression share compared to other assets in the same ad, not on conversion data. A headline rated "Best" simply appeared more often — it doesn't necessarily mean it drove more conversions.
Understanding this distinction is critical before you build any reporting framework. You're not working with clean experimental data. You're working with a system that tells you what the algorithm preferred, which is a proxy — not a direct measure — of business performance.
What Google's Asset Reporting Actually Tells You
Before diving into workarounds, let's get clear on what you can extract from native Google Ads reporting.
Asset Performance Ratings
Inside any RSA, click into the ad, then navigate to the "Assets" tab. You'll see each headline and description labeled as:
- Best — Top ~10-15% of assets by impression share. Google consistently selects these in auctions.
- Good — Middle performers. Shown regularly but not dominant.
- Low — Bottom performers. Google rarely selects these. Strong signal to replace them.
- Unrated — Insufficient data. Common for newer assets or accounts with lower volume.
- Pending — Under review or learning.
Combinations Report
This is underused and genuinely valuable. Under the "Combinations" tab inside an RSA, Google shows you the top combinations of headlines and descriptions that appeared together, along with impression counts. This gives you a window into which message pairings the algorithm favors. It won't show conversion data, but if you see that a particular value proposition pairing (e.g., Headline 1 = price anchor + Headline 2 = urgency cue) consistently dominates impressions, that's directionally meaningful.
Best Practice: Export the Combinations report monthly and track which headline pairings earn the most impressions over time. Cross-reference those pairings with campaign-level conversion rate trends to build a circumstantial case for what creative themes are working.
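If you'd rather automate that monthly pull than export by hand, Google Ads Scripts can query the combinations report directly. A minimal sketch, assuming the ad_group_ad_asset_combination_view report and its served_assets field are available to AdsApp.search() in your account's API version (verify against current documentation before scheduling it):

```javascript
// Sketch: log the top RSA combinations by impressions over the last 30 days.
// Assumes ad_group_ad_asset_combination_view is queryable in this account.
function main() {
  var query =
      'SELECT campaign.name, ad_group.name, ' +
      'ad_group_ad_asset_combination_view.served_assets, ' +
      'metrics.impressions ' +
      'FROM ad_group_ad_asset_combination_view ' +
      'WHERE segments.date DURING LAST_30_DAYS ' +
      'ORDER BY metrics.impressions DESC ' +
      'LIMIT 50';
  var rows = AdsApp.search(query);
  while (rows.hasNext()) {
    var row = rows.next();
    // servedAssets lists asset resource names and field types; mapping them
    // back to headline/description text requires a second asset lookup.
    Logger.log(row.adGroup.name + ' | ' + row.metrics.impressions +
               ' impressions | ' +
               JSON.stringify(row.adGroupAdAssetCombinationView.servedAssets));
  }
}
```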
Segment by Ad Strength
Ad Strength (Poor → Excellent) correlates loosely with RSA performance, but don't confuse it with asset-level performance. Ad Strength rewards diversity of message and keyword coverage in your headlines — it's a quality signal, not a conversion signal.
The Three Practical Methods for Measuring RSA Creative Performance
Given the limitations of native reporting, here are the three methods I've used across accounts with anywhere from $5K to $500K+ in monthly spend.
Method 1: Pinning-Based A/B Testing
This is the most direct approach. By pinning a specific headline to Position 1, 2, or 3, you force Google to always show that headline in that slot. You can then create two identical RSAs within the same ad group — identical in every way except for the pinned headline — and run them head-to-head.
- Create RSA Variant A with your first test headline pinned to Position 1 (e.g., "Free Shipping on Orders Over $50")
- Create RSA Variant B with your alternate test headline pinned to Position 1 (e.g., "Same-Day Delivery Available")
- Keep all other headlines and descriptions identical across both ads
- Set ad rotation to "Do not optimize: Rotate ads indefinitely" so both variants receive comparable impression volume; otherwise, accept that Google will tilt traffic toward its predicted winner
- Run for a minimum of 2-4 weeks, targeting at least 300-500 clicks per variant before drawing conclusions
Common Mistake: Pinning every headline to remove Google's flexibility. If you pin all 3 positions, you've essentially created a standard text ad and you're no longer running a real RSA. Pin only the element you're testing; leave the rest flexible so Google can optimize around your test variable.
The tradeoff with pinning is that Google's own data suggests pinned RSAs underperform flexible RSAs by roughly 10-15% on CTR in aggregate. You're sacrificing some efficiency for measurement clarity. For high-stakes creative decisions — a major rebrand, a new value proposition, a pricing strategy change — that tradeoff is worth it. For routine optimization, it probably isn't.
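When a pinned test wraps up, resist eyeballing the gap between variants. A two-proportion z-test is a quick way to judge whether the difference is likely real or just noise. This is a generic statistical check, not something Google provides, and the numbers in the example are invented for illustration:

```javascript
// Two-proportion z-test: is variant B's rate meaningfully different from A's?
// Use clicks/impressions for CTR, or conversions/clicks for conversion rate.
function twoProportionZTest(successesA, trialsA, successesB, trialsB) {
  var rateA = successesA / trialsA;
  var rateB = successesB / trialsB;
  var pooled = (successesA + successesB) / (trialsA + trialsB);
  var stdErr = Math.sqrt(pooled * (1 - pooled) * (1 / trialsA + 1 / trialsB));
  var z = (rateB - rateA) / stdErr;
  return { rateA: rateA, rateB: rateB, z: z,
           significantAt95: Math.abs(z) > 1.96 };
}

// Hypothetical test result: 24 conversions from 410 clicks (Variant A)
// vs. 41 conversions from 395 clicks (Variant B).
Logger.log(twoProportionZTest(24, 410, 41, 395));
// z comes out around 2.4, so the gap clears the 95% confidence bar.
```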
Method 2: Ad Group Isolation Testing
For accounts where you have enough volume, you can test a specific headline theme by creating a dedicated ad group that isolates a message. This works especially well when testing entirely different creative angles (e.g., price-led vs. trust-led vs. urgency-led).
- Duplicate an ad group
- In the test ad group, replace all headlines with variations of the theme you're testing
- Run both ad groups simultaneously, targeting the same keywords and audiences
- Compare CTR, conversion rate, and CPA between the two groups
This isn't a perfectly controlled experiment; there will be auction variance. But at scale (I'd suggest at least 1,000 clicks per variant before analysis), you can draw reasonably confident directional conclusions.
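To make that comparison routine, a small Google Ads Script can pull the control and test ad groups' stats side by side. The ad group names below are hypothetical placeholders; getStatsFor() and the Stats getters are standard Ads Scripts methods, though withCondition() syntax differs between the old and new script experiences, so adjust to match yours:

```javascript
// Sketch: log CTR, conversion rate, and CPA for a control/test ad group pair.
function main() {
  // Hypothetical names -- substitute your own control/test ad groups.
  var names = ['Plumbing - Trust-Led (Control)', 'Plumbing - Price-Led (Test)'];
  names.forEach(function(name) {
    var iter = AdsApp.adGroups()
        .withCondition("ad_group.name = '" + name + "'").get();
    if (!iter.hasNext()) return;
    var stats = iter.next().getStatsFor('LAST_30_DAYS');
    // getCtr() and getConversionRate() return ratios between 0 and 1.
    var cpa = stats.getConversions() > 0 ?
        (stats.getCost() / stats.getConversions()).toFixed(2) : 'n/a';
    Logger.log(name +
        ' | CTR: ' + (stats.getCtr() * 100).toFixed(2) + '%' +
        ' | Conv. rate: ' + (stats.getConversionRate() * 100).toFixed(2) + '%' +
        ' | CPA: ' + cpa);
  });
}
```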
Method 3: Leverage the "Low" Rating as a Culling Signal
If you don't have the volume or appetite for controlled experiments, a more pragmatic approach is to use the "Low" asset rating as a systematic culling mechanism. The rule is simple:
- Any headline or description that holds a "Low" rating for 30+ days gets replaced
- Document what you replaced and what you replaced it with
- Track whether the overall ad's CTR and conversion rate improves after the refresh
- Over time, you build a library of what themes tend to earn "Best" ratings vs. "Low" ratings in your specific vertical
Key Insight: The "Low" rating is the most actionable signal in RSA reporting — even if you can't measure individual asset conversions, a headline that the algorithm consistently deprioritizes is a headline worth replacing. Treat it as a negative signal and iterate from there.
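If you'd rather have this culling check run on a schedule than rely on memory, a script can surface every Low-rated asset in the account. A minimal sketch, assuming the ad_group_ad_asset_view report and its performance_label field are queryable via AdsApp.search() (confirm against current API documentation):

```javascript
// Sketch: flag every RSA headline or description currently rated LOW.
function main() {
  var query =
      'SELECT ad_group.name, asset.text_asset.text, ' +
      'ad_group_ad_asset_view.field_type, ' +
      'ad_group_ad_asset_view.performance_label ' +
      'FROM ad_group_ad_asset_view ' +
      "WHERE ad_group_ad_asset_view.performance_label = 'LOW'";
  var rows = AdsApp.search(query);
  while (rows.hasNext()) {
    var row = rows.next();
    Logger.log('[' + row.adGroup.name + '] ' +
               row.adGroupAdAssetView.fieldType + ': "' +
               row.asset.textAsset.text +
               '" is rated LOW -- replacement candidate');
  }
}
```

Pair the output with a dated log (a spreadsheet column is enough) so assets are only replaced once they've held the Low rating past your 30-day threshold.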
Building a Client-Facing Creative Performance Report
This is where the rubber meets the road. Clients don't want to hear "Google won't tell us." They want a story about what's working and why. Here's a reporting framework that works in practice.
The Creative Scorecard Template
| Metric | What It Measures | Where to Get It | Update Frequency |
| --- | --- | --- | --- |
| Asset Rating (Best/Good/Low) | Algorithm preference | RSA Asset View | Monthly |
| Top Impression-Earning Combinations | Preferred message pairings | RSA Combinations Tab | Monthly |
| Ad-Level CTR | Message resonance with searchers | Ads Report | Weekly |
| Ad-Level Conversion Rate | Post-click relevance & intent match | Ads Report | Weekly |
| Pinned Variant Performance (if testing) | Direct headline comparison | Ad-level segmentation | Per test cycle |
| Creative Refresh Cadence | Team's speed of iteration | Internal tracking | Quarterly review |
Framing the Narrative for Clients
When presenting creative performance, help clients understand the difference between algorithmic preference (what Google shows) and business performance (what converts). A useful framing:
- "Here's what Google is choosing to show most:" — Asset ratings & top combinations
- "Here's what's driving results at the ad level:" — CTR and conversion rate by ad
- "Here's what we're retiring and why:" — Low-rated assets being replaced
- "Here's what we're testing next:" — New creative hypotheses in the pipeline
This four-part structure gives clients visibility and confidence without overpromising granularity that the platform simply doesn't provide.
Creative Themes That Consistently Perform Across Verticals
There's a recurring debate among practitioners about whether headline performance generalizes across accounts. The honest answer: it depends heavily on vertical, intent, and audience temperature. But after managing spend across e-commerce, SaaS, lead gen, and local services accounts, a few creative patterns emerge consistently.
Headlines That Tend to Earn "Best" Ratings
- Keyword insertion or close variants in the headline — Relevance signals are still powerful. Headlines that mirror search intent tightly tend to win.
- Specific numbers & proof points — "Rated 4.9/5 by 2,000+ Customers" typically outperforms "Highly Rated Service." Specificity builds credibility.
- Clear CTAs with low friction language — "Get a Free Quote Today" vs. "Contact Us." The former signals the next step; the latter is vague.
- Price/offer transparency — In e-commerce especially, headlines that surface pricing ("From $29/Month," "Save Up to 40%") tend to pre-qualify clicks and improve downstream conversion rate.
Headlines That Tend to Earn "Low" Ratings
- Generic brand taglines that don't communicate unique value
- Headlines that duplicate meaning with other headlines in the same ad (Google detects message redundancy)
- Headlines shorter than 15-20 characters — they don't utilize available space and get deprioritized
- Overclaiming superlatives without proof ("The Best Solution in the World")
Best Practice: Audit your RSAs for message diversity. Each of your 15 headlines should communicate a genuinely different reason to click: features, price, urgency, social proof, brand trust, guarantee, speed. If you have three headlines that all say "Free Shipping" in different ways, expect Google to rate at least two of them "Low" for redundancy.
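A rough way to pre-screen for redundancy before Google penalizes it is a simple word-overlap check across an ad's headlines. The 50% overlap and 15-character thresholds below are illustrative heuristics, not Google's actual redundancy logic:

```javascript
// Sketch: flag headline pairs with heavy word overlap (likely redundant)
// and headlines too short to use the available space.
function auditHeadlines(headlines) {
  var tokenize = function(s) {
    return s.toLowerCase().replace(/[^a-z0-9 ]/g, '').split(/\s+/);
  };
  headlines.forEach(function(h) {
    if (h.length < 15) Logger.log('Too short: "' + h + '"');
  });
  for (var i = 0; i < headlines.length; i++) {
    for (var j = i + 1; j < headlines.length; j++) {
      var a = tokenize(headlines[i]);
      var b = tokenize(headlines[j]);
      var shared = a.filter(function(word) { return b.indexOf(word) !== -1; });
      var overlap = shared.length / Math.min(a.length, b.length);
      if (overlap >= 0.5) {
        Logger.log('Possible redundancy: "' + headlines[i] + '" vs. "' +
                   headlines[j] + '" (' +
                   Math.round(overlap * 100) + '% word overlap)');
      }
    }
  }
}

auditHeadlines([
  'Free Shipping on All Orders',
  'All Orders Ship Free',          // flags against the line above
  'Rated 4.9/5 by 2,000+ Customers'
]);
```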
When to Use Third-Party Tools vs. Native Reporting
Some practitioners turn to third-party tools like Optmyzr, Search Ads 360, or custom scripts to supplement Google's native creative reporting. Here's an honest take on where these add genuine value:
- Optmyzr's Ad Text Optimization tools — Useful for bulk analysis of asset performance across multiple RSAs and ad groups. Saves significant time at scale.
- Google Ads Scripts — You can write a script that exports asset ratings and combinations data to a Google Sheet automatically, enabling trend tracking over time. The native UI doesn't retain historical asset rating data; once an asset is replaced, its history is gone. A script-based export solves this (a minimal sketch follows this list).
- Data Studio / Looker Studio dashboards — Connecting Google Ads API data to a visualization layer lets you track ad-level CTR and CVR trends alongside creative refresh dates, which helps correlate creative changes with performance shifts.
- Search Ads 360 — Primarily valuable for large enterprise accounts managing creative across multiple Google Ads accounts. Overkill for most mid-market clients.
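For the script-based export mentioned above, here's roughly what the time-series piece could look like: a dated snapshot of asset ratings appended to a Google Sheet on each run, so rating history survives asset replacement. The sheet URL is a placeholder, and the ad_group_ad_asset_view query carries the same verify-before-use caveat noted earlier:

```javascript
// Sketch: append today's asset ratings to a Sheet for trend tracking.
function main() {
  var SHEET_URL = 'https://docs.google.com/spreadsheets/d/YOUR_SHEET_ID/';
  var sheet = SpreadsheetApp.openByUrl(SHEET_URL).getActiveSheet();
  var today = Utilities.formatDate(new Date(),
      AdsApp.currentAccount().getTimeZone(), 'yyyy-MM-dd');
  var rows = AdsApp.search(
      'SELECT ad_group.name, asset.text_asset.text, ' +
      'ad_group_ad_asset_view.performance_label ' +
      'FROM ad_group_ad_asset_view');
  while (rows.hasNext()) {
    var row = rows.next();
    // appendRow per result is slow on large accounts; batch writes if needed.
    sheet.appendRow([today, row.adGroup.name, row.asset.textAsset.text,
                     row.adGroupAdAssetView.performanceLabel]);
  }
}
```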
Common Mistake: Over-investing in third-party tooling before establishing a consistent native reporting process. The highest-value creative insight — replacing "Low" assets, tracking combinations, running pinning tests — costs nothing and only requires disciplined use of what's already in the Google Ads interface.
What to Do Next: Your Creative Reporting Action Plan
If you're starting from scratch or trying to build a more rigorous creative reporting process for a client, here's a concrete five-step plan:
- Audit all active RSAs for asset ratings this week. Flag any headline or description that has held a "Low" rating for 30+ days. These are your immediate replacement candidates. Document what you're replacing so you can track the before/after impact on ad-level CTR and conversion rate.
- Pull the Combinations report for every active RSA and save it to a spreadsheet. Note which headline pairings dominate impressions. Look for creative patterns — are price-led combos earning more impressions? Trust signals? Urgency cues? This is your first directional read on what the algorithm (and by extension, your audience) responds to.
- Set up a monthly creative export routine. Whether manual or via script, export asset ratings and combination data monthly. Without a time-series view, you're flying blind on creative momentum.
- Design one pinning-based A/B test for your highest-volume ad group. Identify the single biggest creative question your client has — price messaging vs. value messaging, urgency vs. trust, brand name in headline vs. not — and structure a clean pinned test to answer it. Give it 3-4 weeks minimum and target at least 300 clicks per variant.
- Build a simple creative performance brief for your client. Using the four-part narrative framework above (what Google shows, what drives results, what you're retiring, what you're testing), create a one-page monthly creative report. Clients who understand the process of creative iteration are far more patient with ambiguous data than clients who feel left in the dark.
RSA creative reporting will never be as clean as the old ETA days. But practitioners who build systematic processes around asset management, directional testing, and client communication will consistently outperform those who either surrender to the black box or try to fight it entirely. The answer, as always, is somewhere in the middle: use Google's automation intelligently while maintaining enough visibility to learn, iterate, and grow your creative library over time.