Quality Score has been a lightning rod for debate in the PPC world for years — and with good reason. A common question in the r/PPC community captures the frustration perfectly: practitioners are seeing disconnects between high Quality Scores and poor auction performance, low Quality Scores on top-performing keywords, and campaigns that win at auction with "below average" components. So is Quality Score broken, or have we just been misusing it? After managing over $350M in Google Ads spend, my honest answer is: it's not broken, but it's wildly misunderstood — and optimizing for it as a primary KPI is one of the most common mistakes I see from beginner to intermediate practitioners.
Let's start with the fundamentals, because Google's own documentation leaves a lot of room for misinterpretation. Quality Score is a diagnostic tool — a 1–10 score assigned at the keyword level that reflects Google's assessment of three components: Expected Click-Through Rate (eCTR), Ad Relevance, and Landing Page Experience. Each component is rated as "Below Average," "Average," or "Above Average."
Here's the critical distinction most practitioners miss: Quality Score is not the same as Ad Rank quality. Ad Rank — which actually determines your auction position and cost-per-click — is calculated in real time using signals that are far more granular and contextual than the keyword-level Quality Score you see in your dashboard. That dashboard number is a lagging, aggregated estimate. It doesn't account for:

- The exact real-time query that triggered the auction (especially under broad match)
- Contextual signals such as device, audience, and query specificity
- Where the searcher sits in the funnel, and how well that intent matches your landing page and historical conversion data
As practitioners often discuss in the r/PPC community, Quality Score is not a key performance indicator and should not be optimized or aggregated at the account level. This is correct. Chasing a portfolio-level "average Quality Score" metric is a distraction at best and actively harmful at worst.
Even if Quality Score isn't a KPI, the three components are genuinely useful diagnostic signals when used correctly.
eCTR compares your keyword's expected CTR against all other advertisers targeting similar keywords, normalized for ad position. A "Below Average" eCTR is the most actionable signal — it typically means your ad copy isn't resonating with searcher intent for that keyword. In my experience managing large accounts, eCTR "Below Average" ratings are almost always fixable at the ad group and copy level.
Benchmark context: For branded keywords, I routinely see CTRs of 15–40%+ that produce "Above Average" eCTR ratings. For competitive non-branded commercial keywords, "Average" eCTR can correspond to actual CTRs as low as 2–5%. The comparison is always relative to the competitive landscape for that specific keyword cluster.
Ad Relevance measures how closely your ad copy aligns with the searcher's query intent. This is where over-segmentation and keyword stuffing in ad copy actually used to help — but with Responsive Search Ads (RSAs) now dominant, Google's machine learning handles a lot of this matching automatically. "Below Average" Ad Relevance in 2024 usually signals one of two things:

- The ad group's theme is too loose, so the keyword sits alongside assets written for a different intent
- The RSA assets simply don't cover the language and intent of the queries the keyword captures
Landing Page Experience is the component where I see the most improvement potential across accounts I audit. A "Below Average" rating is Google's signal that your destination URL is slow, isn't mobile-optimized, has thin content relative to the keyword's implied intent, or shows a high bounce rate relative to the query type. Core Web Vitals have become increasingly influential here: I've seen accounts gain a full Quality Score point on key terms after improving LCP (Largest Contentful Paint) from 4.5 seconds to under 2.5 seconds.
The frustration practitioners express is real, and it stems from structural changes in how Google Ads has evolved over the past five years.
When the industry ran mostly on exact and phrase match with manual CPC bidding, Quality Score had a more direct and visible relationship with CPCs and positions. The formula was simpler to observe: higher QS = lower CPC for similar position. With broad match + Target CPA/ROAS smart bidding now dominating spend allocation for most sophisticated accounts, Quality Score's influence on individual auction outcomes is increasingly mediated by Google's real-time auction quality assessment — which you cannot see.
This means you can have a keyword sitting at QS 5 that Google's real-time system consistently scores highly in auctions because the actual user signals (intent, device, audience, query specificity) align well with your landing page and historical conversion data. Conversely, a QS 9 keyword might perform poorly in conversion auctions because the high historical CTR was driven by curiosity clicks from early-funnel searchers who never converted.
With Expanded Text Ads (ETAs), you could engineer ad copy specifically for keyword insertion and see Quality Score improvements within days. With RSAs, Google's system serves different headline/description combinations to different users, which means your "Ad Relevance" score is an averaged assessment across potentially thousands of asset combinations. The score becomes less actionable and more of a rough signal.
Quality Score is initialized at 6 for new keywords and then calibrated based on auction history. For low-volume keywords — anything generating fewer than a few hundred impressions per month — the Quality Score can remain statistically unstable for months. I've seen brand new competitor-targeting keywords with QS 4 outperform established keywords with QS 8 in terms of actual conversion cost, purely because the landing page experience and real-time auction quality assessment were superior.
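To make "statistically unstable" concrete, here is a minimal sketch (plain Python, invented numbers) computing a 95% Wilson confidence interval for a keyword's CTR. At a few hundred impressions, the same observed 3% CTR is compatible with anything from roughly 1.5% to 5.5%, which is exactly the uncertainty a keyword-level Quality Score is averaging over:

```python
import math

def wilson_interval(clicks: int, impressions: int, z: float = 1.96):
    """95% Wilson score interval for CTR = clicks / impressions."""
    if impressions == 0:
        return (0.0, 1.0)
    p = clicks / impressions
    denom = 1 + z**2 / impressions
    center = (p + z**2 / (2 * impressions)) / denom
    half = z * math.sqrt(p * (1 - p) / impressions
                         + z**2 / (4 * impressions**2)) / denom
    return (center - half, center + half)

# Hypothetical low-volume keyword: 9 clicks on 300 impressions (3% CTR)
low = wilson_interval(9, 300)
# Same observed CTR at higher volume: 900 clicks on 30,000 impressions
high = wilson_interval(900, 30_000)
print(f"300 impr:  CTR 3.0%, 95% CI {low[0]:.1%} - {low[1]:.1%}")
print(f"30k impr:  CTR 3.0%, 95% CI {high[0]:.1%} - {high[1]:.1%}")
```

The low-volume interval is several times wider than the high-volume one, which is why a QS on a low-impression keyword can swing for months without any real change in quality.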
| Scenario | QS Relevance | What to Focus On Instead |
|---|---|---|
| Manual CPC campaigns, competitive SERP | High — QS directly affects CPC at given position | Improve eCTR through ad copy testing |
| Smart bidding (tCPA/tROAS) campaigns | Low — real-time signals dominate | Conversion volume, CPA, ROAS trends |
| New account or new ad group setup | Medium — early signal of structural issues | Ad group theme tightness, landing page speed |
| Budget-constrained campaigns | Medium — affects impression share efficiency | Impression Share Lost to Rank metric |
| Branded keywords | Low — you should dominate by default | Branded search volume trends, competitor activity |
| Competitive high-CPL B2B keywords | Medium-High — CPC efficiency matters at $50-200+ CPCs | Landing page conversion rate, QS component breakdown |
Here's the framework I use when I review Quality Score data in account audits. The key is to use it as a diagnostic filter, not a performance dashboard.
Export your keyword data and sort by cost. Filter for keywords that represent 80% of your spend (your core terms). Now look at Quality Score for only these keywords. A QS 4 on a keyword spending $5/month is irrelevant noise. A QS 4 on a keyword spending $5,000/month warrants investigation.
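This filtering step can be sketched in plain Python; the column names and rows below are invented for illustration, and in practice you'd read them from your keyword export:

```python
# Hypothetical keyword export: keyword, monthly cost, Quality Score.
keywords = [
    {"keyword": "crm software",         "cost": 5200.0, "qs": 4},
    {"keyword": "best crm for smb",     "cost": 3100.0, "qs": 7},
    {"keyword": "crm pricing",          "cost": 1400.0, "qs": 8},
    {"keyword": "what is a crm",        "cost": 180.0,  "qs": 5},
    {"keyword": "crm integrations",     "cost": 60.0,   "qs": 3},
    {"keyword": "free crm spreadsheet", "cost": 5.0,    "qs": 4},
]

# Step 1: sort by cost and keep the keywords that make up ~80% of spend.
keywords.sort(key=lambda k: k["cost"], reverse=True)
total = sum(k["cost"] for k in keywords)
core, running = [], 0.0
for k in keywords:
    core.append(k)
    running += k["cost"]
    if running >= 0.8 * total:
        break

# Step 2: within the core terms only, flag low Quality Scores.
flagged = [k for k in core if k["qs"] < 6]
for k in flagged:
    print(f'INVESTIGATE: "{k["keyword"]}" (QS {k["qs"]}, ${k["cost"]:,.0f}/mo)')
```

Note that the QS 3 and QS 4 keywords at the bottom of the spend distribution never surface: they fall outside the core-spend filter, which is the point.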
For each of your high-spend, lower Quality Score keywords (QS <6), check actual CPA and ROAS against your campaign targets. If they're performing at or above target, note it and move on — the diagnostic score doesn't override real performance data. If they're underperforming and have low QS, now you have a genuine optimization opportunity.
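This performance-first check can be sketched the same way; the target and conversion numbers below are invented for illustration:

```python
# Hypothetical high-spend keywords that carry a QS below 6.
TARGET_CPA = 80.0  # illustrative campaign target

low_qs_core = [
    {"keyword": "crm software",  "qs": 4, "cost": 5200.0, "conversions": 70},
    {"keyword": "crm migration", "qs": 5, "cost": 2400.0, "conversions": 18},
]

for k in low_qs_core:
    cpa = k["cost"] / k["conversions"]
    if cpa <= TARGET_CPA:
        verdict = "at/above target despite low QS: note it and move on"
    else:
        verdict = "underperforming with low QS: genuine optimization opportunity"
    print(f'{k["keyword"]}: CPA ${cpa:.2f} (QS {k["qs"]}) -> {verdict}')
```

The first keyword beats target CPA at QS 4, so the diagnostic score is ignored; the second misses target, so its QS components are worth digging into.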
Quality Score changes slowly — especially for landing page experience improvements, which can take 4–6 weeks to fully reflect after a site speed fix. Don't expect instant feedback. Set a 30-day checkpoint after any structural changes.
One of the most persistent bad habits I see — especially in agency reporting — is rolling up Quality Score into an account-level average metric and reporting it to clients or leadership as a KPI. "Our account average Quality Score improved from 6.2 to 7.1 this quarter" is a meaningless statement that can actually mask real problems.
Here's why aggregating Quality Score is misleading:

- The average is unweighted: a QS 4 on a $5/month keyword counts the same as a QS 4 on a $5,000/month keyword that drives your results.
- It's trivially gameable: pausing low-QS, low-spend keywords or adding new keywords (which initialize at 6) moves the average without changing a single auction outcome.
- "Good" scores differ by keyword type: branded terms naturally sit near the top while competitive non-branded terms cluster lower, so keyword mix drives the average more than any optimization work does.
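To see the distortion concretely, here is a toy comparison (all numbers invented) between the unweighted account average and a spend-weighted view of the same keywords:

```python
# Invented keyword-level data: Quality Score and monthly spend.
account = [
    {"qs": 9, "spend": 50.0},    # branded long-tail, trivial spend
    {"qs": 8, "spend": 120.0},
    {"qs": 8, "spend": 90.0},
    {"qs": 4, "spend": 6000.0},  # the keyword that actually drives results
    {"qs": 5, "spend": 3500.0},
]

naive_avg = sum(k["qs"] for k in account) / len(account)
total_spend = sum(k["spend"] for k in account)
weighted_avg = sum(k["qs"] * k["spend"] for k in account) / total_spend

print(f"Unweighted average QS:     {naive_avg:.1f}")
print(f"Spend-weighted average QS: {weighted_avg:.1f}")
```

The report says the account averages QS 6.8, while nearly all of the budget is flowing through keywords scored 4 and 5. The headline number and the spend reality point in opposite directions.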
If you've been spending time optimizing for Quality Score as a primary metric, here's how to refocus your efforts on what actually moves the needle:

- Track conversion volume, CPA, and ROAS trends as your primary performance metrics.
- Watch Impression Share Lost to Rank on budget-constrained campaigns.
- Use QS components only as a diagnostic filter on your highest-spend keywords.
- Invest in landing page work, which pays twice: in conversion rate and in Landing Page Experience.
Quality Score isn't broken — but it's a tool that's been used for the wrong job by a lot of well-meaning practitioners. Used correctly as a diagnostic filter rather than a performance target, the component-level data genuinely helps you identify structural weaknesses in ad copy relevance and landing page experience. The mistake is treating the aggregate number as a proxy for account health. Your real account health lives in your conversion volume, CPA trends, ROAS, and Impression Share data — and that's where your optimization energy should go.