Recently, Google released a provocative new White Paper on AdWords Quality Score, setting the industry mildly abuzz. Google rarely releases supplementary detail about the formula beyond what appears in its core help files.
Perhaps Google is simply “clarifying” how Quality Score works in order to cut through a sea of misinformation. It was a helpful intervention by Google, perhaps channeling 1980s legal services pitchman Joel Hyatt (“Somehow, in all these dusty old law books, an idea got lost; the idea that law is for people.”). Somehow, in all the metaphysical huffing and puffing about Quality Score, a core concept got lost: that keyword advertisers should test and iterate compelling, highly focused ad campaigns with a view to their attributed KPI performance.
Beyond that, though, Google seems to have subtly updated the actual Quality Score formula.
That’s no surprise. Quality Score (initially, a simple proto-version of it) has been in force since AdWords moved to its pay-per-click auction in 2002. Google, as is its wont, has tinkered with it over time to achieve multiple goals:
- To maximize Google’s revenue, by effectively finding the balance that generates the highest CPM (cost per thousand impressions) on commercially-oriented user search queries;
- To ensure that ads are nearly as relevant as, or even more relevant than, the associated unpaid listings;
- To enforce editorial and other Google policies;
- To police some elements of landing page relevance;
- To nudge advertisers into pairing the best ads with the best keywords;
- To police websites and business models, to avoid spammy, deceptive, or shady offers that may reflect poorly on Google and do a disservice to legitimate advertisers.
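The first goal above comes down to simple arithmetic: on a CPC-billed auction, Google’s expected revenue per thousand impressions depends on clickthrough rate as much as on the bid, which is why a relevant, high-CTR ad can out-earn a higher bidder. A toy sketch (all numbers invented for illustration):

```python
def ecpm(ctr, cpc):
    """Expected revenue per 1,000 impressions for a CPC-billed ad:
    clickthrough rate x cost per click x 1,000."""
    return ctr * cpc * 1000

# Hypothetical advertisers: a modest bid with strong relevance
# can generate more revenue per impression than a big bid on a
# weak ad -- the balance Quality Score is built to find.
high_bid_weak_ad = ecpm(ctr=0.01, cpc=4.00)   # ~$40 per 1,000 impressions
low_bid_strong_ad = ecpm(ctr=0.05, cpc=1.50)  # ~$75 per 1,000 impressions
print(high_bid_weak_ad, low_bid_strong_ad)
```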
Google has employed various formulas over the years to steer advertiser behavior towards these goals. It’s a fascinating study – if you’re a policy wonk – in how a private-sector company can study and deploy “policy instruments” to achieve compliance in a variety of ways. Google has tinkered with various incentive schemes. It’s said that Googlers of late have taken to consulting the pop-policy book Nudge: Improving Decisions About Health, Wealth, and Happiness. Further proof that Google does not shrink from social engineering and top-down control of systems, similar to legions of technocrats since the New Deal era.
At every important stage in the evolution of Quality Score, Google has attempted to buff its rough edges, and that seems to be true, too, of this iteration. They’re clearly looking to avoid perverse incentives, weird loopholes, unnecessary workarounds, and clunky optics. Nowadays, Quality Score is so smooth that if it were a flooring surface, you’d need suction cups to walk across it.
Quality Score today determines not only ad position, but auction eligibility. That means keywords without the highest Quality Scores don’t enjoy 100% impression share. So a lower score doesn’t just lose you a bit of visibility by dint of a lower ad position; it will cost you exposure, period. Google has engineered the demand curve subtly.
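The double effect described above can be sketched with the classic simplified formula, Ad Rank = bid × Quality Score. This is a deliberately crude model: the real system uses more signals, and the eligibility threshold here is hypothetical, purely to show how a low score can keep an ad out of the auction entirely rather than merely ranking it lower.

```python
def ad_rank(bid_cpc, quality_score):
    """Classic simplified formula: Ad Rank = bid x Quality Score."""
    return bid_cpc * quality_score

def auction(advertisers, eligibility_threshold=4.0):
    """Rank eligible ads. Ads below the (hypothetical) threshold
    never enter the auction at all -- lost impression share."""
    eligible = [a for a in advertisers
                if ad_rank(a["bid"], a["qs"]) >= eligibility_threshold]
    return sorted(eligible,
                  key=lambda a: ad_rank(a["bid"], a["qs"]),
                  reverse=True)

ads = [
    {"name": "A", "bid": 2.00, "qs": 7},  # rank score 14.0
    {"name": "B", "bid": 4.00, "qs": 3},  # rank score 12.0 -- outbid A, ranks lower
    {"name": "C", "bid": 1.50, "qs": 2},  # rank score 3.0 -- ineligible
]
print([a["name"] for a in auction(ads)])  # -> ['A', 'B']
```

Note that advertiser B bids twice what A does yet ranks below it, and C is shut out of the auction entirely rather than simply appearing last.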
Spot the Subtle Change?
The rough bit that Google has sanded off this version of Quality Score seems to be the tendency of poorly-built accounts and pockets of poor performance to spill over into the account as a whole.
I quote directly:
- “How you structure your account doesn’t matter. If it doesn’t affect user experience, it shouldn’t affect quality or Quality Score.”
- “There is no such thing as ad group-level, campaign-level, or account-level Quality Score.”
That was news to many of us.
There is little doubt in many of our minds that Google explicitly spoke in the past of an “account-wide” Quality Score factor.
Google used to draw fairly heavily on your own account performance in its predictive Quality Score scheme. So there was, in the past, a penalty to be paid for lazily-built ad groups. Google still draws information from your account to judge new keywords, but it’s clear the approach is becoming smoother.
I see a couple of trends driving the change:
- Google now has a truly stupendous amount of Big Data to base keyword judgments on, outside of your account. Google now needn’t fear “greenlighting” the smarter pockets of your account just because somewhere else in your account you built something stupid. Google doesn’t want the process of building an account to be the exclusive terrain of a few “Quality Score engineering” experts. They want accounts to work well in the parts that are built well.
- In being able to smooth these rough edges with the help of better predictive data, economic efficiency improves overall. More advertisers do better, and Google makes more money.
Let’s turn to how this polished gem can help you.
7 Things to Worry Less About
There are several outdated “problem areas” that, in the past, would lead you to steer away from certain practices due to a fear of the dreaded Quality Score Contagion. You have less to worry about than you think. Google isn’t “grading” your account like some evil Quality Score Santa.
Consider:
- Broad keyword experiments. Primarily for reasons of perceived Quality Score contagion, perfectly useful broad keywords and very short exact matches were often avoided for extended periods of time until an account “got established.” Building a tight, “core intent” account out first, before fanning out to broader and less profitable inventory, still makes sense from an ROI and relevance perspective. But we no longer need to approach the task with Quality Score Contagion Panic uppermost in our minds.
- Multi-match-type ad groups. I always had little patience for unsubstantiated theories such as building ad groups around single match types only. An ad group is a robust unit that actually does quite well, thank you, if you aggregate a reasonable number of keywords and match types in it.
- Panic-pausing keywords that, for whatever reason, and sometimes no good reason, are sporting a frighteningly poor Quality Score, such as 1 or 2. That used to be a “thing”. You still need to make these judgments eventually, but don’t exaggerate their importance.
- Brand keywords. Whether it’s low CTR, strange post-click behavior, or Google profit motive, brand keywords (other than your own) can often sport low QS’s. So do you steer clear of them entirely? Google’s new documentation implies: don’t worry about that.
- Managing a B2B campaign strictly to ROI metrics by carefully filtering out non-prospects via the ad copy. Sure, you’re still going to feel pain in many B2B accounts because you’re swimming against the tide of how a consumer search engine approaches “relevance.” Sure, you’re going to pay a “tax” for having a low CTR. But ROI is your goal; don’t force yourself to write poor-performing, high-CTR ads just because of what some guy on a street corner told you about Quality Score.
- Using DKI in ads. Beware of any claims about dynamic keyword insertion that speak primarily to the Quality Score benefit. Incorporate DKI along with more conventional headlines in your ad testing regimen as you normally would.
- Turning on whichever account settings you want. Feel the need to give the whole account a “Quality Score bump” by switching your ad rotation settings to “Optimize for Clicks,” as opposed to “Rotate Indefinitely”? Think twice. There’s probably no reason to go on such a magical mystery tour, unless click volume goals are 100% aligned with your profit goals.
“I’m going to put this guy RIGHT OUT OF COMMISSION!”
There are a few advice-givers who will naturally feel their wings a bit clipped by Google’s timely White Paper. It’s as if Google tired of gurus taking liberties with the truth, so they put out the information – à la Jerry Seinfeld as he returned an expensive logo blazer – “purely for spite.” Despite the array of confusing claims out there, consider putting some stock in what Google actually says.
Andrew Goodman
Shelley Ellis says
Hey, Andrew, what do you think about the Google account “health reports” when it comes to QS? Here’s the list:
Average Mobile Bid Adjustment
Mobile Search Position
Has Mobile Preferred Ads?
Has Sitelinks?
Has Call Extensions?
Has Location Extensions?
Has App Extensions?
Has Conversion Tracking?
Andrew Goodman says
Hi Shelley,
I’ve come to see most benchmarking and account notifications as a form of clutter unless customized by the advertiser or agency, but certainly all of the tick-boxes you mention should be considered. Extensions definitely boost QS and in many cases trust and click volume, so they should be looked at and implemented. It has become a bit of an arms race though, and much of it is “on faith,” instead of “data driven,” which is why I wrote an article recently (half tongue in cheek) called “Why I Hate Ad Extensions.” You might want to check it out.
As for conversion tracking: a no-brainer, right? As far as I know this has nothing to do with QS.