How an Inaptly Named Bulk-Discount Program Can Lead to Confusion and Bad Business Decisions

Names and labels aren’t everything, but as those of us who work in marketing know (by virtue of the fact that we have jobs), they sure can make a difference. Google’s inaptly named “Quality Score” is no exception. There’s been a lot of industry chatter recently regarding Quality Score – more analysis of its impact, more investigations into how it’s calculated, even occasional calls to optimize to Quality Score as if it were a performance metric. The reason for this trend is, in my opinion, psychological. Just as we SEM ad-writer/analysts play with words in our ads to elicit different reactions from users viewing SERPs, Google has done a number on the SEM industry—even if unintentionally—by inserting the word “quality” into “Quality Score.”

Sure, there are relevance-related variables in the QS equation. Tying keywords to their ads and landing page tends to give them a good starting QS when launching a new SEM campaign. This is what Google tells us, and we all see it in practice. But what Google also makes very clear is that once a campaign starts generating traffic, the most important factor by far in determining QS is click-through rate.

To step back a moment, the Quality Score that we see in the Google UI is really a bird’s-eye view. Quality Score is a moving target calculated one impression at a time. In other words, the combination of one keyword with one ad in one position at one moment in time, and the combination of the same keyword in the same ad group with a different ad in some other position at some other time, will generate different Quality Scores at the point-of-query. The QS number we see for a keyword in AdWords is an average of all those query-by-query QS calculations, NOT a flat ranking that Google assigns once. Google could have chosen to show QS at the ad level instead of the keyword level (as YSM does with its Quality Index).
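That per-query averaging can be sketched in a few lines of Python. The scoring function, weights, and numbers below are entirely hypothetical — Google has never published the actual formula — but the mechanics are the point: many per-impression scores, one averaged number in the UI.

```python
# Illustrative sketch only (NOT Google's actual algorithm): the keyword-level
# Quality Score shown in the UI behaves like an average over per-query scores.
# The weights and inputs below are invented for illustration.

def per_query_qs(ctr_estimate, relevance, landing_page):
    """Hypothetical per-impression score: CTR dominates, per Google's own guidance."""
    return round(10 * (0.7 * ctr_estimate + 0.2 * relevance + 0.1 * landing_page))

# Each impression of the SAME keyword can score differently depending on
# which ad it was paired with, its position, and the moment of the query.
impressions = [
    per_query_qs(0.9, 0.8, 0.7),  # keyword + Ad A, high position  -> 9
    per_query_qs(0.5, 0.8, 0.7),  # keyword + Ad B, lower position -> 6
    per_query_qs(0.7, 0.8, 0.7),  # keyword + Ad A, another query  -> 7
]

# The keyword-level number in the UI is the average of those instances,
# not a flat ranking assigned once.
ui_quality_score = sum(impressions) / len(impressions)  # ~7.33
```

Note that no single impression ever "had" a QS of 7.33; that number only exists as a summary of the query-by-query calculations.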
If Google did it that way, the ad QS number would also be an average, effectively amounting to Google telling the advertiser, “If you take all the times that all the keywords in this ad group have triggered this particular ad, and average all those unique quality-score instances together, this would be your average Quality Score for this ad.”

Google does not show QS by ad, but the impact of Quality Score is arguably clearer when looking at performance data at the ad level. Take this ad group as an example (the #s are real, but ad copy is not shown to protect the innocent):

The first thing to note here is that the campaign in which this ad group lives is set to “Rotate Ads Evenly” rather than auto-optimize, meaning that ads are supposed to be displayed against an equal % of queries rather than giving more impressions to ads with higher CTRs. However, there is still a severe impression discrepancy (column B). How can that be?

Google’s answer is Quality Score. If the QS of a keyword-ad combo causes the ad’s position to fall off the first page of search results, and if the user does not click through to that second page of paid ads (and who does?), then that ad will not display. It has still received its equal opportunity to show up in the search results, but it won’t get the impression. So in the sample above, Ad #1 has a higher QS and shows up on the first page of search results more frequently, which leads to a higher impression count.

Without looking at the ad copy itself, what exactly is it about these ads that’s causing this Quality Score difference? See column D: CTR. The ad that dominates the CTR game gets a higher QS, gets shown on the first page more often, and gets more impressions as a result. Again, none of this is guesswork or any sort of conspiracy theory. This is Google’s explanation.

Now let’s move over to the conversion-rate metrics (columns H & I). What do we see here? Performance flips.
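That first-page mechanism can be illustrated with a toy simulation. Everything here is an assumption made for illustration — the threshold, the score fluctuation, the two-ad even rotation — but it shows how equal rotation can still produce a severe impression gap when one ad's per-query score reliably clears the first-page bar and the other's often doesn't:

```python
import random

# Toy simulation (assumptions, not Google's mechanism): ads rotate evenly,
# but an impression is only recorded when the keyword-ad combo's per-query
# score clears a hypothetical first-page threshold.
random.seed(0)

FIRST_PAGE_THRESHOLD = 5
avg_qs = {"Ad #1": 8, "Ad #2": 4}  # hypothetical average per-query scores
impressions = {name: 0 for name in avg_qs}

for query in range(10_000):
    # "Rotate Ads Evenly": each ad gets an equal share of queries
    name = "Ad #1" if query % 2 == 0 else "Ad #2"
    # per-query score fluctuates around the ad's average
    score = avg_qs[name] + random.uniform(-3, 3)
    if score >= FIRST_PAGE_THRESHOLD:  # otherwise the ad falls off page one
        impressions[name] += 1

# Ad #1 records all ~5,000 of its rotations as impressions;
# Ad #2, despite an equal share of queries, records only a fraction.
```

Both ads got their equal opportunity; only one consistently turned that opportunity into a first-page impression.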
The high-CTR (high-QS) ad is by far the worst-converting, delivering by far the worst CPA. Translation: From the perspective of the advertiser’s ROI business goals, the ad that the Google algorithm is giving the highest Quality Score is… wait for it… the worst-quality ad, driving the worst-quality traffic to the site on average.

So why would we keep that ad running? Because if we drop it out of the rotation, CTR for the whole ad group plummets, as do the keywords’ average Quality Scores, and we lose significant impression share. Yes, you’re reading this correctly: We keep the worst-quality ad running in order to maintain a high Quality Score. I won’t go so far as to say categorically that in order to get good-quality traffic you have to have a bad Quality Score, but in many (if not most) cases that is in fact how it works out.

In effect, what Google is telling the advertiser about this ad group is this: “Ad #1 is going to lead to a lot more clicks, which is good for us. We get more money out of the SERP real estate that you occupy with Ad #1 than with Ads #2-3, so you can have that real estate at a cheaper rate with Ad #1.”

If Google called a duck a duck – or in this case called “Quality Score” a “Bulk Discount Opportunity Rating” or something like that – there would be no confusion. The bulk discount is a respected, time-tested sales tool that has its proper place. We could take these findings to our clients and say, “Google is charging us less for individual clicks on Ad #1 because of its high ‘Bulk-Discount Score.’ But these other two ads convert much better and do a better job of helping us meet your ROI goals. So which is more important to you right now? Volume or efficiency?”

But because Google has labeled this system with the word “quality” rather than “bulk discount,” the already delicate balancing act between volume and efficiency that we perform every day on behalf of our clients gets even more complicated.
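The volume-versus-efficiency arithmetic behind that client conversation is simple to sketch. The figures below are invented for illustration, not the actual numbers from the ad group above: the high-CTR ad earns cheaper clicks and far more of them (the bulk discount), yet its weaker conversion rate still leaves it with the worse CPA.

```python
# Hypothetical figures illustrating the trade-off: Ad #1 wins on CTR
# (hence Quality Score, impressions, and a discounted CPC) but loses
# on conversion rate and therefore on CPA.
ads = {
    "Ad #1": {"impressions": 100_000, "ctr": 0.050, "cpc": 0.80, "conv_rate": 0.010},
    "Ad #2": {"impressions": 20_000,  "ctr": 0.020, "cpc": 1.20, "conv_rate": 0.040},
}

for name, a in ads.items():
    clicks = a["impressions"] * a["ctr"]
    spend = clicks * a["cpc"]
    conversions = clicks * a["conv_rate"]
    a["cpa"] = spend / conversions  # cost per acquisition = spend / conversions

# Ad #1: 5,000 clicks, $4,000 spend, 50 conversions -> $80 CPA (volume)
# Ad #2:   400 clicks,   $480 spend, 16 conversions -> $30 CPA (efficiency)
```

Which column matters more — clicks or CPA — is exactly the "volume or efficiency?" question the advertiser has to answer; nothing in the Quality Score label answers it for them.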
“Google says we have a bad Quality Score,” we sometimes hear, “so can we please do something to improve our Quality Score so we get better quality traffic?” As outlined above, that just ain't how it works. Fortunately, we can demonstrate that Google uses the word “quality” differently than we do by showing countless cases like the one above where optimizing to CPA necessarily leads to taking actions that hurt Quality Score. But that word “quality” makes it a lot harder to swallow.

The insight that Google does provide into Quality Score is extremely valuable, and can inform good SEM decision making—if Quality Score is understood for what it is. It’s an important research tool in that it serves as an indicator of what Google will favor. But for advertisers interested in measurable ROI, Quality Score is not a measure of quality, nor is it a metric to which SEM campaigns should be optimized. And it needs a name change.