The mathematical formula for AdRank is so simple that it’s surprising how frequently, and how grievously, people get it wrong. Google makes it clear that the ads in a given auction are sorted in the sponsored search results from highest AdRank to lowest, where ‘AdRank’ equals the ad’s bid multiplied by its Quality Score.
Quality Score (QS) is a value reported by Google that is likely closely tied to an ad’s clickthrough rate (CTR). I’ve described why this is so in a previous post of mine called ‘Why is Clickthrough Rate the Main Factor in Quality Score’. In a sense, then, Quality Score acts as a multiplier whereby Google treats dollars from some advertisers as worth more than dollars from other advertisers. If your Quality Score is double mine, then I need to bid $2 for every $1 that you bid. The actual cost-per-click (CPC) that an advertiser pays is simply the minimum amount they would have needed to bid to beat the ad located below them:

CPC = (AdRank of the ad below ÷ your Quality Score) + $0.01
So, if your AdRank (that is, your bid multiplied by your Quality Score) is 6.000 and I have a Quality Score of 2, I must bid $3.01 to beat you. If I raise my Quality Score to 3, then I only need to bid $2.01 to beat you. And if, by testing lots of different ad copy, I manage to find a version whose great clickthrough rate earns it a Quality Score of 6, then I only need to bid $1.01 to beat you.
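The arithmetic above can be sketched in a few lines of code. This is a hypothetical illustration, not Google’s actual auction code; the function name and the one-cent increment are assumptions for the example:

```python
# Minimal sketch of the AdRank math described above (hypothetical code).

def min_bid_to_beat(competitor_ad_rank: float, my_quality_score: float) -> float:
    """Smallest bid (in dollars) whose AdRank (bid * QS) exceeds the
    competitor's AdRank, assuming bids move in one-cent increments."""
    break_even = competitor_ad_rank / my_quality_score
    return round(break_even + 0.01, 2)

# A competitor's AdRank of 6.000, attacked with successively better Quality Scores:
for qs in (2, 3, 6):
    print(qs, min_bid_to_beat(6.000, qs))
# QS 2 -> 3.01, QS 3 -> 2.01, QS 6 -> 1.01
```

Doubling or tripling the Quality Score cuts the required bid proportionally, which is exactly the ‘multiplier’ behavior described above.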
Since each ad is only required to pay the minimum cost-per-click necessary to beat the AdRank of the ad located below it, the naïve perspective is that it is always beneficial to increase QS. Optimization, in this view, is synonymous with maximization.
Unfortunately, it can easily be shown that it is not always in an advertiser’s best interest to have the highest possible Quality Score. That is why it was disturbing to see George Michie, consistently one of the most lucid voices on subjects related to pay-per-click advertising, say the exact opposite in a recent post at the Rimm-Kaufman Group’s blog. Like most of Mr. Michie’s posts, this one is worth reading in its entirety, but to quote the passage most relevant to this discussion:
“…there is no complexity involved with QS strategy. You want the QS to be as high as possible always. That doesn’t vary by season, or by time of day, or by category. It doesn’t depend on stock positions, margin structures or return rates. Higher is better, and the mechanisms for making improvements are obvious.”
If only it were so simple. But it can be made clear by example that higher Quality Scores are not always better. Consider an advertiser who is faced with showing two different ads: one that’s generic enough to attract many clicks, and another that’s so specifically targeted that it gets few clicks, but the clicks that it does get are from users who are likely to actually purchase the product. (In advertising parlance, this is called ‘qualifying’ potential customers.) The numbers below are fictitious, but realistic:

                        Generic ad    Targeted ad
Impressions             1,000         1,000
Clicks                  100           30
Clickthrough rate       10%           3%
Conversions             15            15
Revenue ($100 each)     $1,500        $1,500
Cost ($5 per click)     $500          $150
Profit                  $1,000        $1,350
Profit per impression   $1.00         $1.35
We can see that by showing the generic ad 1,000 times, we attract 100 clicks, resulting in 15 conversions. If each conversion brings in $100, then our revenue is $1,500. If each click costs $5, then our cost is $500 and our profit is $1,000 (or $1.00 per impression); the clickthrough rate (CTR) for the ad is 10%, and the Quality Score might be something like 10. In contrast, the targeted ad might also be shown 1,000 times but get only 30 clicks, among them the same 15 users who would have converted after clicking on the generic ad. So our revenue is still $1,500, but the cost (at $5 per click) is now only $150. Thus, our profit is $1,350 (or $1.35 per impression).
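The comparison can be verified with a short calculation. The numbers below are the post’s fictitious-but-realistic figures ($100 in revenue per conversion, $5 per click); the function is a hypothetical helper, not part of any advertising API:

```python
# Sketch of the profit comparison above, using the example's assumed figures.

def profit_per_impression(impressions, clicks, conversions,
                          revenue_per_conversion=100.0, cost_per_click=5.0):
    """Profit in dollars per impression: (revenue - click cost) / impressions."""
    revenue = conversions * revenue_per_conversion
    cost = clicks * cost_per_click
    return (revenue - cost) / impressions

generic = profit_per_impression(1000, 100, 15)   # high CTR, high Quality Score
targeted = profit_per_impression(1000, 30, 15)   # low CTR, low Quality Score
print(generic, targeted)
# generic -> 1.00 per impression, targeted -> 1.35 per impression
```

Despite its far lower clickthrough rate, the targeted ad is the more profitable one, which is the crux of the argument against maximizing Quality Score for its own sake.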
Of course, because the clickthrough rate is now only 3%, the Quality Score is likely to be lower (let’s call it 3, even though the Quality Score value that Google reports is probably not a direct, linear function of CTR). But that doesn’t matter to the advertiser, because the profit is higher. Mr. Michie said: “You want the QS to be as high as possible always”, but in fact the purpose of all economic activity is to maximize profit, not clickthrough rate or Quality Score. So, when performing tests of competing ad creatives, we should judge them by the differences in profit they generate per impression, irrespective of the Quality Score values that Google reports.