Average position is a really perverse metric. Let’s say that I have only 2 keywords in an account, each with one ad: On Day 1, ad #1 is in position 2 and gets 100 impressions per day, while ad #2 is in position 9 and gets 10 impressions per day. The account’s average position on Day 1 (100×2 + 10×9, divided by 110) is thus 2.64.

Now let’s say that on Day 2 both ads move up one position. Ad #2 is now in position 8. An increase in position tends to result in more impressions, so let’s say that ad #2 now gets 40 more impressions, for a total of 50 impressions on Day 2. Ad #1 is now in position 1 and let’s say it also gets 40 more impressions, for a total of 140 impressions for that ad on Day 2. The account’s average position on Day 2 is thus 2.84.
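The arithmetic is easy to verify. Here's a quick sketch of the impression-weighted calculation, using the positions and impression counts from the example above:

```python
def avg_position(ads):
    """Impression-weighted average position for a list of (position, impressions) pairs."""
    total_impressions = sum(imps for _, imps in ads)
    return sum(pos * imps for pos, imps in ads) / total_impressions

day1 = [(2, 100), (9, 10)]   # ad #1 and ad #2 on Day 1
day2 = [(1, 140), (8, 50)]   # both ads move up one position and gain 40 impressions

print(round(avg_position(day1), 2))  # 2.64
print(round(avg_position(day2), 2))  # 2.84
```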

That is, the average position has *dropped* (worsened from 2.64 to 2.84) even though both ads in the account moved *up* one position.

What makes average position even more perverse is that this relationship is only true sometimes. For instance, in the example above, if ad #2 had been in position 6 on Day 1 and moved to position 5 on Day 2 (instead of from position 9 to 8), then the account’s average position on Day 1 would have been 2.36 and on Day 2 would have been 2.05. That is, the average position would *not* have dropped as both ads moved up one position.
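The same weighted-average check for this second scenario, with ad #2 in positions 6 and 5 instead of 9 and 8:

```python
def avg_position(ads):
    # Impression-weighted average of (position, impressions) pairs
    total_impressions = sum(imps for _, imps in ads)
    return sum(pos * imps for pos, imps in ads) / total_impressions

print(round(avg_position([(2, 100), (6, 10)]), 2))  # Day 1: 2.36
print(round(avg_position([(1, 140), (5, 50)]), 2))  # Day 2: 2.05
```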

In case that hasn’t frustrated you enough, the average position of a group of ads/keywords can change even if all the ads *stay in the same position*. If ad #1 had been in position 2 on both days and ad #2 had been in position 9 on both days, but the impression counts had still changed as described above (from 100 to 140 for ad #1, and from 10 to 50 for ad #2), then the average position on Day 1 still would have been 2.64, but the average position on Day 2 would have been 3.84. That is, the average position would have dropped by more than a full point even though neither ad changed position at all!
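And the third scenario, where neither ad moves but the impression mix shifts toward the lower-positioned ad:

```python
def avg_position(ads):
    # Impression-weighted average of (position, impressions) pairs
    total_impressions = sum(imps for _, imps in ads)
    return sum(pos * imps for pos, imps in ads) / total_impressions

# Positions are identical on both days; only the impression counts change.
print(round(avg_position([(2, 100), (9, 10)]), 2))  # Day 1: 2.64
print(round(avg_position([(2, 140), (9, 50)]), 2))  # Day 2: 3.84
```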

To make matters even worse, the average position of an individual ad/keyword isn’t necessarily the position at which all, or even most, of its impressions occurred. Let’s say that a search engine tells us that one of our ads got 4 impressions yesterday and had an average position of 2.0. Looking at the figure below, we see that there are only 5 possible ways to show 4 impressions such that their average position is 2.0 (assuming, of course, that each impression occurs at a whole-numbered position, starting at position 1). (If you’re not convinced these are the only solutions, try for yourself to find others.)
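A brute-force search confirms the count. This sketch assumes impressions occur at whole-numbered positions 1 through 8 (the exact upper bound doesn’t matter, since four impressions averaging 2.0 must sum to 8):

```python
from itertools import combinations_with_replacement

# All unordered ways to place 4 impressions in positions 1..8
# such that their average position is exactly 2.0 (i.e., their sum is 8).
solutions = [combo for combo in combinations_with_replacement(range(1, 9), 4)
             if sum(combo) == 8]

print(len(solutions))  # 5
print(solutions)
```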

The most obvious is solution 1. If all 4 impressions were shown in position 2, then their average position will be 2.0. Slightly less obvious is solution 2: if 2 of the impressions were in position 2 and one each in positions 1 and 3, then their average position will still be 2.0. Even less obvious is solution 3: if two impressions happened in position 1 and two in position 3, then the average position will still be 2.0 even though *no* impressions actually occurred in position 2!

There are two other possible configurations of impressions, solutions 4 and 5, which you can check for yourself have average positions of 2.0.

That’s it. Those are the only 5 possible configurations of impressions with an average position of 2.0. Unfortunately, the data the search engines provide gives us no way to determine which of these 5 cases actually occurred. What’s strange is that 3 out of these 5 possibilities have more impressions in position 1 than in position 2! If that ad got 1 click yesterday, did that click come from an impression that was actually in position 2, or from an impression in position 1 (or position 3? or position 4 or 5)? When it comes to determining which position performs best for this ad, I’d like to know!

If the search engines told us not only the average position at which our impressions occurred, but also the standard deviation of that average position, then we could figure out which configuration of impressions actually occurred. For example, if they told us the average position was 2.0 and the standard deviation was 0.0 (that is, no impressions happened outside position 2), then we’d know that solution 1 was the case that actually happened. If they told us the standard deviation was 1.0 (that is, every impression fell exactly one position away from position 2), then we’d know that solution 3 was the only possible configuration. (Solution 4 comes close, but its standard deviation is about 1.22.)
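A quick check of the population standard deviation of each of the five configurations from the figure (note that, in this particular example, each configuration happens to have a distinct standard deviation):

```python
from statistics import pstdev

# The five configurations of 4 impressions with average position 2.0
configs = {
    1: [2, 2, 2, 2],
    2: [1, 2, 2, 3],
    3: [1, 1, 3, 3],
    4: [1, 1, 2, 4],
    5: [1, 1, 1, 5],
}

for n, positions in configs.items():
    print(n, round(pstdev(positions), 2))
# prints: 1 0.0 / 2 0.71 / 3 1.0 / 4 1.22 / 5 1.73
```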

Part of the problem here, I think, is terminology. The metric we commonly call the ‘average position’ is really the ‘impression-weighted position’. And just as there’s an impression-weighted position, there’s also a click-weighted position. So, if the search engines told us that our ad got 4 impressions in average position 2.0 with a standard deviation of 1.0, and also 1 click in average position 3.0, then we’d know not only that configuration 3 was the one that actually occurred, but also that the click came from one of the impressions in position 3 rather than position 1.
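To make the terminology concrete, here is a minimal sketch of the two weighted positions for configuration 3, assuming (hypothetically) that the single click landed on one of the position-3 impressions:

```python
# Configuration 3: impressions at positions 1, 1, 3, 3; one click, at position 3.
impression_positions = [1, 1, 3, 3]
click_positions = [3]

impression_weighted = sum(impression_positions) / len(impression_positions)
click_weighted = sum(click_positions) / len(click_positions)

print(impression_weighted)  # 2.0
print(click_weighted)       # 3.0
```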

The reporting burden for the search engines would only be marginally increased, since they’d have to report 4 metrics for every ad instead of one (impression-weighted position, impression-weighted standard deviation, click-weighted position and click-weighted standard deviation, rather than just ‘average position’), but the benefit to search marketers would be enormous. (Perhaps that’s why they don’t do it…)

In the meantime, we’ll just have to take our average position with a measure of skepticism, remembering how perverse the average position metric can be.
