*i.e.*, position 1) tends to be the highest and tends to be some fraction of that maximum value in lower positions, modeling CTR can be thought of as just determining the value of that fraction (*f*) at any position, with the clickthrough rate at any position being simply the CTR in position 1 times *f*. Figure 1 shows this graphically. There are few restrictions placed on the form of the *f*-function, but in general it should decline as the position drops. (Average position can be a perverse metric^{[1]}, so for this post I have chosen only examples where this effect is minimal.)

Both in SEM industry practice and in academic investigations, two general theories have been proposed for the form of the *f*-function: the **'separability'** approach and the **'cascade'** approach.

The hypothesis behind **the separability model** is that there are two or more independent factors that govern the probability that an individual web surfer will click on a given ad. To get clicked, the ad must first be noticed, and eye-tracking studies such as those by Enquiro have shown that web surfers tend to look more at the top and left side of a webpage. It is therefore reasonable to assume that ads in those positions are more likely to be noticed than ads in lower positions. Additionally, under the separability hypothesis, there exists another position-dependent probabilistic factor: once an ad has *already been* noticed, the chance it will be clicked depends on its position in the ad listings. The rationale is that users might place greater trust in an ad that has a higher position over an essentially identical one in a lower position. So being placed higher not only makes an ad more likely to be noticed; when it is noticed, it is also given a greater level of trust than ads in lower positions. One way of expressing the separability relationship is in a form like:

CTR(pos) = CTR(1) × [ min( (pos_max - pos) / (pos_max - 1), 1 ) ]^F

where pos_max represents the position below which each of these components is essentially zero and is approximately equal to the number of ads that appear on a given SERP (i.e., about 6-10). The brackets indicate selecting the minimum of the ratio contained in them and the number 1. The value of F is selectable and depends on the number of independent factors and their linearity. (In practice, the value of F is often seen to be about 3.) The other predominant hypothesis concerning the dependence of CTR on position is the **cascade model**.
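To make the separability idea concrete, here is a minimal Python sketch of such an *f*-function. The exact ratio inside the brackets is an assumption (chosen so the fraction equals 1 at position 1 and falls to 0 at pos_max), and pos_max = 8 and F = 3 are just representative values from the ranges given above:

```python
def separability_f(pos, pos_max=8, F=3):
    """Separability-style position factor f(pos).

    Assumed form: a ratio that falls from 1 at position 1 to 0 at
    pos_max, capped at 1, then raised to the power F (F ~ 3 in
    practice; pos_max ~ 6-10, the number of ads on a SERP).
    """
    ratio = (pos_max - pos) / (pos_max - 1)
    return min(max(ratio, 0.0), 1.0) ** F

def separability_ctr(pos, ctr_top, pos_max=8, F=3):
    """CTR at a given position = CTR in position 1 times f(pos)."""
    return ctr_top * separability_f(pos, pos_max, F)

# Example: with a 5% CTR in position 1, the modeled CTR by position
for p in range(1, 9):
    print(p, round(separability_ctr(p, 0.05), 4))
```

Multiplying the factor by the position-1 CTR gives the modeled CTR at any position; note that it reaches exactly zero at pos_max.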

**The cascade model** assumes that users sequentially examine ads from the highest position to the lowest, choosing at each step whether to click on the ad before proceeding to the next one. (Some versions of this model also assume that at each ad the user has some chance of dropping out of the process entirely.) Mathematically, this hypothesis can be expressed in several forms, one of which is:

CTR(pos) = CTR(1) / pos^(Q - 1)

where Q is some number greater than 1. When Q is 2, this equation is identical to Zipf's Law^{[2]}, though in continuous form. Examination of real-world click data indicates that Q varies by keyword but is typically in the range of 1.2-1.9, with values around 1.4 seen commonly. (In 'A Formal Analysis of Search Auctions Including Predictions on Click Fraud and Bidding Tactics', Kitts *et al.* propose an exponential form of this relationship. This is effectively a cascade model, not a third type of hypothesis for the dependence of CTR on average position.)

Though the two models appear radically different, they are actually very similar in their general behavior. Under both hypotheses, CTR declines *monotonically* with position (that is, it never increases as position drops). Neither accounts for any discontinuity that might be present in the CTR when going from an ad in the top promoted positions to the right-hand rail. Both models also have adjustable parameters that give enormous flexibility in matching performance data, but at the expense of having to perform a fitting process. Where the two models differ most, for the range of realistic parameter values, is in their estimates of the clickthrough rate below about position 4: unlike the separability model, which assumes that the CTR reaches zero at pos_max, the cascade model's CTR never reaches zero.

Comparisons to experimental data might help determine which of these categories of models is more realistic. For *organic* listings (not paid search ads), Craswell *et al.* (in 'An Experimental Comparison of Click Position-Bias Models'^{[3]}) found the cascade model to be more appropriate. For paid search ads, the Bid Simulator, a new feature that Google has developed for AdWords, is proving to be enormously useful. For those who are unfamiliar with the Google Bid Simulator (GBS): for each keyword that gets sufficient traffic, the GBS provides estimates, for 4-7 possible bids, of the cost, number of impressions, and number of clicks the keyword could have gotten based on the actual data from the past 7 days. In the previous AdWords interface, the GBS also provided estimates of the average position at various bid levels, so by dividing the estimated-clicks column by the estimated-impressions column and comparing to the estimated average position, we can get the GBS's estimate of the clickthrough rate (CTR) *vs.* the average position.

One thing that's important to realize about how the GBS works is that the estimates it provides are 'model free'. That is, the numbers aren't really *Google's* estimates, because Google engineers haven't told the Bid Simulator their beliefs about the relationship between, say, position and impressions or position and clicks. Instead, the GBS simply gathers data from auctions that have already occurred and reports a condensed version of that data to you. So when we study the estimates from the Bid Simulator, we are actually studying the behavior of the participants in the auction (the advertisers and the search engine users), not Google's guesses (for better or worse) about those behaviors.

In Figure 2, I've shown the Bid Simulator's estimates of CTR *vs.* average position for two keywords in the same account from early August 2009, the actual performance for the 7-day period the simulator examined (plus a couple of days before and a couple of days after that period), and the best-fit separability and cascade models (the equations for which were provided above). The top figure shows a broad-match keyword related to finding a company that provides Internet service. Both the separability and cascade models give very similar estimates for the CTR down to an average position of about 4. The separability model more closely approximates both the actual performance for this keyword and the Bid Simulator's estimates of the CTR at lower average positions. The cascade model can be adjusted to better fit the actual data and GBS estimates at low positions, but only by making the fit to the GBS estimates at positions 1-5 much worse. The lower graph is for an exact-match keyword, also related to providing Internet service, from the same account (but a different campaign, adgroup, and matchtype than the broad-match term just described). In this case, the best-fit cascade model seems to describe the CTR better. (The separability model can be made to fit somewhat better at low positions, but, as with the cascade model previously, doing so makes the fit at high positions much worse.)

Over the past weeks I've looked at simulations for hundreds of high-traffic keywords in accounts from a wide variety of industries and have found that in some cases the separability model works better, in some the cascade model seems to work better, and in some neither seems to work well. Of course, when the average positions are all above about 4.0, both approaches seem reasonable, provided that their parameters are chosen well.
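Since both models require a fitting step, a short sketch of that process may help. The CTR-by-position data below is synthetic (invented for illustration; it is not from the keywords discussed in this post), the separability ratio is an assumption consistent with the form described earlier, and the fit uses scipy's curve_fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def separability(pos, ctr1, pos_max, F):
    # Bracketed ratio capped between 0 and 1, zero at pos_max, power F
    ratio = np.clip((pos_max - pos) / (pos_max - 1.0), 0.0, 1.0)
    return ctr1 * ratio ** F

def cascade(pos, ctr1, Q):
    # Power-law decline; Q = 2 reduces to Zipf's Law (CTR ~ 1/pos)
    return ctr1 / pos ** (Q - 1.0)

# Synthetic GBS-style estimates: average position vs. observed CTR
pos = np.array([1.2, 1.8, 2.5, 3.4, 4.6, 5.9])
ctr = np.array([0.060, 0.047, 0.036, 0.024, 0.013, 0.006])

# Fit each model; starting guesses and bounds reflect the typical
# parameter ranges mentioned in the text (pos_max ~ 6-10, F ~ 3)
sep_params, _ = curve_fit(separability, pos, ctr, p0=[0.07, 8.0, 3.0],
                          bounds=([0.0, 6.0, 0.5], [1.0, 20.0, 10.0]))
cas_params, _ = curve_fit(cascade, pos, ctr, p0=[0.07, 1.4])

# Compare goodness of fit via sum of squared residuals
sep_sse = np.sum((separability(pos, *sep_params) - ctr) ** 2)
cas_sse = np.sum((cascade(pos, *cas_params) - ctr) ** 2)
print("separability SSE:", sep_sse)
print("cascade SSE:", cas_sse)
```

Running both fits on a real keyword's Bid Simulator estimates and comparing the residuals is one simple way to decide which model describes that keyword better.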
Below position 4.0, which model works better seems to depend on the keyword, but Google's Bid Simulator is proving to be extremely useful in helping to differentiate between them.