Since the initial contest in 2000, the Association for Computing Machinery and the Swedish Institute of Computer Science have helped host and run the Trading Agent Competition (TAC), based on a game developed by Professor Mike Wellman of the University of Michigan. In it, teams of computer science students spend months writing programs that compete against each other in a simulated marketplace. Prior contests have dealt with supply chain management and simulated travel agents. Dr. Wellman, Professor Amy Greenwald of Brown University and Professor Peter Stone of the University of Texas have even co-written a book on bidding agents for the TAC.
But the 2009 competition, held in Pasadena, California, dealt for the first time with ad auctions of the sort search marketers handle daily. One of Prof. Wellman’s then-graduate students, Patrick Jordan (now a researcher at Yahoo! Labs), devised a simplified form of Google’s ad auction, the core of which should be very familiar to anyone who has ever managed an AdWords account: competing retailers bid on different keywords (in this case related to TV/audio equipment). Simulated users generate queries and then consider clicking on ads based on the order in which the ads appear in the results. If a user clicks on an ad, the search engine charges the advertiser, and the user might then purchase a product at the advertiser’s website, bringing revenue to that player.
Each simulated day, each advertiser receives a keyword-level performance report similar to what actual AdWords account managers receive (impressions, clicks, cost, average position and so on), as well as a keyword-level report of the number of sales that resulted and the revenue generated. As in the real world, there is a delay between the end of a day and when that day’s data is reported. The advertiser must use this information to set bids, select between more-targeted and more-generic ad creatives, and even choose keyword- or account-level spending limits. All of these decisions must be made entirely by the computer code each team writes, with absolutely no human intervention allowed during a game. Each game runs for 60 simulated days, and the advertiser who accrues the most net profit wins. Oh, and to keep the game moving, each agent has only 10 seconds to make its decisions for the day. Dozens of games are played, in a semi-finals round and a finals round, to determine the overall winner of the tournament.
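To make the daily cycle concrete, here is a minimal sketch of the kind of decision loop an agent must implement. All of the names (`DailyReport`, `BidBundle`, `choose_bids`) and the 0.8 bid-shading factor are illustrative assumptions of mine, not the actual TAC/AA API or any team's strategy:

```python
# Hypothetical sketch of a TAC/AA-style daily decision step.
# DailyReport mirrors the keyword-level report the article describes;
# the bidding rule shown is a naive value-per-click heuristic, for illustration only.
from dataclasses import dataclass, field

@dataclass
class DailyReport:
    keyword: str
    impressions: int
    clicks: int
    cost: float
    avg_position: float
    conversions: int
    revenue: float

@dataclass
class BidBundle:
    bids: dict = field(default_factory=dict)          # keyword -> bid ($ per click)
    spend_limits: dict = field(default_factory=dict)  # keyword -> daily spend cap

def choose_bids(reports):
    """Set each keyword's bid from its observed value per click."""
    bundle = BidBundle()
    for r in reports:
        if r.clicks == 0:
            bundle.bids[r.keyword] = 0.10  # small probe bid to gather data
            continue
        value_per_click = r.revenue / r.clicks
        # Shade the bid below value-per-click to leave room for profit.
        bundle.bids[r.keyword] = 0.8 * value_per_click
        bundle.spend_limits[r.keyword] = max(r.cost, 1.0) * 1.5
    return bundle
```

A real agent would of course do far more (forecasting, competitor modeling, budget pacing), and must finish all of it within the 10-second daily deadline.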
Last year’s TAC Ad Auctions game attracted entrants from universities around the world (a list of participants is at SICS’s website). We’re proud to say that third place went to Professor Greenwald’s team at Brown University, whose agent, Schlemazl, was designed with help from representatives of The Search Agency. Second place was taken by AstonTAC, developed at Aston University in Birmingham, England, and the winner was TacTex, designed by Pardoe, Chakraborty and Stone at the University of Texas at Austin. Only about a 5% difference in performance separated these three agents.
The results of the ad auction tournament are described in greater detail in Dr. Jordan’s Ph.D. thesis, but noteworthy are the additional games he ran after the competition was over using only the top three agents. He found that the final placement of these agents depends strongly on the reserve bids set by the search engine. In the actual competition, the search engine required only a very low bid in order for an advertiser to be allowed to participate in a given auction. However, this does not maximize the publisher’s revenue. As the search engine raised the minimum bid requirement, it made more money, and Aston’s agent became the most-favored equilibrium solution. As the minimum bid requirement was raised to a level slightly past the search engine’s optimal value, Brown’s agent became the most-favored.
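The effect Dr. Jordan observed, where publisher revenue rises with the reserve up to a point and then collapses, can be seen even in a toy single-slot second-price auction. This is my own simplified illustration, not the TAC/AA mechanism (which ranks multiple slots and uses quality-adjusted reserve scores):

```python
# Toy single-slot second-price auction with a reserve price.
# Raising the reserve increases what the winner pays, until it
# prices out all bidders and revenue drops to zero.
def revenue(bids, reserve):
    eligible = sorted((b for b in bids if b >= reserve), reverse=True)
    if not eligible:
        return 0.0  # reserve excluded everyone: no sale
    # Winner pays the larger of the second-highest eligible bid and the reserve.
    second = eligible[1] if len(eligible) > 1 else reserve
    return max(second, reserve)
```

With bids of 0.5, 1.0 and 2.0, a zero reserve yields revenue of 1.0 (the second bid), a reserve of 1.5 yields 1.5, and a reserve of 2.5 yields nothing, which is the qualitative shape of the revenue curve described above.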
Earlier this spring, I interviewed David Pardoe, who has written a paper describing his champion ad auction agent in detail and who has also twice won the supply chain management competition. It is fair to say that his program was aided greatly by a quirk of the game: to promote competitiveness, in the 2009 competition the agents were told the average positions of the other advertisers. Dr. Pardoe ingeniously used this information to determine the order in which his competitors reached their budget limits and, with that, gained a significant advantage over them.
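Why do average positions leak information about budgets? Here is a toy simulation of my own devising (assumed mechanics, not Dr. Pardoe's actual algorithm): if advertisers ranked by bid occupy slots 1, 2, 3 and everyone below moves up when an advertiser exhausts its budget, then a fractional reported average position reveals that someone above dropped out partway through the day:

```python
# Toy model: ranked advertisers fill slots 1..k each query; when one hits
# its budget it drops out and those below move up. Reported average
# position is averaged only over queries the advertiser participated in.
def simulate_day(ranking, dropout_at, n_queries=100):
    totals = {name: [0, 0] for name in ranking}  # position sum, query count
    for q in range(n_queries):
        active = [a for a in ranking if q < dropout_at.get(a, n_queries)]
        for slot, name in enumerate(active, start=1):
            totals[name][0] += slot
            totals[name][1] += 1
    return {name: s / c for name, (s, c) in totals.items() if c}

avg = simulate_day(["A", "B", "C"], {"A": 50})
# A sits in slot 1 for its 50 queries; B averages slot 2 for 50 queries
# and slot 1 for 50, so its reported 1.5 betrays a mid-day dropout above it.
```

An observer who sees B's fractional average can infer that the advertiser ranked above it ran out of budget mid-day, which is the kind of inference that gave TacTex its edge.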
When I asked how much overlap there was between the agent he built for the supply chain management tournaments and the one for the ad auction competition, he revealed that there was very little. He attributes his victory in the ad auctions competition to his experience from the supply chain management competition and to many rounds of developing the new agent. “Knowing how to experiment,” he said, “is more important than having a really cool algorithm.”
The 2010 competition will be held Monday and Tuesday, June 7-8 at the 11th ACM Conference on Electronic Commerce in Cambridge, MA, but with some rules changes. The average positions told to each agent will be based on a sample of the auctions, rather than all of the auctions. And the reserve bids will be raised, to make them closer to what an actual search engine might use. “We modified the precision of the average position estimate in the reports,” Dr. Jordan, the Game Master for the 2010 competition, said, “to more accurately reflect the degree of uncertainty advertisers face in actual sponsored search auctions,” and “we increased reserve scores to more accurately portray the preferences of the publisher.”
Will the rule changes work against TacTex? Dr. Jordan says, “These [rule] changes are not meant to increase competitiveness in the colloquial sense: making it more likely that any given participant can win the tournament. Rather, our goal is to incentivize participants to develop competent strategies in a very complex, challenging environment, so that we can use insights from these strategies to understand behavior in real markets.”
Brown University team member Eric Sodomka put it more succinctly: “TacTex is definitely the favorite.” Though he’s currently focusing on finishing his Ph.D. dissertation, Dr. Pardoe put an end to the speculation that he might skip this year’s tournament. “I’ll be there,” he said.
So, please wish everyone luck and we’ll keep you posted after the competition as to how they do.
A follow-up to this article is available: ‘Congratulations to All in the Trading Agent Competition’