Bid Optimization

Taking on the search engines with our own powerful algorithms:
One rival system calls its bid-optimization sub-system Quant, and advertises it with the slogan: ‘£30 million in sales, from PPC? You're kidding, right?’ In fact we delivered £50 million; that rival later inherited one of our customers and has still not been able to exceed our performance.

AFS’s search management sub-system delivers vastly improved returns from your media spend (i.e. your Google costs) by performing billions of mathematical calculations on your search campaigns, to determine the optimum bidding and budget scenarios across all your keywords, adgroups, campaigns, accounts and engines, and to hit and exceed any chosen key performance indicator. Our analysts and paid search experts still review these decisions daily, because sometimes human intervention is necessary: we manually examine and verify all actions and decisions. Does this really work? We argue it does, and the manual review is there to prove it.

Sometimes the decision-making is straightforward. Sometimes a portfolio spread of risk is adopted, because we cannot be sure of the precise behaviour of the search engines. And sometimes we will ask you how much risk you would like to bear (most of our clients say none), because there is a risk/return trade-off.
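The risk/return trade-off can be pictured with a simple mean-variance budget split across keywords. This is a hypothetical sketch, not AFS's actual algorithm; the keyword names, ROI figures and the `risk_aversion` knob are all illustrative assumptions.

```python
# Hypothetical sketch of a risk-adjusted budget split across keywords.
# All names and numbers are illustrative; this is not a production algorithm.

def allocate_budget(keywords, budget, risk_aversion):
    """Score each keyword by expected return minus a risk penalty,
    then split the budget in proportion to the non-negative scores."""
    scores = {
        kw: max(stats["expected_roi"] - risk_aversion * stats["roi_variance"], 0.0)
        for kw, stats in keywords.items()
    }
    total = sum(scores.values())
    if total == 0:  # nothing clears the risk bar: spread evenly
        return {kw: budget / len(keywords) for kw in keywords}
    return {kw: budget * s / total for kw, s in scores.items()}

keywords = {
    "cheap flights": {"expected_roi": 2.0, "roi_variance": 1.5},  # high return, volatile
    "flight deals":  {"expected_roi": 1.5, "roi_variance": 0.2},  # steady performer
    "last minute":   {"expected_roi": 1.2, "roi_variance": 0.1},  # modest but reliable
}

cautious = allocate_budget(keywords, budget=1000.0, risk_aversion=1.0)
bold = allocate_budget(keywords, budget=1000.0, risk_aversion=0.0)
```

With risk aversion switched off, the volatile high-ROI keyword dominates the split; a risk-averse client shifts budget towards the steady performers, which is the portfolio spread described above.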

Most experienced marketers hit a brick wall: they cannot get any more return from their budget without spending serious extra money. With ever more complex campaigns, and portfolios of keywords that have grown to unwieldy levels, finding the opportunities for gain has become difficult and time-consuming, as has weeding out the parasitic elements that drain the profitability of your website.

Increasing competition
Competition is at an all-time high and rising as new Asian entrants continually join the market, and the market is changing by the hour. With current time, resources and tools, it is often hours, days or weeks before marketers can draw lucid conclusions from ROI data and react.
You need something that can model all the possible variables, quickly find the best way to allocate budget across your campaigns for the best possible return from your ad spend, and do it instantly!

AFS’s background – computer chess
Others talk of what algorithms can achieve; we were actually part of that process. We go back to Control Data Corporation machines (the 60-bit CDC 7600/6600/6400s, which gave us 120-bit word lengths in Fortran IV), working jointly with Northwestern University on the University of London computer centre's machines in Guildford Street. Some of our team members were classified as troublemakers for hogging extreme amounts of what was then one of the most powerful computers in the world while working on Chess 4.0 through 4.7.

For a time in the 1970s and 1980s it was unclear whether any chess program would ever defeat the top human players. In 1968, International Master David Levy made a famous bet that no chess computer would beat him within ten years. He won his bet in 1978 by beating Chess 4.7 (our program, running on the most powerful computer of the time), but acknowledged then that it would not be long before he was surpassed. In 1989, Levy was defeated by Deep Thought in an exhibition match; Deep Thought used many of the algorithms first developed in Chess 4.7. Deep Thought metamorphosed into IBM’s Deep Blue, which went on to beat Garry Kasparov (who had initially defeated Deep Thought). The basic technology was built on advanced predictive modelling and learning algorithms, and we helped to invent it through our work on Chess 4.7. The same techniques are present in current world-class chess programs such as Junior, Fritz, Naum, Crafty, Rebel and the world-leading engine Rybka (rated at 3232 Elo).

This landmark event showed that by combining computing power with advanced statistical modelling and pre-programmed expertise, a machine could not just solve complex problems but also out-think and beat competitors in head-on competition. It could recall history and make judgements. It could learn. This paved the way for computing and artificial intelligence to enter industry and tackle complex problems such as financial modelling, a field in which one of our team members has published extensively.

Today, AFS has built on those same technologies and principles to create a suite of systems (not a single system) for bid management and optimization: a set of systems that handles the mathematical and data demands placed on marketers in analysing and managing large, complex search campaigns, continually improving your profitability and/or targets in an increasingly competitive online environment.

A bid tool working for you in a fiercely competitive environment
Our systems are very different to the mainstream bid tools you may have come across, such as Atlas, PPC Bidmax, DC Storm, DART Search or Bid Buddy. All these tools use user-programmed rules, which you have to build yourself, based on the limited time and scope you have to appreciate the many trade-offs of bidding one way versus another. What's more, everyone is using them, which means everyone is fighting the same battles with the same methods and tools. Even Greenlight’s Adapt/Quant is well behind our set of systems. You don't use Deep Blue to play poker; similarly, we adapt the right learning and artificial-intelligence system to the right business application. Even then, human monitoring is vital. Fast decision-making computers will usually come out on top, but reviewing and thinking laterally requires the best human brains, and computers can make mistakes or be taken outside their comfort zone, into situations they have not been programmed to deal with.

Using every piece of data and information
Just as our world-beating chess programs had some very simple systems built in, we do the same. For example, the first moves of any chess game came from a library of grandmaster openings. Similarly, we examine the history of bidding against Google, but we also track what is influencing Google at this very moment and how its bidding algorithms might have changed. We then make tiny real-world tests to ensure that what our systems have predicted actually happens: think of a wind-tunnel test of a scale model of a car or aeroplane. It is just common sense, yet all of this is done within nanoseconds and then reviewed.
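A toe-in-the-water test of this kind can be sketched as: predict the effect of a bid change, try it on a small slice of traffic, and only roll it out if the observed result stays within tolerance of the prediction. The function, the 15% tolerance and the CPC figures below are illustrative assumptions, not the firm's actual thresholds.

```python
# Hypothetical sketch of a "wind-tunnel" check before a full bid rollout.
# The tolerance and CPC figures are illustrative assumptions.

def micro_test(predicted_cpc, observed_cpc, tolerance=0.15):
    """Approve a bid change only if the small-scale test confirms
    the model's prediction within the given relative tolerance."""
    error = abs(observed_cpc - predicted_cpc) / predicted_cpc
    return error <= tolerance

# The model predicted the new bid would land at £0.40 CPC.
roll_out = micro_test(predicted_cpc=0.40, observed_cpc=0.43)  # within 15%
send_back = micro_test(predicted_cpc=0.40, observed_cpc=0.55)  # model missed
```

A pass means the scale model behaved like the prediction and the change can go live; a fail sends the decision back for human review.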

No one has more productivity tools, smart software aids, and sophistication.

Flexibility and learning systems – sometimes very simple systems
Our systems are flexible and react to events, using a combination of data-mining techniques, historical events, actual wind-tunnel-type tests, predictive learning algorithms, neural networks and the kind of decision-making ability built and maintained by our teams of search experts and PhDs. These systems work through your paid search programme's historical data, Google’s internal databases and multiple search centres; they examine what is happening in Google (by the second), what is influencing Google to change, and Google’s current behaviour on each keyword or search term.

Using human input (don’t underestimate it) alongside these advanced algorithms, and techniques such as probability analysis, decision-tree lookups and learning systems, tempered by continual monitoring of Google (toe-in-the-water tests), enables us and our systems to see through the opaque bidding landscape and predictively model all the possible results of every possible configuration of your campaigns. But human control is vital, and sometimes better than all the modelling and automated systems.
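The probability-analysis side can be illustrated with the textbook expected-value bid: the most you can afford per click is the conversion probability times the value of a conversion, scaled down to meet a target return on ad spend. This is a standard formula shown as a sketch, not the firm's proprietary model; the conversion rate, order value and ROAS target below are assumed figures.

```python
# Textbook expected-value bid cap, shown as an illustrative sketch.
# The 2% conversion rate, £60 order value and 4x ROAS target are assumptions.

def max_cpc_bid(p_conversion, order_value, target_roas):
    """Expected revenue per click, divided by the required return
    on ad spend, gives the highest cost-per-click worth paying."""
    expected_revenue_per_click = p_conversion * order_value
    return expected_revenue_per_click / target_roas

# 2% conversion rate, £60 average order, and we demand £4 back per £1 spent:
bid_cap = max_cpc_bid(p_conversion=0.02, order_value=60.0, target_roas=4.0)
# expected revenue per click is £1.20, so the bid cap works out at about £0.30
```

A richer system layers uncertainty, decision trees and live tests on top of this, but every automated bid ultimately traces back to an expected-value judgement of this shape.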

Continually updating and testing
Our bidding system then continually and automatically adjusts your bidding and budget allocation across all aspects of your search accounts, through times of day, week, month and year, and also geographically. But it does more than that: it keeps on splitting and refining your account and campaign structure. The difference between our system and those of others is that we do the wind-tunnel testing and the toe-in-the-water testing continually. That means we adapt and stay flexible to what Google and the other search engines are doing right now.

Three Google trends are explicitly taken into account by our system:

Monitoring what Google is doing, building in flexibility, and taking a global view of all your activity and of what is happening on the net allows intelligent budget and bidding decisions to be made: decisions that maximise your company’s sales volume in a way that maximises your profit, or that meets whatever other target you wish (for example, maximising sales for a given budget). Our systems can also steer towards sales with a richer mix (higher average order value). You tell us what you want, and we tune our systems to meet that objective.
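"Maximising sales for a given budget" can be sketched as a greedy marginal-return loop: each slice of budget goes to whichever campaign currently promises the highest incremental return, with returns diminishing as a campaign absorbs more spend. The square-root return curves and the campaign figures below are illustrative assumptions, not real client data.

```python
import math

# Hypothetical greedy budget allocator. Assumes each campaign's revenue
# follows k * sqrt(spend), a stand-in diminishing-returns curve.

def allocate(campaigns, budget, step=10.0):
    """Assign the budget in small steps, each step going to the campaign
    with the best marginal return at its current spend level."""
    spend = {name: 0.0 for name in campaigns}

    def marginal(name):
        k = campaigns[name]  # scale of this campaign's return curve
        s = spend[name]
        return k * (math.sqrt(s + step) - math.sqrt(s)) / step

    remaining = budget
    while remaining >= step:
        best = max(spend, key=marginal)
        spend[best] += step
        remaining -= step
    return spend

# Assumed return-curve scales: revenue ~ k * sqrt(spend)
campaigns = {"brand": 30.0, "generic": 20.0, "competitor": 10.0}
spend = allocate(campaigns, budget=1000.0)
```

At the end of the loop the marginal returns are roughly equal across campaigns, which is exactly the condition for a fixed budget to be spent optimally; the strongest campaign ends up with the largest, but not the whole, share.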

Miracles are really possible...