Experiment: Should we use Google’s machine-learning Target CPA bidding strategy?


If you have managed any AdWords campaigns, you may have received a few calls from Google’s account managers offering optimisation recommendations. One recurring recommendation is to try out AdWords Smart Bidding, a set of conversion-based bid strategies that make use of machine learning.

If Google’s machine learning can make our jobs easier and get better results for our clients, we should all be enthusiastically throwing our usual techniques out the window. But is this the case?

The following bid strategies use Google’s machine learning; the descriptions below are taken from Google’s support resources:

  • Target CPA automatically sets Search or Display bids to help get as many conversions as possible at the target cost-per-acquisition (CPA) that you set. Some conversions may cost more or less than your target.
  • Target ROAS automatically sets bids to help get as much conversion value as possible at the target return on ad spend (ROAS) you set. Some conversions may have a higher or lower return than your target.
  • Enhanced cost-per-click (ECPC) automatically adjusts your manual bids to help you get more conversions, while trying to achieve the same cost-per-conversion.

We were interested in increasing conversion volumes while reducing CPAs for one of our clients, so we set up an experiment to test whether Google’s target CPA bidding strategy could beat our existing in-house method.

Outline

We started with the hypothesis that Google’s Target CPA bidding strategy would outperform our in-house bidding strategy by delivering lower CPAs for a similar number of conversions.

Control: We chose a campaign with sufficient click and conversion volume. We would use our in-house manual bidding system on this campaign.

Variant 1: We cloned the control campaign in AdWords, changed only the bidding setting to Target CPA, and left all other variables unchanged.

We set our target CPA to $7.97. We chose a 90/10 traffic split weighted towards Target CPA, as this would share as much data as possible with Google’s automated bidding. To ensure statistical significance we chose to run the test for a minimum of four months.
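For context on what “ensuring significance” involves, here is a rough sketch of the standard two-proportion sample-size calculation one could use to size a test like this. The baseline rate, detectable lift, and power below are illustrative assumptions, not the actual planning inputs for this experiment:

```python
from statistics import NormalDist

def clicks_per_arm(p1, p2, alpha=0.001, power=0.8):
    """Approximate clicks needed in each arm to detect a conversion-rate
    shift from p1 to p2, using the two-proportion normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided, 99.9% confidence
    z_beta = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical inputs: 37% baseline conversion rate, 6-point detectable lift
n = clicks_per_arm(0.37, 0.43)
print(round(n), "clicks needed in the smaller arm")
```

At 99.9% confidence and 80% power, detecting a six-point lift from a 37% baseline needs on the order of a couple of thousand clicks per arm, which is one reason the low-traffic arm of a 90/10 split can take months to reach significance.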

Results

*(Screenshots: Google Smart Bidding test setup, traffic split, and results)*

After running the experiment for 122 days, we found the following at a 99.9% confidence level:

  • Our in-house average CPC was $1.73 vs Google’s $2.62 (a 51% difference)
  • Our in-house average position was 2.1 vs Google’s 1.7
  • Our average cost per conversion (click) was $4.68 vs Google’s $6.03 (a 28% difference)
  • Our average conversion rate was 37.02% vs Google’s 43.46% (a 17.35% difference)
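As a sanity check, the percentage differences above are each metric’s change relative to our in-house figure. Recomputing them from the rounded numbers quoted here lands within rounding of the stated percentages:

```python
def rel_diff(ours, googles):
    """Google's figure relative to our in-house figure, in percent."""
    return (googles - ours) / ours * 100

print(round(rel_diff(1.73, 2.62), 1))    # avg. CPC: 51.4
print(round(rel_diff(4.68, 6.03), 1))    # cost per conversion: 28.8
print(round(rel_diff(37.02, 43.46), 1))  # conversion rate: 17.4
```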

Both strategies stayed under our target CPA of $7.97. The main difference between the two bidding strategies seemed to be that Google’s strategy favoured increasing average positions even when that increased costs. Our internal strategy was more conservative, aiming to maximise profit with lower CPAs even if that meant lower positions and lower CTRs. Given that Google’s systems were aiming to hit our set target of $7.97, we think the AdWords Smart Bidding Target CPA strategy met the brief we gave it. However, our internal strategy did hit lower CPAs overall, which is a win for the client.

Discussion

We found that our in-house bidding process was more efficient on cost per conversion. However, our conversion rate was about 17% lower than Google’s Target CPA bidding achieved, which could be because Google’s Target CPA favoured a slightly higher average position than our in-house bidding system.

The adoption of either strategy mostly depends on the client’s preference. Some clients prefer maximising revenue at a slightly higher CPA, whilst others prefer maximising profitability at the expense of some revenue. Our client preferred maximising profit, so our in-house strategy made sense for them, but this isn’t to say that all other clients will favour it.

We should also note that the variation in average position makes these results harder to interpret. Given this uncertainty, we’ve decided to run a follow-up experiment for this client, and we’ll post about it here in the coming months.

Some of the experiments we may try include:

  • Comparing our in-house bidding methods to Target ROAS instead of Target CPA
  • Using a position script to hold average positions consistent under our in-house bidding method, to keep that variable constant