About Test and Learn: "Which campaign causes the lowest-cost conversions to occur?" results guide

30/11/2019

This article explains what types of information you'll see in the results of a Test and Learn "Which campaign causes the lowest-cost conversions to occur?" campaign comparison conversions test. It also provides context to help you interpret your results and make adjustments to your advertising strategy based on them.

What can I learn from a campaign comparison conversions test?

You can learn which of two campaigns had a lower average cost per incremental conversion.

What's an incremental conversion?

An incremental conversion is an event that wouldn't have happened without your campaigns. You define what conversion events to measure when you create your test.

When and where can I see my results?

Once at least 100 conversion events have been recorded since your test started, you may be able to see some initial results. However, we recommend not judging your results or making adjustments based on them until the test has ended. (You set the test's schedule when you create it.) When it has ended, we'll send you an email with the results.

You can also view them in Test and Learn. To do so:

  1. Go to Test and Learn.
  2. Click Learn.
  3. Click your test.
Please note that if you never get 100 conversion events, we won't be able to show any results. Here are common reasons for not getting 100, and what you can do about them:

  • Spending too little. This can prevent your ads from reaching enough people to get 100 conversion events. Try increasing the amount you're spending on the campaigns if you can.
  • Choosing conversion events that aren't frequent enough. You tell us which conversion events to measure when you create your test. If the ones you choose aren't frequent enough (e.g. purchasing an expensive item that requires a lot of consideration), we may not be able to get 100. Try measuring events that occur more frequently. For example, if you couldn't get 100 purchase conversions, you could measure add-to-basket conversions instead.
  • Not testing for long enough. We may need more time than you set in the test schedule to get 100 conversion events. Try extending your test schedule if you can. As a general guideline, we recommend testing for at least four weeks.

What will I see in my results?

There are two report types in campaign comparison conversions test results:

  • Summary report. This answers the question "Which campaign causes the lowest-cost conversions to occur?" by declaring a winner based on which campaign had a lower average cost per incremental conversion. We also tell you how confident we are in that declaration. There's one summary report for each test.
  • Detail report. This shows information about how a campaign performed. Each test has two, one for each campaign tested.

Below is more detail on what information is contained in each report type.

Summary report

Summary reports include:

  • Cost per incremental conversion: The average cost of the conversions that wouldn't have happened without a campaign. We show the amount for each campaign during the test. We calculate it by dividing the amount that you spent on a campaign by the number of incremental conversions it got. For example, a campaign might have a £6.37 cost per incremental conversion.
  • Winner: The campaign with the lower cost per incremental conversion.
  • Confidence: A percentage representing how confident we are in our declaration of the winner. Results that we're at least 90% confident in are considered reliable.

    Our testing methodology includes thousands of simulations based on your test. If a campaign wins in 80% of our simulations, we'd be 80% confident in declaring that campaign the winner.
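The summary metrics above can be sketched in a few lines of code. This is an illustrative sketch only: the campaign names and figures are hypothetical, and it is not Facebook's actual implementation.

```python
# Hedged sketch: deriving cost per incremental conversion and the winner
# for a two-campaign test. All figures below are hypothetical.

def cost_per_incremental_conversion(spend, incremental_conversions):
    """Average cost of the conversions that wouldn't have happened
    without the campaign: spend divided by incremental conversions."""
    return spend / incremental_conversions

# Hypothetical test data: spend and incremental conversions per campaign.
campaigns = {
    "Campaign A": {"spend": 955.50, "incremental_conversions": 150},
    "Campaign B": {"spend": 1200.00, "incremental_conversions": 140},
}

costs = {
    name: cost_per_incremental_conversion(c["spend"], c["incremental_conversions"])
    for name, c in campaigns.items()
}

# The winner is the campaign with the lower cost per incremental conversion.
winner = min(costs, key=costs.get)

print(round(costs["Campaign A"], 2))  # 6.37
print(winner)                         # Campaign A
```

With these made-up numbers, Campaign A's £6.37 cost per incremental conversion beats Campaign B's roughly £8.57, so Campaign A would be declared the winner.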

Detail report

Detail reports include two general sections:

  • Lift results
  • Incremental efficiency

See below for more details on what information is contained in each section.

Lift results

Lift results include these metrics for your campaign:

  • Lift%. A percentage indicating how much your campaign increased the rate at which people converted (as defined by the conversion events you chose when you created your test).

    For example, if your campaign increased the conversion rate by 50%, that could mean something like: You would have got 100 conversions during the test without your campaign, and got 150 with it. That means it resulted in 50 incremental conversions.

    We calculate Lift% by dividing the number of incremental conversions by the number of conversions that would have happened anyway (the baseline), then multiplying by 100. In this case, that means:
    • 50/100 = 0.5
    • 0.5 x 100 = 50%
  • Incremental conversions. The number of conversions that wouldn't have happened without your campaign.
  • Incremental sales. The amount of money you made through incremental conversions.

    Note: You won't see this metric unless you've set up your pixel, SDK and/or offline events to collect purchase values, and at least one of the events you've defined as a conversion for the test is purchases.
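The Lift% arithmetic above can be written out directly. This sketch just restates the worked example (100 baseline conversions without the campaign, 150 with it); the function name is our own, not part of the product.

```python
# Hedged sketch of the Lift% arithmetic from the example above:
# 100 conversions without the campaign, 150 with it.

def lift_percent(conversions_with, conversions_without):
    """Percentage increase in conversions attributable to the campaign:
    incremental conversions divided by the baseline, times 100."""
    incremental = conversions_with - conversions_without
    return incremental / conversions_without * 100

lift = lift_percent(150, 100)
print(lift)  # 50.0 -> a Lift% of 50%, i.e. 50 incremental conversions
```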

Incremental efficiency

Incremental efficiency results include these metrics for your campaign:

  • Cost per incremental conversion. See explanation in "Summary report" section above.
  • Incremental ROAS. ROAS stands for "return on ad spend", and refers, generally, to what you got from your ads relative to the money that you spent on them. Incremental ROAS, specifically, is calculated by dividing your incremental sales amount (see above) by the amount that you spent on your campaign. This metric is expressed as a multiple – for example, a campaign might have an incremental ROAS of 4.45, meaning £4.45 in incremental sales for every £1 spent.

    Note: You won't see this metric unless you've set up your pixel, SDK and/or offline events to collect purchase values, and at least one of the events you've defined as a conversion for the test is purchases.
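The incremental ROAS calculation is a single division. The figures in this sketch are hypothetical and chosen to match the 4.45 example above.

```python
# Hedged sketch of incremental ROAS: incremental sales divided by spend.
# Figures are hypothetical.

def incremental_roas(incremental_sales, spend):
    """Incremental return on ad spend, expressed as a multiple."""
    return incremental_sales / spend

roas = incremental_roas(incremental_sales=4450.00, spend=1000.00)
print(roas)  # 4.45 -> £4.45 of incremental sales per £1 spent
```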

What should I do after viewing my results?

It depends on your test setup and what your results show.

Ideally, the campaigns you tested only had one meaningful difference between them and we're at least 90% confident in our declaration of the winner. (90% is the confidence level that we consider reliable.) In such cases, you can rely on the results as a basis for adjusting your campaigns. You may want to move the budget of the losing campaign to the winning campaign, or edit the losing campaign to give it the characteristic that the winning campaign had.

What if my campaigns had multiple differences?

This makes it more difficult to say what caused the difference in performance, which makes it more difficult to draw broader conclusions from the results. However, the winning campaign still had a lower average cost per incremental conversion. This means that it isn't unreasonable to move budget from the losing campaign to the winning one, for example.

What if my results have a confidence level of less than 90%?

You'll need to decide how confident is confident enough for you to use results as the basis for action. It helps to remember what this metric means: 90% confidence means that we think there's a 90% chance that we'd get the same winner if we ran the test again. Put another way: If we ran the test ten times, we think we'd get the same result nine times. If you're comfortable taking action when we think we'd get the same result fewer than nine times out of ten, you can. If you aren't, you can extend your test to see if more data increases our confidence.

However, you may also want to consider running another test comparing different campaigns or a different distinguishing characteristic. If we aren't very confident in your results, it's possible that you haven't yet found the key distinguishing factor for improved campaigns.

What if my results have a confidence level of 50%?

This means that neither campaign is more likely than the other to get you a lower cost per incremental conversion. This isn't inherently bad or good. If you're satisfied with the average cost per incremental conversion of both campaigns, you may want to keep running them without adjustments. However, if you aren't satisfied with the average cost per incremental conversion of either campaign, you may want to try to make improvements to both. It's important to remember that when a campaign is declared the winner, that just means it outperformed the other campaign. It doesn't necessarily mean that it's effective (or that the other is ineffective) overall.

* Source: Facebook