Test and Learn guide: Ad account conversion lift results
This article explains what types of information you'll see in the results of a Test and Learn ad account conversion lift test (How many conversions are all my Facebook ads causing?). It also provides context to help you interpret your results and make adjustments to your advertising strategy based on them.
Where and when can I see my results?
You can check your results in Test and Learn. To do so:
- Go to Test and Learn.
- Click Learn.
- Click your test.
Note that we're unable to show results until your ads have received at least 100 conversion events since the start of the test. Once you meet that requirement, we can show initial results. However, we recommend not judging or making adjustments based on your results until the test has ended. (You set the test's schedule when you create it.) We will email you when your test is complete.
What will I see in my results?
You'll see whether or not your ads affected the rate at which people converted, and how many additional conversions your ads caused. This includes a breakdown of the effect of your ads on each conversion event you chose. You can switch between conversion events using the "Test objectives" menu. For each event, we show these types of information:
- Lift results. Information about the results caused by your ads. Lift results include the metrics:
- Lift%. A percentage indicating how much your ads increased the rate at which people converted (as defined by the conversion events you chose when you created the test). For example, a 50% lift could mean that you would have got 100 conversions during the test without ads, but got 150 with them: 50 additional conversions. To calculate Lift%, we divide the number of additional conversions by the number of conversions you would have received without ads, then multiply by 100. In this case, that means:
- 50/100 = 0.5
- 0.5 x 100 = 50%
- Conversion lift. The number of conversions that wouldn't have happened without your ads.
- Confidence. A percentage representing how confident we are that your ads caused conversion lift. Results that we're at least 90% confident in are considered reliable. Our testing methodology includes thousands of simulations based on your test. If your ads caused conversion lift in 80% of our simulations, we'd be 80% confident that your ads caused conversion lift during the test.
- Incremental efficiency. Information about the cost of the results caused by your ads. Incremental efficiency includes the metric:
- Cost per conversion lift. The average cost of the conversions that wouldn't have happened without your ads. We calculate it by dividing the total amount that you spent on your ads by the number of additional conversions. For example, you might have a GBP 6.37 cost per additional conversion.
- Test details. This section shows how we arrived at your results, including the groups involved and their sizes, the number of conversions in each group and the equations that we applied to this data to calculate your results.
Notes:
- The conversion lift type in the visualised equation has a drop-down arrow next to it that lets you see other data types if available (e.g. sales lift for conversion types that include purchase values).
- Click View test and control group balance to compare the age and gender distributions for your test and control groups.
- Campaign details. Data on your amount spent, impressions, frequency and reach during the test. (In this context, "campaign" refers to your test.)
- Breakdown by demographic. This chart lets you highlight age/gender subsets of people who converted due to your ads. You can use these breakdowns to discover what type(s) of people are converting, how often and at what cost.
- Click attribution compared to lift results. Your test tracks conversion events throughout its schedule. It isn't restricted by an attribution or conversion window. This chart helps you understand what impact your ads are having that might not be captured by the attribution windows available in Ads Manager, especially if your ad runs for longer than 28 days (the longest attribution window available in Ads Manager).
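The lift metrics above reduce to simple arithmetic. Here's a minimal sketch of how the three figures relate, using the article's worked example; the function name and the GBP 318.50 spend figure are illustrative (chosen so the cost works out to the GBP 6.37 mentioned above), not Facebook's implementation:

```python
def lift_metrics(conversions_with_ads, baseline_conversions, total_spend):
    """Compute the three lift metrics from this section.

    baseline_conversions is the estimated number of conversions that
    would have happened without ads (from the control group).
    """
    # Conversion lift: conversions that wouldn't have happened without ads.
    conversion_lift = conversions_with_ads - baseline_conversions
    # Lift%: additional conversions relative to the no-ads baseline.
    lift_pct = conversion_lift / baseline_conversions * 100
    # Cost per conversion lift: spend averaged over the additional conversions.
    cost_per_lift = total_spend / conversion_lift
    return lift_pct, conversion_lift, cost_per_lift

# Worked example from the article: 150 conversions with ads vs an
# estimated 100 without, at a hypothetical GBP 318.50 total spend.
lift_pct, conversion_lift, cost_per_lift = lift_metrics(150, 100, 318.50)
print(lift_pct)         # 50.0
print(conversion_lift)  # 50
print(cost_per_lift)    # 6.37
```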
What should I do after viewing my results?
It depends on what your results show and at what level you want to make adjustments. Ideally, your results reached at least 90% confidence, the level that we consider reliable.
If your ads were found to drive a significant increase in conversion lift, you could simply increase the amount that you're spending across your ad account to try to scale that effect. Or, if you were measuring multiple conversion events, you could look at which ones experienced lift and which ones didn't. Then you could either redistribute budget accordingly or increase the budget of effective campaigns and try new strategies for ineffective ones.
You could also use the demographic breakdowns to improve your targeting. Clear trends in that chart could provide actionable context for redistributing budgets to your most valuable audiences.
If you want to run more tests, here are two general ideas:
- If your ads didn't cause conversion lift for some events and you want to try new strategies to achieve that, implement the strategies and run another ad account conversions test to measure those events.
- If you're satisfied that your Facebook ads are causing conversion lift, you could switch to running campaign comparison conversions tests, which let you compare two campaigns to see which one is more effective. This may be a better way to refine specific aspects of your advertising strategy.
What if my results have a confidence level of less than 90%?
You'll need to decide how confident is confident enough for you to use results as the basis for action. It may be useful to think of results that we're at least 90% confident in as "actionable" (worth taking decisive action on) and results we're less confident in as "directional" (worth considering, but maybe not using as the sole basis for a major decision).
It helps to remember what this metric means: 90% confidence means that we think there's a 90% chance we'd find that your ads caused conversion lift if we ran the test again. Put another way: If we ran the test ten times, we think we'd get the same result nine times. If you're comfortable taking action when we think we'd get the same result fewer than nine times out of ten, you can. If you're not, you can extend your test (with more time, conversion events and/or budget) to see if more data increases our confidence.
It's also worth considering the differences between confidence levels below 90%. For example, results we're 85% confident in may be worth taking more seriously than results we're 55% confident in.
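Based on the simulation description earlier in this article, the confidence figure can be read as the share of simulations in which your ads caused lift. A toy sketch of that interpretation (illustrative only, not Facebook's actual methodology; the lift values are made up):

```python
def confidence_from_simulations(simulated_lifts):
    """Confidence = percentage of simulations where the ads caused lift (> 0)."""
    positive = sum(1 for lift in simulated_lifts if lift > 0)
    return positive / len(simulated_lifts) * 100

# Hypothetical simulated lift values: 8 of these 10 simulations show
# positive lift, so confidence comes out at 80% (matching the 80%
# example described above).
simulated_lifts = [12, 30, -5, 44, 8, 19, -2, 27, 60, 33]
print(confidence_from_simulations(simulated_lifts))  # 80.0
```

In practice the real test runs thousands of simulations rather than ten, so the percentage is much finer-grained.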
What if my results have a confidence level of 50%?
A 50% confidence level means that the test is inconclusive: it's equally likely that your ads caused conversion lift as that they didn't.
What if I'm not seeing any results?
If you never get 100 conversion events, we won't be able to show any results. Here are common reasons for not getting 100, and what you can do about them:
- Spending too little. This can prevent your ads from reaching enough people to get 100 conversion events. Try increasing the amount you're spending if you can.
- Choosing conversion events that aren't frequent enough. You tell us which conversion events to measure when you create your test. If the ones you choose aren't frequent enough (e.g. purchasing an expensive item that requires a lot of consideration), we may not be able to get 100. Try measuring events that occur more frequently. For example, if you couldn't get 100 purchase conversions, you could measure add-to-basket conversions instead.
- Not testing for long enough. We may need more time than you set in the test schedule to get 100 conversion events. Try extending your test schedule if you can. As a general guideline, we recommend testing for at least four weeks.
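To judge whether extending the schedule alone is likely to help, you can make a rough back-of-envelope estimate from the pace of conversions so far. A simple sketch, assuming conversions keep arriving at the same average daily rate (the 35-events-in-14-days figures are hypothetical):

```python
import math

def days_to_reach_threshold(conversions_so_far, days_elapsed, threshold=100):
    """Estimate total test days needed to reach the conversion-event threshold,
    assuming conversions keep arriving at the current average daily rate."""
    daily_rate = conversions_so_far / days_elapsed
    return math.ceil(threshold / daily_rate)

# Hypothetical example: 35 conversion events in the first 14 days
# (2.5 per day) means roughly 40 days to reach 100 events.
print(days_to_reach_threshold(35, 14))  # 40
```

If the estimate far exceeds any schedule you could run, consider the other two fixes instead: raising spend or measuring a more frequent conversion event.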
* Source: Facebook