Split Testing

30/11/2019

Split testing helps advertisers understand how different aspects of their ads affect campaign performance. Split testing lets you test different versions of your ads so you can see what works best and improve future campaigns.

  • Measures Audience Outcomes, Sales Outcomes and Brand Outcomes
  • May not be available to all business objectives at this time.

Split Testing Guides and FAQs

Split testing lets you test different versions of your ads to see what works best. For example, you can run the same ad with two different audiences to see which audience performs better, or test two delivery optimisations to determine which yields better results.

To get started, navigate to Ads Manager and create a split test. Use this guide to understand the basics of split testing, including variables, budget and scheduling.

How split testing works

Facebook's split testing feature allows advertisers to create multiple ad sets and test them against each other to see what strategies produce the best results. Here's how it works:

  • Split testing divides your audience into random, non-overlapping groups.
  • This randomisation helps to ensure the test is conducted fairly because other factors won't skew the results of the group comparison. It also ensures that each ad set is given an equal chance in the auction.
  • Each ad set tested has one distinct difference, called a variable. Facebook will duplicate your ads and only change the variable that you choose.
  • To get the most accurate results from your split test, you can only test one variable at a time. For example, if you test two different audiences against each other, you can't also test two delivery optimisations simultaneously, because you wouldn't know which change affected the performance.
  • Split testing is based on people, not cookies, and gathers results across multiple devices.
  • The cost per result of each ad set is calculated and compared. The ad set with the lowest cost per result, such as cost per website purchase, wins. We make these calculations with Facebook's attribution system. We use data from the test itself and thousands of simulations based on it, which helps us determine our confidence level in the results. (A simplified illustration follows this list.)
  • Once the test is complete, you'll receive a notification and email containing results. These insights can then fuel your advertising strategy and help you design your next campaign.
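
The comparison and simulation step above can be pictured with a small sketch. This is only a simplified illustration, not Facebook's actual attribution or simulation system: the daily conversion numbers, the spend figures and the bootstrap-style resampling are assumptions made purely for the example.

```python
# Simplified illustration of comparing ad sets by cost per result and estimating
# a confidence level with repeated simulations. NOT Facebook's actual method;
# the data and the resampling approach are assumptions for illustration only.
import random

# Hypothetical daily results (e.g. website purchases) and total spend per ad set.
daily_results = {
    "Ad set A": [12, 9, 15, 11],   # a 4-day test
    "Ad set B": [8, 10, 7, 9],
}
spend = {"Ad set A": 200.0, "Ad set B": 200.0}

def cost_per_result(ad_set, results):
    return spend[ad_set] / sum(results)

# Observed winner: the ad set with the lowest cost per result.
observed = {name: cost_per_result(name, res) for name, res in daily_results.items()}
winner = min(observed, key=observed.get)

# Estimate a confidence level by resampling the daily results many times and
# counting how often the observed winner still has the lowest cost per result.
wins = 0
simulations = 10_000
for _ in range(simulations):
    resampled = {
        name: cost_per_result(name, random.choices(res, k=len(res)))
        for name, res in daily_results.items()
    }
    if min(resampled, key=resampled.get) == winner:
        wins += 1

print(f"Winner: {winner} at {observed[winner]:.2f} per result")
print(f"Estimated confidence: {wins / simulations:.0%}")
```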

Objectives available for split testing

Facebook split testing supports the following business objectives:

  • Traffic
  • App Installs
  • Lead Generation
  • Conversions
  • Video Views
  • Catalogue Sales
  • Reach
  • Engagement
  • Messages
  • Brand Awareness

Variables available for split testing

Advertisers have the option to test one of the following variables. For the variable that you choose, you can test up to five different strategies.

  • Target audience
  • Delivery optimisation
  • Placements
  • Creative
  • Product sets

Learn more about the variables that you can test.

Setting a budget and schedule

Your split test should have a budget that will produce enough results to confidently determine a winning strategy. If you're not sure about an ideal budget, you can use the suggested budget that we provide. (We calculate the suggested budget by analysing successful split tests that ran in the past and had settings similar to your test.) We'll also provide a mandatory minimum budget to help guide you. The budget and audience are then divided between the ad sets. You can choose to divide them evenly or weight one more than the other(s), depending on your preference.
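
As a simple illustration of the even and weighted splits described above (the amounts and weights here are made up):

```python
# Illustrative only: dividing a split-test budget between ad sets.
def split_budget(total, weights):
    """Return each ad set's share of the total budget.

    weights are relative proportions, e.g. [1, 1] for an even split
    or [60, 40] for a weighted split.
    """
    return [round(total * w / sum(weights), 2) for w in weights]

print(split_budget(100.0, [1, 1]))    # even split     -> [50.0, 50.0]
print(split_budget(100.0, [60, 40]))  # weighted split -> [60.0, 40.0]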

We recommend 4-day tests for the most reliable results; if you aren't sure about an ideal time frame, you can start with four days. In general, your test should run for at least three days and no longer than 14 days. Tests shorter than three days may produce insufficient data to confidently determine a winner, and tests longer than 14 days may not be an efficient use of budget, as a test winner can usually be determined in less than 14 days.

For this reason, we recommend a test between three and 14 days for tests created in the API. When creating a split test in Ads Manager, you must set a schedule between three and 14 days.
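
For tests created through the API, you can enforce the 3-14 day guidance in your own tooling before creating the campaign. The sketch below is a rough illustration rather than an official example: the campaign-creation endpoint and the name, objective and status fields are standard Marketing API parameters, but the is_split_test flag is an assumption that should be verified against the current Marketing API reference, and the access token, ad account ID and API version are placeholders.

```python
# Rough sketch: validate the test schedule, then create a split-test campaign
# via the Graph API. The is_split_test field is an assumed name; verify it
# against the Marketing API reference before use.
import datetime
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"    # placeholder
AD_ACCOUNT_ID = "act_1234567890"      # placeholder ad account ID
API_VERSION = "v5.0"                  # version current when this article was written

start = datetime.date(2019, 12, 1)
end = datetime.date(2019, 12, 5)      # a 4-day test, inside the 3-14 day window

duration_days = (end - start).days
if not 3 <= duration_days <= 14:
    raise ValueError("Split tests should run between 3 and 14 days.")

response = requests.post(
    f"https://graph.facebook.com/{API_VERSION}/{AD_ACCOUNT_ID}/campaigns",
    data={
        "name": "Audience split test",
        "objective": "LINK_CLICKS",
        "status": "PAUSED",
        "is_split_test": "true",      # assumed field name (see note above)
        "access_token": ACCESS_TOKEN,
    },
)
print(response.json())
```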

Next steps

When the test is over, you'll receive a notification in Ads Manager and an email with the results. Learn more about how the winning ad set is determined. Once you receive your split test results, you can review them to discover insights about the best-performing ad set. These insights can help you determine your advertising strategy and design your next campaign.

Create a split test

A split test lets you test different versions of your ads so you can see what works best and improve future campaigns.

In Ads Manager, there are three ways to create a split test:

  • Guided creation. We guide you through the process of creating a split test, and your ads will be ready to run once you've completed the workflow. You can choose this method if you're a new advertiser or if you're more comfortable with a step-by-step process.
  • Quick creation. You create the structure for a split test that can be finalised later.
  • Duplication. Add a new ad set or ad to turn an existing campaign into a split test campaign.

Create a split test when editing an active ad set

An especially useful time to run a split test is when you make an edit to an existing active ad set. Doing so allows you to test the effectiveness of your edits. Here's how it works:

After you edit your ad set, we prompt you to create a test. If you do this, we then create a new ad set with the edits that you just made and return the ad set you edited back to how it was. We'll split their audiences and budgets for you, and they'll both run until they're scheduled to stop.

To create a split test while editing an ad set:

  1. Go to Ads Manager.
  2. Hover over the ad set you want to edit.
  3. Click Edit.
  4. Edit your ad set.
  5. Find the Want to test these changes? box that appears.
  6. Click Create test.
  7. Click Confirm.

The test will begin when you publish your changes. You can check how your test is progressing at any time in Ads Manager. We'll provide you with results as soon as we have them. You can act on or ignore the results as you feel appropriate. We don't take any action for you based on the results. You can keep both ad sets running after you end the test, but the audiences will no longer be split.

When you won't be prompted to create a test while editing an ad set

You won't be prompted to create a test if the ad set you're editing is:

  • Inactive
  • Part of a campaign using campaign budget optimisation

You won't be prompted to create a test if the edit you make is to the:

  • Name
  • Budget
  • Schedule

Create a split test with guided creation

  1. In Ads Manager, choose the marketing objective for your campaign.
  2. Select Create split test.
  3. Click Continue.
  4. In the Variable section, choose the variable you want to test. You can test one variable at a time.
  5. Make additional choices based on the variable that you're going to test:
    • If you choose audience as your variable: Under the Audience section, select a saved audience or create a new one for each ad set.
    • If you choose delivery optimisation as your variable: Under the Delivery optimisation section, select your delivery and bid strategies for each ad set.
    • If you choose placements as your variable: Under the Placements section, select whether you would like automatic placements or choose your placements to customise where ads are shown.
    • If you choose creative as your variable: Finish setting up your ad set and click Continue. Set up different versions of your ad at the ad level.
    • If you choose product set as your variable: Choose the product sets that you want to test (up to five) and then make your selection for audience.
  6. In the Budget & schedule section, choose a budget.
  7. In the Split sub-section, choose whether you want your budget to be split evenly (even split) or unevenly (weighted split) between the ad sets that you're testing. If you choose a weighted split, you also have to choose what percentage of your budget you want to assign to each ad set.
  8. Choose a duration for your test. It must be between 1 and 30 days. Your test can start today or at a future date that you set.
  9. Click Continue when you've finished setting up your ad sets.
  10. Set up your ads and place your order.

Your test will start on the date you specified. When it's over, you'll get an email and notification with the results.

Create an audience, placements or delivery optimisation split test with quick creation

To create an audience, placements or delivery optimisation split test, you'll need to complete steps at the campaign, ad set and ad level.

Campaign level

  1. Enter the name of your campaign and click the Split test toggle.
  2. Choose the variable you want to test from the drop-down menu.
  3. Choose the number of ad sets (up to five) that you want to create in your campaign.
  4. Select Save to Draft.

Ad set level

  1. Select one ad set (not multiple).
  2. Click Edit to open the editing pane.
  3. Depending on the variable that you're testing, create your audience or make delivery optimisation or placement selections. For example, if you're testing different audiences, navigate to the Audience (Variable) section and select a saved audience or create a new one.
  4. Repeat this process for your remaining ad sets. Make sure that you set a different version of your variable for each ad set. You won't be able to proceed if you don't. To continue the example, make sure that each of your ad sets has a different audience.
  5. Click Close to save your ad sets.
  6. Select all of your split test campaign's ad sets and click Edit to open the editing pane again.
  7. Make your selections for all the non-variable sections. To continue the example again, your audiences should already be set, but you'll still need to set a budget, schedule, placements and delivery optimisation for your ad sets.
  8. Click Close to save your ad sets.

Ad level

  1. Select all your ads for the split test campaign.
  2. Click Edit to open the editing pane.
  3. Set your ad creative.
  4. Click Publish to start your campaign immediately, or Close to save your ads.

Create a creative split test with quick creation

To create a creative split test, you'll need to complete steps at the campaign, ad set and ad level.

Campaign level

  1. Enter the name of your split test campaign and click the Split test toggle.
  2. Choose Creative from the drop-down menu.
  3. Choose the number of ad sets and ads you want to create.
  4. Create your ad sets and ads.
  5. Click Save to draft.

Ad set level

  1. Select all your ad sets.
  2. Click Edit to open the editing pane.
  3. Set up your ad sets.
  4. Click Close to save your ad sets.

Ad level

  1. Select one ad at a time and select Edit to open the editing pane.
  2. In the editing pane that appears, you'll add your ad creatives including headlines, images, videos, website links and CTA (call to action). Once you've finished, click Close to save or Publish.
  3. Select another ad and select Edit. You'll then add your ad creatives for the other ad.
  4. Click Close to save or Publish to start your test campaign immediately.

Make a product set split test with quick creation

When you create a split test where your variable is product sets, you define the variable differences at the ad set level. For each ad set, the retargeting audience is based on the product set.

Create your campaign:

  1. Enter the name of your split test campaign and toggle the Split test button.
  2. Select Product set from the drop-down menu.
  3. Select the number of versions that you want to create and then name your ad sets and ads.
  4. Select Save to Draft.

Create your ad sets:

  1. Select all of your ad sets at once and select Edit to open the editing pane.
  2. Use the drop-down menu to choose the product set that you want to use.
  3. Choose the product sets that you want to test.
  4. Define your audience, budget, schedule, placements and delivery optimisations.

Create your ads:

  1. Select all of your ads at once and select Edit to open the editing pane.
  2. Add your creatives in the editing pane that appears, including formats, headlines, images, videos, website links and call-to-action buttons.
  3. Close the editing pane to save, or Publish to publish immediately. Your split test campaign is now ready.

Create a split test by duplicating an existing ad set

  1. Select an active ad set.
  2. Click Duplicate.
  3. Click the Create a test to compare a new ad set with your original ad set toggle.
  4. A new ad set will be created and the editing pane will open.

    If you're testing audiences, delivery optimisations, placements or product sets, make the appropriate change to the new ad set. Everything else can stay the same. Publish when you're ready.

    If you're testing creative, leave the duplicate ad set as is, and create a new ad. Publish when you're ready.

When creating a test from duplication, there are a few important things to remember:

  • You can only create a test from an active ad set.
  • You can only create one new ad set at a time.
  • The budget for your new ad set will default to half of what was remaining in the existing ad set's budget. To edit the budget, click Change total budget and set the amount you want. You can change how the budget is split for the daily budget, but not for the lifetime budget.
  • Your test will continue to run until you end it or the budget runs out.
  • We provide guidance on how your test is going every 12 hours. If there's a winning ad set, we tell you the winner. You may also see a tie, where the ad sets are performing similarly, or there may not be enough data yet to determine a winner.
  • You can view your results in the results panel. You can see which ad set is performing better and select Use winning ad set to use the winner in the future.

Learn about best practices for split testing.

A split test lets you compare different versions of your ads so you can see which performs better. With each test, you can use one of the following variables:

  • Creative
  • Audience
  • Optimisation event
  • Placements
  • Product set

Testing creative

Testing ads with different creative helps you understand what images, videos, text, headlines and/or calls to action (CTA) perform better.

Strong creative assets can help your ads stand out, and you can use creative testing to understand what your audience is more likely to engage with.

To get started, create a split test and select Creative as your variable. You'll then be able to create between two and five different versions of your ad. If you want to be able to attribute your results to a single creative element, we recommend testing only one (e.g. different images for each ad) and keeping everything else identical.

Example tests to run:

  • Ad with a video compared to ad with single image
  • Ad with one video compared to ad with different video compared to ad with another different video
  • Ad with one headline compared to ad with different headline compared to ad with another different headline

Note: You can still test ads with multiple differences to see which one performs better.

Testing audiences

Testing different audiences helps you understand what types of people are more likely to respond to your ads.

To get started, create a split test and select Audience as your variable.

Example tests to run:

  • Women aged 21-30 compared to women aged 31-40 compared to women aged 41-50
  • People living in London compared to people living in Paris compared to people living in New York City
  • A Custom Audience of people who have recently visited your website compared to people who may want to buy your products based on interest targeting

Testing optimisation events

How well your ad performs can depend on what you optimise delivery for. For example, your ad may perform better if you optimise for landing page views instead of link clicks. Test different optimisation events to see which one leads to better results.

To get started, create a split test and select Delivery optimisation as your variable.

Example tests to run:

  • Ad set optimised for conversions compared to ad set optimised for link clicks
  • Ad set optimised for landing page views compared to ad set optimised for link clicks compared to ad set optimised for impressions
  • Ad set optimised for conversions with a 1-day post-click conversion window compared to ad set optimised for conversions with a 7-day post-click conversion window

Testing placements

Testing placements can help you understand what platforms (e.g. Instagram, Facebook) are most effective for your ads, and where it's best to show them on each platform (e.g. Stories, feed).

To get started, create a split test and select Placements as your variable.

Example tests to run:

  • Automatic placements compared to customised placements
  • Mobile placements compared to desktop placements

Note: Limiting placements can negatively impact ad delivery. After you run a split test and understand which placements lead to better performance, you can use that information to inform future decisions. However, bear in mind that it's generally recommended to use as many placements as possible.

Testing different product sets

Testing ad sets with different product sets helps you understand which product set in a Facebook catalogue performs better in a catalogue sales campaign. A product set is a group of items from your inventory that you designate in your catalogue. You can use a product set to control the items that appear in your ad.

To get started, create a split test using the Catalogue Sales objective and select Product set as your variable. You'll then be able to create between two and five different versions of your ad set.

Note: The audience for each product set may be different. Because people automatically see products based on their behaviour and relevance, your audience targeting doesn't overlap when you create a product set split test.

Example tests to run:

  • Product set with items that are less than GBP 50 compared to product set with items that are more than GBP 50
  • Product set with shoes from one brand compared to product set with shoes from a different brand compared to product set with shoes from another different brand
  • Product set with all brands from your catalogue compared to product set with only private-label brands from your catalogue

Best practices for split testing

Split testing compares different versions of your ads so you can see what works best and improve future campaigns.

Use these best practices to create a split test with clearer and more conclusive results.

Test only one variable for more conclusive results

You'll have more conclusive results for your test if your ad sets are identical except for the variable that you're testing.

Testing a single variable is important when you're experimenting with different creatives. If your ad sets have several creative aspects that vary (e.g. different images, different headlines and different text), then when the winning ad set is declared, you won't know which factor to credit. To make this easier to set up, after you create your first ad set, we automatically duplicate its settings for your other ad sets (aside from the variable you're testing, which should be different).
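
One way to picture this single-variable rule is to treat each ad set as a copy of the same settings with exactly one field changed. The sketch below uses a made-up settings dictionary purely for illustration; it is not how Ads Manager represents ad sets internally.

```python
# Illustrative only: keep every setting identical except the variable under test.
import copy

base_ad_set = {
    "audience": "Women, 21-30, UK",
    "placements": "automatic",
    "optimisation": "landing_page_views",
    "creative": "video_ad_v1",
}

def make_variants(base, variable, values):
    """Duplicate the base settings and change only the tested variable."""
    variants = []
    for value in values:
        ad_set = copy.deepcopy(base)
        ad_set[variable] = value
        variants.append(ad_set)
    return variants

# Creative test: only the creative differs between the two ad sets.
for variant in make_variants(base_ad_set, "creative", ["video_ad_v1", "image_ad_v1"]):
    print(variant)
```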

Bear in mind that testing multiple creative variables can still yield valuable results and learnings for future campaigns.

Focus on a measurable hypothesis

Once you figure out what you want to test or what question you want to answer, create a testable hypothesis that enables you to improve future campaigns. For example, you might start with a general question such as "Do I get better results when I change my delivery optimisation?" This can be refined to something more specific, such as "Do I get a lower cost per result when I optimise for link clicks or landing page views?" From there, you can set a specific hypothesis, such as "My cost per result will be lower when I optimise for landing page views." This helps you interpret your results and empowers you to take specific action on future campaigns.

Use an ideal audience for the test

Your audience should be large enough to support your test, and you shouldn't use this audience for any other Facebook campaign that you're running at the same time.

Overlapping audiences may result in delivery problems and contaminate test results.

Use an ideal time frame

We recommend 4-day tests for the most reliable results, and if you aren't sure about an ideal time frame, you can start with four days.

In general, your test should run for at least three days and no longer than 14 days. Tests shorter than three days may not produce enough data to confidently determine a winner, and tests longer than 14 days may not be an efficient use of budget, as a test winner can usually be determined in 14 days or fewer.

For this reason, we recommend a test between three and 14 days for tests created in the API. When creating a split test in Ads Manager, you must set a schedule between three and 14 days.

Your ideal time frame (within the 3 to 14-day time period) may depend on your objective and business vertical. For example, let's say you're running a split test to determine which delivery optimisation leads to a lower cost per result. You know that people tend to take longer than four days to convert after seeing an ad, so you'd want to run your test for more than four days.

Set an ideal budget for your test

Your split test should have a budget that will produce enough results to confidently determine a winning strategy. You can use the suggested budget that we provide if you're not sure about an ideal budget.

Learn more about the basics of Facebook split testing or learn about creating your first split test.

Selecting a budget and schedule

Selecting a budget:

Your split test should have a budget that will produce enough results to confidently determine a winning strategy. You can use the suggested budget that we provide if you're not sure about an ideal budget. We calculate the suggested budget by analysing successful split tests that have run in the past and that had settings similar to your test. We will also provide a minimum budget, and this is the lowest possible budget that you can select for your campaign.

Once you select a budget, the budget and audience will then be divided between the ad sets. You can choose to divide it evenly or weight one more than the other(s), depending on your preference.

Selecting a schedule:

We recommend 4-day tests for the most reliable results, and if you aren't sure about an ideal time frame, you can start with four days. In general, your test should run for at least three days and no longer than 14 days. Tests shorter than three days may produce inconclusive results and no winning ad set, and tests longer than 14 days may not be an efficient use of budget, since a test winner can usually be determined in 14 days or sooner.

For this reason, when creating a split test in Ads Manager, you must create a test with a schedule between three and 14 days. We recommend a test between three and 14 days for tests created in the API.

Learn more about split testing and how you can use split tests to improve your ads' performance.

Edit or cancel a split test

Once you've created a split test, you might want to make changes to your campaign, ad set or ad. For example, if you notice that the headline in your ad has a typo, you'll want to fix the ad without cancelling the whole test. To do that, edit your ad set or ad in Ads Manager just as you would in any other situation. Learn how.

However, remember that changing your split test after it has started may impact its results. Here are some best practices for editing split tests:

  • Only edit your campaign, ad set and ad when it's absolutely necessary. All edits can affect the accuracy of your results. If the change isn't necessary, you can always create a new test instead.
  • Avoid making significant changes to the variable that you're testing. If you make changes to the variable while the test is still running, you may not be able to know why the winner won.
  • Use caution when making changes to things other than your variable. Editing the audience, placement, creative and other factors can affect the results and undermine your understanding of what drove the results. For example, if you test two different audiences with the same ad creative, and then later change the ad creative, this change might cause one audience to interact more with the ad. When the split test is done, you won't know if the winner was determined based on the audience or the ad creative.

You may also want to cancel your test. To do that:

  1. Go to Ads Manager.
  2. Tick the box next to the campaign that includes the split test you want to cancel and open the side panel by clicking the icon.
  3. From the drop-down menu links, click Cancel test.
  4. Click Confirm and close.

After cancelling your split test, your budget and audience will no longer be divided between your ad sets. If you cancel your split test, no winner will be determined and you won't get a notification or email with results.

The winning ad set is determined by comparing the cost per result of each ad set. The ad set with the lowest cost per result, such as cost per website purchase, wins. These costs are calculated using Facebook's attribution system.

Based on the data from the test, Facebook simulates the performance of each variable tens of thousands of times to determine how often the winning outcome would have won.

Once a winner is determined, you'll receive a notification in Ads Manager and an email will be sent to you with the results.

Learn more: About split testing

A split test lets you test different versions of your ads so you can see what works best. By analysing your split test results, you can determine what changes you might want to make to future campaigns.

Viewing your results

Ads Manager results

In Ads Manager, you can:

  • Check the winning ad set or ad in the reporting table after your test is over. It will have a star icon next to it. Learn more about how winning ad sets are determined.
  • Apply a filter so that only campaigns, ad sets and ads that are part of split tests will be shown in the reporting table. To do this, click Filter and choose Split test from the menu.
  • Learn more about your results and the winning ad set by opening the reporting panel. Select your split test campaign by ticking the box next to the campaign and clicking View Charts or the icon. The reporting side panel will open and show the:
    • Winning ad set
    • Cost per result for each ad set (or ad for creative tests)
    • Confidence level, which tells you how likely it is that we'd get the same winner if you ran this test again

Email results

When your split test is complete, you'll receive an email with its results, which include:

  • The winning ad set and how likely it is that we'd get the same winner if you ran the test again
  • Your split test settings including:
    • The variable you tested
    • Whether your split was weighted or even
    • Budget
    • Schedule
  • A breakdown of:
    • Results. The number of times your ad set or ad got the result associated with your campaign objective
    • Cost. The average cost per result from your ads
    • Amount spent. The total amount of money you've spent on your campaign, ad set or ad during its schedule
  • How your budget was distributed across the ad sets or ads you tested
  • A link to view the full campaign in Ads Manager

Understanding your results

The winning ad set is determined by comparing the cost per result of each ad set or ad based on your campaign objective. We also provide a confidence level, which represents how likely it is that we'd get the same results if you ran the test again.

For example, let's say that you run a creative test with one video ad, one single image ad and one carousel ad. We determine that the video ad was the winner with the lowest cost per result and a 95% chance that you would get these results again. With these results, we recommend that you adopt the winning strategy.

Use this guide to interpret results with confidence levels below 75%, broken down by the number of ad sets or ads you tested. If your results are less confident than the percentages listed below, the variables you tested are not likely to be a key factor in the performance of your ad, relative to each other:

  • Two ad sets or ads tested. Less than 65% confidence means that the variables you tested performed similarly.
  • Three ad sets or ads tested. Less than 40% confidence means that the variables you tested performed similarly.
  • Four ad sets or ads tested. Less than 35% confidence means that the variables you tested performed similarly.
  • Five ad sets or ads tested. Less than 30% confidence means that the variables you tested performed similarly.

If your test has a confidence level above these numbers, you can use the result going forwards. Regardless of the number of ads or ad sets you tested, a confidence level above 75% indicates that we consider the results very clear and actionable. This means that we recommend using what you learn from the test when making decisions about relevant campaigns in the future.
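
These cut-offs can be summarised in a small helper. The sketch below simply encodes the guidance above (the 65/40/35/30% similarity thresholds and the 75% "clear and actionable" level); it is an illustration, not an official tool, and the wording of the messages is our own.

```python
# Encodes the interpretation guidance above; illustrative only.
SIMILARITY_THRESHOLDS = {2: 0.65, 3: 0.40, 4: 0.35, 5: 0.30}

def interpret(confidence, num_ad_sets):
    """Interpret a split-test confidence level for 2-5 ad sets or ads."""
    threshold = SIMILARITY_THRESHOLDS[num_ad_sets]
    if confidence < threshold:
        return "The variables you tested performed similarly."
    if confidence > 0.75:
        return "Very clear and actionable: adopt the winning strategy."
    return "Usable result, but a longer or larger test would give more certainty."

print(interpret(0.95, 3))  # -> Very clear and actionable: adopt the winning strategy.
print(interpret(0.35, 3))  # -> The variables you tested performed similarly.
```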

Note: Every ad set in a split test goes through the delivery system independently, which means there may be a degree of variance between the results for your ad sets caused by the delivery system and not solely by ad set differences.

Next steps: Improving future campaigns

Once you've reviewed your results and understand which strategy performed best, you can:

  • Create a new ad from the winning ad set directly from the results email
  • Reactivate the winning ad set in Ads Manager
  • Create a new campaign based on what you learned

You could also run more tests to refine your strategy further. For example, if an aged 18-30 audience outperformed an aged 31-43 audience, you could narrow that further to see how that affects performance. You might next test an aged 18-30 audience against an aged 24-30 audience. These sorts of refinements are important for getting the most out of your campaigns.

If your results have low confidence, you can test the campaign again with a longer schedule or higher budget. More time and budget typically produce more data, which helps us be more confident in your results.

A split test lets you try different versions of your ads so that you can understand which version resonates more with your audience or drives better results.

Once your split test is complete, you'll receive an email with results and can review these in the email or in Ads Manager. If you have questions about your results, you can read our guide on understanding your results.

After you've reviewed your results, you may find that no winner was declared or that your ads under-delivered. In that case, you can use the tips below to troubleshoot your split tests and improve future tests.

If you want to run more tests, we've provided some test examples below that may help you refine your advertising strategy or improve your ad performance.

Troubleshoot and improve your split tests

If no winner was declared or your ads under-delivered, use tips one to three below to troubleshoot your split tests and improve future tests. Your results may also have a low confidence level, which means your ads performed about the same; in this case, use tip four to build more tests and get more learnings.

1. Use an optimal time period for your test. Your testing time period may be too long or too short. If you aren't sure about the right amount of time, we recommend 4-day tests for the most reliable results.

2. Make sure that your audiences are large enough to avoid under-delivery. With non-split test campaigns, under-delivery (when your ads don't win enough auctions to get enough results and spend their full budget) can occur when your target audience is too small or not well defined.

With a split test, we divide your audience and budget, so there may be more potential for under-delivery with a small audience. Create a test with broader targeting if you find that your split test campaign under-delivers.

3. Increase your budget. Under-delivery can also occur if your budget is too low. If your split test under-delivers, you can also try increasing the budget to reach more people. Learn more about under-delivery.

4. Make sure that your ad sets are different enough when testing audience, placement or delivery selection. When your ad sets are too similar, we may not be able to confidently declare a winner. For example:

  • Let's say that you've tested your ad with two audiences: Men (age 18-20) who previously shopped on your website vs men (age 20-22) who previously shopped on your website. These audiences may be too similar to yield conclusive results and the audience size may be too small.
  • Try a new test with greater differences. For example: A custom audience of people who previously shopped on your website vs people who may be interested in your products based on Facebook targeting.

Gain insights with more tests

While one split test can provide valuable learnings, you can build on initial learnings by running more tests and developing a testing or advertising strategy.

Here are a few ways that you can continue testing and refining your advertising strategy. It's important to note that these are only recommendations, and testing strategies will vary by advertiser and vertical.

After testing audiences:

If your ad performed better with one audience, then this audience might represent the type of audience that is more likely to be interested in your ads. To better understand and take advantage of these results:

  • Run the same test with a different objective. Your results can vary depending on your chosen objective and using a different one may yield more useful learnings.
  • Test different creatives with the winning audience. The audience may respond better to certain imagery or text or ad formats.
  • Test different placements with the winning audience. The audience may be more engaged on mobile than on desktop, for example.

After testing creatives:

If the results show better performance for a certain ad creative, then you know that some aspect of that creative resonated more with your target audience. Keep testing to better understand and build on these results:

  • Run the same test with a different objective. Your results can vary depending on your chosen objective and using different ones may yield more useful learnings.
  • If you tested one component, such as your ad's headline or text, run a test with a different creative variable, such as imagery or call to action (CTA).
  • If you tested a video ad against a single image ad, you can take the winning ad and test it against another ad that uses the same format. For example, let's say a video ad performed better than a single image ad. You can then test that video ad against another video ad to see which one performs better.

After testing placements:

If your ad performed better on a certain placement, you may still want to be careful about excluding certain placements from all future campaigns.

However, your ads may perform better on certain placements with certain audiences, or you may find that certain formats do better on certain platforms. You can try these tests to help refine your results:

  • Run the same test with a different objective. Your results may vary depending on your chosen objective and using different ones may yield more learnings.
  • If you tested mobile placements against desktop placements and mobile performed better, you may not want to immediately stop running ads on desktop. Try testing two different ad creatives on mobile to see if certain formats (e.g. video, single image) perform better on mobile.
  • If you test Instagram placements versus Audience Network placements versus Facebook placements, try testing the winning ad set with different audiences.

After testing delivery optimisations:

If one ad performed better with a particular delivery optimisation, you can continue testing different delivery selections to understand the best option for your company or products:

  • If you tested different conversion windows, keep the conversion window the same for all your ad sets and test optimisations.
  • If you tested optimisations, keep the delivery optimisation the same for all your ad sets and test different conversion windows.

Learn more about the basics of setting up a split test.

* Source: Facebook