Tips for improving split tests and running more tests
A split test lets you try different versions of your ads so that you can understand which version resonates more with your audience or drives better results.
Once your split test is complete, you'll receive an email with results and can review these in the email or in Ads Manager. If you have questions about your results, you can read our guide on understanding your results.
If you want to run more tests, we've provided some test examples below that may help you refine your advertising strategy or improve your ad performance.
After you've reviewed your results, you may find that no winner was declared or that your ads under-delivered. In that case, use tips 1-3 below to troubleshoot your split tests and improve future tests. Your results may also show a low winning percentage, which means your ads performed about the same; in that case, use tip 4 below to build more tests and gain more learnings.
1. Use an optimal time period for your test. Your testing time period may be too long or too short. If you aren't sure about the right amount of time, we recommend 4-day tests for the most reliable results.
2. Make sure that your audiences are large enough to avoid under-delivery. With non-split test campaigns, under-delivery (when your ads don't win enough auctions to get enough results and spend their full budget) can occur when your target audience is too small or not well defined.
With a split test, we divide your audience and budget, so there may be more potential for under-delivery with a small audience. Create a test with broader targeting if you find that your split test campaign under-delivers.
3. Increase your budget. Under-delivery can also occur if your budget is too low. If your split test under-delivers, you can also try increasing the budget to reach more people. Learn more about under-delivery.
4. Make sure that your ad sets are different enough when testing audience, placement or delivery selection. When your ad sets are too similar, we may not be able to confidently declare a winner. For example:
- Let's say that you've tested your ad with two audiences: men (age 18-20) who previously shopped on your website vs men (age 20-22) who previously shopped on your website. These audiences may be too similar to yield conclusive results, and each audience may be too small.
- Try a new test with greater differences. For example: A custom audience of people who previously shopped on your website vs people who may be interested in your products based on Facebook targeting.
While one split test can provide valuable learnings, you can build on initial learnings by running more tests and developing a testing or advertising strategy.
Here are a few ways that you can continue testing and refining your advertising strategy. It's important to note that these are only recommendations, and testing strategies will vary by advertiser and vertical.
After testing audiences:
If your ad performed better with one audience, then this audience might represent the type of audience that is more likely to be interested in your ads. To better understand and take advantage of these results:
- Run the same test with a different objective. Your results can vary depending on your chosen objective and using a different one may yield more useful learnings.
- Test different creatives with the winning audience. The audience may respond better to certain imagery or text or ad formats.
- Test different placements with the winning audience. The audience may be more engaged on mobile than on desktop, for example.
After testing ad creative:
If the results show better performance for a certain ad creative, then you know that some aspect of that creative resonated more with your target audience. Keep testing to better understand and optimise these results:
- Run the same test with a different objective. Your results can vary depending on your chosen objective and using different ones may yield more useful learnings.
- If you tested one component, such as your ad's headline or text, run a test with a different creative variable, such as imagery or call to action (CTA).
- If you tested a video ad against a single image ad, you can take the winning ad and test it against another ad that uses the same format. For example, let's say a video ad performed better than a single image ad. You can then test that video ad against another video ad to see which one performs better.
After testing placements:
If your ad performed better on a certain placement, be careful about excluding other placements from all future campaigns. Your ads may perform better on certain placements with certain audiences, or certain formats may do better on certain platforms. You can try these tests to help refine your results:
- Run the same test with a different objective. Your results may vary depending on your chosen objective and using different ones may yield more learnings.
- If you tested mobile placements against desktop placements and mobile performed better, you may not want to immediately stop running ads on desktop. Try testing two different ad creatives on mobile to see if certain formats (e.g. video, single image) perform better on mobile.
- If you test Instagram placements versus Audience Network placements versus Facebook placements, try testing the winning ad set with different audiences.
After testing delivery optimisation:
If one ad performed better with a particular delivery optimisation, you can continue testing different delivery selections to understand the best option for your company or products:
- If you tested different conversion windows, keep the conversion window the same for all your ad sets and test optimisations.
- If you tested optimisations, keep the delivery optimisation the same for all your ad sets and test different conversion windows.
Learn more about the basics of setting up a split test.
* Source: Facebook