Relevance is one of the most important attributes of good site search. But it isn’t something you can just flip a switch and turn on. Honing search results for relevance is a process of trial and error as you adjust your back-end search parameters to meet your users’ needs and optimize conversions.
Most of us are familiar with A/B testing a website as a way to measure how much a particular variable affects audience behavior and conversion metrics. In a previous blog post, we wrote about how to A/B test your site search for relevance.
Today, we’ll dive into how to apply Algolia’s advanced A/B testing feature to your site search.
As you already know, A/B testing removes the guesswork from iterations on your relevance by showing you exactly how a particular setting or variable affects your predetermined conversion goals.
For example, you might want to test the way you rank the search results you return to users. Previously, you’ve added the business metric “publication date” to the ranking strategy, but you’re wondering if it might be more effective to use the business metric “sales ranking” instead. You think it might drive more conversions, but you want more data before you commit to the change site wide.
In an A/B test, half of the users on your website are exposed to the original (or control) search experience — the one where your site ranks results by publication date. The other half are exposed to the sales ranking system. You can then collect data about the behavior of both groups and get a definitive answer on which setup drives more conversions.
One of the advantages of using Algolia A/B testing is that anyone on your team can create A/B tests entirely from our dashboard, without a single line of code (but it is of course available from the API if that’s what you prefer to do), and on any device.
What can you test with site search A/B testing? Any index setting that affects relevance is a candidate: custom ranking attributes, searchable attributes, typo tolerance, or synonyms, for example.
Let’s look at a practical application of A/B testing with Algolia: an easy, six-step process. (Note: you’ll need to set up Click Analytics first in order to capture click and conversion events.)
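Click Analytics works by having your search requests opt in to click tracking and identify the searcher. As a minimal sketch (the query, index, and user token here are hypothetical placeholders), this is the kind of search request body involved — setting `clickAnalytics` to true makes Algolia return a `queryID` that your later click and conversion events can reference:

```python
import json

# Hypothetical search request body for Algolia's query endpoint.
# "clickAnalytics": True asks Algolia to return a queryID with the results,
# which click/conversion events must reference to be attributed correctly.
search_params = {
    "query": "sofa",
    "clickAnalytics": True,   # required to capture click & conversion events
    "userToken": "user-42",   # placeholder; ties this search to later events
}

print(json.dumps(search_params, indent=2))
```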
Start in your search dashboard. In this example, an index has already been created with a custom ranking defined on the number of sales per item (aka “sales ranking”). But you’re not sure this is the best ranking strategy, so you’d like to test it against ranking by publication date.
When you A/B test, you’re testing two search indices against each other, so the first thing you need to do is create a replica — aka a copy of your search index. To create a replica, go to the Indices section of the left-hand toolbar and click the “Replicas” option. From there, you can select the option to “Create Replica Index.”
The replica will be synchronized with your main index, so any changes to the main index will be forwarded to your replica. They are identical apart from the changes you manually make to the replica.
Once you’ve created your replica, it’s time to configure it with the factor you are testing. In this case, we’ve defined a different custom ranking based on publication date instead of sales ranking.
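The dashboard steps above can also be expressed as two index-settings payloads sent through the Algolia API: one declaring the replica on the main index, and one overriding the replica’s custom ranking. This is a sketch with hypothetical index and attribute names; `replicas`, `customRanking`, and the `desc(attribute)` syntax are standard Algolia settings:

```python
import json

# Sketch of the two setSettings payloads (index/attribute names are
# hypothetical). First, declare the replica on the main index, which keeps
# the same sales-based custom ranking:
main_index_settings = {
    "replicas": ["products_by_publication_date"],
    "customRanking": ["desc(sales_rank)"],
}

# Then override only the custom ranking on the replica. Everything else
# stays synchronized with the main index:
replica_settings = {
    "customRanking": ["desc(publication_date)"],
}

print(json.dumps(main_index_settings))
print(json.dumps(replica_settings))
```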
Once you’ve configured your replica, go to the A/B testing tab on the left-hand toolbar. Click on “New test” to create your new A/B test.
Once you create a new test, you’ll have the option to pick the two variants you plan to test. The first variant should be your main index or your control. The second will be the replica you just created. It’s a good idea to provide a description for each variant for later reference, such as “custom ranking based on sales rank” and “custom ranking based on publication date.”
Underneath the variants, define the percentage traffic split between the two. An ideal split is 50/50, but if you have a high-traffic website (more than 100,000 page views a month), you can define an unequal traffic split and still see statistically significant results.
Ideally, an A/B test should run for what we call two business cycles, which accounts for short-term seasonality effects. For an ecommerce website, a business cycle is typically one week, so in this case the test should run for 14 days.
Once you’ve defined these parameters, simply press “Create.”
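As mentioned earlier, the same test can be created through the API rather than the dashboard. Here is a sketch of the request body for Algolia’s A/B Testing API; the test name, index names, and descriptions are the hypothetical ones used above, and the end date is computed as two one-week business cycles from now:

```python
import json
from datetime import datetime, timedelta, timezone

# Sketch of an A/B test creation payload (names are hypothetical).
# endAt is set 14 days out: two one-week business cycles.
end_at = (datetime.now(timezone.utc) + timedelta(days=14)).strftime(
    "%Y-%m-%dT%H:%M:%SZ"
)

ab_test = {
    "name": "Sales rank vs. publication date",
    "variants": [
        {"index": "products", "trafficPercentage": 50,
         "description": "custom ranking based on sales rank"},
        {"index": "products_by_publication_date", "trafficPercentage": 50,
         "description": "custom ranking based on publication date"},
    ],
    "endAt": end_at,
}

print(json.dumps(ab_test, indent=2))
```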
The most important part of A/B testing is analyzing the results to determine which variant was most successful. Algolia’s results panel is clearly structured to help you make that decision:
In the results panel, you’ll see identifying information about the test: the name, the scenarios (the two indices you tested), and the status. The test will either be “Running,” “Stopped,” “Finished,” or “Failed.” You can click on the analytics bar next to each scenario to see the individual metrics for each.
Here are the other important components of the test panel: the click-through rate, the conversion rate, and the confidence score for each variant.
Once your test reaches a 95% or greater confidence score and your two-cycle time period has passed, you can analyze the results to see which index generated more clicks and conversions.
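Algolia computes the confidence score for you, but to build intuition for what it represents, here is a simplified sketch of the underlying statistics: a two-proportion z-test on click-through rates. The traffic numbers below are invented for illustration; Algolia’s actual calculation may differ in its details:

```python
from math import sqrt, erf

def ctr_confidence(clicks_a, searches_a, clicks_b, searches_b):
    """Two-proportion z-test on click-through rates. Returns the probability
    (0..1) that the observed CTR difference is not due to chance."""
    p_a = clicks_a / searches_a
    p_b = clicks_b / searches_b
    # Pooled proportion under the null hypothesis (no real difference)
    p_pool = (clicks_a + clicks_b) / (searches_a + searches_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / searches_a + 1 / searches_b))
    z = abs(p_a - p_b) / se
    # Two-sided confidence: P(|Z| < z) for a standard normal Z
    return erf(z / sqrt(2))

# Invented numbers: variant B's CTR (10%) beats A's (9%) over ~10k searches each.
conf = ctr_confidence(clicks_a=900, searches_a=10_000,
                      clicks_b=1_000, searches_b=10_000)
print(f"confidence: {conf:.3f}")
```

With identical CTRs the function returns 0, and the confidence grows with both the size of the difference and the number of searches, which is why a test needs enough traffic before its results are trustworthy.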
In the A/B testing example above, you can see that the sales rank custom setup (Scenario B) performed better with a 5.2% uplift on CTR and a 4% increase in conversion rate. The confidence score for both is excellent, meaning there were enough searches and events to draw the conclusion that B yields better results. Once the test is over, you can apply the winning configuration to the main index.
In this test, the numbers demonstrate that ranking search results by sales rank leads to more conversions than ranking by publication date does.
A/B testing is a scientific process, so it’s important that you consistently follow a few guidelines to ensure the most accurate results.
User tokens are the user identifiers that connect search events with eventual conversions. IP addresses alone are not sufficient for accurate tracking and can lead to incorrect results. We highly recommend that you explicitly use a user token for the most accurate results. If you aren’t already using them, you can generate them with Algolia’s insights cookie.
It is very important to use the same user token for searches and for click events because that’s how the A/B test will connect the users with the events that those users performed. If you’re using two different user tokens, then the A/B test won’t work.
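Concretely, the events you send to Algolia Insights should carry the same `userToken` as the search request, plus the `queryID` that the search response returned. A sketch, with all identifiers as placeholders:

```python
import json

# Placeholder token; in production, use a stable identifier for each user.
USER_TOKEN = "user-42"
QUERY_ID = "query-id-from-search-response"  # returned when clickAnalytics=true

# Click and conversion events that the A/B test can join, because they share
# one userToken and reference the queryID of the originating search.
click_event = {
    "eventType": "click",
    "eventName": "Product Clicked",
    "index": "products",
    "userToken": USER_TOKEN,   # same token as the search request
    "queryID": QUERY_ID,
    "objectIDs": ["sku-123"],
    "positions": [1],
}

conversion_event = {
    "eventType": "conversion",
    "eventName": "Product Purchased",
    "index": "products",
    "userToken": USER_TOKEN,   # must match, or the test can't attribute it
    "queryID": QUERY_ID,
    "objectIDs": ["sku-123"],
}

print(json.dumps([click_event, conversion_event]))
```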
If your website uses search automators (for example, if you have an option where users can subscribe to alerts when their search returns new results), then exclude those search agents from your A/B test. If you don’t, you’ll end up with a single “user” who is performing a vast number of searches but never clicks or converts. This can severely bias one side of your test results. Make sure automations are excluded from both your testing and your analytics setups.
The confidence score is calculated based on the number of searches conducted — so you may actually reach a high confidence level within just a few hours of starting your test. That doesn’t mean these results are correct, though. It’s important to leave your test running for two full business cycles to account for all the seasonality effects that might change your results. For example, people search differently on the weekends versus weekdays.
Only change one search element at a time so that you can clearly test the effects of that change on your users’ behavior. Trying to run more than one A/B test at a time can lead to confusing and unclear results. How do you know which change led to which behavior?
Once the test is running, don’t make any changes to either of the test indices, the traffic allocations, or the test goals. Changing variables mid-experiment will make the results irrelevant, as it will be impossible to tell which modification impacted user behavior.
Improving relevance is the shortcut to improving your site search functionality. You can experiment to hone those results, but you don’t want to tank your conversions or customer satisfaction in the process.
Just like you A/B test other components of your website, you should A/B test your site search to eliminate the guesswork and zero in on the variables that make your results useful and effective. Test only a subset of the population to avoid costly mistakes — this will also give you confidence when you go to make major changes.
To learn more about A/B testing with Algolia and to see how some major companies have used it to improve their conversions, watch our A/B testing Master Class, or visit our A/B testing product page.
Join Loïse Mercier, Customer Success Specialist, and Rémy Zeiss, Product Manager, for a short webinar jam-packed with information on A/B testing in general and with Algolia specifically.