Case Studies: Chapter 9 – Fashion Retailer Identifies the Best A/B Test to Run

What’s the A/B test most likely to improve PLP micro conversion rates?

The company in this case study is a US luxury multi-brand fashion retailer operating several physical boutiques and an online store. Its multi-brand boutiques sell a wide range of designer brands.

The Challenge

As is often the case, the team tasked with conversion rate optimization (CRO) was systematically going through data to find improvements at every stage of the buyer journey. Here, analytics indicated that many customers reached the Product List Page (PLP), but micro-conversions toward Product Detail Pages (PDPs) left room for improvement.

Step 1: What A/B test is most likely to create uplift?

The visualization of choice for understanding why customers do or don't continue to the next step on a given page is in-page analytics, e.g., zone-based heatmaps.

Image: Zone-based heatmap with the Purchase Conversion Rate metric 


In this case, shoppers who reached the PLP and interacted with the filters to narrow down the products they were seeing clicked primarily on the Category and Designer filters – the first two filters listed on the page. Yet overall interaction with the filters was relatively low.

Customers who interacted with the filters were more likely to reach a PDP. So, the opportunity was to help more shoppers see and engage with the best filters. An A/B test that entices more customers to take advantage of these filters was most likely to move the needle.
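This kind of segmentation can be sketched in a few lines: split PLP sessions by whether the shopper used a filter, then compare how often each segment reached a PDP. The session records and field names below are purely illustrative, not the retailer's actual analytics data.

```python
# Hypothetical sketch: compare PDP-reach rates for PLP sessions with vs.
# without filter interaction. Field names and numbers are made up for
# illustration only.

def pdp_reach_rate(sessions):
    """Share of PLP sessions that went on to view a PDP."""
    reached = sum(1 for s in sessions if s["reached_pdp"])
    return reached / len(sessions)

# Toy session records: did the shopper use a PLP filter, and did they reach a PDP?
sessions = [
    {"used_filter": True,  "reached_pdp": True},
    {"used_filter": True,  "reached_pdp": True},
    {"used_filter": True,  "reached_pdp": False},
    {"used_filter": False, "reached_pdp": True},
    {"used_filter": False, "reached_pdp": False},
    {"used_filter": False, "reached_pdp": False},
    {"used_filter": False, "reached_pdp": False},
    {"used_filter": False, "reached_pdp": False},
]

with_filters = [s for s in sessions if s["used_filter"]]
without_filters = [s for s in sessions if not s["used_filter"]]

print(f"Filter users reaching a PDP:     {pdp_reach_rate(with_filters):.0%}")
print(f"Non-filter users reaching a PDP: {pdp_reach_rate(without_filters):.0%}")
```

A gap like the one this toy data shows (filter users reaching PDPs far more often) is what points the team toward a filter-focused test rather than, say, a layout or imagery test.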

Step 2: Design the A/B Test

The team designed a test in which shoppers' favored filters in the left-hand sidebar were automatically expanded when they landed on the PLP. This removed the need to click and gave those filters more screen real estate, and it was very simple to implement.

Step 3: Run the A/B Test and confirm the winner

Using their Google Optimize platform, the team implemented this A/B test and confirmed that the simple change improved their overall ecommerce conversion rate by 8.45%. Revenue rose even more, up 26%, because the variant guided shoppers to consider products by designers they would otherwise have missed.
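As a sanity check on a result like a +8.45% relative uplift, a two-proportion z-test can confirm the difference is unlikely to be noise. The visitor and conversion counts below are invented for illustration; only the method is the point, not the retailer's actual traffic.

```python
import math

# Illustrative sketch: verifying a relative conversion-rate uplift with a
# two-proportion z-test. All counts here are hypothetical.

def relative_uplift(rate_control, rate_variant):
    """Relative change of the variant's rate vs. the control's rate."""
    return (rate_variant - rate_control) / rate_control

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: control PLP vs. expanded-filters variant.
control_conv, control_n = 2000, 100_000   # 2.000% conversion rate
variant_conv, variant_n = 2169, 100_000   # 2.169% = +8.45% relative uplift

uplift = relative_uplift(control_conv / control_n, variant_conv / variant_n)
z = two_proportion_z(control_conv, control_n, variant_conv, variant_n)
print(f"Relative uplift: {uplift:.2%}")
print(f"z-statistic: {z:.2f}")  # |z| > 1.96 means significant at the 95% level
```

With these made-up counts the z-statistic clears the conventional 1.96 threshold, which is the kind of confirmation a platform like Google Optimize performs internally before declaring a winner.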

Step 4: Why did the A/B Test win? What were the success factors?

In evaluating the test variant, the team confirmed that interaction with customers' favored PLP filters had increased compared to the original PLP. That increase is associated with a higher likelihood of reaching PDPs and making a purchase.

Image: Zone-based Heatmap of the winning test variant


The moral of the story

The team could have spun up a myriad of A/B tests on the PLP, most of which would likely have gone nowhere. And consider this: if you had suggested that expanding PLP filters in a way that pushes other filters below the fold was a good idea, you might have been laughed out of the room. But that is exactly what the team A/B tested here, because the idea was informed by data. They reaped the rewards by letting data speak louder than opinions.