A/B split testing on your eCommerce site is a great way to gather intelligence on how your website is actually performing for your customers. Depending on the sophistication of the testing software you are using, you can collect anything from shopping cart conversions to the performance of an entire campaign through your website.

Nowadays, I often wonder whether the results are driving impulsive decisions. I’ve read many articles on the topic, especially case studies, and I’ve worked with companies on setting up their testing software and reporting. I often find myself playing devil’s advocate, trying to creatively think of ways to interpret the data that the team may be overlooking. The latest case study I read, one that caused me some worry, was written by an A/B split testing company! Instead of referencing their article, let me recreate the case study so we can learn from their mistakes.

The A/B split test

  1. A particular listing page of products, in a single category, was tested.
  2. This listing page had filters at the top of the listing results, so that the customer could filter down to a specific brand or function of the product: picture drop-down menus sitting above the product grid.
  3. They tested removing the filtering at the top of the listing, thereby shifting the products higher up the page. The theory was that removing the filters would increase … something. I wasn’t sure what they were hoping for here; the direction wasn’t clear.
  4. This test increased engagement on the page by almost 30%. 

Well, that sounds great, doesn’t it? Unfortunately, we do not have any more data with which to interpret these results. Without further information, I would not change this page at all. In fact, my initial reaction is to say that this increase in customer engagement is a bad thing. If a customer has to click more, and spend more time on the page, to find what they are looking for, then we have made it 30% harder for them to find the product they want. Maybe.
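
To make this concrete: even before debating whether the lift is good or bad, you need the underlying visit and engagement counts to know whether it is real at all. Below is a minimal sketch of a two-proportion z-test in Python; every number in it is hypothetical, a stand-in for the raw counts your testing tool should report.

```python
# A minimal sketch of a two-proportion z-test. All counts below are
# hypothetical -- only the raw numbers from your own testing tool matter.
from math import sqrt

def two_proportion_z(engaged_a, visits_a, engaged_b, visits_b):
    """How many standard errors separate the two engagement rates?"""
    p_a, p_b = engaged_a / visits_a, engaged_b / visits_b
    pooled = (engaged_a + engaged_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    return (p_b - p_a) / se

# Hypothetical: the control engages 200 of 1,000 visitors; the no-filter
# variation engages 260 of 1,000 -- a "30% increase" in engagement.
z = two_proportion_z(200, 1000, 260, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 95% level
```

And even a significant result only tells you the lift is real; it says nothing about whether it is good for the customer, which is the question above.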

Apply these changes to your A/B split testing

  1. Have a hypothesis! What are we changing? Why? How do we do this? You cannot expect a change to work if you don’t know why you are making it in the first place.
  2. If you are testing functionality on a specific type of page, try testing all of those pages. This may not be an option in your testing software, but most tools can target wildcards: www.yourstore.com/category/listing/* (see the first sketch after this list). No matter where the customer finds themselves in a category’s product listing pages, the test will be running.
  3. By testing all instances of the same type of page, we ensure consistency in the test. It would defeat the purpose of the test if the customer could reach a different page layout, or if the customer got aggravated by the changing interface design. Your analytics should capture the variation both in aggregate and as singular pages, so you can view the changes on each specific category’s products (see the second sketch after this list). After all, changing the filtering may help one category and not another, and we would like to watch that variable.
  4. If the theory is to test moving elements on the page, try to get heatmaps of the customers’ mouse movement on each variation. You will always notice interesting movements, and they may inspire you to run a test based on an unexpected result.
  5. To test efficiently, you will need all the data possible to help test the hypothesis. If it’s overall conversion, then you will need a lot of different data; but if it’s something simpler, like a decrease in bounce rate, then you can minimize the amount of data you will need to interpret.
  6. Think creatively. What else could this test have affected? Maybe an increase in on-site search, an increase in customer service calls, or a decrease in good search-engine-friendly content that will have a future impact.
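
Here is the first sketch, promised in item 2: what wildcard targeting amounts to, illustrated with Python’s fnmatch module. The pattern syntax your testing software accepts may differ, so treat this as an assumption rather than a spec.

```python
# A minimal sketch of wildcard targeting. The exact pattern syntax is
# an assumption -- check what your testing software actually supports.
from fnmatch import fnmatch

TEST_PATTERN = "www.yourstore.com/category/listing/*"

def page_in_test(url):
    """Should this page receive the test variation?"""
    return fnmatch(url, TEST_PATTERN)

print(page_in_test("www.yourstore.com/category/listing/shoes"))       # True
print(page_in_test("www.yourstore.com/category/listing/hats?page=2")) # True
print(page_in_test("www.yourstore.com/checkout"))                     # False
```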
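
And the second sketch, promised in item 3: reporting the test both in aggregate and per category. The event rows are invented for illustration; in practice they would come from an export out of your analytics.

```python
# A minimal sketch of aggregate-plus-per-category reporting. The rows
# are invented; real ones would come from your analytics export.
from collections import defaultdict

events = [  # (category, variation, engaged?)
    ("shoes", "A", False), ("shoes", "B", True),
    ("hats",  "A", True),  ("hats",  "B", False),
    # ... one row per visit
]

visits, engaged = defaultdict(int), defaultdict(int)
for category, variation, did_engage in events:
    # Count each visit once under its own category and once under "ALL".
    for key in ((category, variation), ("ALL", variation)):
        visits[key] += 1
        engaged[key] += did_engage

for key in sorted(visits):
    category, variation = key
    print(f"{category:>5} / variation {variation}: {engaged[key] / visits[key]:.0%}")
```

A breakdown like this is what lets you spot the case where removing filters helps one category while hurting another, which the aggregate number alone would hide.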

