This case study gives you an overview of one of the tests I carried out for an e-commerce client to improve her store's overall conversion rate, and the results it produced.

After an in-depth analysis of my client's Google Analytics account, I noticed there was a very high basket abandonment rate.

This meant that while a lot of people were interested in her products and adding them to the basket, only a small percentage of those actually went on to buy.

How did I calculate the basket / cart abandonment rate? 

The easiest way to do this is to enable enhanced e-commerce tracking in your Google Analytics account and then make sure the code on your e-commerce store is set up to send the necessary e-commerce data to your analytics account.
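
For stores on the classic analytics.js tracker, the purchase data is sent with the Enhanced Ecommerce ('ec') plugin. A minimal sketch, with placeholder product and order values, looks like this:

    // Load the Enhanced Ecommerce plugin on the standard analytics.js tracker.
    ga('require', 'ec');

    // Describe the purchased product (all values here are placeholders).
    ga('ec:addProduct', {
      id: 'SKU-1234',
      name: 'Example product',
      price: '29.99',
      quantity: 1
    });

    // Mark this hit as a purchase and attach the order details.
    ga('ec:setAction', 'purchase', {
      id: 'ORDER-0001',   // transaction ID (placeholder)
      revenue: '29.99'
    });

    // The e-commerce data rides along with the next hit sent to GA.
    ga('send', 'pageview');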

However, my client didn't have the code set up on her website, and her e-commerce platform (Volusion) didn't provide it, so it would have needed to be custom coded. My client decided against having this done, so I was limited in what could be measured.

But I didn’t let that stop me.

During the data gathering stage I set up a checkout funnel in her analytics account with a sale being the goal. Depending on where your checkout pages are hosted this can be simple or complicated (Google ‘cross domain tracking’ and you will find out why).
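
If, like many hosted platforms, the checkout lives on a different domain from the storefront, the analytics.js tracker needs the linker plugin so both domains share one session. A sketch, with a placeholder tracking ID and checkout domain:

    // Create the tracker with cross-domain linking enabled
    // ('UA-XXXXX-Y' and the domain below are placeholders).
    ga('create', 'UA-XXXXX-Y', 'auto', { allowLinker: true });

    // Load the linker plugin and tell it which domains to decorate
    // links to, so the session continues onto the checkout domain.
    ga('require', 'linker');
    ga('linker:autoLink', ['checkout.example.com']);

    ga('send', 'pageview');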

Once I set up the funnel I was able to see how many users visited the basket/cart compared to the number of users who completed the purchase.

Checkout process funnel

You can see from the data that 487 people viewed their cart, but only 297 proceeded to the checkout and only 190 orders were placed.

This meant the cart abandonment rate was roughly 39% (190 of the 487 cart viewers never reached the checkout) and the checkout abandonment rate was roughly 36% (107 of the 297 checkout visitors never completed an order).
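
For anyone who wants to reproduce the arithmetic, the rates fall straight out of the funnel counts:

    // Funnel counts from the report above.
    const cartViews = 487;
    const checkoutStarts = 297;
    const orders = 190;

    // Abandonment at each step = 1 - (people who continued / people who arrived).
    const cartAbandonment = 1 - checkoutStarts / cartViews;   // ~0.3901 -> 39.01%
    const checkoutAbandonment = 1 - orders / checkoutStarts;  // ~0.3603 -> 36.03%

    console.log((cartAbandonment * 100).toFixed(2) + '%');     // "39.01%"
    console.log((checkoutAbandonment * 100).toFixed(2) + '%'); // "36.03%"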

The simplest optimisation strategy is to optimise the places where you see the biggest wins, so in this case, decreasing the cart abandonment rate by optimising the cart page was the first place to start.

What was the current overall e-commerce conversion rate?

It is important to baseline some KPIs. The first is the overall e-commerce conversion rate, which you can find in Google Analytics. For this particular client it was 4.99% for the two weeks before I ran my first test.

ecommerce conversion rate

What did I decide to test?

During the data gathering stage I installed Hotjar on my client's site. This is a tool that gathers data on user behaviour, and it let me see how users interacted with the cart page. Hotjar allows you to record sessions so you can watch scrolling, mouse movements and clicks.

In many recordings I identified heavy activity – lots of clicks or mouse movement in one area – around some text that reminded users of the free shipping threshold. I could see people hovering their mouse over the text, then clicking on the menu to continue shopping.

When I calculated the numbers from the data, I found that once people left the cart page, only a small percentage actually came back to continue the purchase.

I concluded there were several reasons for this behaviour.

  1. The reminder was a distraction. What would happen if we removed it altogether? Would people just continue through the checkout?
  2. The threshold was too high. People were looking around but were not prepared to add extra items to make up the cart to the free shipping threshold.
  3. People wanted to add more but they struggled to navigate the site to find what they wanted.

What did I do next? I looked for the easiest test with the most immediate gains.

To test 1, I only needed to remove some text.

To test 2, I would need to add a different shipping threshold – this was complicated in the client's platform, and she wasn't happy with this change because it affected her profitability.

To test 3, I needed to re-design the menu.

I concluded test 1 was the quickest and easiest to try. So my hypothesis was born.

If users were not reminded of the free shipping threshold, they would continue through the checkout process.

The test followed naturally: I would simply remove the reminder and see what effect that had.

How did I create the test?

I used Google Optimize. It's fairly new to the market, and I am using the free version, so costs are kept fairly reasonable for my clients. (Just take a look at VWO's pricing and you will see what I mean.)

The test was fairly simple to set up. I am a web developer so I know HTML and CSS. I only needed to hide a block of text so that was a simple CSS change.
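
In this case the variant only needed one rule, something along these lines (the selector here is hypothetical – the real one depends on the client's theme):

    /* Hide the free-shipping reminder in the test variant.
       `.free-shipping-note` is a made-up selector standing in for the
       element holding the reminder text on the client's cart page. */
    .free-shipping-note {
      display: none;
    }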

How did I monitor the test?

Every day (because I am obsessive like that) I checked the results of the test. There is a tab in Google Optimize where you can see how it is performing. There is also a section in Google Analytics called Experiments where you can see more data associated with your test – data like your e-commerce conversion rate, your bounce rate and so on.

One of the most amazing benefits of using Google Optimize and Google Analytics together is that you can segment your test traffic. That means you can look at the behaviour of only those people who saw your test page, and really dig into the data to see how the changes affected behaviour.

You can see if actions further down the line were as a result of your test.

That is super useful when understanding the psychology of your users.

So, which variation won? The test or the original?  

Defining a winner comes down to three things.

  1. The size of the difference you are measuring (are you detecting a small change or a large one?)
  2. The sample size (how many people took part in your test)
  3. The confidence you want in your results.

So if you want to declare a winner with 99% confidence, you will need a much larger sample size than if you are happy to be more risky and only want an 80% confidence level.

Equally, if you are detecting smaller changes you will need a larger sample size, to be sure you have taken natural data fluctuations into account. If you are detecting larger changes you can get away with a smaller sample size, because the change you are detecting will be bigger than the natural fluctuations.

Clear as mud?

Luckily Google Optimize takes all that into consideration and it will tell you when you have a winner.
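
To make that concrete, here is a rough sketch of the standard frequentist sample-size approximation for comparing two conversion rates. It is only for intuition – Google Optimize itself uses Bayesian methods, so its numbers will not match this exactly, and the rates in the example calls are illustrative:

    // Rough sample size needed per variant to detect the difference between
    // two conversion rates p1 and p2 (standard normal-approximation formula).
    // zAlpha is the z-score for your confidence level (1.96 ~ 95%),
    // zBeta is the z-score for your statistical power (0.84 ~ 80%).
    function sampleSizePerVariant(p1, p2, zAlpha, zBeta) {
      const variance = p1 * (1 - p1) + p2 * (1 - p2);
      const effect = p1 - p2;
      return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (effect * effect));
    }

    // A large change (43% -> 50%) needs roughly 800 people per variant...
    console.log(sampleSizePerVariant(0.43, 0.50, 1.96, 0.84)); // ~793
    // ...while a small change (43% -> 45%) needs closer to 10,000.
    console.log(sampleSizePerVariant(0.43, 0.45, 1.96, 0.84)); // ~9655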

From my results you can see that the variation increased the proportion of people moving through to the payment page to 50.54%, compared with 43.08% for the original page.

conversion optimisation results

But, is it really a winner? Let’s look at what really matters… sales.

I segmented the test traffic – oh the power of combining two Google tools!!!

I wanted to compare how well the non-test traffic performed against the test traffic.

It's sales that matter

Just look at that. The test traffic was converting at 24.37%, compared to just 2.45% for the non-test traffic.

THAT IS FREAKING AWESOME!

The average order value for this group was also higher! There could be several reasons for that, but that is an analysis for another day.

So how much extra revenue was generated from that test? Using the conversion rates and average order sizes above, per hundred visitors the revenue generated is $820.78 for the test traffic and $74.40 for the non-test traffic.
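
For transparency, here is the arithmetic. The average order values are back-calculated from the figures above (the screenshot is not reproduced here), so treat them as approximate:

    // Revenue per 100 visitors = 100 x conversion rate x average order value.
    function revenuePer100(conversionRate, avgOrderValue) {
      return 100 * conversionRate * avgOrderValue;
    }

    // Average order values of ~$33.68 (test) and ~$30.37 (non-test) are
    // implied by the reported revenue and conversion rates.
    console.log(revenuePer100(0.2437, 33.68).toFixed(2)); // ~820.78 (test traffic)
    console.log(revenuePer100(0.0245, 30.37).toFixed(2)); // ~74.41 (non-test traffic)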

So the test traffic generated roughly eleven times the revenue of the non-test traffic – around 1,000% more.

Awesome! And that was only the first test I ran. Remember, start with the biggest gains. I have since run a few more experiments, so keep your eyes peeled for more case studies.

What does this mean for you? 

If you would like to improve the performance of your e-commerce store please contact me for more details on my 1 to 1, “done with you” optimisation services.
