If you are conducting A/B tests on a regular basis, you will quickly learn that not every test is a hit. In fact, it can take 6-10 (or even more) experiments before you see a positive result for the first time. And that’s okay. However, you can greatly increase your chances of a positive lift if you dig into your qualitative and quantitative data first.

In this case study, I’ll show you how we at iZettle have achieved an average 17% uplift in registration conversions across all our European markets by basing our A/B tests on the results of multiple strands of research.

Guesswork vs. Best Bet

There are two ways to set an A/B test hypothesis:

  1. You test something that you have read in a case study (e.g. the classic red button vs. green button test), that you may have heard at a conference, or simply that you think is interesting or logical to test.
  2. You test something that fixes an issue – based on your data – for your users/customers.

I encourage you to go with option 2. Fixing poor user experience is not only the most helpful thing you can do for your (potential) customers, but it’s also your best bet to increase your conversion rate. To identify these friction points on your website, you should run extensive research into user behavior.

The iZettle case study

[Screenshot: the original iZettle homepage]

This had been the iZettle homepage ever since our first product launched, and it performed really well for five beautiful years. However, in the second half of 2016 we had to break the status quo. The reason: this homepage focused on only one product (the card reader), while the company had started to expand its services with other financial solutions. We therefore needed a new starting page that clearly communicated our new multi-product solutions.

The design team quickly put together a test version of the site based on the new communication direction.

[Screenshot: the new “one headline” homepage]

We called this new version the “one headline version”. As you can see, it was more general and more inspirational than descriptive. We tested this variation of the site with Optimizely on a tiny share (5%) of our website visitors, and we ran several types of research to understand in depth how our audience reacted to the old and new versions of the website.
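As a side note on how such a test is evaluated: Optimizely computes significance for you, but the underlying comparison is essentially a test of two conversion proportions. Below is a minimal, hypothetical sketch using statsmodels with made-up numbers; it is not our actual data or tooling.

```python
# Hypothetical sketch: a two-proportion z-test on registration conversions.
# The visitor/registration counts below are illustrative, not real data.
from statsmodels.stats.proportion import proportions_ztest

registrations = [420, 510]        # [control, "one headline" variant]
visitors = [21_000, 21_400]

z_stat, p_value = proportions_ztest(count=registrations, nobs=visitors)

control_rate = registrations[0] / visitors[0]
variant_rate = registrations[1] / visitors[1]
lift = variant_rate / control_rate - 1

print(f"control: {control_rate:.2%}, variant: {variant_rate:.2%}, lift: {lift:+.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```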

Here are the most important research methods we used during this project:

  • Usability testing
  • Data analysis in Google Analytics
  • Heatmapping
  • Exit-intent surveys
  • Data analysis in our own SQL database
  • Five-second testing

Usability testing

First we ran usability tests, which basically means inviting users to sit down in front of a computer and asking them to go through your website. You can do this based on a scenario, and you can also give them specific tasks. As simple as it sounds, it is very insightful. If you want to learn more about usability testing, read this article about Usability Testing for Data Analysts.

I’m a Data Analyst and not a UX researcher, but I still like to be part of these tests (or at least see the recordings), because it helps me to:

  • get an initial hunch for my data analyses (looking at endless data without any reference point can sometimes be very intimidating)
  • understand a bit more about why users behave the way they behave (in data you can only see what they do, but nothing about why they are doing it)
  • just in general meet with the users, empathize with them and learn how they interpret things from our web communication.

Besides many smaller findings, the top takeaway from our usability tests was that our potential customers need more specific information from the very first moment they land on izettle.com. In the initial new “one headline” version they did not immediately understand what products we offer and what problems they solve. Even though they could get that information later by scrolling through the whole homepage, we all agreed that their first impression had to be much better.

Data Analysis in Google Analytics

Once we had our hypothesis (“visitors need clear and immediate information about our products/services on the homepage”), it was worth investigating whether the quantitative data validated this or not. The easiest way for us to get this information was from Google Analytics.

Let me highlight some of the findings:

  1. Of the customers who started to fill in the registration form, those who actually finished it had consumed roughly 1.8x more content beforehand than those who didn’t finish. [Line chart; note: I’ve removed the actual numbers from the chart.]
  2. A significant share of users who went to our registration form immediately after the homepage turned back from there and went looking for more information.
  3. 94% of the registrants checked at least our product details page or the pricing page before registration.
  4. 70% of the registrants checked at least one of the support articles that described the products in depth before registration.

Sounds nice and simple. However, the Google Analytics research phase actually took around two weeks. We went through many, many different reports, metrics, subsegments and cohorts. We also spent a significant amount of time thinking about what these results meant, as we wanted to avoid confirmation bias or any of the other statistical biases that can creep into a data research project.
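To make finding #1 concrete, here is a minimal sketch of how that comparison can be computed from a session-level export. The file name, column names and the pandas approach are illustrative assumptions, not our actual GA setup.

```python
# Hypothetical sketch: compare pre-registration content consumption between
# users who completed the registration form and those who abandoned it.
import pandas as pd

sessions = pd.read_csv("ga_registration_sessions.csv")
# assumed columns: user_id, pages_before_form (int), completed_registration (bool)

consumption = (
    sessions
    .groupby("completed_registration")["pages_before_form"]
    .mean()
)

ratio = consumption[True] / consumption[False]
print(f"Finishers consumed {ratio:.1f}x more content than non-finishers")
```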

The findings pointed in the same direction as the usability tests, and we found nothing that suggested the opposite. It’s worth keeping in mind, however, that correlation does not imply causation. As Udemy instructor Frank Kane said:

“You can’t say anything about causality without an experiment, but correlation can tell you what experiments you might wanna run.”

Heatmapping

Another quick and easy quantitative research type is website heatmapping. We already had an idea from Google Analytics of which pages were our most visited, so it made sense to run heatmaps to find out how users were interacting with them.

An interesting finding for us was that on our new “one headline” start page ~50% of our visitors were not scrolling at all – even though there was plenty of important information below the fold that could have helped them understand our service before they bounced.

[Scroll heatmap of the new homepage]

Why? We couldn’t know for sure, but our guess was that they didn’t realize they could scroll to see the additional information.
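Scroll-map tools report this share directly, but it can also be sanity-checked from raw scroll-depth events. A minimal sketch, assuming a hypothetical export with one row per pageview and the maximum scroll depth reached:

```python
# Hypothetical sketch: share of homepage visitors who never scroll,
# from an assumed event export (file and column names are illustrative).
import pandas as pd

events = pd.read_csv("homepage_scroll_events.csv")
# assumed columns: pageview_id, max_scroll_depth_pct (0-100)

non_scrollers = (events["max_scroll_depth_pct"] == 0).mean()
print(f"{non_scrollers:.0%} of visitors did not scroll at all")
```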

Data Analysis in our own SQL database

Another layer of our analysis was to check the long-term goals. At iZettle, it’s important that our users are not just registered, but active and returning customers. We wanted to understand whether information consumption on our website drives long-term retention. Luckily, we found that information consumption (before registration) correlates with our long-term goals (e.g. activation, retention) too.
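As an illustration of that last step, here is a minimal sketch of relating pre-registration content consumption to a long-term outcome such as retention. The table and column names are hypothetical; in practice this ran as SQL against our own database rather than a CSV in pandas.

```python
# Hypothetical sketch: does pre-registration content consumption correlate
# with long-term retention? Column names are illustrative only.
import pandas as pd

users = pd.read_csv("registered_users.csv")
# assumed columns: user_id, pages_before_registration (int), retained_90d (0/1)

# Bucket users by how much content they consumed before registering,
# then compare retention rates across buckets.
users["consumption_bucket"] = pd.qcut(
    users["pages_before_registration"], q=4, duplicates="drop"
)
print(users.groupby("consumption_bucket")["retained_90d"].mean())

# A simple correlation as a single summary number.
print(users["pages_before_registration"].corr(users["retained_90d"]))
```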

Brainstorming

We sat together with the creative team and discussed the major findings:

  1. Our visitors need more concrete information about our products immediately when they land on our homepage.
  2. After this first impression, we should make it clear where they can find more information (again: they may not necessarily realize that they can scroll).
  3. On the other hand, we can’t overload our audience with information and too many minor details.

So we had to find the perfect balance, providing the right amount of information at every step to our users. Not too much, not too little.

Again: our first priority was to fix the homepage. Some of the versions of the above-the-fold section that were created are shown below:

[Homepage mockups: Variation 1, Variation 2, Variation 3, Variation 4]

Even though the designers created these independently of each other, there were two common solutions that appeared in each variation: visually encouraging visitors to scroll, and bringing more concrete info above the fold.

Five-second testing

All in all, there were 12 different new designs on the table. The problem was that we didn’t want to implement (i.e. design + code) each version, because that would have slowed down our A/B testing and taken up a lot of engineering time.

So we ran five-second tests on the variations and filtered out those that were clearly underperforming.

Five-second testing is a qualitative testing method. We showed 20-50 test users (people in our target group who hadn’t heard of iZettle yet) a homepage screenshot for five seconds only. The image was then removed and the testers were asked different questions about what they had seen, or thought they had seen.

A quick example of 5-second testing: