Anyone who has ever done testing on their website has undoubtedly run some poorly designed experiments; that's how we learn. It's easy to make subtle mistakes in the design of a test that cause "messy" results, but read on to learn how to avoid a few of them.
FitnessAnywhere offers a product called the TRX, a very effective portable gym. One of their highly trafficked pages is the "personal fitness" page. Once we got into the heads of their target customers via personas, and understood their intent and motivations when they arrive at this page, we quickly identified the things they should be testing.
This is the original version of this page [click to enlarge]. We wondered if the image of a man working out was resonating with the women who were coming to the site. We also hypothesized that the "watch video" call to action wasn't standing out effectively in the active window due to its placement, and we knew how persuasive the video is in helping visitors truly understand the product benefits.
The other thing we noticed was that there weren't any text links to help visitors find answers to the various questions that they might have while reading the content on this page.
Finally, the benefits that this product offers visitors were hidden all the way at the bottom of the page. This was incredibly valuable content that we knew would work more effectively if it were brought further up on the page.
FitnessAnywhere started to move forward with implementing the recommendations we gave them to fix some of these challenges. Ideally, they would have implemented each change one at a time, and tested them separately, but then this blog post never would have been written.
The downside to testing one thing at a time is that if you get very little traffic, or the changes are slight, you may have to run a single test for a very long time before you see a result. The benefit of testing one thing at a time is that you will know exactly what caused a lift or drop in the results.
If you test several things at once, all you will see is the aggregate result of those changes. This means that if the aggregate result is negative, it doesn't necessarily mean all of the changes were bad – just that the negative ones outweighed the positive effect the others had, or vice versa. In fact, it is far more likely that some of the changes are good and some are bad. You don't know which changes caused the overall change, and which ones tempered it. It's even possible that what looks like a negative change is actually a positive change in disguise (if even one of the changes is having a positive impact that is being dragged down by the other changes). There's a reason why your science teacher taught you to "isolate the variables" in your experiments!
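The arithmetic behind that warning is easy to sketch. The numbers below are invented for illustration (not FitnessAnywhere's data): one bundled change helps conversion, another hurts it, and the aggregate result hides which is which.

```python
# Hypothetical example: two changes shipped together in one test variant.
baseline_rate = 0.040   # control conversion rate (assumed)
effect_a = +0.012       # lift from a helpful change (assumed)
effect_b = -0.015       # drop from a harmful change (assumed)

# The variant's observed rate is the combination of both effects.
combined_rate = baseline_rate + effect_a + effect_b

aggregate_lift = (combined_rate - baseline_rate) / baseline_rate
print(f"Aggregate lift: {aggregate_lift:+.1%}")            # -7.5%, looks like a loser
print(f"Change A alone: {effect_a / baseline_rate:+.1%}")  # +30.0%, actually a winner
```

The test reports a loss, yet one of the two changes was a substantial win; only an isolated test would reveal that.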
These were the changes FitnessAnywhere made. Click to enlarge.
Sometimes it's obvious what the conversion goal of a test should be. Other times, it's not that obvious. If you're testing your homepage or an informational/category page, you might be stumped about what to tag as your conversion goal. Sometimes, you might feel that you should test various conversion goals separately, and run a few tests with the same variations, using different goals to see what the results actually are.
What you need to do is identify your business goals, first and foremost. If you think that the changes you're making will potentially result in more traffic moving to a product page, tag the product page as the goal.
FitnessAnywhere identified their video as a micro-conversion point that they wanted to track as the goal for their first test on this page. Since they were making more than one change at a time, there wasn't a clean, single goal we could isolate. We felt, however, that these changes would help drive more traffic to watch the video, and that watching the video was a good thing.
These were the results of their test [click to enlarge]. What was nice about this test was that we only had to run it for a few days to reach a strong enough confidence level to complete the test. This resulted in an observed 201% improvement in the rate of visitors clicking to watch the video on this personal fitness page.
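For readers curious what "a strong enough confidence level" means in practice, here is one common way to check it: a two-proportion z-test with a normal approximation. The visitor and click counts below are invented for illustration; only a lift in the neighborhood of the post's result is mirrored.

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test (normal approximation, stdlib only)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: 60/3000 video clicks on control vs 181/3000 on variant.
p_a, p_b, z, p = z_test(conv_a=60, n_a=3000, conv_b=181, n_b=3000)
lift = (p_b - p_a) / p_a
print(f"control {p_a:.1%}, variant {p_b:.1%}, lift {lift:+.0%}, p = {p:.2g}")
```

With a difference this large, the p-value is tiny even at modest traffic, which is why a test like this can reach significance in days rather than weeks.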
One problem with this slightly messy test is that we aren't sure which of these changes resulted in more visitors watching the video. We can make some pretty good educated guesses, but again, because we made so many changes at once, we don't have proof. What if one of these changes had a slightly negative impact on the results and this improvement rate could have been even higher? Results would have been more conclusive if the client had simply made one change at a time, and run these as separate tests.
Another challenge is that it doesn't tell us whether this improves overall sales. Does the lift in video-watching result in more products being added to their cart? More people checking out? If analytics were set up to collect both the rate of visitors who watch the video and end up adding an item to their cart, we could run the numbers based on this single test and feel confident about how this win results in overall sales. Since that data is not currently available, we aren't sure yet, even though our hypothesis at this point would be "yes." To get around that, we have recommended that the client re-run this same test with the conversion point being the shopping cart page.
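If analytics did log both events per variant, the follow-up analysis would be a simple funnel comparison. The function and the numbers below are hypothetical; they just show the shape of the check we would run.

```python
def funnel_rates(visitors, video_watches, cart_adds):
    """Per-variant funnel metrics from hypothetical analytics counts."""
    watch_rate = video_watches / visitors
    cart_rate = cart_adds / visitors
    # cart adds per video watch: a rough proxy for whether the video lift
    # carries through to the cart (assumes most buyers watched the video)
    cart_per_watch = cart_adds / video_watches
    return watch_rate, cart_rate, cart_per_watch

# invented control vs variant counts: (visitors, video watches, cart adds)
for name, counts in [("control", (5000, 100, 40)), ("variant", (5000, 300, 90))]:
    w, c, cpw = funnel_rates(*counts)
    print(f"{name}: watch {w:.1%}, cart {c:.1%}, cart-per-watch {cpw:.1%}")
```

If the variant's cart rate rises along with its watch rate, the video lift is plausibly driving sales; if only the watch rate moves, the micro-conversion win may not be worth much on its own.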
We can still learn from a messy test, and we can probably learn even more by cleaning up a test or changing parts of the same test and running it again.