Tuesday, 16 February 2016

Conversion Rate Optimization--Are You Shooting Yourself in the Foot?

Conversion rate optimization (CRO) has become the go-to solution for online marketing performance woes.


Astronomical cost-per-click? Improve your conversion rate and it won't matter anymore.


Unsure about your website design? Test all your ideas...and see what works!


Limited budget? You don't have to pay for more traffic--just get more conversions from the traffic you already have.


Whatever ails your online marketing, it can be fixed with a little CRO, right?


Riiight.


With all the case studies out there touting results like a 41% increase in sales, 400% more conversions, or a 600% increase in social shares, it's easy to believe that the ROI of your dreams is just a few tests away.


Unfortunately, conversion rate optimization isn't quite that simple. Putting together a test is not enough to guarantee a better conversion rate--in fact, according to VWO, only 1 in 7 A/B tests produces a winning result!


So, does that mean CRO isn't all that it's cracked up to be?


Why CRO Fails


Conversion rate optimization is an awesome way to get more conversions out of your web traffic, but only if you approach it the right way.


At Disruptive, we've run thousands of tests, so we've seen our fair share of both amazing and not-so-amazing results.


After helping so many companies improve their conversion rate, it's become clear that CRO often falls short of its potential because companies set their tests up to fail.


Of course, nobody intentionally rigs their tests to fail, but most companies fall victim to one of four CRO traps:



  1. No overall strategy

  2. Goal confusion

  3. Ending early

  4. Ignoring traffic


Each of these traps will ruin a CRO test, so it's important to understand each one and how to avoid it.


Fortunately, if you can steer clear of these test-killers, you can expect a much higher success rate--in our case, about five out of every seven tests improve conversion rate (and we learn something from every test).


Let's dive into the details.


1. No Overall Strategy


If you want to be successful at CRO, you need to look at each test as part of a bigger whole.


Most companies look at tests in isolation. They have an idea and run a test to see if it performs better.


If it works, they cheer and switch everything to their new idea. If it fails, they assume the idea was bad and toss it.


Unfortunately, this sort of approach doesn't teach you why a specific page was a success or a failure. That means you can't effectively use that test to guide future CRO efforts.


Remember the old George Santayana quote?


[Image: George Santayana quote meme]


Okay, so that's not exactly how George's original quote went...but the point still holds. If you don't learn from your tests, you're never going to make much progress with CRO.


Documentation--the Secret to Successful CRO


A good testing strategy ensures you learn something from every test. To do that, you need great documentation.


The problem with a haphazard approach is that your tests quickly become difficult to track. It doesn't take many tests before you can't remember which test was testing what--or why.


To really get the most out of your tests, it's best to write out your strategy in advance. For example, if you want to know if a new CTA improves your conversion rate, you might put together a spreadsheet like this:


[Image: example CTA test documentation spreadsheet]
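
The exact columns will vary from team to team, but a bare-bones test log (the entries below are purely hypothetical) captures a hypothesis, the result, and the follow-up it suggests:

    Test #1
    Hypothesis: A benefit-focused CTA ("Get My Free Audit") will beat "Submit"
    Result:     Variant won--form completions up 12% at 95% confidence
    Next test:  Benefit language works; now test moving the CTA above the fold

    Test #2
    Hypothesis: Moving the winning CTA above the fold will lift completions further
    Result:     (pending)
    Next test:  TBD--whatever Test #2 teaches us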


See how each test sets up the next test? You learn something from each iteration and then use that to guide your next test.


Plus, everything is thoroughly documented, so if anyone ever wonders why you made a certain choice, you've always got a handy reference!


A lot of testing tools will document your results, which is helpful, but if you don't document the thinking behind the test, the results won't do you much good.


Which Approach is Best for Your Business?


Once you've got a plan in place for documenting and analyzing your testing results, you need to decide how you want to approach testing.


Depending on the needs and constraints of your business, your testing strategy will probably fall into one of two camps:


The Dramatic Change Approach

Big page changes often result in big conversion rate changes (good or bad), so this approach is great when you need results fast.


For example, if your conversion rate is so low that there's nowhere to go but up, this approach can be a lifesaver.


So, try an entirely new page setup or interface. Experiment with new content or an alternate color scheme. The goal is to compare a radically different design with your current setup to see if you can figure out what your audience is looking for.


The problem with this approach, however, is that you can only learn about your audience in the broadest of ways.


Sure, you know that X is better than Y, but that's about all you know.


That being said, this approach can be an effective range-finding strategy if you don't know where to start or have limited traffic. You can try a variety of very different designs and then refine your best performer.


The Minor Adjustment Approach

On the other hand, if your site is producing fairly well and you want to truly optimize things, it's better to start small and work your way up.


Making relatively small changes to your site allows you to be truly methodical about CRO.


Each change reveals something new about your target audience and how they interact with your site, which makes it easy to come up with new ideas for improving your conversion rate.


Success won't come all at once, but all those little adjustments add up to truly impressive results.


[Image: "When did that happen?" meme]


This approach is particularly effective for websites with more than 6,000 site visits per month. If you have fewer visits than that, it's hard to get your site optimized within a reasonable time frame.


Either approach is valid, but it's important to decide which strategy fits your needs before you start testing. Otherwise, you'll have a hard time making any real progress.


2. Goal Confusion


A lot of companies think they are testing one thing when they are actually testing something else.


For example, if you're taking the dramatic change approach, there's nothing wrong with simultaneously testing a new call-to-action, hero shot and page layout.


The problem is, if your new page design performs better, you can't assume it was due to the new CTA.


Sure, it could have been the CTA, but it also might have been the new hero shot, the page layout or a combination of all three!


This is where things get sticky for a lot of businesses.


They ran the test because they had an idea for a new CTA, but along the way they decided, "Hey, while I'm running this test, I should also try a new hero shot...and a new layout...and..."


[Image: "I better take them all" meme]


By the time the test runs, they are testing a lot more than their CTA. However, since the original goal was to test the new CTA, most of the benefit is ascribed to the new CTA.


Even when companies recognize that they've muddied the waters by testing extra stuff, they still tend to attribute most of the gain to whichever change they "feel" contributed the most.


However, if you're judging test results based on gut instinct, you just wasted a lot of time (and possibly money) on acquiring data you aren't even using.


In other words, if you aren't clear about what you're testing, you shouldn't be testing.


3. Ending Early


Testing can be a bit of an emotional roller coaster.


If your variant design starts producing better results, it's easy to get excited! On the other hand, if it starts performing worse than your original design, there's a natural inclination to cut your losses.



While it can be tempting to call a test early, remember, CRO is all about data, not emotion. No matter how thrilling or chilling your results seem to be, you need to wait until you have meaningful data.


How Confident are You?


Ultimately, test duration boils down to statistics--the more traffic you push through your tests, the more accurate your results will be.


To make sure you get enough data, you need to decide in advance how much data you need to make an informed decision.


Typically, you'll want to set your confidence level at 95%. That being said, for low-traffic sites or pages, achieving 95% confidence can take a really long time (e.g., 6-12 months).
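
If you want a rough feel for the numbers involved, here's a quick back-of-the-envelope sketch in Python. It uses a simplified two-proportion sample-size formula and the standard library's statistics.NormalDist (Python 3.8+); treat it as an illustration, not a replacement for your testing tool's own calculator:

    # Rough visitors-per-variant estimate for an A/B test.
    # Simplified two-proportion formula -- illustrative only.
    from statistics import NormalDist

    def visitors_per_variant(baseline, lift, confidence=0.95, power=0.80):
        """Approximate visitors needed per variant to detect a relative lift."""
        p1 = baseline
        p2 = baseline * (1 + lift)
        z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided
        z_beta = NormalDist().inv_cdf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

    # Detecting a 20% relative lift on a 2% baseline conversion rate:
    print(round(visitors_per_variant(0.02, 0.20)))                   # ~21,000 per variant
    print(round(visitors_per_variant(0.02, 0.20, confidence=0.80)))  # ~12,000 per variant

Plug in your own baseline and lift, and it quickly becomes obvious why low-traffic pages can take months to reach 95%.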


If you are only sending a few visitors per day to your site, you may need to set your confidence at around 75-80%.


In this case, the risk of not being able to run tests is often bigger than the risk of picking the wrong page, so you'll have to find a way to balance speed and accuracy.


To be clear, the only time I recommend shooting for anything lower than 95% confidence is when waiting for 95% means you're only running a test every few months.


Otherwise, if you're getting decent traffic, don't pull the trigger early.


A few extra days isn't going to cost you all that much and it will save you from a lot of potential frustration.


Believe me, it's no fun to jump the gun, pick a winner and then watch your conversion rate slowly crumble as the statistics assert themselves.
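
Before you call a winner, check whether your results have actually hit your confidence target. Here's a minimal sketch of a pooled, two-sided two-proportion z-test in plain Python--your testing tool almost certainly computes this for you, but it shows how flimsy an early "win" can be:

    # Has the test actually reached your confidence target yet?
    # Minimal pooled two-proportion z-test -- a sketch, not a replacement
    # for your testing tool's built-in statistics.
    from math import sqrt
    from statistics import NormalDist

    def confidence_level(conv_a, n_a, conv_b, n_b):
        """Confidence (1 - p-value) that variants A and B truly differ."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = abs(p_a - p_b) / se
        return 2 * NormalDist().cdf(z) - 1  # two-sided

    # Hypothetical day-three numbers: the variant "looks" 25% better...
    print(f"{confidence_level(40, 2000, 50, 2000):.0%}")  # ~71% -- keep waiting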


How Much Traffic Do You Need?


To put it simply, the faster you can get to 95% confidence, the faster you can learn from your results and start the next test.


How fast you can get to 95% confidence, however, depends on how much traffic you can herd onto your page and how much of that traffic you are willing to risk on your test.


[Image: herding cats]


To decide how much traffic to include in your test, you need to look at your goals and how your page is performing.


If you are just starting out or you're thoroughly unhappy with your current performance, you might as well go all in. Send 50% of your traffic to your variant and 50% to your original.


A 50/50 approach like this is the best way to quickly get out of the conversion rate doldrums. It allows you to rapidly test a large number of iterations, which means you'll quickly get your conversion rate to a more acceptable level.


Once your page is converting at an acceptable rate (or if it's already converting well), the risk-benefit equation starts to change. At this point, if your variant turns out to be a lemon, you could lose a lot of conversions.


In this situation, I generally recommend sending only 25-40% of your traffic to a new variant. That way, you've lowered your risk without overly limiting your testing ability.


If you really want to be conservative, you can cut back traffic to your variants to less than 25%. However, doing so makes it difficult to make any real progress with your tests, so I rarely encourage that sort of approach.
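
To see how the split affects test duration, here's one last back-of-the-envelope sketch (all numbers are hypothetical; the 21,000 figure is the per-variant sample size from the earlier confidence sketch):

    # Rough test duration at different traffic splits.
    needed = 21_000       # hypothetical visitors required in the variant arm
    daily_visits = 200    # hypothetical total visitors/day to the page

    for variant_share in (0.50, 0.30, 0.20):
        days = needed / (daily_visits * variant_share)
        print(f"{variant_share:.0%} to variant: ~{days:.0f} days")

    # 50% to variant: ~210 days
    # 30% to variant: ~350 days
    # 20% to variant: ~525 days

Cutting the variant's share of traffic doesn't change how many visitors the variant needs--it just stretches out how long it takes to collect them.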


Testing and Waiting Go Hand-in-Hand


At the end of the day, if you want great test results, you can't afford to pull the plug early. Impatience ruins CRO.


So, once you've picked your target confidence and decided how to split your traffic, it's time to find your happy place and get used to waiting.


4. Ignoring Traffic


Finally, one of the biggest mistakes companies make with CRO is to assume that on-site testing alone will solve their conversion rate problems.


Unfortunately, your website is only part of the conversion rate equation. If you're sending the wrong traffic to your site, even a perfect page won't produce the conversions you're looking for.


For example, after conducting over 2,000 AdWords audits, I discovered that 61% of PPC budgets are spent on search terms that never convert.


[Image: throwing money away]


That's a problem no amount of CRO will fix.


So, if you really want to make your CRO efforts effective, you have to look at your entire online marketing strategy. Do you have the right traffic? Are you sending it to the right page? Is that page providing an optimal experience?


CRO is just one piece of the marketing puzzle--don't expect it to solve all your conversion rate woes.


Conclusion


Does CRO have the potential to dramatically improve the performance of your website?


Absolutely, but a lot of companies make simple mistakes that end up shooting their CRO efforts in the foot.


However, if you can avoid the most common CRO pitfalls and put a solid strategy in place with clear goals, appropriate timelines and quality traffic, you can expect to be writing your own case study in no time!


That's my two cents, now I want to hear yours! Have you seen CRO tests fall prey to any of these traps? Are there any other common CRO mistakes you'd add to this list?


About the Author: Jacob Baadsgaard is the CEO and fearless leader of Disruptive Advertising, an online marketing agency dedicated to using PPC advertising and website optimization to drive sales. His face is as big as his heart and he loves to help businesses achieve their online potential. Connect with him on LinkedIn or Twitter.



