Use A/B Testing to Increase Revenue and User Satisfaction

Any time there is revenue on the line, it pays to know what compels people to buy. For years, marketers in brick-and-mortar stores have studied where to place items, how many products to offer, how to design labels, and how to organize displays so that more shoppers will pull the trigger on an item instead of passing it by.

They’ve done this research with split testing: comparing the effectiveness of two versions of the same thing, such as an advertisement with the same layout and images but different-colored text. Marketers understand that people are inscrutable and somewhat fickle. What seems entirely logical to the product developer or designer may not actually appeal to customers. Smart marketers don’t trust themselves; they test their theories and let the results speak for themselves.

So too should software teams use split testing, also called A/B testing, to capitalize on the human subconscious, which knows what it likes and wants without any obvious rhyme or reason. A/B testing isn’t just for email campaigns; just about any software effort can benefit from knowing what increases revenue or user satisfaction, which ultimately boosts the success of the project.

You can do A/B testing on your project like this: Randomly split your audience into two groups. Develop two versions of whatever it is you’re testing, and give version A to one group and version B to the other. If version A results in higher open rates, more transactions, or faster progress through a task, version A wins the test.
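To make that concrete, here is a minimal Python sketch of the mechanics, assuming each user has a stable ID; the function names and the “conversion” outcome are illustrative, not any particular tool’s API:

import hashlib

def assign_variant(user_id: str) -> str:
    # Hash the user ID so a returning user always sees the same version.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Tally outcomes per variant as events come in.
results = {"A": {"visits": 0, "wins": 0}, "B": {"visits": 0, "wins": 0}}

def record_visit(user_id: str, converted: bool) -> None:
    variant = assign_variant(user_id)
    results[variant]["visits"] += 1
    if converted:
        results[variant]["wins"] += 1

# After the test window, compare rates: the higher one wins the test.
def conversion_rate(variant: str) -> float:
    r = results[variant]
    return r["wins"] / r["visits"] if r["visits"] else 0.0

Hashing the ID rather than flipping a coin on every visit keeps each user’s experience consistent for the length of the test.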

Follow some basic guidelines to make the results of the A/B test useful. First, ensure that each test centers on an actionable item, such as a subject line or heading, call-to-action language, or the placement of action items. These items must prompt a measurable action; two non-clickable calls to action can’t be compared.

Always restrict each test to one variable. In an email campaign, for instance, it would be a no-no to test two subject lines sent at different times of day, because there would be no way to isolate which variable, the subject line or the timing, made the difference.

Construct a fair comparison so that neither version has an obvious built-in advantage. A box that says “FREE SHIPPING for all orders over $25!” isn’t comparable to a box that says “Read our newsletter!”

What if there is no clear test winner? Review the test to make sure it actually measured an obvious difference, and think about anything you may have done that swayed the results in either direction. Ultimately, it is okay to have no clear winner if you have created a good, actionable A/B test; knowing what your audience does and doesn’t respond to is valuable in itself.
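If the margin between versions is small, a quick significance check can tell you whether you are looking at a real difference or just noise. Here is a minimal sketch of a standard two-proportion z-test in Python, using only the standard library; the visit and conversion counts are made up for illustration:

from math import erf, sqrt

def two_proportion_z_test(wins_a: int, n_a: int, wins_b: int, n_b: int) -> float:
    # Compare two conversion rates; returns a two-sided p-value.
    p_a, p_b = wins_a / n_a, wins_b / n_b
    pooled = (wins_a + wins_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: 120 of 2,400 version A visitors converted vs. 150 of 2,400 for B.
p = two_proportion_z_test(120, 2400, 150, 2400)
print(f"p-value: {p:.3f}")  # about 0.06 here

A p-value above the usual 0.05 threshold means the test hasn’t shown a winner, which, as noted above, is still a useful result.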
