Why Usability is Vital: It Can Make or Break a Product

It’s probably safe to assume that almost everyone who regularly makes online transactions has experienced challenges or difficulties in usability. A button can’t be clicked. A particular link leads to an error page. The transaction won’t go through. There are too many steps to take. The mobile version doesn’t display all the content. The list of possible complaints could go on and on. Designers recognize that users will almost always take the path of least resistance - the least amount of effort that yields the ideal outcome. Human behavior optimizes. This then calls for products to optimize for human behavior. In UX design, this pursuit is called usability. 

Usability pertains to the degree of ease with which users can accomplish a set of goals with a product. While the terms are frequently used interchangeably, usability is one component of UX design. The former is the ease of use in completing a given task. The latter is the overall experience with the product.

What are the qualities of a product with good usability?

When users first encounter a new interface, they should be able to accomplish their intended tasks without relying on outside help. Highly usable products are effective, efficient, engaging, error-tolerant, and easy to learn. Think of them as the “5 E’s of Usability”.

  • Effective: Users can complete their tasks. Are users able to complete their tasks independently? If not, what are the leading causes of failure?
  • Efficient: Users can complete their tasks through the quickest or least labor-intensive route. How fast are they able to complete their tasks? How many clicks and pages do they go through? Do they take steps or visit pages they’re not supposed to?
  • Engaging: Users find completing their tasks a pleasant experience. How do users react while completing their tasks? Do they seem confused or annoyed at specific steps? Do they seem satisfied after the process?
  • Error-tolerant: Users can recover from erroneous actions and situations. Do they encounter error prompts even when they take a correct step? When users genuinely make a mistake, are they able to recover and return to the right page?
  • Easy to Learn: Users can complete new tasks easily on first use and even more quickly on repeat use. Does their first use of the product go smoothly? Where do they encounter bottlenecks or difficulty in the process? When repeating the steps or using the next iteration of the product, do users complete their tasks faster or more seamlessly?
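As an illustration, four of the five E’s can be tracked with simple quantitative proxies from test sessions (engagement usually requires qualitative ratings instead). Below is a minimal sketch in Python; the session records and all field names are hypothetical, not from any specific testing tool:

```python
from statistics import mean

# Hypothetical usability-test session records; field names are illustrative.
sessions = [
    {"completed": True,  "seconds": 95,  "clicks": 12, "errors": 0, "repeat_seconds": 60},
    {"completed": True,  "seconds": 140, "clicks": 18, "errors": 2, "repeat_seconds": 80},
    {"completed": False, "seconds": 300, "clicks": 31, "errors": 5, "repeat_seconds": None},
    {"completed": True,  "seconds": 110, "clicks": 14, "errors": 1, "repeat_seconds": 70},
]

def usability_summary(sessions):
    """Summarize simple numeric proxies for the E's across test sessions."""
    done = [s for s in sessions if s["completed"]]
    repeats = [s for s in done if s["repeat_seconds"] is not None]
    return {
        # Effective: share of users who finished the task at all.
        "completion_rate": len(done) / len(sessions),
        # Efficient: average time and clicks among successful sessions.
        "avg_seconds": mean(s["seconds"] for s in done),
        "avg_clicks": mean(s["clicks"] for s in done),
        # Error-tolerant: average number of errors encountered per session.
        "avg_errors": mean(s["errors"] for s in sessions),
        # Easy to learn: fractional speed-up between first and repeat use.
        "learning_gain": mean(
            (s["seconds"] - s["repeat_seconds"]) / s["seconds"] for s in repeats
        ),
    }

print(usability_summary(sessions))
```

Numbers like these are only signals; they tell you where to look, while observation tells you why users struggled.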

How to test for usability?

Achieving these qualities of usability rarely happens in the first version of any product. Designers can’t simply assume their first attempt will be good enough to ship. Product teams need to look out for flaws they might have overlooked and improve what can still be improved. That is the role of usability testing - the process of measuring how easily users can accomplish their tasks with a product.

Usability testing is different from focus groups, which center on listening to what participants say. Usability testing is about observing what test users do, not what they say. The types of usability testing vary with the complexity of the study, but they all entail the following features:

  • Representative users: Invite participants who are representative of the product’s core users.
  • Representative tasks: Ask the participants to perform the essential tasks of your product.
  • Action-centric: Observe what the participants do. Give more credence to their actions than their feedback.

Designers must aim to monitor and measure usability throughout the product lifecycle - from the time it starts as a wireframe, then as a prototype, when it’s shipped out, and as it continues in use. Depending on the need, product teams have an arsenal of usability testing methods they can choose from, each with its merit, as follows:

  • In-person: This is a formally structured, on-site, live-testing of users.
  • Remote: Users are in their environments, at home, for example, to catch more natural, on-field insights.
  • Guerrilla: This testing is informally structured; product teams test their designs on passers-by and colleagues for quick insights. The data may be less precise but can be collected quickly.

Why is usability so important?

User research at the beginning of the design process is almost as necessary as testing. It establishes the assumptions about user profiles and behavior that the prototyping and testing cycles rely on. Further, user testing is of no use if its insights are not incorporated into the product. Iteration is the natural consequence of user testing. Each new iteration should aim to resolve a bottleneck, a bug, or any other design flaw that causes the headaches users, digitally savvy or otherwise, know all too well.

When users encounter usability issues, especially so-called showstoppers, the result can be lost time, missed opportunities, frustration, and lost trust in the service they’re transacting with. The consequences can be even more severe when money is involved, particularly with eCommerce sites, payment services, and banking apps. Minor tweaks in usability can save users from these kinds of exasperation. For product owners - the companies and organizations that deploy digital services - such implications can spell the difference in user growth, market share, brand reputation, regulatory compliance, and financial results. There are, of course, numerous other factors that influence a product’s success, such as business model, market conditions, and technical and cybersecurity considerations, among many others. Usability, however, is entirely within the control of any given organization and its product teams.

Usability as a business priority

Usability can make or break a product. Usability testing and the requisite iterations are how organizations can meet customer expectations in today’s highly digitized economy. Our experts at Radiant Digital can help your organization conduct usability testing and deliver your digital products. For more information on our digital transformation services, contact us today.

What is A/B Testing? And Why?

The definition of A/B testing

A/B testing is another name for split testing. It’s a method of comparing two different versions of a website, app, or other digital asset to see which one performs best. In short, it’s an experiment in which two differing versions are shown to users at random to determine not just which one looks best but which one performs best. This ensures you only show the best version to your users. Website changes are often based on guesswork about what will work; A/B testing ensures that the changes you make are backed by evidence.

Terminology: The original version is identified as ‘the control’ while the alternative is called ‘the variation.’ 

Why is this type of testing used?

There are many benefits to A/B testing. Through a controlled process, teams can tweak or completely change certain elements and observe the results. The data collected from testing also informs further modifications elsewhere to enhance user engagement. In addition, this approach helps get all team members on board: designers may have opposing viewpoints on what will perform best, and this method provides quantitative evidence of which direction to take. The main advantage of this approach over alternatives is the degree of control it offers - even single words can be altered to isolate the modification that produces the desired outcome.

How to use A/B testing

To use A/B testing, first select the asset you want to analyze - a web page, app screen, digital advertisement, and so on. Then create an alternative to the original that includes noticeable differences; how much it varies is up to you. Once both versions are finalized, traffic is split evenly between them, and analytics are used to identify the trends that emerge. The test produces one of three possible results:

  • The new version increases ROI.
  • The new version decreases ROI.
  • Little to no impact.
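As a sketch of how such a test can be run and judged, the Python snippet below (all numbers hypothetical) assigns users deterministically to the two versions and applies a standard two-proportion z-test to decide whether the variation’s conversion rate genuinely differs from the control’s:

```python
import random
from statistics import NormalDist

def assign_variant(user_id: str) -> str:
    """Deterministically split traffic ~50/50 by seeding a PRNG with the user ID,
    so the same user always sees the same version."""
    return "control" if random.Random(user_id).random() < 0.5 else "variation"

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the difference in conversion rates real?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_a, p_b, p_value

# Hypothetical outcome: 200/4000 control conversions vs 260/4000 variation.
p_a, p_b, p_value = ab_significance(200, 4000, 260, 4000)
print(f"control {p_a:.1%}, variation {p_b:.1%}, p = {p_value:.4f}")
```

A small p-value (conventionally below 0.05) suggests the variation’s lift is unlikely to be random noise; with little to no impact, the p-value stays large and the control should be kept.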

Performing the A/B test

There is an extensive range of variations that can be tested, whether on a website, app, email, or anything else you want to try. Every time a potential customer visits your website, for example, you have a chance of converting them. Because of this, you need to test the critical parts to give each visit the best chance of success.

Engaging headlines 

The headline is usually one of the first things a visitor sees, so making it as engaging as possible is critical. A/B testing is an excellent way to make headlines as short, catchy, and captivating as possible.

Don’t just play around with the wording. Test different fonts and color styles to decide which makes the best first impression on potential customers. This doesn’t just apply to websites and apps. For example, when creating email marketing campaigns, use A/B testing on your subject lines to find the most effective way of converting leads.

Straight to the point text: The body text should state precisely what the visitor will get, as concisely and engagingly as possible. Once again, testing is necessary because captivating text is far more likely to produce a significant increase in conversions. Use this test for content adjustments as well as formatting changes. For example, breaking large blocks of text into smaller, easy-to-read paragraphs makes visitors who like to skim more likely to engage with the content.

Providing high-quality UX design: An easy-to-use website or app makes the difference between a visitor staying on a page longer or moving on and leaving the site. The best way to ensure users are delighted with their UX (user experience) is to perform A/B testing to find what creates the best experience.

Navigation: Straightforward navigation can make the difference between users moving through your app or website quickly and leaving in frustration. A/B testing helps determine how the audience navigates and what should be changed to improve the conversion funnel.

A persuasive call to action: The call to action (CTA) is one of the most essential parts of any digital transformation strategy. This is where the visitor, persuaded by what they’ve seen, becomes a customer. By A/B testing the call to action, you can decide on the content and placement of the CTA that produce the best results.

How to avoid mistakes

Some businesses get carried away and try to test everything in a product or app. This can be counterproductive, as unimportant factors end up consuming valuable time during development. To avoid this, use a relevant hypothesis to determine whether the element in question is worth testing. An excellent place to start is any analytical data that already exists. For example, key conversion pages can be identified quickly and tested first to improve results.

Challenges to outcome

When A/B testing is implemented into the design phase, it’s not always smooth sailing. There are specific challenges that businesses face when using this improvement tool.

Audience differences: The exact target market can be hard to identify early on, and it may turn out to differ significantly from what was envisioned. If that happens, A/B tests run during the initial design stage may no longer be relevant later on.

Understanding data correctly: The whole point of this method is to gain quantitative feedback that shows which version performs best. Therefore, all measurements acquired must be examined closely to support the right decision.


Getting started

Want to learn more about A/B testing? Our UX experts are available to discuss how this approach can help achieve your targets. At Radiant Digital, we’re always looking to enhance businesses’ digital transformation strategy without the fuss. 

Agile Development to Incentivize Health Management

Radiant partnered with the University of Minnesota to design and develop a research-oriented, online, mobile-responsive social intervention program called Thrive With Me. The goal of the application is to promote improved medication adherence and disease self-management among men with HIV. Radiant was responsible for system design and software development, infrastructure design, maintenance, and ongoing user support. In addition to developing the intervention software, we also provided expert consulting on overall intervention design, user experience, and field implementation.

Our Solution:

Radiant implemented an iterative development approach that featured frequent stakeholder review, automated testing, and continuous integration/continuous deployment (CI/CD). Our agile process implemented the Scrum framework as follows:

  • Before beginning development, the product team collaboratively developed the overall technical approach (technology stack, tooling) and working methods (definition of ‘done,’ testing, and continuous deployment methods).
  • Desired product functionality was captured in the form of User Stories and collected in a Product Backlog. The Product Backlog was pruned and prioritized regularly by the Product Owner with input from all project stakeholders.
  • In collaboration with development, the UI/UX team utilized interactive prototyping approaches to quickly model and verify approaches with the client and end-user stakeholders.
  • User Stories were estimated using a relative estimation method (modified Fibonacci series).
  • Work was organized into two-week sprints; the Product Owner prioritized User Stories in the Product Backlog, and the development team selected stories for the sprint based on this prioritization.
  • Sprint velocity was tracked and measured weekly; overall project velocity was tracked and measured at the end of each sprint, permitting the team to predict the story points completed by the project release date.
  • Throughout the development process, end-user perspectives were gathered from a cohort of representative users in a community advisory group. Perspectives included aesthetic concerns, evaluation of content, usability, and overall user experience.
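The velocity-based forecasting described above can be sketched in a few lines. This is an illustrative calculation only; the sprint numbers are hypothetical, not the project's actual data:

```python
# Hypothetical story points completed in each finished two-week sprint.
completed = [21, 26, 24, 28, 25]

def forecast_points(completed, sprints_remaining):
    """Project total story points delivered by release:
    points already completed plus average velocity times remaining sprints."""
    velocity = sum(completed) / len(completed)
    return sum(completed) + velocity * sprints_remaining

# With 3 sprints left before the release date:
print(forecast_points(completed, 3))
```

Comparing this projection against the prioritized Product Backlog tells the Product Owner which stories are likely to make the release and which should be deferred.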

The agile process was supported by a technical infrastructure that enabled continuous release of code:

  • The CI/CD process utilized Tuleap ALM for continuous integration. The system integrated GitHub for source code management and Jenkins for automated testing and deployment from developer machines to staging and production environments.

Using this agile approach, our team delivered a solution that incorporated the following features:

  • An interactive online social space wherein participants can share text, videos, and images. This social space also implemented related features such as commenting, “liking,” and flagging and was integrated with a game mechanics system.
  • An extensive game mechanics system that awarded points to users based on their activities throughout the site. Based on these point awards, participants earned badges, progressed through levels, and were recognized on the site via notifications and leaderboards.
  • An expert content system that displayed skills-training content to each user daily and highlighted content that was particularly relevant based on their profile.
  • An automated participant onboarding system that integrated with an external survey tool (Qualtrics). Leveraging custom code and Qualtrics’ API, study participants were automatically provisioned user accounts after completing a baseline survey, and survey data were stored in a user profile. These survey variables were utilized for content tailoring. This feature greatly reduced the work burden of study administrators.
  • A configurable, interactive SMS-based medication reminder system that allowed users to configure a set of daily reminders. Each reminder collected medication-taking status and mood. These data points were then collected into a “weekly check-in” that allowed users to review their medication-taking and reflect on the relationship between their mood, meds, and substance use.
  • Extensive moderation and content management controls for study administrators, including user management and condition assignment, flagged content alerts and review, process data export, content creation, editing, and revision workflows.
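As an illustration of the “weekly check-in” described above, here is a minimal sketch in Python. The data shape and field names are assumptions for illustration; the production system collected these replies via Twilio SMS and stored them in Drupal:

```python
from collections import Counter

# Hypothetical daily reminder responses for one participant over a week;
# field names are illustrative, not from the production schema.
week = [
    {"day": "Mon", "took_meds": True,  "mood": "good"},
    {"day": "Tue", "took_meds": True,  "mood": "ok"},
    {"day": "Wed", "took_meds": False, "mood": "low"},
    {"day": "Thu", "took_meds": True,  "mood": "good"},
    {"day": "Fri", "took_meds": True,  "mood": "good"},
    {"day": "Sat", "took_meds": False, "mood": "low"},
    {"day": "Sun", "took_meds": True,  "mood": "ok"},
]

def weekly_checkin(responses):
    """Roll daily SMS reminder replies into a weekly check-in summary."""
    taken = sum(r["took_meds"] for r in responses)
    moods = Counter(r["mood"] for r in responses)
    missed_days = [r["day"] for r in responses if not r["took_meds"]]
    return {
        "adherence": taken / len(responses),          # share of doses taken
        "most_common_mood": moods.most_common(1)[0][0],
        "missed_days": missed_days,
    }

print(weekly_checkin(week))
```

A summary like this gives the participant something concrete to reflect on when reviewing the relationship between mood and medication-taking.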

Our technical solution incorporated the following components:

  • Drupal CMS/AMF, with custom modules written in PHP
  • Qualtrics API integration for automated participant onboarding
  • Twilio API integration for interactive SMS
  • Responsive UI based on Bootstrap framework and developed in HTML5. Utilized automation tools such as Less preprocessor and Grunt task runner.

Radiant continues to support the intervention software during an ongoing randomized controlled trial.

Results:
  • CI/CD process permitted code release daily.
  • The scrum approach ensured that the highest-value features were implemented with available project resources.
  • Automated testing increased overall solution stability throughout the project.
  • The responsive web approach permitted access to the intervention on any device running a web browser.
  • Integration with Qualtrics yielded more efficient study management and the ability to tailor and highlight content, making the intervention more relevant to end-users.
  • Working directly with end-users during development yielded a user-focused product that was very well received during subsequent usability/feasibility testing.


Digital transformation is about sustaining a competitive edge in an increasingly fast-paced and competitive business environment. Adept application transformation can increase innovation speed, allowing your organization to deliver new products and experiences rapidly.