How to make your investment in Training and Development worth it

In 2020, an estimated 82.5 billion U.S. dollars was spent on training across the United States (Statista, 2020). Organizations today are spending more on training than ever before, as the numerous benefits of investing in employee training and development opportunities continue to be recognized. Yet the question remains: how much of that training genuinely translates into noticeable positive differences in employee performance?

According to the research, probably not enough. Arthur, Bennett, Stanush, and McNelly (1998) conducted a meta-analysis of skill decay studies. They reported that trainees exhibit little to no skill decay the day after training, but that after one year trainees have lost over 90% of what they learned. Training transfer is often estimated by the level of correlation between learning scores (in training) and performance metrics (on the job). Transfer is critical because without it an organization is unlikely to receive any tangible benefits from its training investments.

In the traditional sense of the word, training is most often characterized by a one-off format of instruction, typically in a classroom setting where a facilitator guides learners through the information designated for that session. The problem with this format is that there is no guarantee that anything taught will be retained (and applied) by the employees in attendance after the training finishes. This is one of the key reasons training may not pay off: it is not so much about the learner's ability to retain information as about the content being delivered, the method of delivery, and the lack of follow-up within current processes. The popular colloquialism "use it or lose it" holds especially true here, for a single training session cannot be relied upon to produce noticeable and lasting results. So, what should organizations be doing differently when it comes to training their employees if they want to see an actual return on their investment? The answer is providing ongoing training opportunities instead.

The graphic below, based on the Continuous Learning Model, shows that even though traditional training does initially provide some level of understanding, retained knowledge steadily decreases over time. Meanwhile, learning remains constant when there are available eLearning courses, mobile education suited for easy access, designated time to speak with supervisors about career and skill development, communities built around learning, and opportunities to interact and learn alongside peers. This continuous model of learning, which promotes an ongoing relationship between the organization and the learner, is the solution needed to ensure that the time and money invested in training are returned through increased employee skill and performance.
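The contrast between the two curves can be sketched with a simple exponential forgetting model. This is an illustrative assumption, not the model behind the graphic; the decay rate and refresh interval below are invented for demonstration.

```python
# Illustrative sketch of the two retention curves described above, assuming
# a simple exponential forgetting model. The decay rate is an assumption
# chosen so one-off training loses over 90% in a year, as the research cited
# earlier suggests.
import math

def retention(weeks_since_refresh, decay_rate=0.05):
    """Fraction of trained material retained after a gap with no reinforcement."""
    return math.exp(-decay_rate * weeks_since_refresh)

# One-off training: no reinforcement for a full year (52 weeks).
one_off = retention(52)

# Continuous learning: some touchpoint (course, peer session, supervisor chat)
# every 4 weeks, so the learner is never far from their last reinforcement.
continuous = retention(4)

print(f"One-off training after a year: {one_off:.1%} retained")
print(f"Continuous model (refreshed monthly): {continuous:.1%} retained")
```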

After learning how beneficial ongoing training can be compared to the traditional one-off method for retention and performance, what can companies do to move toward an organizational learning culture characterized by promoting continuous learning? 

Adopt a companywide strategy of continuous learning

First things first: to have a thriving learning culture, the organization must actively promote the idea of learning and training to its employees. When it does, exploring developmental options becomes natural and habitual for employees, the norm rather than the exception. Research has shown that an organizational culture supportive of newly acquired Knowledge, Skills, and Abilities (KSAs) results in trainees applying training more effectively on the job (Rouiller & Goldstein, 1993; Tracey et al., 1995). The more leaders indicate that training is essential to the organization, the better the outcomes of training.

Choose the right tool and method for training

When developing a culture of ongoing training and learning, you must be realistic about your employees' time. Requesting that all employees sit through numerous classroom training sessions on top of their busy and fluctuating schedules may not be the most feasible option. Instead, start taking advantage of what online learning platforms offer, including ease of use, learner autonomy, instant feedback, and tracking. This is not to say traditional training is never the way to go, as some training may be best suited to an in-person format, but considering all of your available training options should be one of the first steps in implementing continuous learning.

Get leadership involved

Culture starts from the top and trickles down, so management needs to communicate their support for continuous learning activities and participation. The reward of promoting ongoing training is that training no longer fits into one box (such as training new employees); it can now be focused on what is essential to the development of each individual at each level, including management. 

Reward learners

It may be beneficial, especially in the early stages of developing continuous training, to make the experience valuable, fun, and engaging in order to promote adoption. There are many ways organizations can motivate self-directed learning, such as introducing gamification components into the learning environment. Consider awarding badges based on the completion of a task or learning activity. Use points to mark achievements and progress, letting learners move through levels based on the number of courses completed or events attended. Do something that conveys to your employees that learning is a top priority within the organization.
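As a rough sketch of the points, badges, and levels described above, the logic could look like the following. The badge names, point values, and thresholds are hypothetical assumptions, not a prescribed scheme.

```python
# Minimal sketch of gamification components: points per completed activity,
# badges at course-count milestones, and levels by courses completed.
# All names and thresholds below are illustrative assumptions.
BADGES = {1: "First Steps", 5: "Committed Learner", 10: "Learning Champion"}
POINTS_PER_ACTIVITY = {"course": 50, "event": 20, "community_post": 5}

def learner_status(completed):
    """completed: list of activity types, e.g. ['course', 'event', ...]."""
    points = sum(POINTS_PER_ACTIVITY.get(a, 0) for a in completed)
    courses = completed.count("course")
    badges = [name for threshold, name in BADGES.items() if courses >= threshold]
    level = courses // 3 + 1  # advance a level every 3 courses completed
    return {"points": points, "level": level, "badges": badges}

status = learner_status(["course", "event", "course", "community_post", "course"])
print(status)
```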

Provide autonomous options

Developing a continuous learning environment may seem challenging and time-consuming at the start, but ongoing learning opportunities do not have to be managed solely by the organization. Some systems can easily be put into a portal containing job aids and knowledge repositories, or databases that track completed learning and automatically recommend supplemental training. It is also possible to set up employee-led programs by establishing "communities of practice," where individuals with similar interests and job knowledge interact virtually or in person to answer questions and discuss challenging situations. Training is not only about knowing what was learned; it should also prepare learners to understand where (and to whom) they can go for help in the future.

Track and analyze results

Finally, track and analyze the results of any implemented training programs. Doing so lets you figure out what is working and what isn't. A major downfall of the traditional training method is that there is usually no follow-up or effort to track what was learned. By actively tracking results, retention is no longer a guessing game but a deliberate, practiced methodology.

Here at Radiant Digital, we specialize in providing learning solutions to suit your organization's needs. We can help you develop training opportunities and implement change management strategies to promote a successful transition to a learning culture. Reach out to us today to see how we can help.


Arthur, W., Jr., Bennett, W., Jr., Stanush, P. L., & McNelly, T. L. (1998). Factors that influence skill decay and retention: A quantitative review and analysis. Human Performance, 11(1), 57–101.

Salas, E., Tannenbaum, S. I., Kraiger, K., & Smith-Jentsch, K. A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13(2), 74–101.

Statista Research Department (2020, December 15). U.S. Training Industry Expenditure 2020. Statista. Retrieved September 27, 2021, from 

Going Beyond the Kirkpatrick Model: Rethinking Your Training Evaluation Strategy

Measuring training effectiveness is one of the many responsibilities of learning and development professionals and one of the many priorities of senior leadership in workplaces. According to the Statista Research Department, U.S. businesses collectively invest more than $80 billion a year in training their employees, and global spending on training and development has increased by 400% over 11 years. This level of investment underscores the importance of measuring training effectiveness and business impact. As organizations provide more training offerings to upskill and reskill their employees, learning and development professionals are also hungry for guidance on creating and demonstrating the value of training to their organizations. The Kirkpatrick model is a well-known framework for evaluating training programs; however, most professionals get stuck implementing the model's levels 3 (behavior) and 4 (results). It is no wonder, then, that learning and development professionals seek other methods or creative strategies to evaluate the success of training programs. This article will explore effective evaluation strategies for achieving levels 3 and 4 of the Kirkpatrick model and discuss Brinkerhoff's Success Case Method as an alternative approach to evaluating training. Depending on your evaluation goals, one or more of these solutions could provide structure for evaluating training at your organization.

Learning and development professionals have embraced the Kirkpatrick model and continue to adopt it as the standard approach for evaluating training programs. The model dates back to 1959, when Donald Kirkpatrick published a series of articles in the Journal of the American Society of Training Directors outlining techniques for evaluating training according to four levels. The model's primary strength is that it is easy to understand and implement, as the evaluation includes only four levels: reaction, learning, behavior, and results.

Kirkpatrick Model 

Level 1

Level 1 evaluations (reaction) measure participants' overall response to the training program. This includes asking participants how good, engaging, and relevant the training content is to their jobs. Level 1 is considered simple and is typically achieved by implementing a formative evaluation in a survey immediately following training.

Level 2

Level 2 evaluations (learning) measure the increase in participants' knowledge due to the instruction during training. In level 2, it is common to assess learning using knowledge checks, discussion questions, role-play, simulations, and focus group interviews.

Level 3

Level 3 (behavior) aims to measure participants' on-the-job changes in behavior due to the instruction. This is essential because training alone will not yield enough organizational results to be viewed as successful. However, this level is also considered somewhat difficult to evaluate, as it requires measurement of knowledge transfer.

Level 4

Then lastly, there is Level 4 (results). Level 4 is the reason training is performed: training's job is not complete until its contributions to business results can be demonstrated and acknowledged by stakeholders. Again, the majority of learning professionals struggle to connect training to performance and results for critical learning programs. When talking with other learning and development professionals, the standard explanation for this difficulty is the time required to measure results and to decide on a practical approach to capturing key performance indicators. Levels 3 and 4 are truly the missing link in moving from learning to results. So, what can organizations do to measure training's impact on behavior and results?

Strategies for Measuring Level 4 (Results)

One strategy organizations can implement to achieve desired results from training programs is to create leading indicators. Leading indicators provide personalized targets that all contribute to organizational outcomes. Consider leading indicators as little flags marking the path toward the finish line, which represents the desired corporate results. They also establish a connection between the performance of critical behaviors and the organization's highest-level impact. There are two distinct types of leading indicators, internal and external, which provide quantitative and qualitative data. Internal leading indicators arise from within the organization and are typically the first to appear; they relate to production output, quality, sales, cost, safety, employee retention, or other critical outcomes for your department, group, or programs that contribute to Level 4 results. External leading indicators can also be identified when measuring the success of a training program; these relate to customer response, retention, and industry standards.

The benefit of identifying and leveraging leading indicators is that they help keep your initiatives on track by serving as the last line of defense against possible failure at Level 4. In addition, monitoring leading indicators along the way gives you time to identify barriers to success and apply the proper interventions before ultimate outcomes are jeopardized. Finally, leading indicators provide important data connecting training, on-the-job performance, and the highest-level result. The first step in evaluating leading indicators is to define which data you can borrow and which data you will need to build tools to gather. For example, human resource metrics may already exist and can be linked to the training program or initiative. If the data is not already available within the organization, it is crucial to define what tools to build to gather it. Typical examples of tools that may need to be built are surveys and structured question sets for interviews and focus groups.
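One way to operationalize this monitoring is a simple target check that flags indicators needing intervention before Level 4 results are jeopardized. The indicator names, targets, and figures below are illustrative assumptions.

```python
# Hedged sketch of monitoring leading indicators against targets so barriers
# surface early. Indicator names and values are invented for demonstration.
leading_indicators = {
    "weekly_output_per_rep": {"target": 40, "actual": 43},
    "first_pass_quality_pct": {"target": 95, "actual": 91},
    "customer_callback_rate": {"target": 0.05, "actual": 0.04, "lower_is_better": True},
}

def flag_at_risk(indicators):
    """Return the names of indicators currently missing their targets."""
    at_risk = []
    for name, data in indicators.items():
        if data.get("lower_is_better"):
            missing = data["actual"] > data["target"]
        else:
            missing = data["actual"] < data["target"]
        if missing:
            at_risk.append(name)
    return at_risk

print("Needs intervention:", flag_at_risk(leading_indicators))
```

Reviewing such a report on a regular cadence is what turns leading indicators from a reporting exercise into an early-warning system.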

Alternative Approach to Kirkpatrick Model 

As an alternative to the Kirkpatrick model for measuring training success, the Success Case Method (SCM) by Robert Brinkerhoff has gained considerable adoption across several industries. This method involves identifying the most and least successful individuals who participated in the training event. Once these individuals are identified, interviews and other methods, such as observation, can be conducted to better understand the training's effects. In comparison, the Kirkpatrick model seeks to uncover a program's overall results, while the SCM seeks to discover how the program affected the most successful participants. One weakness of this model is that only a small sample (the successful participants) is asked to provide feedback on the training program, which may omit valuable information and data that could have been collected if all participants were included. This evaluation method may be more beneficial for programs that aim to understand how participants are using the training content on the job, and it may yield more qualitative data than quantitative metrics. Both evaluations have benefits and disadvantages in measuring training effectiveness, so the key is selecting the best approach for your training program, or perhaps combining the two.

Here at Radiant Digital, we enjoy collaborating with organizations in developing training effectiveness strategies. Partner with us and learn how we can support your learning development team.


Why Usability is Vital: It Can Make or Break a Product

It’s probably safe to assume that almost everyone who regularly makes online transactions has experienced challenges or difficulties in usability. A button can’t be clicked. A particular link leads to an error page. The transaction won’t go through. There are too many steps to take. The mobile version doesn’t display all the content. The list of possible complaints could go on and on. Designers recognize that users will almost always take the path of least resistance - the least amount of effort that yields the ideal outcome. Human behavior optimizes. This then calls for products to optimize for human behavior. In UX design, this pursuit is called usability. 

Usability pertains to the degree of ease with which users can accomplish a set of goals with a product. While the terms are frequently used interchangeably, usability is one component of UX design: the former is the ease of use in completing a given task, the latter the overall experience with the product.

What are the qualities of a product with good usability?

When users first encounter a new interface, they should accomplish their intended tasks without relying on somebody else. As an individual experience, highly usable products are effective, efficient, engaging, error-tolerant, and easy to learn. Think of them as the “5 E’s of Usability”.

  • Effective: Users can complete their tasks. Are users able to complete their tasks independently? If not, what are the leading causes?
  • Efficient: Users can complete their tasks through the easiest or least labor-intensive route. How fast are they able to complete their tasks? How many clicks and pages do they go through? Do they take steps or visit pages they’re not supposed to?
  • Engaging: Users find completing their tasks a pleasant experience. How do users react while completing their tasks? Do they seem confused or annoyed at specific steps? Do they seem satisfied after the process?
  • Error-tolerant: Users can recover from genuinely erroneous actions and situations. Do they encounter error prompts even if they make a correct step? When users genuinely make a mistake, are they able to recover and return to the right page?
  • Easy to Learn: Users easily complete new tasks and even more quickly on repeat use. Does their first use of the product appear seamless? Where do they encounter bottlenecks or difficulty in the process? Upon repeating the steps or using the next iteration of the product, do users complete their tasks faster or more seamlessly?
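Several of these qualities can be quantified from usability test sessions. The sketch below turns a hypothetical session log into simple metrics; the field names and figures are invented for demonstration, not a standard instrument.

```python
# Illustrative sketch: computing simple metrics for some of the "5 E's" from
# usability test session data. All sample data below is an assumption.
sessions = [
    {"completed": True,  "seconds": 95,  "clicks": 12, "errors": 0},
    {"completed": True,  "seconds": 140, "clicks": 19, "errors": 2},
    {"completed": False, "seconds": 300, "clicks": 31, "errors": 4},
    {"completed": True,  "seconds": 80,  "clicks": 10, "errors": 1},
]

n = len(sessions)
success_rate = sum(s["completed"] for s in sessions) / n   # Effective
avg_time = sum(s["seconds"] for s in sessions) / n          # Efficient (time)
avg_clicks = sum(s["clicks"] for s in sessions) / n         # Efficient (effort)

# Error-tolerant: of the sessions where users made errors, how many still
# managed to recover and complete the task?
with_errors = sum(1 for s in sessions if s["errors"])
recovered = sum(1 for s in sessions if s["errors"] and s["completed"])
error_recovery = recovered / max(1, with_errors)

print(f"Task success: {success_rate:.0%}, avg time: {avg_time:.0f}s, "
      f"avg clicks: {avg_clicks:.1f}, error recovery: {error_recovery:.0%}")
```

Tracked across iterations, the same metrics also speak to "easy to learn": repeat users should complete tasks faster, with fewer clicks and errors.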

How to test for usability?

Achieving these qualities of usability rarely happens on the first version of any product. Designers can't simply assume that their first attempt will be good enough to ship. Product teams need to look out for flaws they might have overlooked and improve what can still be improved. This can only stem from usability testing: the process of testing the degree of ease in using a product.

Usability testing is different from a focus group, which is about listening to what participants say. Observing test users is about what they do, not what they say. The types of usability testing depend on the complexity of the study, but they all entail the following features:

  • Representative users: Invite participants who are representative of the product’s core users.
  • Representative tasks: Ask the participants to perform the essential tasks of your product.
  • Action-centric: Observe what the participants do. Give more credence to their actions than their feedback.

Designers must aim to monitor and measure usability throughout the product lifecycle - from the time it starts as a wireframe, then as a prototype, when it’s shipped out, and as it continues in use. Depending on the need, product teams have an arsenal of usability testing methods they can choose from, each with its merit, as follows:

  • In-person: This is a formally structured, on-site, live-testing of users.
  • Remote: Users are in their environments, at home, for example, to catch more natural, on-field insights.
  • Guerilla: This testing is informally structured wherein product teams test their designs on passers-by and colleagues for quick insights. The data may be less accurate but can be quickly collected.

Why is usability so important?

User research at the beginning of the design process is almost as necessary as testing. This sets up assumptions about user profile and behavior that the prototyping and testing cycles will rely on. Further, user testing will be of no use if the insights are not incorporated into the product. Iteration is the consequence of user testing. Each new iteration should aspire to have solved a bottleneck, a bug, or any design flaw that causes headaches, which users, whether digitally savvy or otherwise, know too well.

When users encounter usability issues, especially so-called showstoppers, these could amount to time lost, missed opportunities, frustration, and loss of trust in the service they’re transacting with. The consequences could even be more severe when money is concerned, particularly with eCommerce sites, payment services, and banking apps. Minor tweaks in usability could save users from these kinds of exasperation. For product owners - the companies and organizations that deploy digital services - such implications could spell the difference in user growth, market share, brand reputation, regulatory compliance, and financial results. There are, of course, numerous considerations that influence a product’s success, such as business model, market conditions, technical and cybersecurity factors, among many others. However, usability is entirely within the control of any given organization and its product teams.

Usability as a business priority

Usability can make or break a product. Usability testing and the requisite iterations are how organizations can meet customer expectations in today’s highly digitized economy. Our experts at Radiant Digital can help your organization conduct usability testing and deliver your digital products. For more information on our digital transformation services, contact us today.

Wireframe, Mockup, and Prototype: What’s the difference?

To the uninitiated, wireframes, mockups, and prototypes appear to be synonymous. They’re used interchangeably by the layperson, and understandably so. But for product and design folks in the digital space, these differentiated outputs serve different requirements. Wireframing, mocking up, and prototyping are processes in the early stages of product development, especially for web, mobile, and native applications. In this context, they’re typically defined as follows:

● A wireframe is a quick sketch of a product intended to convey its desired functionalities.

● A mockup is a realistic design of a product designed to gather feedback on its visual elements.

● A prototype is an interactive simulation of a product designed to test the user experience.

It’s worth noting that diving into each step is not a straightforward box-ticking activity. Instead, these are problem-solving exercises that entail consensus-building, testing, and iteration, among others. For instance, product teams leverage design thinking methods to bring out user-centric approaches throughout the product development process.


Wireframes are basic, black-and-white renderings that focus on what the features and functionalities are intended to do: a low-fidelity representation of the user interface that depicts how information is structured and which content is grouped. Wireframing is far from drawing up meaningless sets of grey boxes, although they may appear that way. As the first draft of a project, wireframes are ideally accompanied by brief notes explaining the vital visual elements and how they interact with each other. A wireframe is rarely deployed as a testing material, but it helps build consensus and gather early feedback. Wireframes may even be deployed for guerilla-style research where initial insights suffice and methodological rigor is not yet essential.


Taking off from wireframes, a mockup then incorporates design choices, particularly color, font, and icons. Designers often include content to approximate the final output, even if it is placeholder text and photos. Visually, the ideal mockup should resemble the intended look and feel of the given digital product. Mockups remain static outputs, but UX designers should solicit feedback on their visual components and aesthetic qualities. Mockups are also particularly helpful in solidifying buy-in and support from high-level decision-makers, mainly clients and management, by dazzling them with what the outcome could eventually look like.


A prototype may or may not exactly look like the final product, but it should simulate the intended experience. It needs to be stressed that the heart of prototyping lies in user testing. Letting sample users navigate through the interface informs development teams how to enhance user experience better. As veteran UX designers and product teams know too well, prototyping is about observing what users do and not about what they say they’re going to do. In this process, the interface may not yet be linked with backend mechanisms. This enables product teams to test user experience before allowing the developers or engineers to begin their work. Interactivity can be tested with various tools without needing code. Product teams may also do some preliminary A/B testing to compare two different versions of a prototype to assess which one performs better.
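The preliminary A/B comparison mentioned above can be sketched as a two-proportion z-test on task completion rates for two prototype versions. The participant counts below are illustrative assumptions, and a real study would also consider sample size and practical significance.

```python
# Hedged sketch of comparing two prototype versions by task completion rate
# using a two-proportion z-test. The counts are invented for demonstration.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference in completion rates between versions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled completion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: Version A, 42 of 50 testers completed the task;
# Version B, 30 of 50.
z = two_proportion_z(42, 50, 30, 50)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a difference at the 5% level
```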

Wireframes and mockups show stakeholders how a product looks and how it should work; prototypes, on the other hand, demonstrate how it actually works.

Wireframe
● Purpose: Develop and gain consensus on product functionalities
● Visual elements: Black, white, and grey boxes that present structure
● Design fidelity: Low
● Interactivity: Static
● Time and cost invested: Low
● Creator: Product team, project manager, and UX designer

Mockup
● Purpose: Collect feedback on visual elements
● Visual elements: Colors, fonts, icons, and all design elements
● Design fidelity: Medium to high
● Interactivity: Static
● Time and cost invested: Medium
● Creator: UX designer

Prototype
● Purpose: Collect feedback by testing the user experience
● Visual elements: Interactive visual elements that demonstrate navigation
● Design fidelity: High
● Interactivity: Dynamic or interactive
● Time and cost invested: High
● Creator: UX designer and/or developer

Is it necessary to wireframe, mockup, and prototype - and in that order?

This is an ongoing debate, and product teams each have approaches they have settled into. Resources and time play a significant role in any development process. Some projects can afford to go step by step, taking their time to iterate on their wireframes and mockups and even conducting well-structured prototyping studies and experiments. Even then, the envisioned product is not assured when product teams commit to all three design processes as ideally conceived. In other cases, after producing well-designed wireframes or medium-fidelity designs, some teams opt to do prototyping first, even before finalizing their fully designed mockups. It’s also not uncommon for projects with tight deadlines to do their prototyping with an interface already coded by their developers.

Early-stage product development is a dynamic process, each project with its own dependencies, circumstances, and limitations. Even the UX tools and applications at hand influence the design process. Professionals in this space are already well-versed in prototyping tools such as Balsamiq Mockups, Axure, Pidoco, Penultimate, and Justinmind. These tools allow designers and product teams to create, edit, and collaborate on wireframes and mockups. Some of them even extend to the creation of interactive prototypes.

Method to the Madness

Designing and developing products, especially complex and highly technical applications, is often at its most daunting in the early stages. Wireframing, mocking up, and prototyping are methods to this seeming madness. With them, product teams, clients, stakeholders, and users can more easily break the design process into digestible phases, allowing for better engagement in the development of the product.

Will you be working on a project that requires careful and deliberate consideration for the user’s experience? Reach out to our experts at Radiant Digital to learn more about our product design and development process.

Designing Beyond the Screen: UX from Digital to Physical

User Experience (UX) is almost always associated with screens and device interfaces. This is understandable, as the field was once termed Information Architecture and rose to its present label only around 2010; even today, some UX designers carry the title of Information Architect. However, with the growing digitalization of physical products, consumer services, and virtually everything else, UX design is no longer limited to the two-way interaction between users and their devices; it extends to the entirety of their experience with the outside world.

UX doesn't have to be just for digital

Collins, a design studio in New York, worked with beauty and skincare brand eos to study how women used lip balm. In retrieving their lip balms, two groups of behavior emerged: one group of women dumped out the contents of their bag, while the other spent a few moments digging through it. The fundamental issue, they realized, was that a stick of lip balm, Chapstick, for example, was difficult to distinguish by feel from lipstick, eyeliner, a roll of mints, or other similarly shaped objects. This became the focus of their design. Their strategy: "Stand out by improving the packaging experience." The innovative spherical shell that resulted can be recognized simply by touch, in the dark or inside a cluttered bag. It generated massive social buzz and drew praise from design circles when rolled out to the consumer market. Now a staple in beauty stores worldwide, this design-driven product continues to deliver commercial results for its company.

This is just one of many stories that attest to the paramount value of user-centered design for physical goods. Be it furniture, children's toys, kitchenware, or other everyday objects, well-designed products make so much sense that they become part of daily life. The same can be said about today's electronic devices and digital products.

Digitalizing the physical

The past two decades have seen a rapid acceleration in the digitalization of almost everything that can be digitized. Analog devices such as watches and televisions became smart. At-home and hospital-grade medical devices have been transitioning to digital. Even today's car dashboards are far removed from their analog predecessors. And while these products continue to become smarter with each new model, challenges in user experience also continue to evolve.

Integrating digital with the physical has likewise been a growing business. Peloton, for example, introduced an interface to stationary bikes. In addition, the internet of things has enabled household devices such as thermostats, refrigerators, and doorbells to be managed through mobile devices. User experience now goes beyond the interaction with a screen but also with tethered devices. 

Augmenting the reality

Further, digital technologies have grown to influence offscreen user behavior. For instance, home workout apps have taken off, primarily due to lockdowns. Similarly, mobile apps that track running and cycling activity influence users' behavior, encouraging them to increase or decrease intensity on their next session.

UX design takes on a particularly influential role when an application guides user behavior almost entirely while users interact with their devices. Pokemon Go, an augmented reality mobile game, is a demonstrable example. On the one hand, studies have shown physical and social benefits for its users; one key finding was that Pokemon Go successfully reached a unique set of users: people who are difficult to motivate to be physically and socially active. On the other hand, amid the frenzied news coverage immediately after its 2016 launch were reports of injuries and accidents among its users. The game was also criticized for enabling users to flock to certain memorials and cemeteries. The field of UX is conventionally about enhancing the interaction between users and products, but this new phenomenon places an unprecedented degree of responsibility on UX designers and product teams to be mindful of their users' behavior in the real world as a direct result of engaging with their applications.

UX Research and testing are no longer a luxury

As human interaction with digital products becomes more dynamic, more physical, and more consequential, UX designers have an opening to bolster their case and bargain for more user research and testing resources. Some stakeholders remain skeptical of UX research (UXR), believing they already know their users well enough; in reality, UX research and testing are no longer a luxury.

UX practices must also evolve by incorporating user environments and circumstances. For instance, UX studies need to go beyond the usual arrangement where researchers observe sample users in a single, controlled location. Product teams may explore ethnographic research to observe how users go about their daily lives and how that shapes their interaction with the application. Designers may also deploy a mixed-methods approach to UX research, in which results from one method, quantitative or qualitative, are triangulated with results from additional methods to build a more comprehensive picture of user needs and behavior.

With the fast-evolving capabilities of digital products in the market, user interactions likewise grow in complexity as the digital collides with the physical and the environmental. Will you be working on a UX design challenge with this kind of intricacy? Reach out to our specialists at Radiant Digital to learn more about our UX methods and expertise.