The Fundamentals of MLOps – The Enabler of Quality Outcomes in Production Environments

With the increasing complexity of modeling frameworks and their computational demands, organizations find it harder to keep pace with the evolving requirements of Machine Learning.

MLOps and DataOps help data scientists embrace collaborative practices between various technology, engineering, and operational paradigms.

MLOps is a set of practices that combines Machine Learning, DevOps, and Data Engineering for a reliable and data-centric approach to running Machine Learning systems in production.

It starts by defining a business problem, a hypothesis about the value that can be extracted from the collected data, and a business idea for its application.

What is MLOps, and what does it entail?

It is a new way for business representatives, mathematicians, data scientists, machine learning specialists, and IT engineers to cooperate in creating artificial intelligence systems.

It applies a set of practices to improve quality, simplify management processes, and automate the deployment of Machine Learning and Deep Learning models in large-scale production environments.

The processes involved in MLOps include data gathering, model creation (SDLC, continuous integration/continuous delivery), orchestration, deployment, diagnostics, governance, and business metrics.

Key Components of MLOps

MLOps Lifecycle

When the model must be retrained on new data during operation, the cycle restarts: the model is refined and tested, and a new version is deployed.
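The retrain-and-redeploy cycle above can be sketched in a few lines. This is a minimal illustration, not a real framework: the "model" here is just a running mean, and the function names and drift threshold are hypothetical.

```python
# Minimal sketch of the MLOps lifecycle loop: train, deploy, monitor, retrain.
# All names and the toy "model" (a simple mean) are illustrative placeholders.

def train(data):
    """Stand-in for model training: the 'model' is just the data mean."""
    return sum(data) / len(data)

def evaluate(model, data):
    """Stand-in for validation: mean absolute error against fresh data."""
    return sum(abs(x - model) for x in data) / len(data)

def lifecycle(initial_data, new_batches, drift_threshold=1.0):
    """Retrain and deploy a new version whenever the model degrades on new data."""
    model = train(initial_data)
    versions = [model]                      # every deployed model version
    for batch in new_batches:
        if evaluate(model, batch) > drift_threshold:
            model = train(batch)            # refine on the new data
            versions.append(model)          # deploy the new version
    return versions

versions = lifecycle([1.0, 1.2, 0.8], [[1.1, 0.9], [5.0, 5.2]])
print(len(versions))  # the drifted second batch triggers one retrain
```

The key idea is that the loop never ends: monitoring output feeds back into the decision to retrain, which is what distinguishes an ML lifecycle from a one-shot software release.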

Why do we need MLOps?

MLOps streamlines process collaboration and integration through the automation of retraining, testing, and deployment.

AI projects include stand-alone experiments, ML coding, training, testing, and deployment, all of which must fit into the CI/CD pipelines of the development lifecycle.

MLOps automates model development and deployment to enable faster go-to-market and lower operational costs. It allows managers and developers to become more agile and strategic with their decision-making while addressing the following challenges.

Unrealized Potential: Many firms are striving to include ML in their applications to solve complex business problems. However, for many, incorporating ML into production has proven even more difficult than finding expert data scientists, useful data, and optimized training models.

Even the most sophisticated software solutions get caught up in deployment and become unproductive.

Frustrating Lag: Machine learning engineers often struggle to deploy an already trained model. Communication between the operations and engineering teams requires incremental sign-offs and facilitation spanning weeks or months. Sometimes even a straightforward model enhancement, such as switching from one ML framework to another, can feel insurmountable.

Fatigue: Production processes lead to frustrated or underutilized engineers whose projects may not make it to the finish line. This causes process fatigue that stifles creativity and diminishes the motivation to deliver benchmark customer experiences.

Weak Communication: Data engineers, data scientists, researchers, and software engineers often seem worlds apart from the operations team, both in physical location and in thought process.

There is rarely sufficient streamlining to bring development and data management work to full production.

Lack of Foresight: Many ML data scientists have no reliable way of knowing that their trained models will work correctly in production. Even writing test cases and continually auditing their work with QA or unit testing cannot prevent the data encountered in production from differing from the data used to train these models.

Therefore, gaining access to production telemetry to evaluate model performance against real-world data is very important.

However, non-streamlined CI/CD processes for placing a new model into production are a significant hindrance to deriving value from machine learning.

Misused Talent: Data scientists differ from IT engineers. They primarily develop complex algorithms, neural network architectures, and data transformation mechanisms but do not handle Microservices deployment, network security, or other critical aspects of real-world implementations.

MLOps merges multiple disciplines with varied expertise to infuse machine learning into real applications and services.

DevOps vs. MLOps

By analogy with DevOps and DataOps, MLOps helps businesses organize continuous cooperation and interaction between all the process participants and the machine learning models created by Data Scientists and ML developers.

Though MLOps evolved from DevOps, the two concepts are fundamentally different in the following ways.

MLOps vs. DevOps at a glance:

  • Nature: MLOps is more experimental; DevOps is more implementation- and process-oriented.
  • Team composition: MLOps needs a hybrid team of software engineers, data scientists, ML researchers, etc., focusing on exploratory data analysis, model development, and experimentation; DevOps teams are mostly composed of data engineers, scientists, and developers.
  • Testing: Testing an ML system involves model validation and training in addition to unit and integration testing; in DevOps, mostly regression and integration testing are done.
  • Pipelines: MLOps requires a multi-step pipeline to retrain and deploy a model automatically; in DevOps, a single CI/CD pipeline is enough to build and deploy.
  • Manual intervention: In MLOps, automatic execution of steps still needs manual intervention before new models are deployed; in DevOps, no manual intervention is required since the entire process can be automated using various tools.
  • Performance degradation: ML models can experience production performance degradation due to evolving data profiles; DevOps artifacts do not degrade as the data around them evolves.
  • Practices: MLOps uses Continuous Training (CT) to retrain candidate models for testing and serving automatically, and Continuous Monitoring (CM) to capture and analyze production inference data and model performance metrics; DevOps uses Continuous Integration and Continuous Deployment but not CT or CM.

The Four Pillars of MLOps

These four critical pillars support any agile MLOps solution, helping deliver machine learning applications safely into a production environment.

The Fifth Pillar of MLOps

Production Model Management: To ensure ML models' consistency and meet all business requirements at scale, a logical, easy-to-follow Model Management method is essential. Though optional, this paradigm streamlines end-to-end model training, packaging, validation, deployment, and monitoring to ensure consistency.

With Production Model Management, organizations can:

  • Proactively address common business issues (such as regulatory compliance).
  • Enable the sustainable tracking of data, models, code, and model versioning.
  • Package and deliver models in reusable configurations.

Model Lifecycle Management: MLOps simplifies production model lifecycle management by automating troubleshooting and triage, champion/challenger gating, and hot-swap model approvals. It produces a secure workflow that efficiently manages your models' lifecycle as the number of models in production grows exponentially.

The key actions include:

  • Champion/challenger model gating introduces a new model (the 'challenger') by initially running it in production alongside its predecessor (the 'champion') for a defined period, measuring whether it outperforms the champion in quality and stability before its promotion is fully automated.
  • Troubleshooting and Triage are used to monitor and rectify suspicious or poorly performing areas of the model.
  • Model approval is designed to minimize risks associated with model deployment, ensuring that all relevant business or technical stakeholders have signed off.
  • Model update offers the ability to swap models without disrupting the production workflow, which is crucial for business continuity.
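The champion/challenger gating described above boils down to a comparison of production metrics collected over the evaluation window. Here is a minimal sketch under the assumption that an accuracy score and a prediction-variance stability measure are already being gathered; the function and metric names are illustrative.

```python
# Minimal sketch of champion/challenger gating: promote the challenger only if
# it beats the champion on both quality and stability. Metric names are assumed.

def promote_challenger(champion_metrics, challenger_metrics):
    """Return True only when the challenger wins on quality AND stability."""
    better_quality = challenger_metrics["accuracy"] > champion_metrics["accuracy"]
    # lower variance across the evaluation window = more stable predictions
    more_stable = challenger_metrics["variance"] <= champion_metrics["variance"]
    return better_quality and more_stable

champion = {"accuracy": 0.91, "variance": 0.04}
challenger = {"accuracy": 0.93, "variance": 0.03}
print(promote_challenger(champion, challenger))  # True: wins on both counts
```

In a real gating workflow, this check would run only after the defined observation period and would still feed into the model approval step rather than promoting the challenger unilaterally.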

Production Model Governance: Organizations have to comply with regulations such as the CCPA and the EU/UK GDPR before putting their machine learning models into production.

Organizations need to automate model lineage tracking (approvals, versions deployed, model interactions, updates, etc.).

MLOps offers enterprise-grade production model governance to deliver:

  • Model version control
  • Automated documentation
  • Comprehensive lineage tracking and audit trails for the suite of production models

This helps minimize corporate and legal risks, maintain production pipeline transparency, and reduce/eliminate model bias.
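The version control and lineage tracking listed above can be pictured as a small model registry that records every registration with its approver and timestamp. This is a toy sketch, not a specific governance product; the class, its methods, and the storage URIs are all hypothetical.

```python
# Minimal sketch of model version control with an audit trail.
# ModelRegistry and its API are illustrative, not a real governance tool.

import datetime

class ModelRegistry:
    def __init__(self):
        self.versions = {}     # model name -> list of version records
        self.audit_log = []    # append-only trail of every action

    def register(self, name, artifact_uri, approved_by):
        """Record a new model version along with who approved it and when."""
        version = len(self.versions.setdefault(name, [])) + 1
        record = {
            "version": version,
            "artifact": artifact_uri,
            "approved_by": approved_by,
            "registered_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        self.versions[name].append(record)
        self.audit_log.append(("register", name, version))
        return version

    def lineage(self, name):
        """Full registration lineage for one model."""
        return self.versions.get(name, [])

registry = ModelRegistry()
registry.register("churn-model", "s3://models/churn/v1", approved_by="risk-team")
registry.register("churn-model", "s3://models/churn/v2", approved_by="risk-team")
print(len(registry.lineage("churn-model")))  # two versions, each audited
```

The append-only audit log is what makes lineage tracking useful for compliance: every deployment decision can be reconstructed after the fact.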

Benefits of MLOps

Open Communication: MLOps helps merge machine learning workflows in a production environment to make data science and operations team collaborations frictionless.

It reduces bottlenecks formed by complicated and siloed ML models in development. MLOps-based systems establish dynamic and adaptable pipelines that extend conventional DevOps systems to handle ever-changing, KPI-driven models.

Repeatable Workflows: MLOps allows automatic and streamlined workflow changes. A model can move through processes that accommodate data drift without significant lags. MLOps consistently gauges and tracks model behavior and outcomes in real time while streamlining iterations.

Governance / Regulatory Compliance: MLOps helps enforce regulatory requirements and stringent policies. MLOps systems can reproduce models in compliance with the original standards. As subsequent pipelines and models evolve, your systems keep playing by the rule book.

Focused Feedback: MLOps provides sophisticated monitoring capabilities, data drift visualizations, and data metrics detection over the model lifecycle to ensure high accuracy over time.

MLOps detects anomalies in machine learning development using analytics and alerting mechanisms, helping engineers quickly understand their severity and act promptly.
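The drift detection and alerting just described can be sketched with a very simple statistic: compare the mean of live feature values against the training baseline and grade the severity of the shift. Real monitoring uses richer tests; the threshold values and field names here are illustrative assumptions.

```python
# Minimal sketch of data-drift detection with severity-graded alerting.
# The mean-shift test and thresholds are deliberately simplistic placeholders.

def detect_drift(training_values, production_values, threshold=0.5):
    """Flag drift when the production mean shifts beyond the threshold."""
    baseline = sum(training_values) / len(training_values)
    live = sum(production_values) / len(production_values)
    shift = abs(live - baseline)
    severity = "critical" if shift > 2 * threshold else "warning"
    return {"drifted": shift > threshold, "shift": shift, "severity": severity}

alert = detect_drift([10, 11, 9, 10], [14, 15, 13, 14])
print(alert["drifted"], alert["severity"])  # a large shift raises a critical alert
```

In practice this check would run continuously against production telemetry, and a "critical" result would trigger the retraining cycle described earlier.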

Reducing Bias: MLOps can prevent development biases that can lead to misrepresenting user requirements or subjecting the company to legal scrutiny. MLOps systems ensure that data reports do not have unreliable information. MLOps support the creation of dynamic systems that do not get pigeonholed in their reporting.

MLOps boosts ML models' credibility, reliability, and productivity in production to reduce system, resource, or human bias.

MLOps Target Users 

Data Scientists: MLOps helps Data Scientists collaborate with their Ops peers and offload most of their daily model management tasks, freeing them to discover new use cases, work on feature plans, and build in-depth business expertise.

Business Leaders and Executives: MLOps lets decision-makers scale organization-wide AI capabilities while tracking KPIs influencing outcomes.

Software Developers: A robust MLOps system provides developers with a functional deployment and versioning system that includes:

  • Clear and straightforward APIs (REST)
  • Developer support for ML operations (documentation, examples, etc.)
  • Versioning and lineage control for production models
  • Portable Docker images

DevOps and Data Engineers: MLOps offers a one-stop solution for DevOps and data engineering teams to handle everything from testing and validation to updates and performance management. This allows them to generate value from, and scale, internal deployment and service monitoring.

The benefits include:

  • No-code prediction.
  • Anomaly and bias warnings.
  • Accessible and optimized APIs.
  • Swappable models with the gating automation of choice, smooth transitioning, and 100% uptime.

Risk and Compliance Teams: An MLOps infrastructure improves the quality of oversight for complex projects. A reliable MLOps solution supports customizable governance workflow policies, process approvals, and alerts.

It promotes a system that can self-diagnose issues and notify the relevant risk-management stakeholders, allowing for tighter enterprise control over projects. Additional MLOps management capabilities include predictions-over-time analysis and audit logging.

Final Thoughts

A well-planned MLOps strategy can lead to more efficiency, productivity, accuracy, and trusted models for the long road ahead. All it takes is embracing its potential systematically and in line with your production environment needs.

Radiant Digital's MLOps experts can help you achieve this. Connect with us today to discuss the possibilities.

Digital Twin - Converging the Virtual and Physical worlds to Accelerate Transformational Innovation

If minimizing failures, shortening development cycles, and smoothing the product development process are your paramount goals, then Digital Twin adoption should be a top priority. Digital Twin offers visibility into your production line and helps predict the future of various processes. It essentially helps maximize OEE, optimize productivity, and improve business profitability. In other words, Digital Twin empowers engineers to analyze, explore, and assess physical assets, processes, and systems using virtual tools. This ability gives a highly accurate view of what is happening now and what will happen next.

What is Digital Twin?

Digital Twin is a technology that engages AR, VR, 3D graphics, cloud, AI, data modeling, and other emerging technologies to build a virtual model of a system, process, product, service, or other physical objects. This virtual replica of the physical world gets updated based on real-time updates and environmental parameters.

'Digital Twin' is often mistaken for being a simulation. But, in reality, it merges business logic, IoT data, modeling and simulation, and Data Analytics to predict a physical system's behavior. Converging virtual and physical worlds helps manufacturers and AR/VR businesses head off problems before they occur, prevent downtime, improve the scope of their products/services and even plan for the future by using simulations. A contemporary Digital Twin combines multiple interacting systems to account for different facets of the physical system.

Origins in Aerospace

The origins of the Digital Twin can be traced back to NASA's Apollo 13 mission. NASA had built multiple simulators of various systems of the actual spacecraft, initially used to train astronauts for failure management. When an explosion in the oxygen tanks critically affected the main engine, real-time data was used to modify the simulators and emulate the damaged spacecraft's condition. Instead of IoT, advanced telecommunications handled the two-way data transfer. The modified simulations provided critical information to the crew, ensuring their safe return to Earth. Thus, NASA had effectively set up a precursor of the modern-day Digital Twin framework.

Why Digital Twin is Helpful

The Digital Twin is used in specified analytics workflows to facilitate the planned business outcome, using consistently acquired environmental and operational data. Consistent data flow allows the Digital Twin model to continually adapt to environmental and operational changes and deliver the best results. The Digital Twin is thus a living model of the physical asset or system, offering a near-accurate representation during development. The technology can be rapidly and easily scaled for quick deployment to similar applications.

Digital Twin helps model at these levels:

  • Component level – Focus is on a single, highly critical component within the manufacturing process/facility/production line.
  • Asset level – A Digital Twin is created for a single asset within the production line.
  • System level – The Digital Twin helps monitor and improve a production line system.
  • Product level – This technology helps monitor and test a product in real-time as used by real customers or end-users.
  • Process level – The Digital Twin focuses on optimizing processes like design, development, and production. It also relates to the distribution and consumption lifecycle of end-products and the development of future products.

Digital Twins allow engineers to test a physical product through virtual modeling and monitor which aspects may need to be improved, replaced, or augmented. It also helps evaluate the risks and issues that might occur in the future without ever using the product.

Additionally, Digital Twin can help:

  • Enhance product traceability.
  • Test, validate, and refine assumptions.
  • Increase the level of integration between unconnected systems.
  • Remotely troubleshoot equipment, regardless of the geographical location.

A digital twin performs real-time tests on a Minimum Viable Product (MVP), process, or system based on real-time updates (data and status) and real-world dynamics. The state of the Digital Twin changes as it receives new data from the physical object. As this model matures, it generates more accurate and valuable outputs.

Building Blocks of a Digital Twin

A Digital Twin needs data, algorithms, sensors, and a data access framework. Running through this data network is a digital thread that establishes links and correlations. The digital thread ensures that all the process components can access the latest sensor data throughout a product lifecycle (sample shown below).

A digital thread can follow the entire life cycle of different products. It can connect the design requirements to its implementation, manufacturing instructions, supplier management, and end-customer processes. A digital thread offers a complete overview of the product’s performance across business functions and departments based on real-time data.

Benefits of Digital Twin Technology

  • Improve the reliability of equipment and production line processes.
  • Enhance OEE, reduce downtime, and boost productivity.
  • Aid better resource utilization and reduce wastage.
  • Help improve product availability, marketplace reputation, and quality.
  • Lower maintenance costs through predictive maintenance and increase visibility by at least 30%.
  • Reduce production time, lead time, and time-to-value by nearly 30%.
  • Improve customer service through remote configuration and product customization.
  • Make supply and delivery chains more efficient.
  • Improve ROI and profitability.

Challenges Solved by Digital Twin

In a product lifecycle, Digital Twin can solve the following challenges.

Engineering Design: Digital Twin helps meet complex product requirements and stringent regulatory requirements. It helps:

  • Support rapid development cycles.
  • Explore the effects of various design alternatives.
  • Perform simulations and testing to ensure that product designs won’t go astray.

Manufacturing Build: Digital Twin helps meet the changing demands for better efficiency, yield, and quality. It adds clarity to how a projected change might impact costs or schedules in a manufacturing project.

Operations and Service: Digital Twin helps minimize unexpected downtime, worker safety incidents, and operational glitches. It provides the current operational status and statistics on system alerts, maintenance issues, KPIs, etc.

Handling Data

Present data – Digital Twin works on real-time data from equipment sensors, manufacturing platforms and systems’ output, and data from all the systems in the distribution chain. It may also utilize departmental business system data like customer service or purchase information.

Future data – Digital Twin can implement the model with ML or analytics-based predictive data from engineers.

Essential Components of a high-fidelity Digital Twin

Business Logic: This provides a defined set of rules for the Digital Twin to manage data. It defines the interactions between business objects, the conditions, and the operational sequence. The business logic should also enforce routes and methods by which the said business objects are accessed and modified.

Internet of Things (IoT) data: Data is a vital asset of a fully functional Digital Twin. Real-time sensor telemetry data from physical devices need to be collected, stored, and relayed as input. IoT data offers visibility into the system lifecycle. It replaces assumptions with real data for systems planning and designing. This data closes the feedback loop with system usage data aiding operational decisions. Analyzing the historical data system patterns and performance stats against real-time sensor data optimizes asset utilization.

Modeling and Simulation: A Digital Twin model interface may include fields like property, telemetry, component, and relationship. For example, suppose a Digital Twin user is collecting temperature data of a room in a building. In that case, the property is temperature, the telemetry is temperature readings, and the component is a thermostat. The relationship represents how the room relates to the floor and the floor to the building, creating a knowledge graph of interrelated models. With models in place, users can simulate what-if scenarios to make design, operational, and performance decisions.
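The property / telemetry / component / relationship interface just described can be sketched as a small data model, using the same room-and-thermostat example. The classes and field names below are illustrative assumptions, not a specific twin platform's API.

```python
# Minimal sketch of a Digital Twin model with property, telemetry, component,
# and relationship fields. The DigitalTwin class itself is a hypothetical toy.

from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    name: str
    properties: dict = field(default_factory=dict)     # e.g. current temperature
    telemetry: list = field(default_factory=list)      # stream of sensor readings
    components: list = field(default_factory=list)     # e.g. a thermostat
    relationships: dict = field(default_factory=dict)  # edges in the knowledge graph

    def ingest(self, reading):
        """New sensor data updates both the telemetry stream and the live property."""
        self.telemetry.append(reading)
        self.properties["temperature"] = reading

room = DigitalTwin("room-101", components=["thermostat"])
floor = DigitalTwin("floor-1")
room.relationships["located_on"] = floor.name   # room -> floor -> building graph

room.ingest(21.5)
room.ingest(22.0)
print(room.properties["temperature"], len(room.telemetry))
```

Chaining such relationships (room to floor, floor to building) is what produces the knowledge graph of interrelated models on which what-if simulations can run.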

Output data analytics: Real-time sensor data collected from the physical system can be analyzed and visualized to interpret and communicate data patterns. These data patterns can help make effective decisions, predictions, and business performance improvements. Historical and current data comparisons help forecast future trends and improve high-level decision-making.

Digital Twin and Immersive Technologies

Sensors and actuators that help monitor and control systems capture real-time data. By connecting them to the Digital Twin, data on the physical system's operations can be analyzed and processed. The same data can be used to produce 3D objects and images overlaid with real-time sensor data. This helps product maintenance and field service, where the Digital Twin can follow the product's location and movement. Digital Twin is useful for diagnostics, predictive maintenance, and product development, while AR/VR demonstrate their utility for product visualization, equipment maintenance, and training.

Market Perspective

  • In Germany, a Siemens plant implemented Digital Twin as its production scaled up to 15 million units a year without expanding the factory floor or adding human resources. This led to a defect rate of almost zero, with 99.99% of units requiring no adjustments.
  • Since 2015, GE has implemented over 500,000 Digital Twins that model wind turbines before they are built, leading to 20% improved efficiency.
  • Experts predict the global Digital Twin market to reach USD 48.2 billion by 2026 with a CAGR of 58%.

Wrapping up

Digital Twin can integrate system insights, improve visibility on machine states, and trigger appropriate remedial business workflows. Embracing this potentially disruptive technology now can help businesses remain relevant, competitive, and digitally transformed. Radiant is playing a key role in advancing this new technology!

Enterprise BPM Transformation - Embrace the change

Business leaders often experience the heat to remain competitive, deliver quality-driven products and services, optimize costs, and improve productivity.

Many of them are leaning on Business Process Management (BPM) software to make their daily operational processes adaptable, agile, efficient, and reliable to remain relevant in a dynamic tech marketplace.

Some companies consider BPM an evolution of application development that often incorporates automation. Others use BPM to implement a mix of Lean, Six Sigma, CI, or TQM methodologies. Either way, BPM software tools enhance the operational environment to facilitate change and provide visibility into how things actually work.

This traditional model supports a domain-driven design, thus allowing for cohesive and high-performing solutions.

Before we delve deeper into the benefits and types of BPM, let us understand what BPM stands for.

What is BPM?

According to Gartner, Business Process Management is a discipline that implements various tools and methods to design, model, execute, monitor, and optimize business processes.

It is a holistic and systematic approach to achieving optimized business outcomes.

BPM focuses on making routine transactions consistent and automated wherever human interaction is involved. It helps businesses reduce their operational costs by minimizing waste and rework and increasing the team's overall efficiency.

What BPM is not

BPM is often mistaken for a software product. Instead, it is a collection of organized tools that help businesses automate multi-step, complex, and repetitive business processes.

BPM is also not a Task Management process for handling or organizing a set of activities. Rather, it focuses on optimizing the mundane, ongoing processes that follow a predictable pattern.

Purpose of BPM

BPM is used to manage the five critical stages of an organization's business processes.

Design: Before designing a product, business analysts review business rules, interview the stakeholders involved, and discuss the management's desired outcomes. This helps everyone understand the business rules and ensures they are implemented to produce results aligned with the organizational goals.

Model: BPM helps model a framework/solution/process by identifying, defining, and representing new processes to meet the current business rules.

Execute: BPM tools help execute the business process through live and incremental testing with a small group of target users before opening it to a larger user group. Where automated workflows are involved, the processes can be expedited and streamlined to minimize errors.

Monitor: With BPM tools, teams can establish Key Performance Indicators (KPIs) and track real-time performance through dynamic reports or cognitive dashboards. BPM lets analysts focus on the macro or micro indicators of an entire process vs. each process segment in isolation.
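The Monitor stage above amounts to comparing live process measurements against KPI targets for a dashboard or report. Here is a minimal sketch; the KPI names and target values are hypothetical examples, not standard metrics.

```python
# Minimal sketch of KPI tracking for a BPM monitoring dashboard.
# KPI names and targets below are illustrative placeholders.

def kpi_status(measurements, targets):
    """Compare live process measurements to their KPI targets."""
    return {
        name: "on track" if measurements.get(name, 0) >= target else "at risk"
        for name, target in targets.items()
    }

targets = {"orders_processed_per_hour": 120, "first_pass_yield": 0.95}
live = {"orders_processed_per_hour": 130, "first_pass_yield": 0.91}
print(kpi_status(live, targets))  # yield misses its target and is flagged
```

A real BPM tool would compute these per process segment as well as for the end-to-end process, which is what lets analysts switch between macro and micro views.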

Optimize: BPM puts an effective reporting system in place to steer operations toward process improvement. Business Process Optimization (BPO) helps redesign business processes, integrate them, and improve process efficiency by aligning individual business processes with a comprehensive strategy.

Business process management is crucial for organizations that thrive on high-volume information processing and consistently analyze and improve their data quality and usability.

By implementing a BPM solution, you can provide quick responses to challenges and opportunities. At the same time, business leaders would be well-equipped for decision-making that impacts overall company growth.

BPM is necessary if you want to:

  • Analyze, filter, and combine information for timely access, data democratization, analysis, and improvement.
  • Automatically synchronize data that is fed into the system with the existing processes.
  • Design customized operational models to cater to unique business use cases.
  • Keep track of all the digital data fed into your Enterprise system and the systemic changes that evolve.
  • Gain early discovery of existing loopholes and inefficiencies that impact outcomes, incapacitate the business system, or slow down processes.

Benefits of Implementing Business Process Management

Business Process Management helps organizations achieve total digital transformation and realize significant organizational goals.

Other additional benefits include:

  1. Improved Business Agility: An optimized business process lifecycle is necessary to meet dynamic market conditions. BPM allows organizations to streamline business processes, implement change quickly without errors, and execute critical functions with maximum productivity. Altering, reusing, and customizing workflows makes business processes more responsive, thanks to deeper insight into how process modifications will impact business outcomes.
  2. Higher Revenues: A business process management tool alleviates bottlenecks and reduces lead times, delivering faster time-to-value by giving customers quicker access to services and products. It helps minimize revenue leaks by tracking resource utilization, wastage, and performance.
  3. Higher Efficiency: When the right information is input, teams can closely monitor delays and re-allocate resources as needed. Automation reduces repetitive tasks to a single execution, while intelligent process automation helps take human-like decisions for optimized business process results.
  4. Better Visibility: By testing different project models with different parameters/designs/scenarios, and real-time monitoring, results can be compared for better decision-making and outcomes tracking. With BPM tools, you can determine how a process would work under optimal conditions and make high-performance adjustments.

Types of BPM Systems Based on Purpose

Integration-centric: This BPM system helps organizations build, improve, and integrate processes, applications, application objects, or system artifacts in their existing enterprise systems (e.g., HRMS, CRM, ERP, etc.) or new systems to minimize management risks and overhead.

Integration-centric BPM systems are characterized by extensive connectors and API programming to support agile, model-driven development. They work primarily with the business's SOA to monitor the interaction between computer programs and other digital entities.

Human-centric: These BPM systems help streamline processes that need extensive human intervention. These include multiple levels of approval and diverse tasks performed by experts. These platforms have user-friendly interfaces that don't need programming experience and support quick tracking and user notifications.

These tasks include data review, (repeatable) document creation (contracts, proposals, and reports), image development for design, approvals or authorizations, etc.

Document-centric: This solution is required when a document (e.g., invoices, contracts, project documents, etc.) is at the heart of the process. The BPM enables role-based routing, formatting, error-checking, verification, validation, and getting the document digitally signed as each task passes along the predefined workflow.

This system's primary goal is to delve into the processes entirely and discover any obstacles that may prevent a seamless document flow. 
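Document-centric routing of this kind can be pictured as a predefined pipeline of stages, each with its own check, through which the document passes until it is signed or rejected. The stage names and validation rules below are illustrative assumptions for an invoice-style document.

```python
# Minimal sketch of document-centric, role-based routing: each stage validates
# the document before passing it along. Stages and rules are hypothetical.

WORKFLOW = [
    ("review", lambda doc: "amount" in doc),                # error-checking
    ("approve", lambda doc: doc.get("amount", 0) < 10000),  # authorization rule
    ("sign", lambda doc: True),                             # digital-signature step
]

def route(document):
    """Run the document through each stage; stop at the first failed check."""
    completed = []
    for stage, check in WORKFLOW:
        if not check(document):
            return {"status": f"rejected at {stage}", "completed": completed}
        completed.append(stage)
    return {"status": "signed", "completed": completed}

invoice = {"id": "INV-42", "amount": 2500}
print(route(invoice)["status"])  # passes every stage and gets signed
```

Because the workflow is declared as data, adding a stage or tightening a rule changes the routing without touching the engine, which is the core appeal of BPM tooling.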

jBPM – Radiant's Recommendation

jBPM is a free, open-source BPM suite with a workflow engine that bridges the gap between business analysts and developers.

It helps improve efficiency, accelerate productivity, and improve outcomes through automated decisioning support for business users and developers.

Stay tuned for an article on jBPM capabilities and Radiant's approach to Enterprise Business Transformation.

Using UX Research to Optimize Software Design

In recent years, the concept of user experience (UX) has become more prominent in the design and development of software, hardware, and other innovative technology. The essence of UX is putting the needs and perspectives of the people who would most likely use the technology front and center in the design and development process. The two critical components of the UX concept are user experience design (UXD) and user experience research (UXR). UXD is relatively familiar to many tech professionals, as it employs rapid iteration concepts such as design thinking to design interfaces and other aspects of technology with empathy and flexibility.

This post focuses on UXR, a powerful practice for informing and shaping designs and prototypes so that teams build the right thing for target users. UXR is the process of researching to understand what users or potential users of technology need to do work, complete a transaction, or otherwise have a worthwhile experience using a website, platform, app, or another type of technology. The essence of UXR is gathering data and synthesizing it to improve usability.

UXR plays a vital role in ensuring that the UXD process produces a design closely attuned to actual customer or user needs. It enriches the design process by incorporating user context; it's not just a great idea about what people need, formulated solely in a design studio or lab. UXR dramatically increases the chance that the technology to be built will solve a significant problem rather than just bringing someone's "brilliant idea" to fruition. A UX approach that incorporates research saves targeted end-users the headache of non-user-friendly technology and protects the company or agency that will use the technology to get work done from wasting substantial sums of money.

Another critical consideration of UXR is its origins. UXR as a practice was mainly developed by master’s- and Ph.D.-qualified social scientists with extensive backgrounds in studying human behavior and applying insights to the development and improvement of products and programs. Early pioneers of this work include anthropologist Ken Anderson and sociologist Sam Lander. Early adopters of UXR as part of a strategy that includes UXD are tech companies such as Facebook, Google, Instagram, and Amazon.

Let's delve deeper into UXR's essential concepts that make a software designer's life easier.

Understanding a Typical UX Research and Design Flow

If one uses the design thinking framework as pictured below, UXR is typically the leading activity during the understanding phase (empathize and define) and the materialize phase (test). The graphic's circular shape indicates that the understanding phase happens at the beginning of the design cycle of a new product or the birth of a redesign or reiteration. The understanding phase focuses on formative or exploratory research to surface pain points or needs. The design cycle's testing portion entails more summative research to test a design prototype's usability and intuitiveness. Some testing designs incorporate the concept of “user delight” to surface how potential users of a technology subjectively perceive or feel about the prototype. UXR can also be used before a design cycle to surface deep-seated needs and make concept recommendations for new products that have not previously been conceptualized by other members of the UX or leadership teams.

To fully understand the value of UXR, it is helpful first to understand research methods and the broad types of UXR designs. Research methods fall into two major categories: quantitative and qualitative. Quantitative methods are heavy on numerical output and are well suited to answering questions about the degree to which something occurs or is a problem. Familiar examples include surveys and statistics. Qualitative methods are heavy on observational and text data. They are well suited to answering “why” questions and gleaning rich data about a small sample of respondents' particular experiences. While most methods used in UXR are qualitative, quantitative methods come into play as well. These methods fit in a research design continuum of discovery, formative, and summative research designs.

Types of UXR Designs

Discovery Research: This approach delves into uncovering the hidden needs and pain points of users. Discovery research is a pathfinding process that helps gather the information needed to conceptualize a new digital experience or other technology. It is generally conducted before the design cycle starts, to inform a company what technology to design and develop to best meet customer needs. One might say discovery research sits at the intersection of strategy and design.

It is similar to formative exploratory research but a little further upstream of the design process. For instance, at Javelyn Technology, the team conducted pathfinding research to assess the level of and need for trust in hyperlocal areas of London before starting design work on a property tech application to facilitate building trust between near neighbors. The team fleshed out the concept before getting wrapped up in complex design concepts.

Methods for discovery research include ethnographic methods such as immersive observation, contextual inquiry, workshops, and semi-structured interviews.

Ethnographic observation entails extensive and detailed observation of a workflow and how it fits into the respondent's daily life. Ethnographic researchers might spend several hours with a respondent, often over several days, to understand the broader context. Observations are usually combined with an informal or semi-structured interview. Data recording is often completed via a combination of very detailed notes and video or audio recording. While ethnographic methods are time-intensive, they can yield rich data on the entire context of a current workflow and pay substantial dividends in designing the right thing.

Contextual inquiry is a modified form of ethnographic research that focuses on understanding the context and process of a specific workflow. In most contextual inquiry designs, the researcher observes a process from start to finish and encourages the respondent to “think aloud,” explaining each step as it is being completed. Often, the session ends with a short semi-structured interview about the respondent’s overall impressions of the process, pain points, and so on. Contextual inquiry is more focused than, and not quite as time-intensive as, traditional ethnography, and it yields targeted yet rich insights about current workflows and processes.

Workshops are similar to focus groups in that they elicit data from a group, but they are more participatory than focus groups or interviews. The format can vary; workshops can include brainstorming, journey mapping, and subject matter expert (SME) panels. Usually, the workshop facilitator frames the topic to explore, breaks the participants into small groups to do an immersive activity related to the subject, and then has each group share a summary of the activity with the larger group. Insights are synthesized and shared with the group at the end of the workshop.

Semi-structured interviews use a pre-determined set of questions to which respondents give open-ended answers. This method is less time-consuming than ethnographic methods or workshops but still yields rich data about respondents' specific experiences. Though it is not feasible to attain sample sizes large enough for robust statistical analysis, semi-structured interviews can produce more precise data than surveys because respondents are not limited to a fixed set of answer choices. Respondents are less likely to pick an answer that is the “best fit” yet still a poor reflection of reality, as sometimes happens in surveys.

While discovery research is not an ideal fit for Agile product design and development cycles, it can be a substantial value add in strategy and direction when conducted ahead of current iterations or program increments (PI). Think of discovery research teams as the scouts making maps that give Agile teams the best way forward.

Exploratory/Formative Research: Formative research designs are most useful in the understanding phase (empathize and define) of the design thinking process, at the beginning of the design or redesign process. A UXR team typically conducts formative research when a broad idea for a new technology or redesign has been conceptualized, perhaps through discovery research, but the details have yet to be fleshed out. Formative research can help optimize the design of a technology concept and may sometimes build a case for pivoting to something else if the idea does not demonstrate the potential to meet actual user needs. Unlike discovery research, formative research can fit reasonably well into Agile product cycles such as iterations or PIs.

Methods conducive to formative research include contextual inquiry, semi-structured interviews, and workshops. Ethnographic research designs can also work in exploratory research but would need modification from traditional ethnographic designs. In some instances, a System Usability Scale (SUS) survey may be used to assess the usability of a current application that the new application is meant to replace, though this method is more commonly used in summative research.

Formative research designs are beneficial for assessing user needs and pain points related to a technology concept and can be conducted quickly.

Summative Research: Summative research designs are most useful in the materialize (test) phase of the design cycle. Summative research is used to evaluate how usable, intuitive, and desirable a technology or product is and is complementary to formative research. This phase of UXR is beneficial in reiterating a prototype to be more usable and better meet user needs.

Methods used in summative research include usability testing, A/B testing, heuristic evaluations, SME panels, SUS, and web analytics.

Usability testing is when a researcher asks a test user to perform a series of tasks and to “think aloud,” describing the process as it occurs. It is complementary to contextual inquiry: where contextual inquiry surfaces strong points and pain points in a current application or process, usability testing does so for a new prototype. While the researcher describes the purpose of the exercise and outlines what tasks are to be performed, he or she politely keeps help to a minimum. This technique is essential for surfacing usability problems or ambiguity in the user interface (UI). In some cases, semi-structured interview questions about perceptions may be embedded in the usability test task scenarios. In some instances, tasks may be timed from start to completion.

A/B testing is a type of usability testing in which respondents are presented with two different versions of a webpage so the versions can be compared. This method is advantageous in a redesign process. In addition to the typical variables in a usability test, an A/B test can measure differences in time on task, sales, and hover time without clicking.
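To illustrate how the quantitative side of an A/B test might be analyzed, the sketch below compares time-on-task samples from two page variants using Welch's t-statistic. The function name and all data are invented for illustration; a real study would use a statistics package and a proper significance threshold for its sample size.

```python
from statistics import mean, stdev
from math import sqrt

def compare_time_on_task(variant_a, variant_b):
    """Compare mean time-on-task (seconds) between two page variants.

    Returns both means and Welch's t-statistic; a larger absolute t
    suggests a more reliable difference (check against a t table
    for the appropriate degrees of freedom to judge significance).
    """
    ma, mb = mean(variant_a), mean(variant_b)
    # Standard error of the difference in means (Welch's formulation,
    # which does not assume equal variances in the two groups).
    se = sqrt(stdev(variant_a) ** 2 / len(variant_a)
              + stdev(variant_b) ** 2 / len(variant_b))
    t = (ma - mb) / se
    return ma, mb, t

# Hypothetical task times (seconds) from six sessions per variant.
ma, mb, t = compare_time_on_task([48, 52, 61, 44, 57, 50],
                                 [39, 42, 35, 46, 40, 38])
print(f"A: {ma:.1f}s  B: {mb:.1f}s  t = {t:.2f}")
```

Here variant B's lower mean time-on-task, paired with a reasonably large t-statistic, would support preferring B in the redesign.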

A heuristic evaluation is where an expert applies “rules of thumb” from an established set of heuristics to assess the usability of an application or web page. While this method yields a concise evaluation from an expert, it is somewhat more subjective than other methods. It is vital to ensure that the right expert is consulted and that the process is a strong fit for the overall research design.

SME panels are modified focus groups in which SMEs review a prototype with a researcher and give feedback based on their impressions. This method has less setup time than a usability test and can be employed when the prototype is still in the wireframe stage, before it is functional at the minimum viable product (MVP) level. SME panels can reveal hidden but vital insights about how well a prototype gels with current user needs, workflows, and conceptual models of what is needed. A limitation of SME panels is that SMEs’ biases in favor of an existing application or technology may color their perceptions of the prototype. SMEs may focus on what they are comfortable with and what they want, rather than on underlying needs. Fixation on the status quo or superficial wants may limit innovation in optimally meeting those underlying needs.

These qualitative methods can be complemented with quantitative methods such as SUS and web analytics. SUS is a validated metric with five positively worded and five negatively worded UX attributes rated on a Likert scale. It does not provide the depth of insight that qualitative UXR methods do, but it can be administered across a much larger sample and is thus more generalizable. SUS can be rapidly deployed to yield a snapshot of people’s perceptions of the usability of a web page or application. Web analytics uses the AI/ML power of an analytics platform such as Google Analytics, often combined with SQL queries over usage data, to discover usage patterns in a live platform or application. Examples of UX insights that may be gleaned from analytics include the number of unique visitors, time on screens, and drop-off frequency and location.
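For reference, SUS scoring follows a fixed formula: each of the ten 1-to-5 responses is converted to a 0-4 contribution (score minus 1 for the positively worded odd-numbered items, 5 minus score for the negatively worded even-numbered items), and the total is multiplied by 2.5 to give a 0-100 score. A minimal sketch (the function name is ours; the respondent data is invented):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded and contribute (score - 1);
    even-numbered items are negatively worded and contribute (5 - score).
    The summed contributions are scaled by 2.5 to a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses, each between 1 and 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# One hypothetical respondent who agrees with the positive items
# and disagrees with the negative ones.
print(sus_score([5, 1, 4, 2, 5, 1, 4, 2, 5, 1]))  # → 90.0
```

Scores from many respondents are then averaged; a mean SUS in the high 60s or above is commonly read as acceptable usability.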

Summative research approaches are vital for testing technology prototypes and deployments to assess usability and desirability.

The graphic below shows the benefits of formative and summative research, and where in the design and development process these techniques are most productively employed.

Building the Right Thing with UX Research

Deep Knowledge about People - UX researchers are trained to be good listeners and observers. They obtain in-depth knowledge about people, what they do, and what they need professionally. Whether they are trained in research methods through UXR training programs or while obtaining advanced social science degrees, UX researchers have thorough training in highly effective research methods. Experienced UX researchers can translate UXR data into insights that drive high-impact design concepts and decisions.

Design Integration - Researchers can integrate with design teams and adapt the research-design-research/test process to an agile design cycle. Designers may assist with research sessions to understand the process and gain insights from users. Researchers can help with brainstorming activities, workshops, journey maps, and other design thinking exercises.

Adaptability in Agile Environments - Integrating UX researchers into agile teams of designers and developers allows for rapid and flexible prototype iteration and refinement. UXR ensures that designs meet actual user needs and minimizes rework.

Meeting Users Where They Are - Integrating UXR in the design and development processes ensures that digital experiences are delivered to users where they are. It helps carry users along in the innovation journey rather than putting them in unfamiliar or uncomfortable territory.

Wrapping up

UXR empowers digital innovation while meeting actual user needs in a highly usable and relatable manner. It’s a rapidly growing and transformational area in the technology landscape for start-ups, flagship tech companies, and legacy enterprises.

Radiant Digital helps organizations check all the boxes in enterprise software design with competitive UX Research services. We focus on research that guides a winning corporate strategy in the new digital economy.

 Connect with us for customized UX Research plans today!

The Problem isn’t the Training; it’s Effective Knowledge Transfer

One of the biggest challenges for organizations is what happens after the training. Typically, training is seen as an isolated event. Afterward, many learning and development professionals and supervisors find themselves asking, “Why is the employee not using the information from training?” or “Why hasn’t the employee’s performance improved following training?” Questions such as these suggest that knowledge transfer did not occur after the employee left the classroom or virtual training session.

This article explores the role of training design and the execution phase of training in driving effective knowledge transfer in the workplace.

What is Knowledge Transfer?

For training to be effective, both learning and transfer of training need to occur. Knowledge transfer can be defined as a learner's ability to successfully apply the behaviors, knowledge, and skills acquired in a learning event to the job, resulting in improved job performance. Trainees can fail to apply training content to their jobs either because the training was not conducive to learning or because the work environment does not give them the opportunity to use the training content or support its correct use. So, how does an organization avoid this as much as possible? First, consider knowledge transfer during the design or purchase of training, and second, develop an execution plan to reinforce the expected learning outcomes of training. Often, knowledge transfer is considered only after training has already occurred; by then, the trainees’ perception of the work environment and its support for training has already influenced their motivation to learn.

Design of Training

As a first step in improving knowledge transfer, let us consider the design of training. Training design includes evaluating how to create a learning environment that helps the trainee achieve the learning outcomes. Boring lectures, a lack of meaningful content in e-learning, and training that doesn’t allow employees to practice and receive feedback all demotivate trainees and make it difficult for them to learn and use what they have learned. However, many companies are using innovative instructional strategies to make training more engaging and to help trainees apply it to their work.

Technique 1: Incorporate Different Training Methods.

Mirror the workplace: The similarity of the tasks and materials in the learning event and the learners’ work environment affects the knowledge transfer rate. Machin and Fogarty, two researchers, found that learning transfer increases when the physical characteristics of the tasks and the learning environment match the performance environment.

  • Instructor-led training can be combined with leader-led discussions and interactive assignments that participants complete in groups in virtual breakout rooms. The benefit of this approach is that it offers trainees the opportunity to engage with the learning material and immediately apply it to their daily work. The sooner the trainee can apply the knowledge, the better for knowledge transfer.
  • Include polling scenarios within webinars that reflect common instances in which the trainee will use the information taught. This keeps learners engaged and ready to apply knowledge after training.

Model the way: Modeling is a technique shown to increase learning transfer, as it provides a demonstration of how to apply learning on the job. This can be done through virtual reality training, which, according to LinkedIn, continues to be on the rise in education. Modeling demonstrates how to apply learning and allows learners to practice what they learn in training, increasing learning transfer by as much as 37 percent, according to Michael Limbach, who has researched several approaches to enhancing learning effectiveness and transfer.

Space it Out: Learning transfer is impossible if learners forget what they learned in training. Use spaced repetition in your training program to minimize the forgetting curve. For example, send snapshots of learning content before the main training event, and follow up with detailed summaries of key topics after training. Will Thalheimer, in his 2006 work, “Spacing Learning Events Over Time: What the Research Says,” identifies that although learning and memory are vital during a training event, knowledge is rapidly forgotten afterward.

He also points out that spacing reinforcement (spaced repetition or interval reinforcement on the job after training) enhances how much trainees retain and apply to their work. Thalheimer and many other researchers find that the closer in time learning is delivered to the situations in which it is needed, the less forgetting will be a factor. Spaced repetition for learning transfer can begin immediately after training and should be built into the execution phase of your training plan. Again, remember that learning transfer should be part of the organization's training strategy before training is implemented.
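One lightweight way to operationalize interval reinforcement is to generate follow-up reminder dates at expanding intervals after the training event, then attach each date to an email, lunch-and-learn, or micro-lesson. The sketch below is illustrative only; the specific intervals are our assumption, not a prescription from the research:

```python
from datetime import date, timedelta

def reinforcement_schedule(training_date, intervals_days=(2, 7, 16, 35)):
    """Return follow-up reinforcement dates after a training event.

    The expanding intervals reflect the spaced-repetition idea that
    each review pushes the forgetting curve further out, so the gaps
    between reinforcements can widen over time.
    """
    return [training_date + timedelta(days=d) for d in intervals_days]

# Hypothetical training event on 1 March 2021.
for d in reinforcement_schedule(date(2021, 3, 1)):
    print(d.isoformat())
```

A learning team could feed these dates into its email or LMS scheduling tool so that reinforcement happens consistently rather than ad hoc.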

Technique 2: Ask your Target Audience

In designing training programs, a small focus group drawn from the training's target audience can help a learning designer assess what trainees need or desire from the training. Gathering such feedback can enhance the training program, thus improving the likelihood that knowledge transfer occurs following training.

After the Training Event

As mentioned previously, interval reinforcement can be implemented to help curb forgetting after initial training. Organizations can apply this approach by delivering the training information via daily emails, hosting lunch-and-learns, or using virtual reality tools to amplify employee engagement in the learning journey.

By implementing a consistent post-training program of interval reinforcement, organizations can ensure the learning process continues and is applied on the job. Here at Radiant Digital, we can help you map an effective training strategy, from a needs assessment through the knowledge transfer methods that make learning stick in your organization.

Radiant Gains Recognition as Verizon is Named to IDG’s 2020 CIO 100

Radiant is proud to announce that it was an integral part of the team that helped Verizon be named to IDG’s 2020 CIO 100 list of innovative organizations that exemplify the highest levels of operational and strategic excellence in IT. This prestigious honor celebrates companies and award-winning IT projects which deliver business value, optimize business processes, enable growth, and create competitive advantages.

Utilizing a human-centered, design thinking approach, our team delivered digital transformation with Verizon’s Network Single Pane of Glass (nSPOG). The nSPOG is a comprehensive cross-enterprise UX framework that unifies user experience, simplifies workflows, automates repetitive tasks, enables collaboration, and allows the business to respond to customers more quickly, accurately, and effectively.

  • The nSPOG transforms the day-to-day work experience of 10,000+ Network and Technology specialists.
  • It promotes user efficiency and improves customer service.
  • The framework reduces users’ cognitive load through features such as 1) guided navigation for task-based systems that implement a variety of complex provisioning workflows; 2) field display automation features that reduce redundant inputs to lengthy systems of record, and 3) predictive analysis to provide users with guidance as they make decisions regarding work prioritization and areas of attention.

The nSPOG is technically advanced and cross-platform (web, mobile web, native), and the system leverages unique features to provide additional value. For instance, in mobile contexts, the framework integrates technologies such as cameras and GPS to provide relevant information for workers in the field, while in the desktop context the framework integrates with native system capabilities to offer a conversational interface that provides significant advantages over typical form-based enterprise application interfaces.

Network Single Pane of Glass (nSPOG) is the culmination of initiatives in machine learning, intelligent agents, process automation, agile application development, and world-class user experience design.

Network Single Pane of Glass (nSPOG) delivers significant efficiencies and cost savings across Verizon’s enterprise IT footprint.

About the Award

The CIO 100 celebrates 100 IT organizations for driving digital business growth through technology innovation. The award is an acknowledged mark of enterprise excellence.

In this webinar, our Big Data Practice Director, Sri Arepally, takes a deep dive into DataOps and its collaborative data management practice focused on improving communication, integration, and automation of data flow.