Data Science – The Cornerstone of Certainty during Uncertain Times

Data is a crucial digital asset for any individual or organization in their decision-making journey. According to IDC, global data will grow to 175 zettabytes by 2025. This explosion of data from sources like connected devices demands that organizations derive valuable insights from it to make smarter, data-driven decisions. Data Science helps enterprises understand their data and put it to work in processes that would otherwise be time-consuming and expensive. Collecting, analyzing, and managing data on demand enables businesses to curb wastage, detect revenue leaks, and solve problems proactively to propel bottom lines. 

Data Science is a boon to any organization that needs to understand a problem, quantify data, gain visibility and insights, and apply data to decision-making. In this blog, we will take you through the basics of Data Science and give you a sneak peek into how top companies are implementing it.

Data Science – Definition

Data science is an interdisciplinary field of expertise that combines scientific methods, algorithms, processes, and systems to extract actionable insights from structured and unstructured data and apply the knowledge across a broad range of application domains.

It converges domain expertise, computer programming and engineering, analytics, Machine Learning algorithms, mathematics, and statistical modeling to extract meaningful insights from data. In a business setting, the Data Science process starts with understanding a problem, extracting and mining the required data, continues with data handling and exploration, moves on to data modeling and feature engineering, and culminates in data visualization.

Purpose

Data Science helps find patterns within the blocks of information fed into a system. It builds data dexterity: the ability to implement and visualize various forms of data in support of the workflow below.

It mainly serves the following stages of business process operations:

  • Design
  • Model/Plan
  • Deploy & execute
  • Monitor & control
  • Optimize & redesign

Benefits for Data-centric Industries

A recent study showed that the global Data Science market is expected to grow to $115 billion by 2023. The following benefits contribute to this growth.

Better Marketing: Companies are leveraging data for marketing strategy analysis and better advertisements. By analyzing customer feedback, behavior, and trend data, companies can match customer experiences to their expectations.

Customer Acquisition: Data Scientists help companies analyze customer needs. Companies can then tailor their offerings to potential customers.

Innovation: The abundance of data enables faster innovation. Data Scientists help draw creative insights from conventional designs. Analyzing customer requirements and reviews helps improve existing products and services or craft new, innovative ones.

Enriching Lives: In Healthcare, gaining timely insights from available data shapes seamless patient care. Data Science helps collect and streamline electronic health records (EHRs) and patient history data to deliver essential healthcare services.

Why Data Science?

With advancements in computational capabilities, Data Science makes it possible for companies to analyze large-scale data and draw insights from this massive trove of information. Furthermore, with Data Science, industries can make sound, data-driven decisions.

  • With the right tools, technologies, and data algorithms, we can leverage data to make predictions or improve decision-making.
  • Data Science helps in fraud detection using advanced Machine Learning algorithms (see the sketch after this list).
  • It allows businesses to build and enhance intelligence capabilities when combined with AI in the field of automation.
  • Companies can perform sentiment analysis to gauge customer brand loyalty.
  • It helps companies make product/service recommendations to customers and improve their experience.
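
As a minimal, hypothetical sketch of the fraud-detection bullet above, the snippet below trains an unsupervised anomaly detector on toy transaction data and flags outliers. The features (amount, hour of day) and thresholds are invented; production systems use far richer signals and labeled history.

```python
# Hypothetical fraud-detection sketch: flag anomalous transactions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)
# Simulated normal transactions: modest amounts at daytime hours.
normal = np.column_stack([rng.normal(50, 15, 500), rng.normal(14, 3, 500)])
# A few suspicious ones: large amounts in the middle of the night.
suspicious = np.array([[950.0, 3.0], [1200.0, 2.0]])
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)  # -1 marks likely anomalies

print("Flagged transactions:\n", transactions[labels == -1])
```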

Data Science Components

  1. Statistics includes methods of collecting and analyzing large volumes of numerical data to extract valuable insights.
  2. Visualization techniques help condense large data sets into easy-to-understand, digestible visuals.
  3. Machine Learning includes building and studying predictive algorithms that learn from historical data to forecast future outcomes (see the sketch following this list).
  4. Deep Learning is an evolution of Machine Learning in which the algorithm itself selects the analysis model to apply.
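
As a toy illustration of the Machine Learning component, the sketch below fits a predictive model on a handful of invented records and scores unseen ones. The features (monthly visits, support tickets) and churn labels are hypothetical.

```python
# Toy predictive model: will a customer churn?
from sklearn.linear_model import LogisticRegression

# [monthly_visits, support_tickets] -> churned (1) or retained (0)
X = [[2, 5], [30, 0], [4, 4], [25, 1], [3, 6], [28, 0]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)
print(model.predict([[5, 3], [27, 1]]))   # hard class predictions
print(model.predict_proba([[5, 3]]))      # churn probability estimates
```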

How Companies are Revolutionizing Business with Data Science

Facebook – Monetizing Data through Social Networking & Advertising

  • Textual Analysis: Facebook uses a homegrown tool called DeepText to extract, learn, and analyze the meaning of words in posts.
  • Facial Recognition: DeepFace uses a self-teaching algorithm to recognize photos of people.
  • Targeted Advertising: Deep Learning is used to pick and display advertisements based on the user’s search history and preferences on their browser or Facebook.

Amazon – Data Science to Transform E-commerce 

  • Supply Chain and Inventory: Amazon’s anticipatory shipping model uses Big Data to predict the products potential customers are most likely to purchase. It analyzes purchase patterns and informs supply chain management (SCM) for warehouses based on the customer demand around them.
  • Product Decisions: Amazon uses Data Science to gauge user activity, order history, competitor prices, product availability, and more. Custom discounts on popular items are offered for better profitability (a simplified recommendation sketch follows this list).
  • Fraud Detection: Amazon has novel algorithms and methods to detect fraudulent sellers and purchases.
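
To make the recommendation idea above concrete, here is a small item-to-item similarity sketch. The ratings matrix and product names are invented, and real recommender systems operate on vastly larger, sparser data.

```python
# Item-to-item recommendation via cosine similarity between product columns.
import numpy as np

# Rows = users, columns = products (0 = not purchased/rated).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)
products = ["kettle", "teapot", "headphones", "speaker"]

norms = np.linalg.norm(ratings, axis=0)
similarity = ratings.T @ ratings / np.outer(norms, norms)

target = products.index("kettle")
order = np.argsort(similarity[target])[::-1]          # most similar first
suggestion = [products[i] for i in order if i != target][0]
print(f"Customers who bought '{products[target]}' may also like '{suggestion}'")
```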

Empowering Developing Nations

  • Developing countries use Data Science to track weather patterns, predict disease outbreaks, and improve daily living. Microsoft, Amazon, Facebook, and Google all support analytics programs in these nations by leveraging data.
  • Data Science equips these nations to improve agricultural performance, mitigate the risks of natural disasters and disease outbreaks, extend life expectancy, and raise the overall quality of living.

Combating Global Warming

  • According to the World Economic Forum, data is crucial to controlling global warming through reporting and warning systems. The California Air Resources Board, Planet Labs, and the Environmental Defense Fund are collaborating on a Climate Data Partnership: a common reporting platform designed to support more targeted climate-control measures.
  • A combination of overlapping and distinct data projects, including two satellite launches, will help monitor climate change from space. The data from these satellites, combined with ground data on deforestation and other environmental parameters, will help implement more sustainable global supply chains.

Uber – Using Data to Enhance Rides 

  • Uber maintains driver and customer databases. When a cab is booked, Uber matches the customer’s profile with the most suitable driver. Uber charges customers based on the time taken to cover the distance rather than the distance itself. Uber’s algorithms use time-taken data, traffic density, and weather conditions to assign a cab.
  • During peak hours, the algorithms detect driver shortages in an area and raise ride rates automatically (a simplified sketch of this surge logic follows).
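
Here is a simplified sketch of that surge logic, assuming a naive demand-to-supply ratio and a time-based fare. Uber's actual pricing models are proprietary and far more sophisticated; every number below is illustrative.

```python
# Illustrative surge pricing: the multiplier grows as requests outpace drivers.
def surge_multiplier(ride_requests: int, available_drivers: int,
                     cap: float = 3.0) -> float:
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    return round(min(max(1.0, ratio), cap), 2)

def fare(minutes: float, base: float = 2.5, per_minute: float = 0.8,
         multiplier: float = 1.0) -> float:
    """Time-based fare, echoing that Uber prices trip time, not distance."""
    return round((base + per_minute * minutes) * multiplier, 2)

peak = surge_multiplier(ride_requests=120, available_drivers=40)  # -> 3.0
print(fare(minutes=25, multiplier=peak))                          # -> 67.5
```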

Bank of America – Leveraging Data to Deliver Superior Customer Experiences

  • Bank of America pioneered mobile banking and recently launched Erica, billed as the world’s first virtual financial assistant.
  • Erica currently advises more than 45 million users globally, using speech recognition to interpret customer inputs and provide relevant responses.
  • More banks now leverage Data Science algorithms such as association, clustering, forecasting, classification, and predictive analytics to detect fraud across payments, insurance, credit cards, accounting, and customer information.

Airbnb – Customer-centric Hospitality with Data-centric Decisioning

  • Data Science helps analyze customer search results, demographics data, and bounce rates from the website.
  • In 2014, Airbnb addressed lower booking rates in specific locations by releasing a custom version of its booking software for those countries and replacing the neighborhood links with the top travel destinations around a location. This resulted in a 10% improvement in lift for property hosts.
  • Airbnb builds knowledge graphs to match user preferences to the ideal lodgings and localities. The Airbnb search engine has been optimized to connect customers to the properties of their choice.

Conclusion 

Data Science lets you discover patterns from raw data and accelerate business conversions in a challenging digital landscape. It helps reduce the constraints of time and budget allocation while ensuring superior customer experience delivery. Connect with Radiant to learn more!


Digital Transformation with an API-focused approach

Advancements in technology and the burgeoning number of connected devices mandate that every business become digitally enabled. A crucial ingredient to navigate uncertainty and deliver value in a competitive digital economy is Digital Transformation.

Defining Digital Transformation

Digital transformation is the key strategic initiative businesses take to adopt and leverage digital technologies for traditional and non-digital business processes and services. It may include creating new processes to meet evolving market demand and propel bottom lines. Digital transformation fundamentally alters the way businesses operate and are managed, with a primary focus on customer-centric value delivery.

What does Digital Transformation Entail?

  • Digital transformation can include digital optimization, IT Modernization, and inventive digital business modeling.
  • Analyzing customer needs, mapping emerging technologies to requirements, and leveraging them for elevated user experiences.
  • Business evolution through new experimentation, techniques, and approaches to common issues.
  • Continual adaptation to a dynamic environment and change management.
  • Cloud migrations, API implementations, legacy app modernization, on-demand training, leveraging artificial intelligence, incorporating automation, and more.
  • Redefining leadership roles for strategic planning, change planning, and digital-business disruption. This prevents siloed thinking and digital bolt-on strategies, giving way to a more holistic approach.

According to Bain & Company, only 8% of global companies have achieved their business outcome targets from their digital technology investments. Leaders need to invest in digital transformation instead of just running their business with technology.

Benefits of Digital Transformation

3 Common Challenges of Digital Transformation

Most digital transformation issues can be associated with one of the following: people, communication, and measurement.

People: People are at the core of any digital transformation initiative, and resistance to the cultural change caused by digitalization is a natural human instinct. 46% of CIOs say human adaptation to culture change is their most significant barrier.

Poor Communication: Leaders often fail to communicate their digital transformation plans and expectations to their teams. Specific and actionable guidance is often overlooked before, during, and after digital transformation.

Lack of Measurement: The absence of newer and context-specific KPIs and metrics and platforms to measure them leads to assumptions and failures.

Connecting the Digital Transformation Dots with APIs

Businesses need an outside-in view of customer experiences and expectations to accelerate digitally and close gaps. Platforms like Apigee help develop and manage APIs and provide interactive, self-service tools like Apigee Compass that help gauge an organization's digital maturity and chart a path to digital success. Apigee defines two cornerstone principles of Digital Transformation.

First Principle: Modern businesses must not stop at adopting a mobile strategy or using cloud computing to create savings and efficiencies. They must embrace a shift in the nature of demand and supply.

This principle requires rethinking traditional supplier-distributor relationships and value chains. Strategies must go beyond the physical goods-and-services pipeline, with its limited channels and customer interactions. Businesses can scale almost infinitely using virtual assets in new technological avenues at little marginal cost, distributing value creation across ecosystems of customers, enterprises, vendors, and third parties. The key technologies include:

  • Omnichannel digital platforms
  • Packaged software and services using SaaS
  • APIs that connect data, systems, software applications, or mixed hardware-software intermediaries using HTTPS-based requests and responses (a minimal request/response sketch follows this list).
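
As a concrete illustration of that request/response pattern, the sketch below calls a hypothetical HTTPS endpoint with Python's requests library. The URL, resource, and API key are placeholders, not a real service.

```python
# Minimal HTTPS API call: request a resource, handle errors, parse JSON.
import requests

response = requests.get(
    "https://api.example.com/v1/orders/1234",   # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=10,
)
response.raise_for_status()   # fail loudly on 4xx/5xx responses
order = response.json()       # JSON body -> Python dict
print(order.get("status"))
```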

Second Principle: Digital Transformation is shaped by a new shift in operating models that operationalize APIs and influence IT investments through technology ecosystem strategies.

APIs offer strategic levers to break silos and fuel digital transformation initiatives through IT enablers. Apigee provides a proxy layer for front-end services and an abstraction for backend APIs, with features like security, rate management, quotas, and analytics.

Apigee High-level Architecture

The two primary components are:

  • Apigee services: The APIs used to create, manage, and deploy API proxies.
  • Apigee runtime: A Kubernetes-based collection of containerized runtime services in Google Cloud that processes all API traffic.

Additionally, GCP services support IAM, logging, metrics, analytics, and project management, while backend services provide runtime data access for your API proxies.

Managing Services with Apigee

Apigee offers secure service access with a well-defined, consistent, and service-agnostic API to:

  • Help app developers seamlessly consume your services.
  • Enable backend service implementation change without impacting the public API.
  • Leverage analytics, developer portals, and other built-in features.

10 Foundational Tenets of API-centric Digital Transformation

  1. Platform – Agile platforms repackage the software for new use cases, interactions, and digital experiences. APIs converge new services with the core system to deliver operational flexibility and data efficiency.
  2. RESTful APIs – These offer flexible and intuitive programming and multi-platform/service integration. You can monetize APIs through custom packaging.
  3. Outside-in Approach – Customer and partner experience need to be measured using analytics. This helps transform APIs into exceptional digital experiences.
  4. Ecosystem – This includes digital assets (internal and external), services, developers, partners, customers, and other enablers. This enables distributed demand generation, non-linear growth, and value creation across lucrative digital networks.
  5. Leadership – Top-to-bottom commitment helps achieve the necessary cultural alignment. APIs can be incentivized to create frictionless delivery cycles and propel bottom lines.
  6. Funding - API programs can blend with agile funding models, development cycles, and governance processes. Direct API funding can improve data utilization and process iteration without the need for excessive investments.
  7. Metrics – Enterprises must embrace API-based metrics like consumption rate, transactions-per-call, etc., that go beyond ROI, transaction volumes, and pricing. This helps overcome narrow opportunity windows and fragmented customer segments.
  8. Software Development Lifecycle (SDLC) - An API-first and agile-centric approach in a test-driven environment offers speed, innovation, and cost savings. This helps implement changes on-demand based on the ever-changing customer preferences. Automation should be pivotal to this approach where funding and measuring project success through intelligence and accurate forecasting tools is the top priority.
  9. Talent – Talent is key to the API digital value chain. Strong technical API-programming talent (developers, architects, documentation experts) shapes a company’s digital competency. Agile governance, funding, training, developer communities, portals, knowledge-sharing, automation, and DevOps promote talent improvement and retention.
  10. Self-Service – This involves delivering value in developer-driven value chains using developer portals, API catalogs, API keys and sample codes, testing tools, interactive API help content, and digital communities. 

Concluding Thoughts

Businesses can rely on APIs to scale with speed and be more responsive to demands. Regardless of your digital transformation roadmap, positioning API as a strategic asset for digital acceleration is crucial in any enterprise landscape.

Connect with Radiant to learn more!


Enhancing UX Design and its Usability with Animation and Motion

UX design must often include micro-interaction elements and ways for subtle user interactions. These components make the design more communicative and illustrative while improving its usability. However, UX designers must know precisely when and where to include active motion to enhance usability. For example, on Twitter, pulling down the screen refreshes the content: the screen slides down and bounces back, revealing a spinning wheel. This tells the user that some action has been initiated and is happening at the moment. Such subtle animations and UI micro-interactions are so ingrained in our lives that their absence can be confusing. Sometimes, an animation needs to call more attention to itself, making us pause what we are doing and pay attention. For example, a confirmation window may pop up and keep blinking until we click an action button.

UX designers can create seamless experiences by using animation with some tact. In this blog, we discuss the purpose and the ways of using UI animation and motion.

Why Animation and Motion in UX?

Animation in UX helps users build mental models of how the system works and how to interact with it. Animations that merely fill time during a transition are less crucial than those that need to capture the user’s attention (like a call to action). Remember, animations should be leveraged for usability: they provide important cues about the current system status and signify how UI elements will behave. They also serve as spatial metaphors that users can easily understand and relate to their location in the information space.

Feedback: Animation and motion are useful as noticeable feedback that the system has recognized an action or condition. For example, a navigation menu slides over the page when the user taps the hamburger icon. The human visual system is so attuned to motion that even a short animation registers as feedback, whereas static visual feedback is often missed because there is no motion to detect. Animations can also serve as a feedback mechanism before the user commits to an action. Thus, animation or motion increases the chance of the user noticing feedback quickly and efficiently.

Communicating State Change: Object motion can indicate that the interface has switched to a different state because of a mode or system status change. You can achieve this either by making the mode change itself noticeable or through a conceptual metaphor depicting the mode transition. In addition to capturing mode transitions or views of data, animation and motion help communicate state changes that could go unnoticed when not triggered by user actions. For example, loading indicators show that the system is not ready to accept inputs.

Spatial Metaphors and Navigation: Communicating the structure of a complex information space to users is often challenging. Scanning navigation menus, breadcrumbs, or tree diagrams to figure out one’s location in the information hierarchy is time-consuming and cognitively demanding. Animation can signal to users the direction in which they are moving within a process or information hierarchy, adding a supplemental cue and intuitiveness while navigating complex UI. Zooming animations help users understand the direction of their journey in a hierarchical design space: zooming out decreases detail while bringing more objects into view, and zooming in does precisely the opposite. Likewise, slide-over animations help establish the forward or backward movement of the user within a checkout process.

Orientation: Animations prevent disorientation among users when using a new or unfamiliar design. This is especially useful for mobile UI design, where context can be lost due to the smaller screen size. Designers often use accordions, menu overlays, and anchor links in UI design. An animated cue helps tell users what they’re looking at and what they have to do next.

The Signifier of Action, Information, and Context: Animations make UI elements intuitive for users by adding context to the design, action, or information. The direction and other attributes of the animation signify what is acceptable (or not). For example, a menu or tab expanding from the bottom of the screen signals that it can be closed by pulling it down.

Types of Animation Interactions

Animation concerns the temporal behavior of interface components, and that behavior is driven by either real-time or non-real-time events.

Real-time vs. Non-real-time Motion: Real-time motion occurs while the user directly interacts with on-screen objects: the interface responds immediately, giving the feeling of ‘direct manipulation’ of the object. Non-real-time motion, by contrast, is transitional; it occurs after a user action, once input has been provided, and briefly locks the user out of the UX until the transition completes. Even when the state of a UX object is static, the act of masking or revealing it involves motion, and that motion can support usability.

Key Considerations for UX Motion and Animation

The considerations influencing the temporal behavior and usability of UX designs include:

Expectation: covers the user’s perception of what an object is and how it behaves. UX designers should minimize the gap between user expectations and actual experiences. Imagine that, instead of the screen bouncing back and revealing a spinning wheel, nothing visually happens while the app refreshes content anyway. The same content remains because no one has posted updates in the last half hour; the app may well have refreshed the feed, but we have no way of knowing. Because the standard expectation is broken, the user’s thought process is interrupted, leading to confusion.

Continuity: covers the user flow and the ‘consistency’ of the UX. Designs can be ‘intra-continuous’ (within a single scene) or ‘inter-continuous’ (across the series of screens or spaces that make up the total user experience).

Narratives: define an event’s linear progression in the UX space, resulting in a temporal/spatial framework. This includes a series of discrete events and reactions that form a connection throughout the user experience.

Relationships: are the temporal, spatial, and hierarchical representations between UX objects that influence user understanding and decision-making.

Speed: there is a narrow range of durations appropriate for animation. Changes faster than roughly 100 ms read as instantaneous; anything slower registers as motion, much like the standard frame rates of film. Slow animations frustrate the user. If we’re swiping left in Tinder, it’s great to watch someone get banished from our digital existence in slow motion, but not so slow that we can’t manage swiping another 50 potential dates within the next minute.

Natural Quality: Easing can be looked at as adding a natural quality to animation. Imagine a spaceship orbiting the moon and preparing to dock with the ISS. There is an initial thruster boost in which the ship accelerates to full speed, then a reverse thruster boost to bring the ship to a complete stop. If the ship traveled at full speed for every second of that journey, either the ship or the ISS would explode in a real-life scenario. Depicted in a UX design, this would read as a hypothetical catastrophe, or at least an unnatural view of a ship moving at full speed and then stopping instantly. Likewise, it is more pleasant when things are animated the way natural events unfold. Other examples include the slight ease-in and ease-out of a toggle switch when tapped, or the little bounce that slowly comes to a stop when we pull down a screen to refresh.

Eight Cornerstone Principles of UX Motion and Animation

UX animation is supported by these eight principles that fundamentally underscore the premises and rules of UX animation techniques. These principles are categorized based on timing, object relationship, object continuity, spatial continuity, and temporal hierarchy.

  1. Easing

Easing aligns object behavior with user expectations during temporal events. It helps create and reinforce the inherent ‘naturalism’ of user experiences and provides a sense of continuity when objects behave as users expect (see Natural Quality in the previous section).
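
To make easing concrete, here is a minimal sketch of a cubic ease-in-out curve. It maps linear time (0 to 1) onto progress that starts slowly, speeds up, then settles gently, which is what gives eased motion its natural feel.

```python
# Cubic ease-in-out: slow start, fast middle, gentle stop.
def ease_in_out_cubic(t: float) -> float:
    if t < 0.5:
        return 4 * t ** 3
    return 1 - ((-2 * t + 2) ** 3) / 2

# Sampling 9 frames shows progress clustering near the ends (slow)
# and spreading in the middle (fast), unlike a constant-speed tween.
for frame in range(9):
    t = frame / 8
    print(f"t={t:.2f}  progress={ease_in_out_cubic(t):.3f}")
```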

  2. Offset and Delay

Offset and delay help in designing more usable experiences by preconsciously preparing the user for success and providing information about the nature of interface objects. Even before the user grasps what the interface objects are, the designer can communicate their purpose through motion, which is extremely powerful. UI objects thus support usability through their ‘separateness.’

  3. Parenting

Parenting creates hierarchical relationships, both spatial and temporal, during multi-object interactions. It establishes parent-child associations between UI objects: by default, the ‘child’ object scales or moves with the parent object. The resulting relationships and hierarchies support usability and quick understanding among users, and designers can better coordinate temporal events while communicating object relationships clearly. Properties that create parent-child synergies include scale, position, opacity, rotation, value, shape, and color.

  4. Transformation

Transformation describes changes in an object’s utility expressed as a continuous narrative flow. For example, a radial progress bar that appears when the ‘Submit’ button is clicked and then becomes a confirmation checkmark is a typical transformation. This grabs the user’s attention, conveys completion, and tells a UX design story. Transformation stitches cognitively related yet separate events into a seamless, continuous experience, giving users better awareness and understanding to follow through.

  5. Overlay

Overlay creates spatial object relationships in a visual space where layered objects are location-dependent. It lets users leverage flatland ordering properties to compensate for the lack of spatial hierarchy in 2D interfaces, and lets designers communicate the location of dependent objects using motion in a non-3D space. For users, non-visible UI objects are hidden visually, cognitively, and functionally. Overlay thus promotes spatial orientation and communicates the relationship between layers positioned along the ‘z-axis.’

  6. Cloning

Cloning promotes relationship, continuity, and narrative when new objects originate and depart. As new objects are cloned from existing objects in space, a clear narrative account of their appearance (and departure) is crucial. Dimensionality, masking, and cloning all aid usability when producing such narratives.

  7. Obscuration

Obscuration aids users’ spatial orientation with respect to objects or scenes outside the primary visual hierarchy. Both static and time-based obscuration techniques appear in current designs, and obscuration can be a temporal interaction involving multiple properties at once. Standard techniques include blurring objects at various levels of transparency. Obscuration allows designers to stand in for unified field views, or ‘objective views,’ in UX design.

  8. Dimensionality

Dimensionality offers a spatial narrative framework as new objects originate and depart. Flatland non-logic can be avoided with dimensionality. This principle provides spatial origin and departure references to reinforce mental models of the user’s location in the UX space.

  • Origami Dimensionality includes ‘folding’ or ‘hinged’ three-dimensional UI objects, combining multiple objects into ‘origami’ structures. Hidden objects still ‘exist’ spatially even though they are invisible. The user experience becomes a continuous spatial event, creating an operating context for user interactions and the temporal behavior of interface objects.
  • Floating Dimensionality provides the spatial origin and departure of objects for intuitiveness and story narration.
  • Object Dimensionality creates actual depth and form in dimensional objects. The dimensionality of objects can be experienced during real-time and non-real-time transitions. Object Dimensionality helps develop a keen awareness of object utility based on spatial locations that are non-visible.

Wrapping up with some Design Tips

Sometimes we get caught up in the cool factor of animation and don't consider how it can interrupt the primary purpose it serves. The idea is to minimize intrusiveness and add narrative clarity when you include animation and motion in your designs. As a leading UX animation company, Radiant Digital can help you ace the usability of your design.

Here are some expert tips:

  • The 'pull-to-refresh’ gesture adds an arrow indicator with a downward motion to let the user know they can slide down to refresh. Alternatively, a 'refresh' button may be even more intuitive.
  • Keeping the user informed with subtle cues about waiting time and background processes is essential. The user's confidence in the refresh action depends on how it is represented; it is best to keep the refresh indicator spinning until the new data loads on the screen.
  • Functional animation and transitions should be intermediaries between different UI states.

Ideal UX animations merge content and object interactions with feedback, notifications, and user engagement.


Using Complex Learning and Instructional Design when things get complicated

Complex learning involves the integration of qualitatively different knowledge and skills along with their relationships and interaction rules. All the following tasks are complex:

  • Psychotherapy
  • Selling
  • Troubleshooting
  • Hardware design
  • Selecting the most appropriate statistical test for a given set of circumstances
  • Balancing competing priorities, such as prioritizing worker safety while maximizing return on investment and minimizing costs


As part of the “real world,” complex tasks frequently involve novel variations and uncertainty. Every customer is a new challenge; almost no business model factored in a pandemic. These skills require knowledge transfer: taking what you know, making adaptations, and applying it in different situations.

Especially when dealing with uncertainty, complex learning is not well-suited to a linear instructional design model. Unfortunately, education and traditional training do just that: they teach a string of tasks sequentially and then place the onus of integrating the pieces on the learner. Because these tasks and their interactions are simply too much to learn at one time, they overload learners’ cognitive processes. The result is wasted training time, increased employee stress, burnout, and turnover, while yielding less-than-optimal results.

When faced with complex tasks, I base my training on the Four-Component Instructional Design Model (4C/ID). Explicitly designed to reduce overall cognitive load and encourage transfer, this nonlinear model developed by van Merriënboer and his colleagues breaks training down into four components: (1) learning tasks, (2) supportive information, (3) procedural information, and (4) part-task practice.

Learning tasks

When using 4C/ID, I start with the two fundamental questions, “What do the learners need to do?” and “What do they need to know to do this?”

Learning tasks include projects, problems, case studies, and the like. Start with fully formed, authentic tasks. These learning tasks should be as straightforward as possible, while still being authentic, to avoid overload. As the learner masters the simple cases, add complexity.

Say, for example, you are trying to train a cohort of spokespersons for a large corporation. The first learning task could be telling a friendly press corps that the Company is adding 1,000 jobs to a new area. It would involve receiving the information, facing the press, giving the news, and responding to questions. To truly master the role, the spokesperson must also prepare their script from primary sources and raw documents. This should be the second learning task. The third task could be going on a financial radio show and discussing missed financial milestones.  These are all authentic tasks, and, for most jobs, they should be practiced in as realistic a setting as possible. Notice, too, that they are hugely divergent. This is by design: the variability in the tasks encourages knowledge transfer.

Not all complex tasks are as nebulous as the spokesperson’s. Teaching hardware design would take a different approach to whole-task practice. In this case, the learner is guided through worked examples that highlight the complexities and interactivity of the information. The next step is to provide similar examples with small portions of the solution removed. The learner completes these highly scaffolded problems, with each iteration requiring more significant input from the learner. For transfer, the problems should still vary.

Supportive Information

The next component is Supportive Information, provided to help with the less common variants of the task. This isn’t a simple blurb on a computer screen. This type of information offers cognitive strategies: how to approach a situation, and how a given set of circumstances fits into the overall knowledge base that the task or job requires. Learners work the best way to begin, and the best way to proceed, into their mental models. One of the more difficult instructional design tasks is working with true experts to capture the best mental model.

Procedural Information

The third component is procedural information, which supports the ordinary, routine tasks of the job. This information is provided just in time; in other words, you don’t present learners with a formula until they need the procedure. Doing so ahead of time adds noise or, more accurately, extraneous cognitive load. This type of information can be just a blurb in a pop-up window. It explains a basic how-to, with the associated facts, principles, and rules. For the spokesperson, this could be how to gather the press for a press conference or announcement.

For our hardware designer, the procedural information could be formulas or standard bits of circuitry that are used repeatedly.

Part-Task Practice

Finally, you have part-task practice. Part-task practice usually involves some particularly tedious or difficult tasks and an enormous amount of repetition. For physicians, it could be tying sutures on a vein. For our spokespersons, this is most likely dealing with hostile questions while thinking on their feet. They will spend a lot of time being peppered with aggressive or leading questions, irrelevant details, and erroneous leaps of logic.

For the hardware designer, standard differential equations and integrations are the most likely candidates for part-task practice. After mastering the math, the designer can automate large portions of the design workload.

Although 4C/ID is an excellent model, it’s still just a model and isn’t suitable for every situation. Instructional design is complex, like the tasks we have been discussing. To get the training your organization deserves, you need instructional designers who can blend models, adapt models, or let the content drive the learning.

Radiant Digital can design and build the training your organization needs, all the way from specific compliance training to consequence-critical complex learning.


Looking at the ‘Bigger Picture’ with Service Design to deliver a Unified Experience

As an emerging field, Service Design focuses on the creation, curation, and implementation of services and experiences. As a practice, service design aims to provide a holistic service to the user through the design of systems and processes. It assesses and derives value from a multi-user perspective (customer, staff, and business). It is essentially channel- and medium-agnostic, and it connects experience delivery to the operations and technology producing it. Though service design shares tools and methods with other human-centric design fields, its perspective helps manage complexity through a multi-dimensional service approach that assures a cohesive experience.

Service Design - Formal Definition

According to Shopify, service design is a holistic, participatory, and cross-functional approach to improving end-to-end human experiences as delivered through digital, physical, virtual, or human touchpoints. Service experiences need to balance user, partner, and employee desirability; business viability; and operational and technical feasibility.

I'm a UX Designer. Why does this matter?

Though Service Design sounds similar to UX or CX design, it crosses over with many broader design disciplines. Service design merges customer-facing outputs, like a user's interaction with an app, with internal processes, like an employee's experience within an organization. This is what makes it more holistic. Service design considers more than just end-customer needs and context: it accounts for the complex chains of interactions in a design. For example, booking a cab is a service, but in service design terms, the service is the whole interaction chain when, say, an elderly customer wants to book an Uber ride to visit the hospital. There are many considerations here. UX designers must ideate design solutions for user-specific ecosystems while ensuring that brands deliver optimally and sustainably.

Service Design and UX Design

How are they Similar?

  1. Empathy and design thinking: Both in UX and service design, empathy and a design-oriented mindset play pivotal roles.
  2. Personas: UX and Service Design both include creating archetypes of various user personas and summarizing values, desires, motivations, problems, cultures, and social characteristics of the imagined user or customer.
  3. Research and prototype: SD and UX both implement UX or service research and prototyping to match the real-world expectations.
  4. Strategic thinking: Designers in UX and SD need to implement strategic planning to make ideal business decisions and handle business issues.
  5. Customer journey map: Both disciplines map out the possible customer experiences (positive and negative) of interacting with the solution and follow a scenario-based approach.

How is a Service Designer different from a UX Designer?

  • While UX designers create assets and perform the research for those assets, service designers offer more strategic value to the design process.
  • UX designers focus on design environment development, which includes products, interfaces, etc. Service designers focus on interactional design and implementation along the entire customer journey for tangible and digital mediums.
  • Service designers need to merge strategic, operations, and people management, while UX designers manage only the UX design and its outcomes.

Why Service Design?

Bridges Gaps and Breaks Silos: Service design can bridge departmental and interaction-related organizational gaps. It lets teams focus on customer-facing outputs and internal processes as well. Service design is holistic and covers people, processes, and props, in addition to the experience of employees. Service designers can solve problems and gain clarity by questioning prevailing assumptions. They can prioritize functionalities and remove overlaps for simplified and unified services.

Fosters Loyalty with Customers and Employees: Service design helps improve customer and employee loyalty by getting the service right through various methods and processes. By adopting a service-design-first approach, businesses can strengthen both customer and employee retention.

Improves Business Efficiency: Service design reduces redundancies through streamlined processes and a bird’s-eye view. It lets designers map out the entire cycle of internal service processes to render a holistic view of the service ecosystem (services, channels, touchpoints, and business interactions), making it easy to pinpoint inefficiencies that lead to wasted effort and resources. Eliminating redundancies through service design reduces costs and increases efficiency.

The Five Principles of Service Design

Marc Stickdorn and Jakob Schneider, authors of “This Is Service Design Thinking,” define the following fundamental service design principles.

  1. User-centered – Use qualitative and user-centric research to design.
  2. Co-creative – Include all the relevant stakeholders in the design process.
  3. Sequencing – Break a complex service into isolated processes and user journey sections.
  4. Evidencing – Envision service experiences with tangible results and transparency for users to understand and trust brands.
  5. Holistic – Design for all customer touchpoints, user networks, and experiences.

What Service Design Involves 

  • Understand the service proposition of the organization or company deeply.
  • Analyze the needs of all the customers and service providers (actors) in a service.
  • Use a service ecology, blueprint, and user journeys to map out a service.
  • Co-create possible solutions or improvements by collaborating with service stakeholders.
  • Pilot new service experiences and create prototypes with real customers and staff.
  • Zoom in and out constantly between individual touchpoints in detail and the design of the overall service.

Components of Service Design

  1. Actors (Employees, stakeholders, and users of the service)
  2. Location (A virtual or physical set-up where customers receive the service or a service/design is tested/researched)
  3. Props (Objects used during service delivery like Storefronts, Conference rooms, Websites, and Digital files)
  4. Associates (Partner organizations involved in providing the service, e.g., logistics)
  5. Processes (Workflows used for service delivery, for example, getting help from a support page, interviewing a new employee, sharing a file)

The Four Pillars of Service Design

  • Assets. These represent the digital or physical customer touchpoints in a design, as well as the tools employees use to deliver the service.
  • People. This includes any direct or indirect person contributing to the service. These users work in a co-creative environment to influence and shape the design, changes, results, and experiences.
  • Policies. The company's policies, rules, standard operating procedures, and workflows direct the service provision and the customer experience.
  • Culture. These include the unwritten rules for employee attitudes and behavior. This is influenced by the company's history, employee experience, and management style.

What do Service Designers Create during this Journey?

  • User flows – Define the paths that the user travels, starting from an entry point to the end goal to complete a user interface task.
  • Personas – Defines a character representing a potential user of your website or app to help the design team target designs around users.
  • User research – Define the ways and methods of studying users and their requirements to add context, find problems and their solutions.
  • Prototypes – Define the web and mobile replicas of how the finished product will look, usually without code, including the final UI design and interactions.
  • Future-state blueprints – Help visualize a concept's future state for a new product or service based on a specific customer's (persona) journey. Their new journey is supported by different employee roles, processes and technologies, business organizations, and third-party partners.
  • Ecosystem maps – Define the high-level service ecology and interactions between the client and the design groups. This helps assess the symbiotic functioning of products, services, and people.
  • Archetypes – Define different personality traits of all the involved users in the context of their interaction(s) with the design.
  • Service storming – Helps observe, understand, ideate, evaluate, and refine services in design for both customers and the organization.
  • Customer Journey Maps – Define the customers’ touchpoints, critical moments, interaction flows, and barriers.
  • Service blueprints – Represent elevated forms of customer journey maps that define all the situations in which users/customers can interact with brands.

A Typical Service Design Process

Service Design Methods

Core Considerations for Service Designing the Complete Experience

Here are some considerations to make if you want to accommodate your customers' environment(s) and the limitations, motivations, and feelings they'll have.

  • Understand the brand's purpose, the demand for a product or service, and all the service providers' ability to deliver.
  • Customer needs should be incorporated in the Services' design rather than shaping it based on the business' needs.
  • Design the services to deliver a unified system rather than individual components that can elevate the overall service performance.
  • Services should be designed based on the value delivered to the customers most efficiently.
  • Design services with the understanding that special events (those causing variations in general processes) will be treated as conventional events (with processes designed to accommodate them).
  • Always initiate service design with relevant inputs from the users of the service.
  • Build service prototypes and test them before developing them.
  • Keep a clear business case and model in place before starting with a service design.
  • Develop a Minimum Viable Service (MVS) of a service design and deploy it. Perform iterations and improvements based on user/customer feedback.
  • All the relevant stakeholders (external and internal) must be in sync and consulted before designing and delivering the service.

Wrapping up

Service design offers the ability to address design problems in their entirety while capturing all stakeholders' perspectives on a service. It helps shape the overall service experience with attention to how the service is delivered to all of the system's actors. One of the exciting things about service design is how vital it is, in a service-oriented world, for shaping digital experiences and building future-ready digital ecosystems.

Radiant Digital helps enterprises deliver the 'complete' experience through service-oriented design. Call us to learn more.



[Webinar] Using UX Research to Optimize Software Design

https://youtu.be/42urTpSYGG8

This webinar aims to showcase the added value of user experience research (UXR) in the design and development process. Topics covered include why software product teams should incorporate UXR, how UX researchers fit into user experience design (UXD) teams and flows, and types of UXR.

Our guest speaker, Dr. Racine Marcus Brown, is a UX researcher and research manager at Radiant Digital. In addition to leading and training UX researchers and business analysts, he conducts UX research on enterprise software for corporate clients and institutions of higher learning. Racine also leverages his scientific expertise to facilitate grant application reviews and other contract work for Federal agencies. He is an applied anthropologist by training, with a Ph.D. from the University of South Florida. His experience includes work in the trust-tech and healthcare spaces.


[Webinar] Explore the origins, current state, and future of RPA

https://youtu.be/wdzpBcrwbc4

Radiant has tracked RPA solutions for over a decade and has been part of their evolution. Chandra Alluri discusses RPA's origins, shares insights on its current state, and explains how it will morph in the future to alleviate pain points in software engineering and IT & business operations. Connect with us to learn how our RPA solutions can help you!


[Webinar] Avoid the Hype Cycle and deploy AR at scale

https://youtu.be/6wvVhKyvdjU


In partnership with Scope AR, Radiant presents: "Avoid the Hype Cycle and deploy AR at scale." In this webinar, the co-founder of Scope AR offers a novel take on the Hype Cycle, observing that, much like the technology's journey through the Hype Cycle, the customer goes through a similar cycle of their own. Follow the customer journey through the technology trigger, the peak of inflated expectations, the trough of disillusionment, the slope of enlightenment, and the plateau of productivity. He then demonstrates how to avoid the "Hype Cycle" and gain productivity when deploying AR at scale. Our partner, Scope AR, created the software, and Radiant handles custom content development for our clients. Connect with us to learn more about using AR in learning and training programs.

The Versatility of a Technical Writer

A recent search on LinkedIn for technical writing job opportunities returned over 50,000 results in the United States. The deliverable types included internal-facing work processes, quality assurance workbooks, and developer training resources, as well as external-facing consumer support content, proposals, and marketing. The delivery platforms included print, digital, and blended formats. The industries were just as diverse: aerospace and defense technology, oil and gas, computer software development, consumer electronics, and pharmaceuticals. All this was just the first page of results. The common link is the need for a technical writer. A technical writer's versatility allows them to move quickly between deliverable types, delivery platforms, and even industries. This article discusses three key traits that help a technical writer meet and exceed any client's needs.

Systematic Approach

A technical writer uses a systematic approach for each project. They find common threads or patterns in information that help with planning and organizing documentation of similar types. So, it is not surprising that one technical writer can handle seemingly diverse topics. For example, consider the following processes:

  • Software installation and usage
  • Global corporate workflow implementation
  • Engineering work instructions
  • Repair and Maintenance (R&M) tool teardown

Each process has the following high-level components:

  • Roles and related responsibilities/user types
  • Competency requirements
  • A flow of data or documentation
  • Steps in a specific order
  • Consequences if done incorrectly

The key differences between the processes are:

  • The need for the process
  • The asset being manipulated
  • The expected outcome or output

While the individual details may vary considerably, a similar set of questions can gather the required information for each process.

  • Why is the process necessary, and what are the goals and objectives?
  • What is changed through the process: data, people, machinery?
  • What is the result: a report, a behavior change, a product?
  • Who is involved, and what do they need to do?
  • What do the roles/users need to know and learn?
  • How is the process progress tracked?
  • What are the steps and the required order?
  • What happens if the process is not followed?

The systematic approach works across industries and can be applied to more than just processes. Consider software documentation in general. Similar documentation types are used with software regardless of the industry or the intended purpose of the application. You are likely to see some or all of the following (and more):

  • Installation guides
  • Installation notes
  • Manuals
  • Training guides
  • Help files

Each document type will have its own set of baseline questions for starting information gathering; however, the end product will be unique to each client’s individual needs.

Adaptable Writing Style

Technical writers can adapt their writing styles for each project. Since no two clients or audiences are alike, a technical writer provides the expertise to reflect a client’s corporate identity while delivering clear, concise documentation with the appropriate level of detail for the intended audience. For this example, consider marketing and proposal writing. To be clear, these are different: good marketing lets people know what a company can do and builds interest, leading to a proposal explaining what the company will do for a client with one or more products and services.

For marketing, a company needs to reach both internal and external audiences, and each audience will include multiple roles that require a different level of detail. The primary internal audience is the sales team, and the main external audience is the potential client. (The external audience often includes competitors as well.)

| Internal Roles | Level of Detail | External Roles |
| --- | --- | --- |
| General Employee | Low | General Public/Potential Client |
| Executives | Low to Medium | Potential Client Executive |
| Business Development | Medium to High | Potential Client Buyer |
| Product Owners/Technical Associates | High | Potential Client Implementation Role |


The internal and external documents all need to be informative, emphasizing the benefits to the client. Internal documents are also required to explain how to position those benefits to the client to make a sale. For the potential client, the documents and deliverables need to show them that their challenges are understood and that the product or service will help them with those challenges. Here is also where corporate identity comes in. The deliverables can tie in the company’s value proposition and use the proper tone. Does the company use a formal style when addressing clients, or are they more informal and friendly? Does the writing style change depending on the deliverable type?

The image below shows different detail levels in marketing documents, a summary of what information can be included in those levels, and some example deliverables.

On the other hand, we have proposals, which position a product or service for an external audience. Proposals are very detailed and use a formal writing style. They go beyond describing a product or service to include project management and implementation strategy, key performance indicators for assessing success, and more. The technical writer must ensure the details are clear and concise, leaving little room for misinterpretation.

Tech Savvy

A technical writer uses their ability to learn software applications and platforms to create and deliver technical communication projects. Below are just a few of the tools that can be used to produce print and digital deliverables.

| Documents | Images | Training |
| --- | --- | --- |
| Microsoft Word | Adobe Photoshop | Adobe Captivate |
| Adobe FrameMaker | Adobe Illustrator | Storyline 360 |
| Adobe InDesign | | |

For delivering digital projects or providing digital copies, a technical writer can upload or create content on a content management system like Adobe Experience Manager to build out web pages or websites; upload and tag documents and images in a digital asset management system like Brandfolder for general company use; and even create custom SharePoint sites for internal knowledge management and consumption. For training, a technical writer can publish training files, test functionality and SCORM compliance, and upload the results to learning management systems.

Technical writers can also create training and processes in a wholly digital capacity using standard work training software like Dozuki. Here, writers can blend process steps, images, and media files to fully engage the learner while still updating information on the fly.

Conclusion

Technical writers are not tied to one document type or industry, and one writer can tackle multiple request types. Technical writers handle different document types, content, and initiatives by using a systematic approach to planning, adapting their writing style for each client and deliverable, and using technical knowledge to develop and deliver final products. Contact Radiant if you have multiple technical communication projects but don’t know where to start. Our team of technical writers has experience delivering a variety of projects across industries, including the public and private sectors.



Selecting the Best Tools for Building your MLOps Workflows

In our previous blog, The Fundamentals Of MLOps – The Enabler Of Quality Outcomes In Production Environments, we introduced you to MLOps and its significance in an intelligence-driven DevOps ecosystem. MLOps is gaining popularity since it helps standardize and streamline the ML modeling lifecycle.

From development and deployment through maintenance, various tools and their features can be applied to achieve the best outcomes. Organizations shouldn’t depend on just one tool; instead, they should combine the most valuable features of multiple tools across the lifecycle.

This post discusses some key pointers to help you pick the right MLOps tools for your project.

Considerations for Choosing MLOps Tools

When organizations deploy real-world projects, there is a vast difference between individual data scientists working on isolated datasets on local machines and data science teams deploying models in a production environment. These models need to be reproducible, maintainable, and auditable later on. MLOps tools help converge various functionalities and connect the dots through unified collaboration.

MLOps Tools Help in These Areas

Resulting in:

• 30% faster time-to-market
• 50% lower new-release failure rate
• 40% shorter lead times between fixes
• Up to 40% improvement in average time-to-recovery

Radiant’s Top Recommendations:

1. Databricks MLflow

MLflow is an open-source tool for managing the entire machine learning lifecycle, including experimentation, deployment, reproducibility, and a central model registry. MLflow suits individual data scientists and teams of any size. The platform is library-agnostic and can be used with any programming language.

Components

Image source: Databricks

Features: MLflow comprises four primary features that help track and organize experiments.

MLflow Tracking – This feature offers an API and UI for logging parameters, metrics, code versions, and artifacts when running machine learning code. It lets you visualize and compare results as well.

MLflow Projects – With this, you can package ML code in a reusable form that can be transferred to production or shared with other data scientists.

MLflow Models – This lets you manage and deploy models from different ML libraries to a gamut of model-serving and inference platforms.

MLflow Model Registry – This central model store helps manage an MLflow Model's entire lifecycle. The processes include model versioning, stage transitions, and annotations.
The Model Registry capabilities are given below.
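
To make the Tracking and Model Registry features concrete, here is a minimal sketch using MLflow’s Python API. The model, metric, and registry name are purely illustrative, and the registration step assumes a database-backed tracking server:

```python
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data and model, purely for illustration.
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100)
model = LogisticRegression().fit(X, y)

# MLflow Tracking: log a parameter, a metric, and the model artifact.
with mlflow.start_run() as run:
    mlflow.log_param("solver", "lbfgs")
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")

# MLflow Model Registry: register the logged model under a chosen name.
# ("demo-classifier" is hypothetical; this requires a database-backed store.)
mlflow.register_model(f"runs:/{run.info.run_id}/model", "demo-classifier")
```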

Architecture

Image source: slacker news

The MLflow tracking server offers the ability to track metrics, artifacts, and parameters for experiments. It helps package models and reproducible ML projects. You can deploy models to real-time serving or batch platforms. The MLflow Model Registry (available on AWS and Azure) is a central repository for managing staging, production, and archiving.

On-Premise and Cloud Deployment

Image source: LG collection

MLflow can be deployed on cloud platform services such as Azure and AWS. For on-premises use, it can be deployed on container-based REST servers, and continuous deployment can be executed using Spark streaming.

2. Kubeflow

Kubeflow is an open-source machine learning toolkit that works on Kubernetes. Kubernetes standardizes software delivery at scale, and Kubeflow provides the cloud-native interface between K8s and data science tools such as libraries, frameworks, pipelines, and notebooks to combine ML and Ops.

Components

Image source: Kubeflow

• Kubeflow dashboard: This multi-user dashboard offers role-based access control (RBAC) to data scientists and the Ops team.

• Jupyter notebooks: Data scientists can quickly access Jupyter notebook servers, with allocated GPUs and storage, from the dashboard.

• Kubeflow pipelines: Pipelines map dependencies between ML workflow components, where each component is a containerized piece of ML code (see the sketch after this list).

• TensorFlow: This includes TensorFlow training, TensorFlow Serving, and even TensorBoard.

• ML libraries & frameworks: These include PyTorch, MXNet, XGBoost, and MPI for distributed training. Model serving is done using KFServing, Seldon Core, and more.

• Experiment Tracker: This component stores the results of a Kubeflow pipeline run with specific parameters, so results can be easily compared and replicated later.

• Hyperparameter Tuner: Katib handles hyperparameter tuning, running pipelines with different hyperparameters (e.g., learning rate) to optimize for the best model.
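
As a quick illustration of how a pipeline is defined in code, here is a minimal sketch using the Kubeflow Pipelines SDK (kfp v2); the component and pipeline names are illustrative:

```python
from kfp import dsl, compiler

# Each component runs as a containerized step in the pipeline.
@dsl.component
def train_model(learning_rate: float) -> str:
    # Placeholder training logic, for illustration only.
    return f"model trained with lr={learning_rate}"

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(learning_rate: float = 0.01):
    train_model(learning_rate=learning_rate)

# Compile to a YAML spec that can be uploaded to the Kubeflow Pipelines UI.
compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```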

Features

1. Kubeflow supports a user interface (UI) for managing and tracking experiments, jobs, and runs.

2. An engine schedules multi-step ML workflows.

3. An SDK defines and manipulates pipelines and components.

4. Notebooks help interact with the system using the SDK.

Architecture

Image source: kubeflow

Kubeflow is built on Kubernetes, which supports AWS, Azure, and on-prem deployments and helps scale and manage complex systems. The Kubeflow configuration interfaces let you specify the ML tools that assist in the workflow, while the Kubeflow applications and scaffolding layer manage the various components and functionalities of the ML workflow shown below.

Image source: kubeflow

Main source page of all the images above: https://www.kubeflow.org/docs/started/kubeflow-overview/

On-Premise and Cloud Deployment

Kubeflow has on-premises and cloud deployment capabilities supported by Google’s Anthos. Anthos is a hybrid and multi-cloud application platform built on open-source technologies, including Kubernetes and Knative. Anthos lets you create a consistent setup across your on-premises and cloud environments, with policy and security automation possible at scale. Kubeflow can be deployed on IBM Cloud, AWS, and Azure as well.

3. DataRobot

DataRobot is an end-to-end enterprise platform that automates and accelerates every step of your ML workflow. Data Scientists or the operations team can import models programmed using Python, Java, R, Scala, and Go. The system includes frameworks in pre-built environments like Keras, PyTorch, and XGBoost that simplify deployment. You can then test and deploy models on Kubernetes and other ML execution environments that are available via a production-grade REST endpoint. DataRobot lets you monitor service health, accuracy, and data drift and generates reports and alerts for overall performance monitoring.

Components

• REST API - Helps quickly deploy and interact with a DataRobot-built model (see the sketch after this list).

• Model Registry - The central hub for all your model packages, each containing a file or set of files with model-related information.

• Governance and Compliance - Helps your models and ML workflows comply with defined MLOps guidelines and policies.

• Application Server - Handles authentication, user administration, and project management, and provides an endpoint for APIs.

• Modeling workers - Computing resources that let users train machine learning models in parallel and, at times, generate predictions.

• Dedicated prediction servers - Help monitor system health and make real-time decisions using key statistics.

• Docker Containers - Help run multi-instance services on multiple machines, offering high availability and resilience during disaster recovery. Docker containers also allow enterprises to run all of the processes on one server.
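
As a sketch of what interacting with a deployed model over the REST API can look like, the snippet below scores two records against a deployment. The host, deployment ID, credentials, and feature names are all hypothetical assumptions; the exact prediction API contract for your deployment is documented in the DataRobot UI:

```python
import requests

# Hypothetical host, deployment ID, and credentials.
API_URL = ("https://example.datarobot.com"
           "/predApi/v1.0/deployments/abc123/predictions")
HEADERS = {
    "Authorization": "Bearer YOUR_API_TOKEN",
    "Content-Type": "application/json",
}

# Two records to score; feature names are illustrative.
payload = [
    {"feature_a": 1.2, "feature_b": "red"},
    {"feature_a": 3.4, "feature_b": "blue"},
]

response = requests.post(API_URL, json=payload, headers=HEADERS)
response.raise_for_status()
print(response.json())  # per-row predictions plus model metadata
```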

Features

1. Monitoring of MLOps models for service health, data drift, and accuracy.

2. Custom notifications for deployment status.

3. Management and replacement of MLOps models, with a documented record of every change.

4. Governance roles and processes for each deployment.

5. Real-time deployments using the DataRobot Prediction API and HTTP status interpretation.

6. Optimization of real-time model scoring request speed.

7. Batch deployments using Batch Prediction APIs and parameterized batch scoring command-line scripts.

Architecture

Image source: DataRobot community

The web UI and APIs feed business data, model information, and predictions to the Application Server.

The Application Server handles user administration tasks and authentication. It also acts as the API endpoint.

Queued modeling requests are sent to the modeling workers. These stateless components can be configured to join or leave the environment on demand.

The data layer is where trained models are written back. Their accuracy is shown on the model Leaderboard via the Application Server.

The Dedicated Prediction Server uses key statistics for instant decisioning and returns data to the Application Server.

On-Premise and Cloud Deployment

DataRobot can be deployed for on-premise enterprise clients as either a standalone Linux deployment or a Hadoop deployment. Linux deployments let clients deploy the platform in multiple locations, from physical hardware and VMware clusters to virtual private cloud (VPC) providers. Hadoop deployments install into a provisioned Hadoop cluster, which saves on hardware costs and simplifies data connectivity.

4. Azure ML

Azure Machine Learning (Azure ML) is a cloud-based service for creating and managing ML workflows and solutions. It helps data scientists and ML engineers leverage data processing and model development frameworks, and teams can scale, deploy, and distribute their workloads to the cloud infrastructure at any time.

Components

• A Virtual machine (VM) is a device on-premise or in the cloud that can send an HTTP request.

• Azure Kubernetes Service (AKS) is used for application deployment on a Kubernetes cluster.

• Azure Container Registry helps store images for all Docker container deployment types, including DC/OS, Docker Swarm, and Kubernetes.

Features

Image source: slidetodoc

New Additions

More intuitive web service creation – A “training model” can be turned into a “scoring model” with a single click, and Azure ML automatically suggests and creates the input and output points of the web service model. Finally, an Excel file can be downloaded and used to interact with the web service, supplying feature inputs and receiving scores/predictions as outputs.

The ability to train/retrain models through APIs – Developers and data scientists can periodically retrain a deployed model with dynamic data programmatically through an API.

Python support – Custom Python code can be added by dragging the “Execute Python Script” workflow task into the model and entering the code directly into the dialog box that appears. Python, R, and Microsoft ML algorithms can all be integrated into a unified workflow.

Learn with terabyte-sized data – You can connect to and develop predictive models using “Big Data” sets with the support of “Learning with Counts.”

Architecture

Image source: Microsoft docs

1. The trained model is registered to the ML model registry.
2. Azure ML creates a Docker image that includes the model and the scoring script.
3. It then deploys the scoring image to Azure Kubernetes Service (AKS) as a web service.
4. The client sends an HTTP POST request with the question data encoded.
5. The web service created by Azure ML extracts the question from the request.
6. The question is relayed to the Scikit-learn pipeline model for scoring and featurization.
7. The matching FAQ questions with their scores are returned to the client.
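
For illustration, here is a rough sketch of steps 1-3 using the (v1) azureml-core Python SDK, assuming an existing workspace, AKS cluster, scoring script, and environment; all names below are hypothetical:

```python
from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AksWebservice

ws = Workspace.from_config()  # assumes a local config.json for the workspace

# Step 1: register the trained model in the workspace model registry.
model = Model.register(workspace=ws,
                       model_path="outputs/model.pkl",  # hypothetical artifact
                       model_name="faq-matcher")

# Steps 2-3: build the scoring image and deploy it to AKS as a web service.
inference_config = InferenceConfig(
    entry_script="score.py",                          # hypothetical scoring script
    environment=Environment.get(ws, "AzureML-sklearn-1.0"),  # hypothetical env name
)
service = Model.deploy(ws, "faq-service", [model], inference_config,
                       AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=2),
                       deployment_target=ws.compute_targets["aks-cluster"])
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)  # clients POST question data here (steps 4-7)
```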

On-Premise and Other Deployment Options

Azure ML tools can create and deploy models on-premise, in the Azure cloud, and at the edge with Azure IoT edge computing.

Other options include:

• VMs with graphics processing units (GPUs) help handle the complex math and parallel processing requirements of images.

• Field-programmable gate arrays (FPGAs) as-a-service operate at hardware speeds and drastically improve performance.

• Microsoft Machine Learning Server: This provides an enterprise-class server for distributed and parallel workloads for data analytics developed using R or Python. This server runs on Windows, Linux, Hadoop, and Apache Spark.

5. Amazon SageMaker

Amazon SageMaker is a fully managed, end-to-end ML service that enables data scientists and developers to quickly build, train, and host ML models at scale. SageMaker covers data labeling and preparation, algorithm selection, model training, tuning, optimization, and deployment to production.

Components

Image source: AWS

• Authoring: Zero-setup hosted Jupyter notebook IDEs for data exploration, cleaning, and pre-processing.

• Model Training: A distributed model building, training, and validation service. You can use built-in standard supervised and unsupervised learning algorithms and frameworks or create your own training with Docker containers.

• Model Artifacts: Data-dependent model parameters allow you to deploy Amazon SageMaker-trained models to other platforms like IoT devices.

• Model Hosting: A model hosting service with HTTPS endpoints for invoking your models to get real-time inferences. These endpoints can scale to support traffic and allow you to A/B test multiple models simultaneously. Again, you can construct these endpoints using the built-in SDK or provide your own configurations with Docker images.

Features

Image source: slideshare

Architecture

Image source: ML in production

SageMaker is composed of various AWS services. An API is used to "bundle" together these services to coordinate the creation and management of different machine learning resources and artifacts.

On-Premise and Cloud Deployment

Once the MLOps model is created, trained, and tested using SageMaker, it can be deployed by creating an endpoint and sending requests to it over HTTP using the AWS SDKs. This endpoint can be used by applications deployed on AWS (e.g., Lambda functions, Docker microservices, or applications on EC2 instances) as well as applications running on-premises.
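
For illustration, here is a minimal sketch of invoking such an endpoint with boto3; the endpoint name and input format are hypothetical and depend on how the model was built:

```python
import json
import boto3

# Assumes a SageMaker endpoint named "demo-endpoint" has already been created.
runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="demo-endpoint",        # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"instances": [[0.1, 0.2, 0.3, 0.4]]}),
)
print(json.loads(response["Body"].read()))  # model predictions
```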

MLOps Tools Comparison Snapshot

| MLOps Tool | Purpose | Build | Deploy | Monitor | Drag-and-drop | Model Customization |
| --- | --- | --- | --- | --- | --- | --- |
| AWS SageMaker | Lets you train ML models by creating a notebook instance from the SageMaker console, along with the proper IAM role and S3 bucket access. | SageMaker console, Amazon Notebook, S3, Apache Spark | Amazon Hosting Service model endpoint | Amazon Model Monitor, Amazon CloudWatch metrics | NO | YES |
| Azure ML | Lets data scientists create separate pipelines for different phases of the ML lifecycle, such as the data pipeline, deploy pipeline, and inference pipeline. | Azure Notebook, ML Designer | Azure ML Studio, real-time endpoint service, Azure pipeline | Azure Monitor | YES | NO |
| DataRobot | Provides a single place to centrally deploy, monitor, manage, and govern all your production ML models, regardless of how they were created and where they were deployed. | Baked-in modeling techniques, drag-and-drop | Make predictions (drag-and-drop), Deploy, Deploy to Hadoop, DataRobot Prime, Download, Prediction Application | MLOps monitoring agents | YES | YES |
| Kubeflow | Dedicated to making deployments of ML workflows on Kubernetes simple, portable, and scalable; the goal is not to recreate other services but to provide a straightforward way to deploy best-of-breed open-source ML systems to diverse infrastructures. | TensorFlow libraries, Google AI Platform, Datalab, BigQuery, Dataflow, Google Storage, Dataproc for Apache Spark and Hadoop | TensorFlow Extended (TFX), Kubeflow Pipelines | ML Monitor API, Google Cloud Logging, Cloud Monitoring Service | YES | YES |
| MLflow | Provides experimentation, tracking, model deployment, and model management services covering the build, deploy, and monitor phases of ML projects. | Experiment tracking | Model deployment | Model management | YES | YES |

Wrapping up

Implementing the right mix of MLOps tools should not be an afterthought; it should be carefully calibrated within the ML lifecycle. Visibility into a model’s runtime behavior helps data scientists and operations teams monitor effectiveness while providing the opportunity to detect shortcomings and embrace functional improvements.