Preparing for growth with IT Service Management: an introduction

In today’s businesses, the IT function is increasingly essential. With more and more services digitized, products purchased online, and teams working remotely, the acceleration of digital transformation means IT teams are racing to support the resilience and growth of these functions by creating always-on, exceptional digital experiences for customers and employees alike.

This race has seen an increased burden on IT service management teams, who are in many ways still tied to tools that enforce old ways of working and limit new ones. These teams depend on the seamless flow of work across development, IT operations, and business teams. But the pressure from all sides can lead to a loss of focus.

Indeed, only 21% of ITSM professionals always ensure that their end users/customers know what can be done and by when. Before we explain how ITSM can be applied effectively, let’s look at what we mean by IT service management.

What is ITSM?

IT service management, often referred to simply as ITSM, is essentially the function of IT teams managing the end-to-end delivery of IT services to customers. This includes all the processes and activities required to design, create, deliver, and support IT services.

The core idea of ITSM is the belief that IT should be delivered as a service. A typical ITSM scenario might involve someone requesting new hardware, such as a laptop or call headset. This request would be submitted through a dedicated portal, with the person filling out a ticket with all relevant information, starting the chain of a repeatable workflow. This ticket would then land in the IT team’s ‘queue,’ where incoming requests are sorted and addressed according to priority.
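The queue-and-priority flow described above can be sketched in a few lines of code. This is an illustrative sketch only; the field names and priority levels are invented and not taken from any particular ITSM tool:

```python
import heapq

# Illustrative priority levels; real ITSM tools define their own.
PRIORITY = {"critical": 0, "high": 1, "medium": 2, "low": 3}

class TicketQueue:
    """A minimal ticket queue: the lowest priority number is served first,
    and ties are broken by arrival order."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # preserves arrival order within a priority level

    def submit(self, summary, priority):
        heapq.heappush(self._heap, (PRIORITY[priority], self._counter, summary))
        self._counter += 1

    def next_ticket(self):
        return heapq.heappop(self._heap)[2]

queue = TicketQueue()
queue.submit("New laptop request", "medium")
queue.submit("Email outage", "critical")
print(queue.next_ticket())  # the critical outage is addressed first
```

A real service desk adds SLAs, assignment, and routing on top, but the core "submit, sort, address by priority" workflow is this simple.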

As a result of their day-to-day interactions with IT teams, many people misconstrue the purpose of ITSM as a primary IT support function. However, ITSM teams manage and oversee all kinds of workplace technology, ranging from hardware devices to servers to business-critical software applications.

In this context, many IT service management (ITSM) tools today aren’t fit for purpose in tomorrow's business world, often creating conflict over collaboration, silos over knowledge sharing, and the rigidity of standardization over vital agility. There is a real need for an ITSM solution that enables teams to move fast, unify development, and enhance the IT function in an increasingly digital world.

There are some typical approaches to ITSM which have developed over time in the IT industry, positing that an effective ITSM strategy follows three steps:

  1. Build and implement IT technology
  2. Bring in and enforce the proper process
  3. Train people to use the technology and abide by the process

However, this traditional method can often lead to IT technology that doesn’t adapt to the teams' requirements. Atlassian ITSM offers a new approach, flipping the order of these three steps to put the needs of IT teams at the heart of any ITSM approach.

What is the Atlassian ITSM approach?

Atlassian’s ITSM solution was designed to address traditional IT responsibilities with modern practices in mind, such as culture, collaboration, and workflow improvement. Built on and extended from Jira, the engine for agile work practices for thousands of customers, Jira Service Management enables organizations to adopt new, modern practices that fit their needs and deliver high value to the business.

IT teams should be continually learning and improving. They must feel valued and empowered to make a difference in the organization. Rather than answering to rules imposed by a tiered reporting structure or rigid process, IT teams can make informed decisions about adopting SLAs and which software to implement. Because IT teams enable productivity and digital transformation, strong IT teams are critical to influential organizations. The team is at the center of Atlassian ITSM processes and technologies.

After focusing on the strength of the IT team, it’s possible to develop unique practices and capabilities that provide value to the organization. No matter how respectable the source, it’s insufficient to “copy and paste” another organization’s set of standards and hope they will work in your unique environment. Successful IT teams build their approach from frameworks like ITIL (the Information Technology Infrastructure Library) but are careful to adapt processes so they resonate with their customers.

Finally, software and technology should support a team’s practices and amplify their impact.

The benefits of using Atlassian ITSM

There are three core benefits to the Atlassian ITSM approach:

  • Quicker time to value, with teams able to adopt a common code approach to defining and refining workflows while still being standardized on Jira, a single source of truth. This reduces the complexity and cost of ITSM.
  • Greater visibility of work. Jira Service Management gives teams and the broader organization visibility into work happening across the company. Tight integrations that make contextual information easy to access allow for better business decisions.
  • Enhanced DevOps, allowing teams to be more effective across the entire IT service lifecycle – from planning to building, testing, deploying, changing, and optimizing.

This all stems from a configurable approach to ITSM, with Atlassian’s Jira Service Management allowing any team that interacts with IT, such as legal, HR, or finance, to build out their own IT service culture and operations while still having a centralized anchor that ensures the broader business strategy is adhered to.

Get ready to unleash the potential of your teams

ITSM is a key cog in modernizing business operations and enhancing the customer and employee experience. Atlassian’s approach attempts to take that influence further by placing people at the heart of its methodology, ensuring that all solutions are tailored to the needs of those who use them and maximizing efficiency. The world has changed drastically in the last two years, and software-enabled services are only set to accelerate further. Hence, IT teams need to transform, supporting businesses to be resilient in the face of constant flux and to differentiate themselves amid fiercer competition than ever. By moving towards a new ITSM approach, your organization can stay one step ahead of the game.

Radiant Digital is a ServiceNow partner. Get in contact with our experts to learn more about IT Service Management and how you can take your digital transformation to the next level.

What is GRANDstack - and why should you use it?


GRANDstack is a combination of technologies that enables software developers to build data-intensive, full-stack applications. It is a new-generation framework with notable advantages over other tech stacks: the ability to ship apps across platforms much faster, deliver consistent, high-quality UX, ease the transition to microservices, and centrally manage and secure entire APIs more seamlessly than alternatives such as REST (Representational State Transfer). REST is a software architectural style created to guide the design and development of the architecture for the World Wide Web; it defines a set of constraints for how the architecture of an Internet-scale distributed hypermedia system, such as the Web, should behave.

GRANDstack, meanwhile, is an acronym for the different components that make up this full-stack combination:

● GraphQL - A new paradigm for building APIs, GraphQL describes data and enables clients to query it.
● React - A JavaScript library for building component-based reusable user interfaces.
● Apollo - A suite of tools that work together to create great GraphQL workflows.
● Neo4j Database - The native graph database allows you to model, store, and query your data the same way you think: a graph.

The combination of these technologies can offer huge benefits when it comes to the success of your application. In this article, we will break down each of the components of GRANDstack in more detail to explain how they work and what they offer as part of the combination. By the end, you'll understand how GRANDstack works and why and how it can benefit your business or software project.

The Components of GRANDstack


GraphQL

The 'G' of GRANDstack, GraphQL, is an open-source data query and manipulation language for APIs and a runtime for fulfilling queries with existing data. GraphQL was developed internally by Facebook in 2012 before being publicly released in 2015.

Its Schema Definition Language is language-agnostic, making it incredibly versatile in interacting with other technologies. For example, GraphQL enables you to wrap a REST API, offering organizations with significant investment in existing REST architectures a quick, low-cost way to reap the efficiency gains of GraphQL.
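To make the wrapping idea concrete, here is a hedged sketch in plain Python rather than a real GraphQL server: a resolver-style function delegates to a (faked) REST endpoint and returns only the fields the client asked for. The paths, field names, and data are all hypothetical:

```python
def rest_fetch(path):
    # Stand-in for an HTTP GET against a legacy REST API; a real wrapper
    # would make a network call here.
    fake_rest_responses = {
        "/users/1": {"id": 1, "name": "Ada", "email": "ada@example.com"},
    }
    return fake_rest_responses[path]

def resolve_user(user_id, requested_fields):
    """GraphQL-style resolution: delegate to the REST API, then return
    only the fields the client requested."""
    record = rest_fetch(f"/users/{user_id}")
    return {field: record[field] for field in requested_fields}

print(resolve_user(1, ["name", "email"]))
```

In a real GRANDstack app this mapping lives in the GraphQL schema and its resolvers; the sketch only shows why such a wrapper can stay thin.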


React

The 'R' is for React, a free and open-source front-end JavaScript library for building user interfaces or UI components. It is maintained by Facebook and a community of individual developers and companies. React can be used as a base in the development of single-page or mobile applications. In the GRANDstack, React handles the front end, sending requests to and receiving responses from the GraphQL server via the Apollo Client React integration.


Apollo

The 'A' stands for Apollo, a powerful tool whose primary use is making GraphQL easier to work with on the client side. Apollo Client is a client-side JavaScript library for querying the GraphQL API from the app, and it handles interactions with the React front end. It is popular because it offers declarative data fetching, zero-configuration caching, and the ability to combine local and remote data, all of which simplify data management.

Neo4j Database

The 'N' and the 'D' of GRANDstack come in the form of the Neo4j Database. Neo4j is a graph database management system. Described by its developers as an ACID-compliant transactional database with native graph storage and processing, Neo4j is available in a GPL3-licensed open-source "community edition," with online backup and high-availability extensions licensed under a closed-source commercial license.

Neo4j delivers lightning-fast read and write performance while protecting data integrity. It is the only enterprise-strength graph database that combines native graph storage, a scalable architecture optimized for speed, and ACID compliance to ensure the predictability of relationship-based queries.

So that's GRANDstack. Why should you use it?

The definitions above touched briefly on the benefits of each component and how they interact with each other. However, we see four primary reasons for businesses to utilize GRANDstack.

1. Modernize your APIs with GraphQL

GraphQL offers many modern upgrades over a traditional REST API architecture. For example, it enables data to be obtained through a single endpoint and allows you to inspect the API to work out what you can query. With the proper knowledge, you can also make incremental changes to the returned data points without ending up with 'versioned' endpoints. If all this sounds too technical, the implication, in short, is that GraphQL helps you centrally manage and build super-powerful APIs, which will ultimately make the products and services you build on top more appealing to your customers. In addition, GraphQL offers the ability to build flexible public APIs that enable new applications. In contrast, architectural styles such as REST require constant API customization and often result in misuse and a low level of confidence in hundreds of REST endpoints.
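The single-endpoint idea can be illustrated with a toy resolver, again in plain Python with invented data: one request describes the whole shape it wants (a user plus their orders), where REST would typically need calls to two separate endpoints:

```python
# Hypothetical in-memory data standing in for two REST resources.
users = {1: {"name": "Ada", "email": "ada@example.com"}}
orders = {1: [{"id": "ORD-1", "total": 40}, {"id": "ORD-2", "total": 15}]}

def resolve(query):
    """Answer one query dict describing the desired result shape, e.g.
    {"user": {"id": 1, "fields": ["name"], "include_orders": True}}."""
    spec = query["user"]
    user = users[spec["id"]]
    result = {field: user[field] for field in spec["fields"]}
    if spec.get("include_orders"):
        result["orders"] = orders[spec["id"]]
    return result

result = resolve({"user": {"id": 1, "fields": ["name"], "include_orders": True}})
```

Because the client names the fields it wants, adding a new field later does not break existing callers, which is why versioned endpoints become unnecessary.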

2. Simplify your application development

Building on that last point, GRANDstack also helps to simplify the development of new applications. GRANDstack makes it easier for software developers to use the range of technologies they need simultaneously. Without such a stack, it can be complicated to get siloed technologies to interact effectively, which complicates the overall app development process.

3. Develop UI and business logic - simultaneously

The ability for developers to use these technologies simultaneously is another benefit: you can build user interfaces while writing the business logic code that governs how data is created, stored, and used. This means that the real-world rules your business runs on can be encoded into the user interfaces you build, simplifying data management and compliance.

4. Easy refactoring

As we've already hinted, GRANDstack's full-stack integration enables easy management of complex data. It also makes refactoring data much easier than it would be with siloed technologies. Refactoring is the process of changing a software system in a way that does not alter the external behavior of the code yet improves its internal structure. This can speed up improvements and enhancements and reduce the costs associated with downtime.
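A tiny before/after sketch shows what "unchanged behavior, improved structure" means in practice; the pricing logic here is invented purely for illustration:

```python
def total_price_before(items):
    # Original version: the tax rule is tangled into the loop.
    total = 0
    for item in items:
        if item["taxable"]:
            total = total + item["price"] + item["price"] * 0.2
        else:
            total = total + item["price"]
    return total

def total_price_after(items):
    # Refactored version: same result, tax rule factored out and named.
    def price_with_tax(item):
        rate = 1.2 if item["taxable"] else 1.0
        return item["price"] * rate
    return sum(price_with_tax(item) for item in items)
```

Both functions return the same totals; only the internal structure differs, which is exactly the property a refactoring must preserve.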


GRANDstack can revolutionize full-stack development and certainly offers numerous benefits to those developing software for their businesses. The development of applications and the ability to manage complex data loads is becoming ever-more crucial across all industry verticals as the era of Big Data accelerates. That means GRANDstack could play a significant role in the years to come.

Reach out to our experts to learn more about GRANDstack, how it works, and what it could bring to your development.

Benefits of Data Mesh

Data mesh has been generating a lot of buzz recently in the business intelligence world, because businesses are always trying to improve and scale. Thanks to its scalability and democratization features, data mesh can massively help with your business's data requirements and meet your growing needs. It’s a relatively new concept that continues to produce optimal outcomes where data is concerned, though its true potential has not yet been reached: continuous modifications are pushing the data platform architecture to new heights.

What is a Data Mesh?

In simple terms, data mesh is a paradigm that is both architectural and organizational. It’s an innovative way to prove that massive amounts of analytical data don’t need to be centralized, or handled only by a specialized team, to gain the necessary value from the information.

There are four main principles that this paradigm follows:

  • Decentralized, domain-oriented data ownership and architecture
  • Data provided as a domain-oriented product
  • Self-serve data infrastructure as a platform, promoting autonomous, domain-oriented data teams
  • Ecosystems and interoperability achieved through federated governance

Why choose a Data Mesh?

There are many reasons why businesses should use a data mesh. If a company is looking to become data-driven, data mesh helps increase customer personalization and improve customer experience. Not only does it drastically increase efficiency by reducing operational costs and employee time, but it also yields more in-depth business intelligence insights.

If you have a large number of domains, the data process can be highly complicated. By federating ownership of domain-based data products, a data mesh helps automate the right strategies to make the process as efficient as possible. Thus, a data mesh is an essential step in improving the democratization of crucial data.

Data Lakes vs. Data Mesh: What’s the difference?

Data lakes are great if you are looking for one centralized system to meet all your data needs. However, data lakes can hold you back from achieving your goals when you scale your business. This is where a data mesh comes into play. A data mesh gives employees more control over large amounts of data, and because data is used for various purposes, a less centralized system makes it possible to complete data transformations in the most efficient way. Data lakes are great for smaller organizations; for larger companies that need lots of data processed, a data mesh speeds up their processes through autonomy and a more flexible system. This saves tons of time for data teams, giving those using this system a distinct edge over their competitors.

What’s a Data Mesh score?

A data mesh score is mainly based on how complicated your processes are. It also reflects how many systems or domains you have, the size of your data team, and the priority of data governance. A high data mesh score means that your current processes would benefit most from using a data mesh.
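The article does not define a formula, so the sketch below is purely illustrative: it turns the factors mentioned above (domain count, team size, governance priority, process complexity) into a 0-10 score using invented weights:

```python
def data_mesh_score(num_domains, team_size, governance_priority, process_complexity):
    """Return an illustrative 0-10 fit score; higher suggests a data mesh
    would help more. governance_priority and process_complexity are 1-5."""
    score = 0.0
    score += min(num_domains, 10) / 10 * 3      # many domains favor a mesh
    score += min(team_size, 50) / 50 * 2        # bigger teams can own domains
    score += governance_priority / 5 * 2
    score += process_complexity / 5 * 3
    return round(score, 1)
```

An organization with many domains, a large data team, and complex processes scores near 10, signaling a good fit; the weights would need tuning for any real assessment.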

Observability for Data Mesh 

Observability means measuring the internal states of a system by examining what it produces; with it, businesses can analyze chains with more control and identify crucial elements. Data mesh helps ensure domain ownership where observability is concerned and offers these benefits through self-serve capabilities:

  • Data product quality metrics
  • Encryption for data at rest and in motion
  • Data product monitoring, alerting, and logging
  • Data product schema
  • Data product lineage
  • Data product discovery, catalog registration, and publishing
  • Data governance and standardization

These core standardizations help give businesses high-end observability when utilized. Furthermore, it provides the ability to scale individual domains throughout the entire observability process. 

Data as a product using Data Mesh

Treating data as a product is achieved by federating ownership of data to domain data owners, providing more control and allowing them to be held accountable when supplying data as products. During this process, owners are supported by self-serve data platforms that reduce the technical knowledge needed for the data mesh to work.

In addition, a new system of federated governance, automated to ensure interoperability of domain-oriented data products, is required. All these factors allow data to be decentralized, enhancing the experience of data consumers. Businesses that maintain a large pool of domains requiring various systems and teams to produce data can benefit from data mesh, along with those with a range of set data-driven access patterns and use cases.

Challenges of Data Mesh

Although the current data mesh has tons of benefits, there are a few challenges that you may face. Many domain experts are not knowledgeable in the specific domain programming languages a data mesh may use. On top of this, many programs in the data mesh are not API-compatible. This can make it difficult for some businesses to complete their required tasks efficiently.

Putting Data Mesh 2.0 into practice

Digital transformation can be a complex process, particularly when data mesh is implemented on large networks. However, with version 2.0 coming soon, many of its advances will address the current challenges of Data Mesh 1.0 while significantly improving network processes. For more information on ensuring a smooth process, contact us today.

Enterprise-First Automation and Application Modernization using Kogito

Intelligent cloud-native applications are helping enterprises spearhead innovation through automation and legacy modernization. Every application needs to deliver specific business logic, but workflows and rule engines make this difficult for developers, who need to increase their productivity and business-logic efficiency in cloud environments. Therefore, a precise language for implementing business workflows and business rules is the need of the hour.

Quarkus projects already demonstrate the power and efficiency of working in a fast-paced development environment through native execution. However, business automation projects such as jBPM and Drools now need support for cloud nativity, speed, and low resource consumption. This is why the developer community has built a way to create next-gen business applications that support native compilation in the cloud. Meet Kogito, a cloud-native business automation technology for building cloud-ready business applications while enjoying the supersonic Quarkus experience.

In Kogito, "K" refers to Kubernetes, which is the base for OpenShift as the target cloud platform, and to the Knowledge Is Everything (KIE) open-source business automation project from which Kogito originates. Being a powerful engine, Kogito allows the implementation of the core business logic in a business-driven manner. It merges tried and tested runtime components like jBPM, Drools, and OptaPlanner.

Behind the scenes in Kogito, the development of cloud-native apps is backed by:

  • Code generation based on business assets.
  • An executable model for process/rules/constraints/decision definitions.
  • A type-safe model that encapsulates variables.
  • REST APIs for business processes/rules/decisions.

These help in orchestrating distributed microservices and container-native applications influenced by Kubernetes and OpenShift.

Made for Hybrid Cloud Environments

Kogito adapts to your domain and tools since it is optimized for a hybrid cloud environment. The core objective of Kogito is to help converge a set of business processes and decisions into your own domain-specific cloud-native set of services.

What happens when you use Kogito?

When you use Kogito, you build a cloud-native application that equates to a set of independent domain-specific services achieving specific business value. The processes and decisions you use to define the target behavior are executed as part of the services you create. This results in highly distributed and scalable services with no centralized orchestration. Instead, the runtime that your service uses is optimized based on what your service needs.

Kogito Implementation on Knative’s Event-driven Architecture

Kogito ergo Cloud: Kogito is designed from the ground up to run at scale on cloud infrastructure. By leveraging the latest technologies (Quarkus, Knative, etc.), it achieves speedy boot times, a low footprint, and instant scaling on orchestration platforms like Kubernetes.

Kogito ergo Domain: Kogito strongly focuses on building domain-specific services using APIs and JSON data. This prevents any leaking of tool abstraction into your client applications. To achieve this, Kogito relies on code generation, which takes care of 80% of the work in a domain-specific service (or services) based on the defined processes and rule(s). Additionally, you can expose domain-specific data using events or a data index for other services to consume and query it easily.

Kogito ergo Developer: Kogito's battle-tested components offer a power-packed developer experience to achieve instant efficiency by having:

  • Tooling to build processes and rules for cloud-native services, embedded wherever required. For example, the Kogito Bundle VSCode extension enables you to edit your Business Process Model and Notation (BPMN) 2.0 business processes and Decision Model and Notation (DMN) decision models directly in your VSCode IDE, next to your other application code.
  • Code generation that automates 80% of the processes.
  • Customization flexibility.
  • Simple local development with live reload.
  • Instant productivity, where a service can be developed, built, deployed, and tested without delay. On Quarkus and Spring Boot, a dev mode helps achieve this; Quarkus even offers a live reload of your processes and rules in the running application, with advanced debug capabilities.
  • Components based on well-known business automation KIE projects, specifically jBPM, Drools, and OptaPlanner, which offer reliable open-source solutions for business processes, rules, and constraint solving.

The Kogito Operator Framework lets you deploy services in the cloud. It is based on the Operator SDK and automates deployment steps. For example, given a link to the Git repository containing your application, it automatically configures the components required to build your project from source and deploy the resulting services.

Kogito's command-line interface (CLI) simplifies these deployment tasks.

Why Kogito?

Cloud-first priority: By leveraging Quarkus and container orchestration platforms like Kubernetes/OpenShift, Kogito lets cloud-native apps run at scale along with superfast boot times.

Domain-specific flexibility: Kogito can be customized to specific business domains to avoid leaking workflow tool-specific application abstractions while interacting with the workflows.

Developer-centric Experience:

  • Kogito enhances the developer experience (based on VSCode) to achieve instant efficiency with embeddable tooling wherever required.
  • Kogito tooling (Codegen) automatically generates code for a minimum of 80% of workflows with the flexibility to customize and simplify application development.
  • Advanced local test/debug and hot reload.
  • Integration into existing developer workflows.
  • Reusable building blocks.
  • Rapid prototyping.

Optimized for Cloud and Containers

  • Small footprint
  • Fast startup
  • Dedicated generated runtime + optional add-ons
  • Serverless architecture

Technology Enabler

  • Cloud events
  • Event-driven
  • Serverless with Knative
  • Machine-learning support
  • Grafana/Prometheus

Other benefits include:

  • Runtime persistence for workflows, preserving process state for instances across restarts.
  • Supports events and enables integration with third-party systems using external REST API calls.
  • Enables process instance progress tracking from the Kogito process management console.
  • Provides security to application endpoints by integrating with an OpenId Connect (OIDC) Server.

Latest Kogito Features

The Kogito 0.8.0 release has 25+ new feature requests, 35 enhancements, and more than 40 Kogito Runtime bug fixes. The VS Code editor has been updated with bug fixes, along with improvements to the Chrome extensions for Business Process Model & Notation (BPMN) and Decision Model & Notation (DMN). In addition, new online editors make authoring more robust.

Here are the top features.

Online authoring tools: Kogito offers authoring tools to increase the productivity of business automation experts who author executable business logic in the cloud. The latest release ships Kogito's online editors for BPMN and DMN ready to use, with no local setup required. You can use these editors to sketch BPMN-based business process diagrams or business decisions based on DMN specifications.

Kogito Management Console: The new Kogito Management Console helps view and manage process instances. The Kogito Data Index in the new console aids visualization and management of process instances and domain-based views. Developers get a detailed picture of running process instances, variables, sub-processes, and task execution workflows.

Kogito Domain Explorer: This feature helps navigate business data in a selected domain.

Multiple Run-time Modes: For Quarkus and Spring Boot, Kogito supports the following modes.

  • Development mode: For local testing. This mode offers live reload of processes and decisions in your running applications for advanced debugging.
  • JVM mode: For Java virtual machine (JVM) compatibility.
  • Native mode: Quarkus requires GraalVM. This mode is used for direct binary execution as native code.

Meaningful process IDs

Meaningful process IDs simplify the correlation of new processes with existing business information. In addition, they give you the option to create a unique reference via a business key even if you use auto-generated process IDs.

You can create a unique reference by passing a query parameter named businessKey in the request to start a new process instance. For example, you can start an order process correlated to the business key by sending a request similar to the below example:

POST /orders?businessKey=ORD-0001

You can then retrieve the new process instance using that reference ID, as shown below.

GET /orders/ORD-0001

The following HTTP request deletes the process instance with the same reference ID:

DELETE /orders/ORD-0001

Process-variable tagging

The Kogito runtime supports process-variable tagging. Developers can provide metadata about a process variable and perform model-based process grouping if required. For example, important business KPIs can be grouped and classified this way as process inputs and outputs, and variables can be tagged as internal or read-only.
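In Kogito itself, tags are declared as variable metadata in the process model; the Python sketch below only illustrates the grouping idea, with invented variable names and tags:

```python
# Hypothetical process variables with tags, mirroring the grouping the
# text describes (business KPIs vs. internal or read-only variables).
process_variables = {
    "orderTotal": {"value": 250.0, "tags": ["business-kpi"]},
    "traceId":    {"value": "abc-123", "tags": ["internal"]},
    "customerId": {"value": "C-42", "tags": ["read-only", "business-kpi"]},
}

def variables_with_tag(variables, tag):
    """Select only the variables carrying a given tag, e.g. to report KPIs."""
    return {name: var["value"] for name, var in variables.items()
            if tag in var["tags"]}
```

Filtering by `"business-kpi"` would return only the KPI variables, which is the kind of model-based grouping the feature enables.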

Kogito Operator and Kogito CLI

Kogito Operator and Kogito CLI are based on the popular Red Hat OpenShift Container Platform. The Kogito Operator helps deploy Kogito Runtimes from the project's source, and developers can interact with the Operator through the Kogito CLI.

You can set the -enable-istio parameter in Kogito CLI after creating a new Kogito application to enable Istio service mesh sidecars. The sidecars can then automatically combine tracing, observability, security, and monitoring in your Kogito pods.

Kogito Jobs Service

The Kogito Jobs Service offers a lightweight, dedicated solution for scheduling two types of jobs: a time-scheduled job executes only once at a given time, while a periodically scheduled job executes at a given interval, a predetermined number of times.
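The two job types can be mimicked with a toy tick-based scheduler. This is not the Jobs Service API, just a sketch of the semantics, with ticks standing in for wall-clock time:

```python
def run_scheduler(jobs, total_ticks):
    """Simulate the two job kinds: a 'time' job fires once at its tick,
    a 'periodic' job fires every `interval` ticks, at most `limit` times."""
    fired = []
    for tick in range(total_ticks):
        for job in jobs:
            if job["kind"] == "time" and tick == job["at"]:
                fired.append((tick, job["name"]))
            elif (job["kind"] == "periodic" and tick > 0
                  and tick % job["interval"] == 0
                  and tick // job["interval"] <= job["limit"]):
                fired.append((tick, job["name"]))
    return fired

schedule = run_scheduler(
    [{"kind": "time", "name": "one-shot", "at": 3},
     {"kind": "periodic", "name": "repeat", "interval": 2, "limit": 2}],
    total_ticks=8,
)
# "one-shot" fires once at tick 3; "repeat" fires at ticks 2 and 4, then stops.
```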

Decision and process integration

Kogito supports the modeling of a decision-driven service as either a self-contained service or based on intelligent workflows and decision tasks. For example, you can implement business decisions using Drools rules, decision tables in a spreadsheet, or DMN models where Kogito automatically generates REST endpoints for you.

In Kogito 0.8.0, this feature allows the integration of DMN models with BPMN processes to drive business processes by intelligent decision nodes.
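The spirit of a decision table can be sketched as rules evaluated in order, first match wins (akin to a "first" hit policy in DMN). The discount rules below are invented for illustration and are not generated by Kogito:

```python
# Each row of the table pairs a condition with an outcome.
discount_table = [
    {"when": lambda order: order["total"] >= 500,    "then": 0.10},
    {"when": lambda order: order["repeat_customer"], "then": 0.05},
    {"when": lambda order: True,                     "then": 0.0},  # default rule
]

def decide_discount(order):
    """Evaluate the table top-down; the first matching rule decides."""
    for rule in discount_table:
        if rule["when"](order):
            return rule["then"]

print(decide_discount({"total": 600, "repeat_customer": False}))  # 0.1
```

In Kogito, such a table would live in a spreadsheet or DMN model, with a REST endpoint generated around it automatically.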

Technologies used with Kogito

Kogito is compatible with the latest cloud-based technologies, such as Quarkus, Knative, and Apache Kafka, to speed up start times and enable instant scaling on container application platforms such as OpenShift.

  • OpenShift is based on Kubernetes and is the target platform for building and managing containerized applications.
  • Quarkus is a Kubernetes-native Java stack used to build applications with Kogito services.
  • Spring Boot support is available for Kogito, where you can use the Spring Framework.
  • GraalVM with Quarkus provides native compilation with Kogito, resulting in fast startup times and a minimal footprint, which is especially valuable for small serverless applications.
  • Knative allows building serverless applications with Kogito that can be scaled up or down (to zero) as needed.
  • Prometheus and Grafana can be used for monitoring and analytics with optional extensions.
  • Infinispan and MongoDB are middleware technologies that offer persistence support.
  • Kafka and Keycloak are middleware technologies that support messaging and security.

Kogito on Quarkus and Spring Boot

  • Kogito supports the primary Java frameworks Quarkus (recommended) and Spring Boot.
  • Quarkus is a Kubernetes-native framework with a container-first approach to building Java applications, targeting runtimes such as GraalVM and HotSpot. Quarkus reduces the size of both the Java application and the container image footprint to optimize for Kubernetes, while eliminating some legacy Java overhead and reducing the memory required to run those images.
  • For Kogito services, Quarkus is preferred because it optimizes Kubernetes compatibility and offers enhanced developer features, such as live reload in dev mode for advanced debugging.
  • Spring Boot helps build standalone, production-ready Spring applications that require minimal configuration without a full Spring configuration setup.
  • For Kogito services, Spring Boot is ideal for developers who need to use Kogito in existing Spring Framework environments.

Kogito's Architecture

When using Kogito, a cloud-native application is built from a set of independent, domain-specific services that collaborate to achieve some business value. The processes and rules describing the service behavior are executed as part of the services and are highly distributed and scalable without a centralized orchestration service. This way, the runtime of your service is fully optimized on a need basis. For long-lived processes, the runtime state must be persisted externally using a data grid, like Infinispan. Each service also produces events that can be aggregated, indexed in a data index service, and consumed to offer advanced query capabilities (using GraphQL).

Deploying a Custom Kogito Service on Kubernetes with Kogito Operator

Kogito Magic: Behind the Scenes

Kogito starts by reading business assets, like a BPMN2 file, decision tables, or Drools rules, and creates executable models using a Java fluent API. Developers can leverage Quarkus' hot-reload feature to update a business process diagram, save it, and test the changes instantly without a server restart. Kogito automatically generates an executable business resource for every business asset, along with REST endpoints that allow client apps to interact with processes, tasks, and rules in a more domain-driven way. This business automation tool increases productivity and speeds up business logic validation through easy-to-maintain business assets, the hot-reload feature, and quick application bootstrap during the development phase. When executing the Kogito business application, if the user compiles it to run in native mode with GraalVM, business rule execution can run 100 times faster with 10x lower resource consumption compared to a JVM execution. Once started, you can access the application immediately after boot with no additional processing. Finally, with the Kogito Operator, the user can create a new Kogito application or deploy an existing one through the UI or CLI tool.

Wrapping up

Kogito's Serverless Workflow specification can help define complex workflows that work across multiple platforms and cloud providers. Developers can now rely on a platform that is fully compliant with the standards and specifications needed for cloud-native business application automation.

Connect with Radiant Digital's experts to explore the advantages of Kogito for your cloud-native enterprise applications.


Championing ‘Secure CI/CD’ with DevSecOps using GitLab Secure

A successful DevOps implementation has two cornerstones: Continuous Integration and Continuous Deployment. Enterprises can reap the bottom-line benefits of an optimized CI/CD pipeline by automating their build, integration, and testing processes. Conventional IT development processes involve security only at the end of the application or software stack. To break down development and delivery process silos and ship software faster and more securely, securing CI/CD workflows has become necessary. Governance shortcomings and fragmented toolchains also put continuous release and deployment automation at risk. Thus, DevSecOps is the natural next step of DevOps, converging development, operations, and security teams. It supplies the missing link in CI/CD pipeline optimization, helping enterprises manage persistent security threats promptly.

What is DevSecOps?

The DevSecOps process integrates IT security practices into your application’s entire life cycle. It factors in application and infrastructure security considerations from the start, rather than pushing the security team’s role to the final development stage. It aims to:

  • Empower the Development Team to optimize CI/CD security and automate remediation through the improved visibility of vulnerabilities, risks, and code coverage.
  • Prevent pipeline vulnerabilities using the incident history from InfoSec.
  • Maintain a Trusted Repository that is threat-free.
  • Verify functional stability, security & compliance before GO-Live.

Why DevSecOps Integration Matters

  • It tests every piece of code upon commit for security threats at optimized costs.
  • The developer can remediate while working on their code or create an issue with a single click.
  • The security team can monitor and manage lurking vulnerabilities captured as software development by-products.
  • A single source of truth can help with remediation collaboration among developers, operations professionals, and security experts.
  • A single tool minimizes integration and maintenance costs throughout the DevOps pipeline.

Enterprise DevSecOps Integration with GitLab Secure

With GitLab Secure, businesses can continuously secure high-velocity DevOps. GitLab Secure covers the entire DevSecOps Cycle from Manage to Defend in a single application.

Single sign-on eliminates the need for separate tool access requests, reduces context switching, and improves cycle time. GitLab Secure improves quality, security, and developer productivity by:

  • Offering actionable vulnerability findings through application security testing and remediation, helping security professionals resolve and manage vulnerabilities easily.
  • Adding cloud-native application protection and monitoring capabilities to secure production environments.
  • Ensuring policy compliance and auditability through GitLab’s end-to-end transparency, MR approvals, compliance dashboard, and standard controls.
  • Providing SDLC platform security covering all the software stages.

GitLab Secure Features

Each of the following features displays vulnerabilities and analysis results inline with each merge request for immediate resolution.

Static Application Security Testing (SAST)

  • Scans the application binaries and source code to spot potential vulnerabilities (such as code susceptible to SQL injection) before deployment.
  • Collates scan results and presents them as a single report.
  • Lets you assess vulnerabilities in the GitLab pipeline and manage issues with one click.

Dependency Scanning

  • Analyzes external dependencies, like libraries and their versions, for known vulnerabilities on each code commit in the CI/CD pipeline.
  • Identifies dependencies needing critical updates.
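In spirit, a dependency scan is a lookup of each pinned dependency against an advisory database. The sketch below is a toy illustration with made-up package names; real scanners consult curated vulnerability feeds such as the GitLab Advisory Database.

```python
# Toy advisory database: package -> versions known to be vulnerable.
# "libfoo" and "libbar" are hypothetical packages for illustration only.
ADVISORIES = {
    "libfoo": {"1.0.0", "1.0.1"},
    "libbar": {"2.3.0"},
}

def scan_dependencies(pinned: dict) -> list:
    """Return findings for packages whose pinned version has a known advisory."""
    return [
        f"{pkg}=={version} has a known vulnerability"
        for pkg, version in pinned.items()
        if version in ADVISORIES.get(pkg, set())
    ]

report = scan_dependencies({"libfoo": "1.0.1", "libbar": "2.4.0"})
print(report)  # only libfoo 1.0.1 matches an advisory
```

A real scan also maps each finding to a CVE identifier and a suggested safe version, which is what powers the "identifies dependencies needing critical updates" behavior.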

Container Scanning

  • Checks for known vulnerabilities in the app environment’s Docker images (such as using an older version of a dependency) using an open-source tool called ‘Clair.’
  • Helps prevent the redistribution of vulnerabilities via container images.
  • Vulnerabilities are displayed in-line for each merge request.

Dynamic Application Security Testing (DAST)

  • Analyzes running web applications for runtime vulnerabilities (like a missing X-Content-Type-Options header) by running a live attack against an app or environment.
  • Leverages GitLab’s review app CI/CD capability to run dynamic scans earlier in the SDLC.
  • Displays results in a list sorted by vulnerability severity level.
  • Accepts HTTP credentials to test private apps.
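As a toy illustration of one kind of check a DAST scan performs, a response-header audit might look like the following. This is a hypothetical helper, not GitLab's DAST engine, and the required-header list is an assumption for the example.

```python
# Security headers this toy check expects on every HTTP response.
# A value of None means "any value is acceptable" in this sketch.
REQUIRED_HEADERS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": None,
    "Content-Security-Policy": None,
}

def audit_headers(response_headers: dict) -> list:
    """Return a list of findings for missing or misconfigured headers."""
    findings = []
    # Normalize header names for case-insensitive lookup.
    lowered = {k.lower(): v for k, v in response_headers.items()}
    for name, expected in REQUIRED_HEADERS.items():
        value = lowered.get(name.lower())
        if value is None:
            findings.append(f"missing header: {name}")
        elif expected is not None and value.lower() != expected:
            findings.append(f"unexpected value for {name}: {value}")
    return findings

findings = audit_headers({"Content-Type": "text/html"})
print(findings)  # all three required headers are absent
```

A real DAST run issues live requests against a deployed review app and inspects the actual responses; the audit logic above only mirrors the final classification step.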

License Compliance

  • Helps security teams scan all the licenses within project dependencies and check them against an approved or denied list.
  • Automatically searches for approved and unapproved licenses in project dependencies based on company policies.
  • Generates project-based custom license policies.
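The approved/denied classification above amounts to a policy lookup per dependency. Here is a minimal sketch with an example policy (the license lists are assumptions, not GitLab defaults):

```python
# Example policy lists; real policies are defined per project or company.
APPROVED = {"MIT", "Apache-2.0", "BSD-3-Clause"}
DENIED = {"GPL-3.0-only"}

def check_licenses(dependency_licenses: dict) -> dict:
    """Classify each dependency's license as approved, denied, or needing review."""
    verdicts = {}
    for dep, license_id in dependency_licenses.items():
        if license_id in APPROVED:
            verdicts[dep] = "approved"
        elif license_id in DENIED:
            verdicts[dep] = "denied"
        else:
            verdicts[dep] = "needs review"
    return verdicts

print(check_licenses({"requests": "Apache-2.0", "somelib": "GPL-3.0-only"}))
```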

Security Dashboard

The Security Dashboard is a primary security tool that is available at the group and project levels. It provides an overview of security status and actionable insights to start a remediation process. This tool provides data visualizations for easy consumption of performance information.

Secret Detection: Secret Detection scans repository content to detect sensitive API keys, tokens, and passwords that may be unintentionally committed to remote repositories. Vulnerabilities found by security scans are displayed in an intuitive UI so the developer can resolve them before deployment. The Security Dashboard and complete repository history scanning using SAST help prevent the accidental leakage of secrets.
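The idea behind secret scanning can be sketched with a small regex-based checker. This is a hypothetical toy with only three rules; real scanners like GitLab's ship large sets of curated detection rules.

```python
import re

# Toy secret-detection patterns; real scanners use many curated rules.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic password assignment": re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
}

def scan_text(text: str) -> list:
    """Return (rule name, matched snippet) pairs found in the given text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\npassword = "hunter2"\n'
for rule, snippet in scan_text(sample):
    print(rule, "->", snippet)
```

In a CI/CD pipeline, a check like this runs on every commit (and on full repository history), so a leaked credential is flagged before it ever reaches production.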

Auto Remediation: Auto remediation offers an automated vulnerability solution flow with suggested fixes. You can then test the fixes; once they pass, you can deploy the application to the production environment.


Feature Flags: GitLab Secure also adds an Operations Dashboard with Feature Flags, in addition to its Kubernetes-native integrations and multi-cloud deployment support. Feature flagging is a technique that reduces the need to maintain multiple branches in the source code (known as feature branches) by letting you toggle a software feature at runtime before it is released. Feature flags are the linchpin of a progressive delivery strategy, allowing multiple software iterations to be delivered simultaneously without the cost of constant branching and merging.
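A percentage-rollout flag, one common flavor of the technique, can be sketched in a few lines. This is a hypothetical toy, not GitLab's implementation; the hashing scheme is an assumption for illustration.

```python
import zlib

def flag_enabled(flag_name: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically enable a flag for a stable subset of users.

    Hashing flag + user gives each user a stable bucket in [0, 100),
    so the same user always sees the same behavior for a given flag.
    """
    bucket = zlib.crc32(f"{flag_name}:{user_id}".encode()) % 100
    return bucket < rollout_percent

# Ramp a new checkout flow from 0% to 100% without branching the codebase.
for user in ["alice", "bob", "carol"]:
    print(user, flag_enabled("new-checkout", user, 25))
```

Raising `rollout_percent` gradually exposes the feature to more users, which is exactly the progressive delivery pattern described above.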

Setting up DevSecOps CI/CD Using GitLab


Prerequisites:

  1. An existing or new GitLab account.
  2. A new GitLab project set up.

Step 1: In your GitLab project, navigate to your repository.

Next, add your source code to this repository using your IDE tools.

Step 2: Add a new .gitlab-ci.yml file for the CI/CD pipeline stages, tasks, etc. GitLab will auto-detect any changes to this file and run your CI/CD pipeline once any changes or updates occur.

Step 3: Set up a GitLab Runner to run jobs in the CI/CD pipeline. You can access Runner settings at Settings -> CI/CD -> Runners.

Step 4: Redeploy your CI/CD pipeline by navigating to project -> Pipeline -> Run Pipeline.

This will successfully set up your CI/CD pipeline in GitLab.

All the security features mentioned previously can be added to your DevOps CI/CD pipeline using GitLab’s default security templates.

Step 5: Next, manually include the security scan templates in the .gitlab-ci.yml file in your existing project.
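A minimal sketch of such an include block is shown below. The template paths reflect how GitLab commonly ships them, but names can vary by GitLab version, so verify against your instance's documentation.

```yaml
# .gitlab-ci.yml – pull in GitLab's built-in security scan templates.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  # DAST additionally needs a deployed review app and DAST_WEBSITE set:
  # - template: Security/DAST.gitlab-ci.yml
```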

Step 6: Commit a change and observe your new DevSecOps CI/CD pipeline progress while checking your security and compliance board.

Access your Security Dashboard and other security options using the left-hand menu in GitLab.

Any vulnerabilities will be displayed in red under Repository -> Files.

You can view the vulnerability report by clicking Security & Compliance -> Vulnerability Report.

From here, you can keep improving your app’s security by updating Node.js and other Docker container package dependencies and modifying your Dockerfile.

Parting Words

Overall, with DevSecOps available throughout the CI/CD workflow, a single application will help companies improve how they deliver code, reduce release cycles, and innovate. GitLab Secure is a DevSecOps game-changer that applies to governance, construction, verification, and deployment.

Radiant Digital empowers enterprises with an optimized DevSecOps framework using GitLab Secure. Let’s connect to discuss more.

Data Science – The Cornerstone of Certainty during Uncertain Times

Data is a crucial digital asset for any individual or organization in their decision-making journey. According to IDC, by 2025, global data will grow to 175 zettabytes. This explosion in data from multiple sources like connected devices requires deriving valuable insights to make smarter data-driven decisions. Data Science helps enterprises understand data better and optimize its utilization for time-consuming and expensive processes. Collecting, analyzing, and managing data on-demand enables businesses to curb wastage, detect revenue leaks, and proactively solve problems to propel bottom lines. 

Data Science is a boon to any organization that needs to understand a problem, quantify data, gain visibility and insights, and implement data for decision-making. In this blog, we will take you through the basics of Data Science and give you a sneak peek into how top companies are implementing it.

Data Science – Definition

Data science is an interdisciplinary field of expertise that combines scientific methods, algorithms, processes, and systems to extract actionable insights from structured and unstructured data and apply the knowledge across a broad range of application domains.

It converges domain expertise, computer programming and engineering, analytics, Machine Learning algorithms, mathematics, and statistical methodologies and modeling to extract meaningful data insights. In business engineering, the Data Science process starts with understanding a problem, proceeds through extracting and mining the required data, data handling and exploration, and data modeling and feature engineering, and culminates in data visualization.


Data Science helps find different patterns within blocks of information that we feed into a system. It helps build data dexterity in implementing and visualizing various forms of data and supporting the following workflow.

It mainly serves the following Business Process Operations’ stages:

  • Design
  • Model/Plan
  • Deploy & execute
  • Monitor & control
  • Optimize & redesign

Benefits for Data-centric Industries

A recent study showed that the global Data Science market is expected to grow to $115 billion by 2023. The following benefits drive this growth.

Better Marketing: Companies are leveraging data for marketing strategy analysis and better advertisements. By analyzing customer feedback, behavior, and trend data, companies can match customer experiences to their expectations.

Customer Acquisition: Data Scientists help companies analyze customer needs. Companies can then tailor their offerings to potential customers.

Innovation: The abundance of data enables faster innovation. Data Scientists help gain creative insights from conventional designs. Customer requirement and review analysis help improve existing products and services or craft newer and innovative ones.

Enriching Lives: In Healthcare, gaining timely insights from available data shapes seamless patient care. Data Science helps collect and streamline EHRs and patient history data to offer essential healthcare services.

Why Data Science?

With the advancements in computational capabilities, Data Science makes it possible for companies to analyze large-scale data and draw insights from this massive hoard of information. Furthermore, with Data Science, industries can make proper data-driven decisions.

  • With the right tools, technologies, and data algorithms, we can leverage data to make predictions or improve decision-making.
  • Data Science helps in fraud detection using advanced Machine Learning Algorithms.
  • It allows building and enhancing intelligence capabilities when used with AI in the field of automation.
  • Companies can perform sentiment analysis to gauge customer brand loyalty.
  • It helps companies make product/service recommendations to customers and improve their experience.

Data Science Components

  1. Statistics includes methods for collecting and analyzing large volumes of numerical data to extract valuable insights.
  2. Visualization techniques help access large data sets and convert them into easy-to-understand, digestible visuals.
  3. Machine Learning includes building and studying predictive algorithms that generate forecasts from data.
  4. Deep Learning is an area of machine learning research where the algorithm selects the analysis model to implement.

How Companies are Revolutionizing Business with Data Science

Facebook – Monetizing Data through Social Networking & Advertising

  • Textual Analysis: Facebook uses a homegrown tool called DeepText to extract, learn, and analyze the meaning of words in posts.
  • Facial Recognition: DeepFace uses a self-teaching algorithm to recognize photos of people.
  • Targeted Advertising: Deep Learning is used to pick and display advertisements based on the user’s search history and preferences on their browser or Facebook.

Amazon – Data Science to Transform E-commerce 

  • Supply Chain and Inventory: Amazon’s anticipatory shipping model uses Big Data to predict the products potential customers are most likely to purchase. It analyzes purchase patterns and helps with supply chain management for warehouses based on the customer demand around them.
  • Product Decisions: Amazon uses Data Science to gauge user activity, order history, competitor prices, product availability, etc. Custom discounts on popular items are offered for better profitability.
  • Fraud Detection: Amazon has novel ways and algorithms to detect fraud sellers and fraudulent purchases.

Empowering Developing Nations

  • Developing countries use Data Science to determine weather patterns, disease outbreaks, and daily living. Microsoft, Amazon, Facebook, and Google are all supporting analytics programs in these nations by leveraging data.
  •  Data Science equips these nations to improve agricultural performance, mitigate the risks of natural disasters & disease outbreaks, extend life expectancy, and raise the overall quality of living.

Combating Global Warming

  • According to the World Economic Forum, Data is crucial to controlling global warming using reporting and warning systems. The California Air Resources Board, Planet Labs, and the Environmental Defense Fund are collaborating on a Climate Data Partnership – a common reporting platform designed to assist more targeted measures for climate control.
  • A combination of overlapping and distinct data projects, including two satellite launches, will help monitor climate change from space. The data from these satellites, combined with ground data on deforestation and other environmental parameters, will help global supply chains respond appropriately.

Uber – Using Data to Enhance Rides 

  • Uber maintains driver and customer databases. When a cab is booked, Uber matches the customer’s profile with the most suitable driver. Uber charges customers based on the time taken to cover the distance rather than the distance itself. Uber’s algorithms use time-taken data, traffic density, and weather conditions to assign a cab.
  • During peak hours, a shortage of drivers in an area is detected, and ride rates are increased automatically.
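A grossly simplified sketch of such dynamic pricing is shown below. The formula, the demand ratio, and the 2.5x surge cap are all made up for illustration; Uber's actual algorithm is proprietary and far richer.

```python
def ride_fare(minutes: float, base_rate_per_min: float,
              active_drivers: int, waiting_riders: int) -> float:
    """Time-based fare with a demand-driven surge multiplier.

    Surge kicks in when riders outnumber available drivers; the
    formula and the 2.5x cap are hypothetical, for illustration only.
    """
    demand_ratio = waiting_riders / max(active_drivers, 1)
    surge = min(max(demand_ratio, 1.0), 2.5)  # clamp to [1.0, 2.5]
    return round(minutes * base_rate_per_min * surge, 2)

# Off-peak: plenty of drivers, no surge.
print(ride_fare(20, 0.5, active_drivers=100, waiting_riders=40))   # 10.0
# Peak hour: demand doubles supply, fare surges 2x.
print(ride_fare(20, 0.5, active_drivers=50, waiting_riders=100))   # 20.0
```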

Bank of America – Leveraging Data to Deliver Superior Customer Experiences

  • Bank of America pioneered mobile banking and has recently launched Erica, the world’s first virtual financial assistant.
  • Currently, Erica is providing customer advice to more than 45 million users globally. Erica uses Speech Recognition for customer inputs and provides the relevant output.
  • More banks now leverage Data Science algorithms like association, clustering, forecasting, classification, and predictive analytics to detect payment, insurance, credit card, accounting, and customer information fraud.

Airbnb – Customer-centric Hospitality with Data-centric Decisioning

  • Data Science helps analyze customer search results, demographics data, and bounce rates from the website.
  • In 2014, Airbnb mitigated lower bookings in specific locations by releasing a custom version of its booking software for those countries, replacing the neighborhood links with the top travel destinations around a location. This resulted in a 10% improvement in the lift rate for property hosts.
  • Airbnb makes knowledge graphs to match user preferences to the ideal lodgings and localities. The Airbnb search engine has been optimized to connect customers to the properties of their choice.


Data Science lets you discover patterns from raw data and accelerate business conversions in a challenging digital landscape. It helps reduce the constraints of time and budget allocation while ensuring superior customer experience delivery. Connect with Radiant to learn more!

Digital Transformation with an API-focused approach

Advancements in technology and the burgeoning number of connected devices mandate every business to become digitally enabled. A crucial ingredient to navigate uncertainty and deliver value in a competitive digital economy is Digital Transformation.

Defining Digital Transformation

Digital transformation is the key strategic initiative businesses undertake to adopt and extend digital technologies for traditional and non-digital business processes and services. This may include creating new processes to meet evolving market demand and propel bottom lines. Digital transformation completely alters the way businesses operate and are managed, with a primary focus on customer-centric value delivery.

What does Digital Transformation Entail?

  • Digital transformation can include digital optimization, IT Modernization, and inventive digital business modeling.
  • Analyzing customer needs, mapping emerging technologies to requirements, and leveraging them for elevated user experiences.
  • Business evolution through new experimentation, techniques, and approaches to common issues.
  • Continual adaptation to a dynamic environment and change management.
  • Cloud migrations, API implementations, legacy app modernization, on-demand training, leveraging artificial intelligence, incorporating automation, and more.
  • Redefining leadership roles for strategic and change planning and digital-business disruption. This prevents siloed thinking and lets digital bolt-on strategies give way to a more holistic approach.

According to Bain & Company, only 8% of global companies have achieved their business outcome targets from their digital technology investments. Leaders need to invest in digital transformation instead of just running their business with technology.

Benefits of Digital Transformation

3 Common Challenges of Digital Transformation

Most digital transformation issues can be associated with one of the following: people, communication, and measurement.

People: People are at the core of any digital transformation initiative. Resistance to the cultural change caused by digitalization is a natural human instinct; 46% of CIOs say human adaptation to culture change is their most significant barrier.

Poor Communication: Leaders often don't communicate their digital transformation plans and expectations to their teams. Specific and actionable guidance is often overlooked before, during, and after digital transformation.

Lack of Measurement: The absence of newer and context-specific KPIs and metrics and platforms to measure them leads to assumptions and failures.

Connecting the Digital Transformation Dots with APIs

Businesses need an outside-in viewpoint of customer experiences and expectations to accelerate and close gaps digitally. Platforms like Apigee help develop and manage APIs using interactive & self-service tools like Apigee Compass. These interactive and self-managed tools help gauge an organization's digital maturity and curate a path to digital success. Apigee defines two cornerstone principles of Digital Transformation.

First Principle: Modern businesses must not stop at adopting a mobile strategy or using cloud computing to create cost efficiencies. They must embrace a shift in the nature of demand and supply.

This principle requires changing traditional supplier-distributor relationships and value chains. Strategies must go beyond the physical goods and services pipeline, with its fewer channels and customer interactions. Businesses must scale using virtual assets in new technological avenues at no marginal cost, distributing value creation across ecosystems of customers, enterprises, vendors, and third parties. The key technologies include:

  • Omnichannel digital platforms
  • Packaged software and services using SaaS
  • APIs that connect data, systems, software applications, or mixed hardware-software intermediaries using HTTPS-based requests and responses.

Second Principle: Digital Transformation is shaped by a new shift in operating models that operationalize APIs and influence IT investments through technology ecosystem strategies.

APIs offer strategic levers to break silos and fuel the digital transformation initiative using IT enablers. Apigee provides a proxy layer for front-end services and an abstraction for backend APIs, with features such as security, rate limiting, quotas, and analytics.

Apigee High-level Architecture

The two primary components include:

  • Apigee services: APIs used to create, manage, and deploy API proxies.
  • Apigee runtime: A Kubernetes-cluster-based collection of containerized runtime services in Google Cloud. All API traffic is processed by these services.
  • Additionally, GCP services support IAM, logging, metrics, analytics, and project management, while backend services provide runtime data access for your API proxies.

Managing Services with Apigee

Apigee offers secure service access with a well-defined, consistent, and service-agnostic API to:

  • Help app developers seamlessly consume your services.
  • Enable backend service implementation change without impacting the public API.
  • Leverage analytics, developer portals, and other built-in features.

10 Foundational Tenets of API-centric Digital Transformation

  1. Platform – Agile platforms repackage the software for new use cases, interactions, and digital experiences. APIs converge new services with the core system to deliver operational flexibility and data efficiency.
  2. RESTful APIs – These offer flexible and intuitive programming and multi-platform/service integration. You can monetize APIs through custom packaging.
  3. Outside-in Approach – Customer and partner experience need to be measured using analytics. This helps transform APIs into exceptional digital experiences.
  4. Ecosystem – This includes digital assets (internal and external), services, developers, partners, customers, and other enablers. This enables distributed demand generation, non-linear growth, and value creation across lucrative digital networks.
  5. Leadership – Top-to-bottom commitment helps achieve the necessary cultural alignment. APIs can be incentivized to create frictionless delivery cycles and propel bottom lines.
  6. Funding – API programs can blend with agile funding models, development cycles, and governance processes. Direct API funding can improve data utilization and process iteration without the need for excessive investments.
  7. Metrics – Enterprises must embrace API-based metrics like consumption rate, transactions-per-call, etc., that go beyond ROI, transaction volumes, and pricing. This helps overcome narrow opportunity windows and fragmented customer segments.
  8. Software Development Lifecycle (SDLC) – An API-first and agile-centric approach in a test-driven environment offers speed, innovation, and cost savings. This helps implement changes on demand based on ever-changing customer preferences. Automation should be pivotal to this approach, where funding and measuring project success through intelligence and accurate forecasting tools is the top priority.
  9. Talent – Talent is key to the API digital value chain. Strong technical API-programming talent (developers, architects, documentation experts) shapes a company’s digital competency. Agile governance, funding, training, developer communities, portals, knowledge-sharing, automation, and DevOps promote talent improvement and retention.
  10. Self-Service – This involves delivering value in developer-driven value chains using developer portals, API catalogs, API keys and sample codes, testing tools, interactive API help content, and digital communities. 

Concluding Thoughts

Businesses can rely on APIs to scale with speed and be more responsive to demands. Regardless of your digital transformation roadmap, positioning API as a strategic asset for digital acceleration is crucial in any enterprise landscape.

Connect with Radiant to learn more!

Selecting the Best Tools for Building your MLOps Workflows

In our previous blog, The Fundamentals Of MLOps – The Enabler Of Quality Outcomes In Production Environments, we introduced you to MLOps and its significance in an intelligence-driven DevOps ecosystem. MLOps is gaining popularity since it helps standardize and streamline the ML modeling lifecycle.

From development and deployment through maintenance, various tools and their features can be implemented to achieve the best outcomes. Organizations shouldn’t depend on just one tool but should combine the most valuable features of multiple tools.

This post discusses some key pointers to help you pick the right MLOps tools for your project.

Considerations for Choosing MLOps Tools

When organizations deploy real-world projects, there is a vast difference between individual data scientists working on isolated datasets on local machines and data science teams deploying models in a production environment. These models need to be reproducible, maintainable, and auditable later on. MLOps tools help converge various functionalities and connect the dots through unified collaboration.

MLOps Tools help in these Areas

Resulting in:

• 30% faster time-to-market
• 50% lower new-release failure rate
• 40% shorter lead times between fixes
• Up to 40% improvement in average time-to-recovery

Radiant’s Top Recommendations:

1. Databricks MLflow

MLflow is an open-source tool that lets you manage the entire machine learning lifecycle, including experimentation, deployment, reproducibility, and a central model registry. MLflow suits individual data scientists and teams of any size. The platform is library-agnostic and can be used with any programming language.


Image source: Databricks

Features: MLflow comprises four primary features that help track and organize experiments.

MLflow Tracking – This feature offers an API and UI for logging parameters, metrics, code versions, and artifacts when running machine learning code. It lets you visualize and compare results as well.

MLflow Projects – With this, you can package ML code in a reusable form that can be transferred to production or shared with other data scientists.

MLflow Models – This lets you manage and deploy models from different ML libraries to a gamut of model-serving and inference platforms.

MLflow Model Registry – This central model store helps manage an MLflow Model's entire lifecycle. The processes include model versioning, stage transitions, and annotations.
The Model Registry capabilities are given below.


Image source: slacker news

The MLflow tracking server offers the ability to track metrics, artifacts, and parameters for experiments. It helps package models and reproducible ML projects. You can deploy models to real-time serving or batch platforms. The MLflow Model Registry (available on AWS and Azure) provides a central repository to manage staging, production, and archiving.
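To make the tracking concepts concrete, here is a minimal in-memory stand-in for a tracking client, written with only the standard library. It is a toy that merely mirrors the idea; the real MLflow API uses calls such as `mlflow.start_run`, `mlflow.log_param`, and `mlflow.log_metric` against a tracking server.

```python
import json, time, uuid

class ToyTracker:
    """Minimal experiment tracker mirroring MLflow-style runs (toy only).

    Real MLflow persists runs to a tracking server; this keeps them in memory.
    """
    def __init__(self):
        self.runs = {}

    def start_run(self) -> str:
        run_id = uuid.uuid4().hex
        self.runs[run_id] = {"start": time.time(), "params": {}, "metrics": {}}
        return run_id

    def log_param(self, run_id: str, key: str, value) -> None:
        self.runs[run_id]["params"][key] = value

    def log_metric(self, run_id: str, key: str, value: float) -> None:
        # Keep the full metric history, as MLflow does, so curves can be plotted.
        self.runs[run_id]["metrics"].setdefault(key, []).append(value)

    def export(self, run_id: str) -> str:
        return json.dumps(self.runs[run_id]["params"])

tracker = ToyTracker()
run = tracker.start_run()
tracker.log_param(run, "learning_rate", 0.01)
for epoch, loss in enumerate([0.9, 0.5, 0.3]):
    tracker.log_metric(run, "loss", loss)
print(tracker.export(run))  # {"learning_rate": 0.01}
```

Comparing runs then reduces to comparing their logged parameters and metric histories, which is exactly what the MLflow Tracking UI visualizes.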

On-Premise and Cloud Deployment

Image source: LG collection

MLflow can be deployed on cloud platform services such as Azure and AWS. It can be deployed on container-based REST servers for on-prem, and continuous deployment can be executed using Spark streaming.

2. Kubeflow

Kubeflow is an open-source machine learning toolkit that runs on Kubernetes. Kubernetes standardizes software delivery at scale, and Kubeflow provides the cloud-native interface between Kubernetes and data science tools (libraries, frameworks, pipelines, notebooks, and more) to combine ML and Ops.


Image source: kubeflow

• Kubeflow dashboard: This multi-user dashboard offers role-based access control (RBAC) to the Data scientists and Ops team.

• Jupyter notebooks: Data scientists can quickly access the Jupyter notebook servers from the dashboard that have allocated GPUs and storage.

• Kubeflow pipelines: Pipelines can map dependencies between ML workflow components where each component is a containerized piece of ML code.

• TensorFlow: This includes TensorFlow training, TensorFlow serving, and even TensorBoard.

• ML libraries & frameworks: These include PyTorch, MXNet, XGBoost, and MPI for distributed training. Model serving is done using KFServing, Seldon Core, and more.

• Experiment Tracker: This component helps store the results of a Kubeflow pipeline run using specific parameters. These results can be easily compared and replicated later.

• Hyperparameter Tuner: Katib is used for hyperparameter tuning, which runs pipelines with different hyperparameters (e.g., learning rate) optimized for the best ML modeling.


1. Kubeflow supports a user interface (UI) for managing and tracking experiments, jobs, and runs.

2. An engine schedules multi-step ML workflows.

3. An SDK defines and manipulates pipelines and components.

4. Notebooks help interact with the system using the SDK.
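The pipeline mechanics listed above can be sketched in a few lines. This is not the Kubeflow Pipelines SDK; it is a stdlib-only illustration of how an engine can map dependencies between components and schedule a multi-step workflow:

```python
# Illustrative sketch of dependency-mapped pipeline scheduling, in the spirit
# of Kubeflow Pipelines (where each step would be a containerized component).
from graphlib import TopologicalSorter

def ingest():      return "raw-data"
def preprocess():  return "clean-data"
def train():       return "model"
def evaluate():    return "metrics"

# Each key runs only after every component in its dependency set.
pipeline = {
    "preprocess": {"ingest"},
    "train": {"preprocess"},
    "evaluate": {"train", "preprocess"},
}
steps = {"ingest": ingest, "preprocess": preprocess,
         "train": train, "evaluate": evaluate}

# The "engine": resolve an execution order, then run each step.
order = list(TopologicalSorter(pipeline).static_order())
results = {name: steps[name]() for name in order}
```

In Kubeflow, the same dependency graph is what the pipeline engine uses to decide which containers can run and in what order.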


Image source: kubeflow

Kubeflow is built on Kubernetes, which supports AWS, Azure, and on-prem deployments. It helps scale and manage complex systems. The Kubeflow configuration interfaces let you specify the ML tools that assist in the workflow. The Kubeflow applications and scaffolding layer manage the various components and functionalities of the ML workflow.

Image source: kubeflow


On-premise and Cloud Deployment

Kubeflow has on-premise and cloud deployment capabilities supported by Google’s Anthos. Anthos is a hybrid and multi-cloud application platform built on open source technologies, including Kubernetes and Knative. Anthos lets you create a consistent setup across your on-premises and cloud environments, where policy and security automation is possible at scale. Kubeflow can be deployed on IBM Cloud, AWS, and Azure as well.

3. Datarobot

DataRobot is an end-to-end enterprise platform that automates and accelerates every step of your ML workflow. Data scientists or operations teams can import models written in Python, Java, R, Scala, and Go. The system includes frameworks in pre-built environments like Keras, PyTorch, and XGBoost that simplify deployment. You can then test and deploy models on Kubernetes and other ML execution environments, exposed via a production-grade REST endpoint. DataRobot lets you monitor service health, accuracy, and data drift and generates reports and alerts for overall performance monitoring.


• REST API - Helps quickly deploy and interact with a DataRobot-built model.

• Model Registry - The central hub for all your model packages, each containing a file or set of files with model-related information.

• Governance and Compliance - Helps your models and ML workflows comply with the defined MLOps guidelines and policies.

• Application Server - Handles authentication, user administration, and project management, and provides an endpoint for APIs.

• Modeling workers - Computing resources that allow users to train machine learning models in parallel and generate predictions.

• Dedicated prediction servers - Help monitor system health and make real-time decisions using key statistics.

• Docker Containers - Help run multi-instance services on multiple machines, offering high availability and resilience during disaster recovery. Docker containers allow enterprises to run all of the processes on one server.


1. Monitor MLOps models for service health, data drift, and accuracy.

2. Custom Notifications for user deployment status.

3. Management and replacement of MLOps models along with the documented record of every change that occurs.

4. Establish governance roles and processes for each deployment.

5. Real-time deployments using DataRobot Prediction API & HTTP Status Interpretation.

6. Optimization of Real-Time Model Scoring Request Speed.

7. Batch deployments using Batch Prediction APIs and Parameterized Batch Scoring Command-line Scripts.
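As a rough illustration of the drift monitoring in item 1, the following stdlib-only sketch flags a feature whose live mean shifts away from the training distribution. The threshold and sample values are invented; platforms like DataRobot use far richer statistics:

```python
# Illustrative sketch of a simple data-drift check: alert when the live
# sample mean moves too many standard errors from the training mean.
from statistics import mean, stdev

def drift_alert(train_sample, live_sample, z_threshold=3.0):
    """Return True if the live mean is more than z_threshold
    training standard errors away from the training mean."""
    mu, sigma = mean(train_sample), stdev(train_sample)
    if sigma == 0:
        return mean(live_sample) != mu
    std_err = sigma / (len(live_sample) ** 0.5)
    return abs(mean(live_sample) - mu) / std_err > z_threshold

training = [10, 11, 9, 10, 10, 12, 9, 10]   # feature values at training time
stable   = [10, 11, 10, 9]                   # production sample, no drift
drifted  = [19, 21, 20, 22]                  # production sample, clear drift
```

A monitoring agent running checks like this per feature is what turns silent model decay into an actionable alert.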


Image source: Datarobot community

The web UI and APIs feed business data, model information, and predictions to the application server.

The App Server handles user administration tasks and authentication. It also acts as the API endpoint.

The queued modeling requests are sent to the modeling workers. These stateless components can be configured to join or leave the environment on-demand.

The data layer is where the trained models are written back. Their accuracy is indicated on the model Leaderboard through the Application Server.

The Dedicated Prediction Server uses key statistics for instant decisioning and returns data to the Application Server.

On-premise and Cloud Deployment

DataRobot can be deployed for on-premise enterprise clients as either a standalone Linux deployment or a Hadoop deployment. Linux deployments allow clients to deploy the platform in multiple locations, from physical hardware to VMware clusters, and also support deployment with virtual private cloud (VPC) providers. Hadoop deployments install into a provisioned Hadoop cluster, which saves on hardware costs and simplifies data connectivity.

4. Azure ML

Azure Machine Learning (Azure ML) is a cloud-based service for creating and managing ML workflows and solutions. It helps data scientists and ML engineers leverage data processing and model development frameworks. Teams can scale, deploy, and distribute their workloads to the cloud infrastructure at any time.


• A Virtual machine (VM) is a device on-premise or in the cloud that can send an HTTP request.

• Azure Kubernetes Service (AKS) is used for application deployment on a Kubernetes cluster.

• Azure Container Registry helps store images for all Docker container deployment types, including DC/OS, Docker Swarm, and Kubernetes.


Image source: slidetodoc

New Additions

More intuitive web service creation – A “training model” can be turned into a “scoring model” with a single click. Azure ML automatically suggests/creates the input and output points of the web service model. Finally, an Excel file can be downloaded and used for web service interactions for feature inputs and scores/predictions outputs.

The ability to train/retrain models through APIs – Developers and data scientists can periodically retrain a deployed model with dynamic data programmatically through an API.

Python support – Custom Python code can be easily added by dragging the “Execute Python Script” workflow task into the model and feeding the code directly into the dialogue box that appears. Python, R, and Microsoft ML algorithms can all be integrated into a unified workflow.

Learn with terabyte-sized data – You can connect to and develop predictive models using “Big Data” sets with the support of “Learning with Counts.”


Image source: Microsoft docs

1. The trained model is registered to the ML model registry.
2. Azure ML creates a Docker image that includes the model and the scoring script.
3. It then deploys the scoring image to Azure Kubernetes Service (AKS) as a web service.
4. The client sends an HTTP POST request with the encoded question data.
5. The web service created by Azure ML extracts the question from the request.
6. The question is relayed to the Scikit-learn pipeline model for featurization and scoring.
7. The matching FAQ questions with their scores are returned to the client.
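Step 6 above can be illustrated with a toy stand-in for the Scikit-learn pipeline. The FAQ entries and the token-overlap scoring below are invented for this sketch; a real deployment would call the trained model instead:

```python
# Illustrative sketch of scoring an incoming question against FAQ entries.
# A real service would use a trained Scikit-learn pipeline; a simple
# token-overlap (Jaccard) score stands in for the model here.
FAQS = [
    "How do I reset my password?",
    "How do I change my billing address?",
    "Where can I download my invoice?",
]

def tokenize(text):
    return set(text.lower().strip("?.!").split())

def score_question(question, faqs=FAQS, top_k=2):
    """Return the top_k FAQ entries with their similarity scores."""
    q = tokenize(question)
    scored = []
    for faq in faqs:
        f = tokenize(faq)
        jaccard = len(q & f) / len(q | f) if q | f else 0.0
        scored.append((faq, round(jaccard, 3)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

matches = score_question("how can I reset a password")
```

In the deployed flow, this function body is what sits behind the AKS web service endpoint.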

On-Premise and other Deployment Options

Azure ML tools can create and deploy models on-premise, in the Azure cloud, and at the edge with Azure IoT edge computing.

Other options include:

• VMs with graphic processing units (GPUs) to help handle complex math and parallel processing requirements of images.

• Field-programmable gate arrays (FPGAs) as-a-service help operate at computer hardware speeds and drastically improve performance.

• Microsoft Machine Learning Server: This provides an enterprise-class server for distributed and parallel workloads for data analytics developed using R or Python. This server runs on Windows, Linux, Hadoop, and Apache Spark.

5. Amazon SageMaker

Amazon SageMaker is a fully-managed end-to-end ML service that enables data scientists and developers to quickly build, train, and host ML models at scale. SageMaker supports data labeling and preparation, algorithm selection, model training, tuning, optimization, and deployment to production.


Image source: AWS

• Authoring: Zero-setup hosted Jupyter notebook IDEs for data exploration, cleaning, and pre-processing.

• Model Training: A distributed model building, training, and validation service. You can use built-in standard supervised and unsupervised learning algorithms and frameworks or create your own training with Docker containers.

• Model Artifacts: Data-dependent model parameters allow you to deploy Amazon SageMaker-trained models to other platforms like IoT devices.

• Model Hosting: A model hosting service with HTTPS endpoints for invoking your models to get real-time inferences. These endpoints can scale to support traffic and allow you to A/B test multiple models simultaneously. Again, you can construct these endpoints using the built-in SDK or provide your own configurations with Docker images.
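The A/B testing mentioned above amounts to weighted traffic routing between model variants. The sketch below illustrates that idea only; it is not SageMaker's production-variant API, and the variant names and weights are made up:

```python
# Illustrative sketch of A/B testing two model variants behind one endpoint:
# requests are routed by weight, as a hosting service's traffic split might do.
import random

def make_router(variants, seed=0):
    """variants: list of (name, weight) pairs. Returns a routing function."""
    rng = random.Random(seed)  # seeded for a reproducible demonstration
    names = [name for name, _ in variants]
    weights = [weight for _, weight in variants]
    def route(_request):
        return rng.choices(names, weights=weights, k=1)[0]
    return route

# Send 90% of traffic to the incumbent and 10% to the candidate.
route = make_router([("model-a", 0.9), ("model-b", 0.1)])
counts = {"model-a": 0, "model-b": 0}
for i in range(1000):
    counts[route({"payload": i})] += 1
```

Comparing the quality metrics collected for each variant under this split is what decides which model wins the test.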


Image source: slideshare


Image source: ML in production

SageMaker is composed of various AWS services. An API is used to "bundle" together these services to coordinate the creation and management of different machine learning resources and artifacts.

On-premise and Cloud Deployment

Once the ML model is created, trained, and tested using SageMaker, it can be deployed by creating an endpoint and sending requests over HTTPS using the AWS SDKs.
This endpoint can be used by applications deployed on AWS (e.g., Lambda functions, Docker microservices, or applications running on EC2 instances) as well as applications running on-premise.

MLOps Tools Comparison Snapshot

| MLOps Tool | Purpose | Build | Deploy | Monitor | Drag-and-drop | Model Customization |
|---|---|---|---|---|---|---|
| AWS SageMaker | Lets you train ML models by creating a notebook instance from the SageMaker console, with the proper IAM role and S3 bucket access. | SageMaker console, Amazon Notebook, S3, Apache Spark | Amazon Hosting Service, model endpoint | Amazon Model Monitor, Amazon CloudWatch Metrics | No | Yes |
| Azure ML | Lets data scientists create separate pipelines for different phases in the ML lifecycle, such as the data pipeline, deploy pipeline, inference pipeline, etc. | Azure Notebook, ML Designer | Azure ML Studio, real-time endpoint service, Azure Pipelines | Azure Monitor | Yes | No |
| DataRobot | Provides a single place to centrally deploy, monitor, manage, and govern all your production ML models, regardless of how they were created and where they are deployed. | Baked-in modeling techniques, drag-and-drop | Make predictions (drag-and-drop), Deploy, Deploy to Hadoop, DataRobot Prime, Download, Prediction Application | MLOps monitoring agents | Yes | Yes |
| Kubeflow | Dedicated to making deployments of ML workflows on Kubernetes simple, portable, and scalable; the goal is not to recreate other services but to provide a straightforward way to deploy best-of-breed open-source ML systems to diverse infrastructures. | TensorFlow libraries, Google AI Platform, Datalab, BigQuery, Dataflow, Google Storage, Dataproc for Apache Spark and Hadoop | TensorFlow Extended (TFX), Kubeflow Pipelines | ML Monitor API, Google Cloud Logging, Cloud Monitoring | Yes | Yes |
| MLflow | Provides experimentation, tracking, model deployment, and model management services covering the build, deploy, and monitor phases of ML projects. | Experiment tracking | Model deployment | Model management | Yes | Yes |

Wrapping up

Implementing the right mix of ML models should not be an afterthought but carefully calibrated within the ML lifecycle. Visibility into models' runtime behavior helps data scientists and operations teams monitor effectiveness while providing the opportunity to detect shortcomings and make functional improvements.

Making Your Business Processes Efficient and Reliable with jBPM Migration

In our previous blog, "Enterprise BPM Transformation - Embrace the Change," we discussed the transformational capacity of BPM and how it helps organizations gain better visibility that translates into higher productivity. Many companies invest in tools like Lucidchart, Visio, Modelio, Pega BPM, ServiceNow BPM, etc., based on their diverse and specific business needs. While each of these has benefits and limitations, we at Radiant highly recommend jBPM. As a Java-based workflow engine, it leverages framework capabilities and externalized assets like business processes, planning constraints, decision tables, and business rules in a unified environment. Business analysts, system architects, and developers can implement their business logic with persistence, transactions, messaging, events, etc. Automated decisioning support in jBPM helps analyze, track, and implement tasks using machine learning algorithms that automatically assess and process them. It facilitates process execution using the BPMN 2.0 specification and Java’s object-oriented benefits. jBPM works as a standalone service or as code embedded into your customized service.

Over the years, jBPM has been increasingly used in the following areas:

  • Business processes (BPMN2)
  • Case management (BPMN2 and CMMN)
  • Decision Management (DMN)
  • Business rules (DRL)
  • Business optimization (Solver)

jBPM lets you implement complex business logic with adaptive and dynamic processes that use business rules and advanced event processing. jBPM can also be used in traditional JEE applications, Thorntail, Spring Boot, or standalone Java programs. Control rests with the end-users, who can monitor and decide which processes (wholly or partly) should be dynamically executed.

The Differentiating Features of jBPM 

Benefits of jBPM

  • Better management visibility on business resources and processes and thus improved decision-making
  • Lower input costs (at least 30%), less wastage (at least 40%), reduced skilled-labor requirements, and standardized components.
  • Quality: Consistent and reliable output quality leads to higher customer satisfaction.
  • Meant for everyone: Non-developers can effortlessly design business processes and obtain a much better view of runtime process states.
  • Support for human tasks: jBPM workflows can also include human tasks, such as manual testing or signing off on releases.
  • Graphs: Complex workflows can be easily modeled with jBPM using a graphical designer together with the Java code that performs the workflow-triggered actions.
  • Resilience: Existing workflow definitions remain unaffected by new processes.
  • Flexibility: More variables relating to the approval flow can be introduced dynamically in the workflow, aligned to the specified rules and conditions.
  • Multiple flows: jBPM helps manage multiple workflows simultaneously for a complex business logic through automation.

How we Implemented jBPM for a Ticketing System at Radiant Digital

This section will discuss how we implemented jBPM for a Ticketing System project of our client. The essential features of this implementation include:

  • This project was deployed on an AWS server.
  • jBPM Business Central was used for workflow creation and execution.
  • Our implementation components included Business Rules, Rest APIs, script tasks, human tasks, Data Modular Forms, and Process level variables.
  • Various REST APIs were developed and integrated with the jBPM use case, and the entire service was deployed on AWS.
  • Dashboard and data visualization features were implemented using Big Data to monitor and execute the tasks and processes of all the defined workflows.

Implementation Steps

  1. Launch jBPM Business Central.
  2. Business Central has four activity options: Design, Deploy, Manage, and Track.

  3. Use the “Design” option to create your project space, click to add a project, and add your required assets by clicking the “Add Asset” option.

  4. For the ticketing system, we used the following steps:
  • Create the ticket using REST APIs, or create it via an email sub-process (so tickets are created automatically).
  • We used three REST APIs for CREATE, UPDATE, and DELETE operations, deployed this code on the AWS server, and used their endpoints in our project.
  • We then defined the required object(s) separately.

Once the object(s) were defined, the form was automatically generated on jBPM. We could edit and design the form as required.

  • The user needed to enter the ticket details in the form, and after submitting the details, the form data was routed to the “Create Ticket” REST API for storage.

  • This form appeared again, asking the user if an update was required. When the user clicked “Yes,” they could update the ticket details, which were stored via the “UPDATE” REST API; when the user clicked “No,” the system moved directly to the next step in the process.

5. After completing the first two steps, the system initiates the “Ticket Routing” sub-process. This sub-process gets the ticket details based on the ticket ID available in the APIs and checks the user’s country and region based on the business rule definition.

6. Based on the region name, the system either ended the sub-process and returned to the primary process, or the user needed to enter the interface team name, and that team checked whether the user had already been serviced. If the user had not been serviced, an ITT checked the TMG and TOG business rules and ended the sub-process, re-entering the primary process. This loop continued until the user updated the ITT.
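The region-based business rule in steps 5 and 6 can be sketched as a simple decision function. The region names and outcomes below are hypothetical; in the actual project these rules lived in jBPM business-rule (DRL) assets:

```python
# Illustrative sketch of a region-based ticket routing rule.
# Region names and decision outcomes are invented for the example.
ROUTING_RULES = {
    "EMEA": "end_process",
    "APAC": "interface_team",
    "AMER": "interface_team",
}

def route_ticket(ticket):
    """Decide the next step for a ticket based on the user's region."""
    action = ROUTING_RULES.get(ticket["region"], "interface_team")
    if action == "end_process":
        # Ticket goes straight back to the primary process.
        return {"next": "primary_process", "needs_team": False}
    # Otherwise the user must supply an interface team name.
    return {"next": "interface_team", "needs_team": True}

decision = route_ticket({"id": 42, "country": "India", "region": "APAC"})
```

Externalizing the rule table (here a dict, in jBPM a rule asset) is what lets analysts change routing without touching process code.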

Create Ticket Process Flow

An Auto-assign sub-process automatically assigned the ticket for a particular user and sent the notification to that user. Here’s the process flow.

After completing the Auto-assign sub-process, the data returned to the primary process. Here, the data was retrieved using the “Get Task” REST API, and the system checked whether the task completion date was due, along with the task status, and updated these details using the “UPDATE TASK” REST API.

After completing the Auto-complete sub-process, the data moved to the primary process, where the “EVENT-SUBPROCESS” API was used. In the primary process, we used an “intermediate” signal to transfer the data. The “start” signal was used to ask the user whether additional data needed to be added.

Again, the user could update data by clicking “Yes” when the form appeared or “No” to send the validation data. When the form appeared again, the user had to change the status details and call the “UPDATE” REST API to update the changes. This completed the ticket.

The Ticket Details Form snapshot is given below.

The snapshot of the analytics-driven Ticketing System Dashboard is given below.

jBPM Migration Steps 

The three types of migration to be carried out are runtime process instance migration, data migration, and API call migration.

Runtime Process Instance Migration:

jBPM 7.0 comes with an excellent deployment model based on knowledge archives, allowing different project versions to run in parallel in a single execution environment. This is powerful; however, it raises some migration concerns. Some of the pressing questions include:

  • Can users run both the old and new versions of the processes?
  • What happens to already active process instances that were started with a previous version?
  • Can active versions be migrated to a newer version and vice-versa?

Active process instances can be migrated, but this is not a straightforward process that can be performed via the jBPM console (aka kie-workbench). You can follow the steps provided in this article on your own installation and migrate any process instance. We explicitly use the term "migrate" instead of "upgrade" because an instance can move from a lower to a higher version and vice-versa. A few things might happen when migration is performed, depending on the differences between the process definitions of the two versions.

What does Migration Include?

  • You can migrate from one process to another within the same Kjar.
  • You can migrate from one process to another across Kjars.
  • You can migrate with node mapping between the two process versions.

While the first two options are more straightforward, the third one requires some explanation.

Before we move to the migration scenarios, let's understand what node mapping is.

While making changes across process versions, we might end up in a situation where nodes/activities are replaced by others. When migrating between these versions, node mapping needs to take place. Another scenario is when you'd like to skip some nodes in the current version.

Scenario 1: Simple migration of process instance

This case is about migrating active process instances from one version to another.

  • The default "org.jbpm:Evaluation:1.0" project is used, consisting of a single process definition - evaluation, version 1.0.
  • A single process instance is started with this version. Once done, a new version of the evaluation process is created.
  • The upgraded version is then released as part of the "org.jbpm:Evaluation:2.0" project with version 2.0.
  • Then, the migration of the active process instance is performed.
  • The process instance migration results are illustrated on the process model of the active instance and the outcome of the process instance migration.

Scenario 2: Process Instance migration with node mapping

Here, we go one step further and add another node to the Evaluation process to skip one of the original version nodes. For that, we need to map the nodes to be migrated. The steps are almost the same as in scenario 1, except that we need to perform additional steps to collect node information and then let the user manually select which nodes should be mapped to the new version.
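Node mapping can be pictured as relocating each active instance from a replaced node to its mapped counterpart in the new process version. The sketch below is illustrative only; the node ids are invented, and real migrations are performed through jBPM's admin services rather than code like this:

```python
# Illustrative sketch of node mapping during process instance migration:
# active instances sitting on a replaced node are moved to the mapped node
# in the new process version.
def migrate_instances(instances, node_mapping):
    """instances: list of dicts with 'id' and 'current_node'.
    node_mapping: old node id -> new node id; unmapped nodes are unchanged."""
    migrated = []
    for inst in instances:
        node = inst["current_node"]
        new_node = node_mapping.get(node, node)
        migrated.append({**inst, "current_node": new_node, "version": "2.0"})
    return migrated

active = [
    {"id": 1, "current_node": "SelfEvaluation"},
    {"id": 2, "current_node": "ManagerReview"},
]
# Suppose version 2.0 replaced 'ManagerReview' with 'PeerReview'.
result = migrate_instances(active, {"ManagerReview": "PeerReview"})
```

The user-supplied mapping is exactly the "additional step" scenario 2 requires: collecting node information and choosing which old nodes map to which new ones.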

Data Migration

  • Two types of processes are involved in data migration: one for external database migration (e.g., MySQL and other SQL databases) and the other for internal data migration.
  • jBPM has a default repository where all the data is stored. You can also import the code to the external repository and perform runtime migration.

API Calls Migration

The Red Hat JBoss BPM Suite 5 provides a task server bridged from the core engine using the messaging system provided by HornetQ. A helper or utility method called "LocalHTWorkItemHandler" helps you bridge the gap until you can migrate API calls in your current architecture. Since the "TaskService" API is part of the public API, you will need to refactor your imports and methods because of package and API changes.

What you get after the jBPM Migration 

  • A high-performing rules engine based on the Drools project.
  • Improved rule authoring tools and an enhanced user interface.
  • A commonly defined methodology for building and deployment using Maven as the basis for repository management.
  • A heuristic planning engine based on the OptaPlanner community project.
  • Better algorithms to handle a larger number of rules and facts.
  • A new Data Modeler that replaces the declarative Fact Model Editor.
  • Stability, usability, and functionality improvements.
  • Case management capabilities.
  • A new and simplified authoring experience for creating projects.
  • Intuitive Business dashboards.
  • Process and task admin API.
  • Process and task consoles that can connect to any number of execution servers.
  • A preview of a new process designer and form modeler.
  • A new security management UI.
  • Upgrades to Java8, WildFly 10, EAP 7, etc.

If you're looking for a seamless jBPM migration or project implementation, we could help you with our industry-leading expertise. 

Connect with us to learn more.



The Fundamentals of MLOps – The Enabler of Quality Outcomes in Production Environments

With the increasing complexity of modeling frameworks and their relevant computational needs, organizations find it harder to meet the evolving needs of Machine Learning.

MLOps and DataOps help data scientists embrace collaborative practices between various technology, engineering, and operational paradigms.

MLOps is a set of practices that infuses Machine Learning, DevOps, and Data Engineering practices for a reliable and data-centric approach to Machine Learning systems during production.

It starts by defining a business problem, a hypothesis about the value extraction from the collected data, and a business idea for its application.

What is MLOps, and what does it entail?

MLOps is a new method of cooperation between business representatives, mathematicians, data scientists, machine learning specialists, and IT engineers in creating artificial intelligence systems.

It applies a set of practices to augment quality, simplify the management processes, and automate Machine Learning and Deep Learning models' deployment in a large-scale production environment.

The processes involved in MLOps include data gathering, model creation (SDLC, continuous integration/continuous delivery), orchestration, deployment, diagnostics, governance, and business metrics.

Key Components of MLOps

MLOps Lifecycle

When it becomes necessary to retrain the model on new data in the operation process, the cycle is restarted - the model is refined and tested while a new version is deployed.
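The retraining cycle above can be pictured as a small control loop: monitor a quality metric and, when it degrades, retrain and deploy a new model version. The sketch below is illustrative only; the threshold and metric values are invented:

```python
# Illustrative sketch of the MLOps lifecycle loop: when monitored quality
# drops below a threshold, the model is retrained and a new version deployed.
def lifecycle_step(state, live_accuracy, threshold=0.85):
    """One monitoring tick; retrain and bump the version on degradation."""
    if live_accuracy < threshold:
        return {"version": state["version"] + 1, "status": "retrained"}
    return {**state, "status": "serving"}

state = {"version": 1, "status": "serving"}
for accuracy in [0.93, 0.91, 0.82, 0.90]:  # simulated daily quality metric
    state = lifecycle_step(state, accuracy)
```

After the dip to 0.82 triggers a retrain, the loop resumes serving the refined version, mirroring the "refine, test, redeploy" cycle described above.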

Why do we need MLOps?

MLOps streamlines process collaboration and integration through the automation of retraining, testing, and deployment.

AI projects include stand-alone experiments, ML coding, training, testing, and deployment that must fit into the CI/CD pipelines of the development lifecycle.

MLOps automates model development and deployment to enable faster go-to-market and lower operational costs. It allows managers and developers to become more agile and strategic with their decision-making while addressing the following challenges.

Unrealized Potential: Many firms are striving to include ML in their applications to solve complex business problems. However, for many, incorporating ML into production has proven even more difficult than finding expert data scientists, useful data, and optimized training models.

Even the most sophisticated software solutions get caught up in deployment and become unproductive.

Frustrating Lag: Machine learning engineers often need to deploy an already trained model, and the wait can be anxiety-inducing. Communication between the operations and engineering teams requires incremental sign-offs and facilitation, spanning weeks or months. Sometimes even a straightforward upgrade, such as a model enhancement that switches from one ML framework to another, can feel insurmountable.

Fatigue: Production processes lead to frustrated or underutilized engineers whose projects may not make it to the finish line. This causes process fatigue that stifles creativity and diminishes the motivation to deliver benchmark customer experiences.

Weak Communication: Data engineers, data scientists, researchers, and software engineers often seem worlds apart from the operations team in both physical presence and thought process.

There is rarely sufficient streamlining to bring development and data management work to full production.

Lack of Foresight: Many ML data scientists have no particular way of knowing that their training models will work correctly in production. Even writing test cases and continually auditing their work with QA or unit testing cannot prevent the data encountered in production from differing from the data used to train these models.

Therefore, gaining access to production telemetry to evaluate model performance against real-world data is very important.

However, non-streamlined CI/CD processes for placing a new model into production are a significant hindrance to deriving value from machine learning.

Misused Talent: Data scientists differ from IT engineers. They primarily develop complex algorithms, neural network architectures, and data transformation mechanisms but do not handle Microservices deployment, network security, or other critical aspects of real-world implementations.

MLOps merges multiple disciplines with varied expertise to infuse machine learning into real applications and services.

DevOps vs. MLOps

By analogy with DevOps, MLOps and DataOps help businesses organize continuous cooperation and interaction between all process participants and the machine learning models created by data scientists and ML developers.

Though MLOps evolved from DevOps, the two concepts are fundamentally different in the following ways.

| MLOps | DevOps |
|---|---|
| It is more experimental in nature. | It is more implementation- and process-oriented. |
| It needs a hybrid team of software engineers, data scientists, ML researchers, etc., focusing on exploratory data analysis, model development, and experimentation. | Teams are mostly composed of data engineers, scientists, and developers. |
| Testing an ML system involves model validation and training, in addition to unit and integration testing. | Mostly regression and integration testing are done. |
| A multi-step pipeline is required to retrain and deploy a model automatically. | A single CI/CD pipeline is enough to build and deploy. |
| Automatic execution of steps needs manual intervention before new models are deployed. | No manual intervention is required for new deployments, since the entire process can be automated using various tools. |
| ML models can experience production performance degradation due to evolving data profiles. | DevOps-deployed software does not degrade due to evolving data. |
| Continuous Training (CT) retrains candidate models automatically, and Continuous Monitoring (CM) captures and analyzes production inference data and model performance metrics. | CT and CM are not used; only Continuous Integration and Continuous Deployment are. |

The Four Pillars of MLOps

These four critical pillars support any agile MLOps solution, helping organizations deliver machine learning applications safely in a production environment.

The Fifth Pillar of MLOps

Production Model Management: To ensure ML models' consistency and meet all business requirements at scale, a logical, easy-to-follow Model Management method is essential. Though optional, this paradigm streamlines end-to-end model training, packaging, validation, deployment, and monitoring to ensure consistency.

With Production Model Management, organizations can:

  • Proactively address common business issues (such as regulatory compliance).
  • Enable the sustainable tracking of data, models, code, and model versioning.
  • Package and deliver models in reusable configurations.

Model Lifecycle Management: MLOps simplifies production model lifecycle management by automating troubleshooting and triage, champion/challenger gating, and hot-swap model approvals. It produces a secure workflow that efficiently manages your models' lifecycle as the number of models in production grows exponentially.

The key actions include:

  • Champion/challenger model gating introduces a new model (the 'challenger') by initially running it in production and measuring its performance against its predecessor (the 'champion') for a defined period; only after it proves superior quality and stability does the switch become fully automated.
  • Troubleshooting and Triage are used to monitor and rectify suspicious or poorly performing areas of the model.
  • Model approval is designed to minimize risks associated with model deployment by ensuring that all relevant business or technical stakeholders have signed off.
  • Model update offers the ability to swap models without disrupting the production workflow, which is crucial for business continuity.
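The champion/challenger gating described above boils down to a promotion decision over an evaluation window. The sketch below illustrates one possible criterion; the metric names, numbers, and promotion rules are invented for the example:

```python
# Illustrative sketch of champion/challenger gating: promote the challenger
# only if it beats the champion on mean accuracy over the gating period
# and its performance is at least as stable.
from statistics import mean, stdev

def should_promote(champion_scores, challenger_scores):
    """Both arguments are per-day accuracy scores over the gating period."""
    better_quality = mean(challenger_scores) > mean(champion_scores)
    at_least_as_stable = stdev(challenger_scores) <= stdev(champion_scores)
    return better_quality and at_least_as_stable

champion = [0.91, 0.90, 0.92, 0.89, 0.91]
challenger = [0.93, 0.94, 0.93, 0.94, 0.93]
promote = should_promote(champion, challenger)
```

Real gating systems evaluate many more signals (latency, drift, business KPIs), but the shape of the decision is the same.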

Production Model Governance: Organizations have to comply with CCPA and EU/UK GDPR before putting their machine learning models into production.

Organizations need to automate model lineage tracking (approvals, versions deployed, model interactions, updates, etc.).

MLOps offers enterprise-grade production model governance to deliver:

  • Model version control
  • Automated documentation
  • Comprehensive lineage tracking and audit trails for the suite of production models

This helps minimize corporate and legal risks, maintain production pipeline transparency, and reduce/eliminate model bias.
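The version control and lineage tracking described above can be sketched as a tiny in-memory registry. The record fields and method names here are illustrative assumptions, not any particular governance tool's schema:

```python
# Sketch of a model registry that records version lineage and an audit trail.
# Field names are illustrative assumptions, not a specific tool's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRegistry:
    audit_log: list = field(default_factory=list)

    def register(self, model_name, version, approved_by, parent_version=None):
        record = {
            "model": model_name,
            "version": version,
            "parent": parent_version,    # lineage: which version this one evolved from
            "approved_by": approved_by,  # governance: who signed off
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        self.audit_log.append(record)
        return record

    def lineage(self, model_name):
        """Return the ordered version history for one model."""
        return [r["version"] for r in self.audit_log if r["model"] == model_name]

registry = ModelRegistry()
registry.register("churn-model", "1.0.0", approved_by="risk-team")
registry.register("churn-model", "1.1.0", approved_by="risk-team", parent_version="1.0.0")
print(registry.lineage("churn-model"))  # ['1.0.0', '1.1.0']
```

A production system would persist this log to durable, append-only storage so the audit trail survives restarts.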

Benefits of MLOps

Open Communication: MLOps merges machine learning workflows into the production environment, making collaboration between data science and operations teams frictionless.

It reduces the bottlenecks created by complicated, siloed ML models in development. MLOps-based systems extend conventional DevOps pipelines with the dynamic, adaptable pipelines needed to handle ever-changing, KPI-driven models.

Repeatable Workflows: MLOps enables automated, streamlined workflow changes. Pipelines can accommodate data drift without significant lag, and MLOps consistently measures and ranks model behavior and outcomes in real time while streamlining iterations.

Governance / Regulatory Compliance: MLOps helps enforce regulatory requirements and stringent internal policies. MLOps systems can reproduce models in compliance with the original standards, so as downstream pipelines and models evolve, your systems continue to play by the rule book.

Focused Feedback: MLOps provides sophisticated monitoring capabilities, data drift visualizations, and metric tracking across the model lifecycle to maintain high accuracy over time.

MLOps detects anomalies in machine learning development using analytics and alerting mechanisms to help engineers quickly understand the severity and act upon them promptly.
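One common form of the drift detection and alerting described above is a statistical check on incoming feature values. The sketch below flags a feature whose live mean shifts by more than a threshold number of training-set standard deviations; the threshold and field names are illustrative assumptions:

```python
# Minimal data-drift alert sketch: flag a feature whose recent mean shifts
# more than `threshold_stddevs` training-set standard deviations.
# The threshold is an illustrative assumption, not a standard value.
import statistics

def drift_alert(training_values, live_values, threshold_stddevs=3.0):
    mean = statistics.mean(training_values)
    stdev = statistics.stdev(training_values)
    live_mean = statistics.mean(live_values)
    severity = abs(live_mean - mean) / stdev if stdev else float("inf")
    return {"drifted": severity > threshold_stddevs, "severity": round(severity, 2)}

train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]     # feature values seen at training time
live_ok = [10.1, 9.9, 10.4]                     # live traffic, similar distribution
live_bad = [14.0, 15.2, 14.8]                   # live traffic after a drift event
print(drift_alert(train, live_ok)["drifted"])   # False
print(drift_alert(train, live_bad)["drifted"])  # True
```

The returned severity score is what an alerting mechanism would use to help engineers judge how urgently to act.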

Reducing Bias: MLOps can prevent development biases that misrepresent user requirements or subject the company to legal scrutiny. MLOps systems ensure that data reports do not contain unreliable information, and they support dynamic systems that do not get pigeonholed in their reporting.

MLOps boosts ML models' credibility, reliability, and productivity in production to reduce system, resource, or human bias.

MLOps Target Users 

Data Scientists: MLOps helps data scientists collaborate with their Ops peers and offload most of their daily model management tasks, freeing them to discover new use cases, work on feature plans, and build in-depth business expertise.

Business Leaders and Executives: MLOps lets decision-makers scale organization-wide AI capabilities while tracking KPIs influencing outcomes.

Software Developers: A robust MLOps system provides developers with a functional deployment and versioning system that includes:

  • Clear and straightforward APIs (REST)
  • Developer support for ML operations (documentation, examples, etc.)
  • Versioning and lineage control for production models
  • Portable Docker images
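A clear, versioned REST API of the kind listed above might be addressed as in the sketch below. The endpoint path, host, and payload fields are hypothetical, not any real product's API, and the request is only constructed, not sent:

```python
# Sketch of building a versioned prediction request for a hypothetical
# MLOps REST API. The URL pattern and payload fields are assumptions.
import json

def build_prediction_request(base_url, model_name, model_version, features):
    # Version appears in the path, so callers can pin a specific production model.
    url = f"{base_url}/models/{model_name}/versions/{model_version}/predict"
    payload = json.dumps({"instances": [features]})
    headers = {"Content-Type": "application/json"}
    return url, headers, payload

url, headers, body = build_prediction_request(
    "https://mlops.example.com/api/v1", "churn-model", "1.1.0",
    {"tenure_months": 24, "monthly_spend": 59.99},
)
print(url)
```

Pinning the version in the URL is one straightforward way to give developers the versioning and lineage control the list describes.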

DevOps and Data Engineers: MLOps offers DevOps and data engineering teams a one-stop solution for everything from testing and validation to updates and performance management, generating value from internal deployment and service monitoring and creating opportunities to scale them.

The benefits include:

  • No-code prediction.
  • Anomaly and bias warnings.
  • Accessible and optimized APIs.
  • Swappable models with the gating automation of your choice, smooth transitions, and 100% uptime.

Risk and Compliance Teams: An MLOps infrastructure improves the quality of oversight for complex projects. A reliable MLOps solution supports customizable governance workflow policies, process approvals, and alerts.

It promotes a system that can self-diagnose issues and notify the relevant risk-management stakeholders, allowing for tighter enterprise control over projects. Additional MLOps management capabilities include predictions-over-time analysis and audit logging.

Final Thoughts

A well-planned MLOps strategy can lead to more efficiency, productivity, accuracy, and trusted models for the long road ahead. All it takes is embracing its potential systematically and in line with your production environment needs.

Radiant Digital's MLOps experts can help you achieve this. Connect with us today to discuss the possibilities.