December 20, 2019 (updated 18 Dec 2019 10:27am)

How AIOps is restructuring the data stack

By Justyn Goodenough

With core business processes increasingly migrating to the cloud, new technologies are needed to keep pace with cloud expansion. Once confined to broader use cases, the cloud can now accommodate more marginal data workloads, such as CRM or HR, creating a corresponding need for data teams to enable this cloud journey and to keep it running once moved off-premise. However, for small businesses, where a data team may not even exist, or for large enterprises, where the team is already stretched by managing existing architecture, supporting technologies are needed. This is where AIOps has a fundamental role to play in enhancing deployments while simultaneously reducing costs.

The fundamental promise of AIOps is to enhance or replace a range of IT operations processes by combining big data with AI and machine learning. This holds obvious appeal across industries for enterprises looking to solve expensive, challenging, and time-consuming problems in big data deployment. However, taking the first step into AIOps can be challenging in view of the different approaches that have developed around it.

In practice, these performance gains come from the automation capabilities that accompany AI. Because these capabilities apply to a broad range of use cases, most business functions can be enhanced to some extent by AI. For instance, workload management, cloud cost management, performance optimisation and remediation, and other integral tasks can all be performed with reduced involvement from the technical team. Freed from constantly monitoring and managing deployments, the team can focus on value-add initiatives that contribute to the long-term efficiency and cost reduction of data and app deployments.
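As a loose illustration of the kind of routine task that can be automated in this way, consider a monitor-detect-act remediation loop. This is a minimal sketch; the service names, statuses, and restart logic below are all invented for illustration, not taken from any particular AIOps product.

```python
# Illustrative sketch of automated remediation: a routine monitor-detect-act
# loop that would otherwise occupy a member of the technical team.
# Service names and statuses here are invented for illustration.

SERVICES = {"etl-worker": "healthy", "query-engine": "unresponsive"}

def remediate(services):
    """Restart any service that fails its health check; return actions taken."""
    actions = []
    for name, status in services.items():
        if status != "healthy":
            services[name] = "healthy"   # stand-in for a real restart call
            actions.append(f"restarted {name}")
    return actions

print(remediate(SERVICES))  # ['restarted query-engine']
```

In a real deployment the restart call would be replaced by an orchestration API, and the "detect" step is where the AI or rules engine (discussed below) does its work.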

Challenges in AIOps deployment

That being said, creating an AIOps deployment is not a straightforward task. The main consideration is that AIOps outputs are only as good as the inputs received: AIOps deployments need high-quality data inputs that are accurate, relevant, timely and comprehensive in order to provide utility to an organisation. Organisations also need to ensure that they are measuring the business outcomes that matter to them, such as time to insight, transaction response time, and job completion time.

As such, AIOps deployments need to be considered on a case-by-case basis, to determine which type of deployment best fits the use case and is feasible for the organisation. The main approaches are outlined below:

  • Rule based – This is arguably the least ‘intelligent’ instance of AIOps, where data teams develop rules and automated responses to specific instances in the data stack. While this is the easiest to implement, it only works for constrained and limited use cases and can be difficult to maintain on a long-term basis.
  • Neural Net based – Unlike the rule based approach, neural network systems ‘learn’ to perform tasks by considering examples, without being programmed with task-specific rules. While this is a true example of AI, it does require training and can be problematic in dynamic environments.
  • Unsupervised Self Learning – This approach requires the least involvement from the data team as the AI is left to teach itself. However, it is difficult to focus the AI on addressing the desired KPIs.
  • Supervised Self Learning – Addressing the issues of unsupervised self learning, supervised self learning combines human expertise with machine learning. This results in AI more directly addressing desired areas but requires more involvement.
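The contrast between the first and third approaches above can be sketched in a few lines. In this hypothetical example (all function names, metrics, and thresholds are invented for illustration), a rule-based check applies a hard-coded limit to job completion times, while an unsupervised approach learns a baseline from past observations and flags large deviations:

```python
# Hypothetical sketch: rule-based vs unsupervised anomaly detection on a
# stream of job-completion times (seconds). Thresholds are illustrative.
import statistics

def rule_based_alert(completion_time, limit=300):
    """Rule based: a data team hard-codes a threshold and a response."""
    return completion_time > limit  # alert only if a job exceeds 5 minutes

def unsupervised_alert(history, completion_time, z_cutoff=3.0):
    """Unsupervised self learning: the baseline is learned from past data;
    values far from the observed mean are flagged (simple z-score)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return False
    return abs(completion_time - mean) / stdev > z_cutoff

history = [118, 121, 119, 124, 117, 122, 120, 119]  # recent runs, ~120s each
print(rule_based_alert(290))             # False: still under the fixed limit
print(unsupervised_alert(history, 290))  # True: far outside the learned baseline
```

The example also hints at the trade-offs described above: the rule-based check misses the 290-second outlier because no one wrote a rule for it, while the learned baseline catches it but offers no direct way to target a specific business KPI.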

In view of these various approaches, it is likely that at least one (if not more) will be appropriate for a particular business need. While AIOps deployments are certainly not always straightforward, the onus is on the organisation to perform the necessary due diligence and find the appropriate fit – or use a partner. Regardless, it is likely that this next step in the data sphere will be popularised in the near future, as it demonstrates increasing and clear value for businesses.


