A new breed of DevOps platform provider is gaining ground among IT operations teams. These platforms improve efficiency and reduce the cost of deploying modern application architectures into production across multi-cloud environments, all while preserving continuous delivery capabilities.
Evolving DevOps solutions have not always been operationally provisioned to the point where IT professionals can easily deploy and manage Kubernetes containers, event-driven microservices, and serverless applications. Application modernization, and the deployment of those advanced apps, has been strained by the complexity of current infrastructure configuration. The result has been barriers to adoption of newer DevOps methods, including CI/CD, over the past couple of years.
Intelligent automation and DevOps
Building on newer infrastructure modernization efforts, including the more formalized use of CI/CD and GitOps pipelines and best practices, enterprises will be better armed with AI-infused tools that drive automation. Such tools provide integration, resource management, intelligent automation, and observability for Kubernetes and other key open-source projects, such as Knative. This use of intelligent automation can significantly reduce the need for human intervention when monitoring alerts fire, shifting operations from a reactive to a proactive mode.
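The core idea behind GitOps-style automation is a reconciliation loop: desired state is declared in version control, continuously compared against the observed live state, and drift is corrected automatically rather than by a human responding to an alert. A minimal sketch of that loop, with all resource names purely illustrative:

```python
# Minimal sketch of GitOps-style reconciliation: compare declared
# (desired) state against observed (live) state and emit the corrective
# actions needed to converge them. Resource names are illustrative only.

def reconcile(desired: dict, observed: dict) -> list:
    """Return corrective actions that converge observed state toward desired."""
    actions = []
    for resource, want in desired.items():
        have = observed.get(resource)
        if have is None:
            actions.append(f"create {resource} -> {want}")
        elif have != want:
            actions.append(f"update {resource}: {have} -> {want}")
    for resource in observed:
        if resource not in desired:
            actions.append(f"delete {resource}")
    return actions

desired = {"web-replicas": 3, "cache-replicas": 2}
observed = {"web-replicas": 1, "worker-replicas": 5}
print(reconcile(desired, observed))
# → ['update web-replicas: 1 -> 3', 'create cache-replicas -> 2', 'delete worker-replicas']
```

Real operators such as those in the Kubernetes ecosystem run this comparison continuously, which is what shifts teams from reacting to alerts toward proactive convergence.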
While these new platforms may prove to be limited in flexibility around customized configurations, they hold the promise of easing new digitization requirements, and so will be a welcome addition to the overall cloud-native ecosystem of innovators.
A number of Kubernetes resource management platform providers have come to the forefront in DevOps circles, including TriggerMesh, Codefresh, Weaveworks, and StormForge. All support some form of integration, software delivery, and resource management tooling that helps fill the gaps in a modern software supply chain.
Recently, StormForge announced its newest offering, Optimize Live, which merges machine learning with observability tooling to give IT operations and developer teams real-time configuration recommendations that strengthen operational optimization.
The platform lets customers bridge pre-production and post-production issues through ML that recommends real-time configuration changes to infrastructure resources to improve application performance. DevOps teams can thereby offload at least some of the application-tuning and resource-management tasks in the development process, making it more efficient to deploy apps into production environments.
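To make the idea concrete, here is a toy sketch (not StormForge's actual algorithm, and all numbers hypothetical) of the kind of recommendation such tuning tools produce: deriving container CPU and memory requests from observed usage so workloads are neither starved nor over-provisioned.

```python
# Illustrative sketch of ML-style resource tuning: recommend container
# CPU/memory requests from observed usage samples. A simple percentile
# plus headroom stands in for a real model; all values are hypothetical.

def recommend_requests(cpu_samples, mem_samples, headroom=1.2):
    """Recommend requests as 95th-percentile observed usage times a
    safety headroom multiplier."""
    def p95(samples):
        ordered = sorted(samples)
        idx = min(len(ordered) - 1, int(0.95 * len(ordered)))
        return ordered[idx]
    return {
        "cpu_millicores": round(p95(cpu_samples) * headroom),
        "memory_mib": round(p95(mem_samples) * headroom),
    }

# Usage samples as might be scraped from a (hypothetical) metrics backend.
cpu = [120, 140, 110, 300, 150, 135, 125, 160, 145, 130]
mem = [256, 260, 250, 400, 270, 265, 255, 280, 262, 258]
print(recommend_requests(cpu, mem))
# → {'cpu_millicores': 360, 'memory_mib': 480}
```

A production tool would feed richer signals (request latency, error rates, traffic forecasts) into an actual model, but the shape of the output is the same: concrete configuration values a team would otherwise tune by hand.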
Cloud giants are filling out their DevOps platforms
Certainly, traditional application platform and cloud giants are filling out their DevOps platforms to address similar issues around operational provisioning and infrastructure modernization, although their strategies are still young and gaps remain. They have begun offering these types of operational tools either as capabilities within larger DevOps platforms or through integrated technology partnerships; examples include the IBM DevOps and Red Hat OpenShift GitOps platforms. These providers are well aware of the disruptive stance of this new breed of DevOps startups, but counter that the attractive frontends offered by pure plays will only carry so much weight with large enterprises.
IBM and others claim that large customers are more inclined to invest in DevOps solutions that support unified management across a plethora of environments, spanning Kubernetes containers to traditional on-premises mainframes and x86 systems.
At least for now, there appears to be room for both options in the cloud-native ecosystem: not only to give customers alternative approaches for kick-starting their digitization projects, but also to let technology partners complement and consolidate the tools that fill out CI/CD pipelines.