GitOps is increasingly popular in the cloud-native world, allowing developers to deliver software to production using their native tooling—a pull request in Git. The underlying principle is that of [infrastructure as code](https://techbeacon.com/enterprise-it/infrastructure-code-engine-heart-devops). Namely, any change in operations can be effected by a change in code.
This is also the origin of the term "GitOps": operations via Git.
GitOps is typically thought of as a cloud-native enabler, given its close association with Kubernetes, where an operational environment can be expressed declaratively in code. Kubernetes will seek to ensure that the desired state (the code) matches the observed state (the live instance).
A Fortune 500 organization I consulted with recently transformed its deployment of a legacy three-tier application to a global footprint. Existing deployments were slow, error-prone, fragile, and lacking in audit controls.
Here's how we solved some key enterprise software delivery pain points using GitOps and the key benefits for your team.
Enterprise software delivery challenges
Unfortunately, at many enterprises the adoption of DevOps is nascent, and software is still deployed by manual or—at best—semi-automated methods. Central to such deployments is the infamous change advisory board, which is required (by ITIL et al.) to rubber-stamp any change request.
Ubiquitous in this process is the run book, providing instructions on how to perform the deployment. Typically, these are complex, lengthy documents describing a sequence of manual steps that must be performed. Often, steps are skipped or performed incorrectly, resulting in deployments that differ from the expected state.
This lack of reproducibility requires the intervention of operators to access production systems to correct observed discrepancies, typically via SSH access as the root user. More often than not, such interventions create as many problems as they resolve. And more important, there is a loss of control, since it is necessary to provide operational staff with privileged access to production systems.
An additional problem arises from manually intensive processes: The system cannot be audited, since there is no record of exactly what changes were performed and by whom. The ultimate downside of this missing audit capability is the frustratingly poor incident resolution encountered in much enterprise software.
Frustrated users create service tickets that are passed from one team to the next, with no one taking responsibility for resolution and teams instead blaming colleagues or vendors.
How GitOps delivered
Codethink, where I am a consultant, was tasked with transforming an organization's legacy three-tier application to one that included a global footprint. The solution comprised application deployment automation using PowerShell scripting, Azure ARM templates, Azure DevOps Pipelines, and Git as the repository. First, all of the application deployment run books were transformed into PowerShell scripts to ensure that the deployment was fully automatable and repeatable.
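To give a flavour of this transformation, here is a minimal sketch of what a single run-book step might look like once captured as an idempotent PowerShell function. The paths, service name, and function name are illustrative only, not taken from the actual project.

```powershell
# Illustrative run-book step rewritten as a repeatable function: copy the application
# binaries and register the Windows service. All names here are hypothetical.
function Install-AppService {
    param(
        [Parameter(Mandatory)] [string] $SourcePath,
        [Parameter(Mandatory)] [string] $InstallPath,
        [Parameter(Mandatory)] [string] $ServiceName
    )

    # Idempotent: create the install directory only if it does not already exist,
    # then overwrite the binaries with the known-good source.
    if (-not (Test-Path -Path $InstallPath)) {
        New-Item -Path $InstallPath -ItemType Directory | Out-Null
    }
    Copy-Item -Path (Join-Path $SourcePath '*') -Destination $InstallPath -Recurse -Force

    # Register the Windows service only if it is not already present.
    if (-not (Get-Service -Name $ServiceName -ErrorAction SilentlyContinue)) {
        New-Service -Name $ServiceName -BinaryPathName (Join-Path $InstallPath 'App.exe')
    }
}

Install-AppService -SourcePath '\\build\drops\legacy-app' -InstallPath 'D:\Apps\LegacyApp' -ServiceName 'LegacyAppService'
```

Because each function can be re-run safely, a failed deployment can simply be retried rather than hand-patched on the live system.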
To make sure that the computation and database resources were built in a consistent manner, standard Azure ARM templates were used to build instances, and PowerShell Desired State Configuration (DSC) was used to ensure that the end states matched the desired states.
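As an illustration of the DSC part (the node, feature, directory, and service names below are assumptions made for the example, not the organization's real configuration), a configuration declares the desired end state and DSC converges the machine towards it:

```powershell
# Hypothetical DSC configuration: declare what the application server should look like,
# compile it to a MOF document, and let DSC make the machine match.
Configuration AppServerDesiredState {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        WindowsFeature WebServer {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        File AppDirectory {
            DestinationPath = 'D:\Apps\LegacyApp'
            Type            = 'Directory'
            Ensure          = 'Present'
        }

        Service AppService {
            Name        = 'LegacyAppService'
            State       = 'Running'
            StartupType = 'Automatic'
            DependsOn   = '[File]AppDirectory'
        }
    }
}

AppServerDesiredState -OutputPath 'C:\Dsc\AppServer'              # compile to MOF
Start-DscConfiguration -Path 'C:\Dsc\AppServer' -Wait -Verbose    # converge to desired state
```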
To utilize Git as the system of record, a cluster of independent repositories was used as follows:
- The config repository stored the desired configuration of the deployment.
- The validation repository performed validation of the configuration against the enterprise IT policies (a sketch of such a policy check follows this list).
- The deployment repository contained the scripts responsible for taking a given configuration (received, for example, from a service desk ticket) and using automation scripts to deploy the configuration to a live instance.
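To give a sense of what the validation repository might hold, here is a minimal sketch of a policy check; the configuration schema, file path, and the approved regions and VM sizes are invented for illustration:

```powershell
# Hypothetical policy check: reject a requested configuration before it ever
# reaches the deployment scripts.
$config = Get-Content -Path '.\config\deployment.json' -Raw | ConvertFrom-Json

$allowedRegions = @('westeurope', 'northeurope')
$allowedVmSizes = @('Standard_D2s_v3', 'Standard_D4s_v3')

$violations = @()
if ($config.region -notin $allowedRegions) {
    $violations += "Region '$($config.region)' is not approved by IT policy."
}
if ($config.vmSize -notin $allowedVmSizes) {
    $violations += "VM size '$($config.vmSize)' is not approved by IT policy."
}

if ($violations.Count -gt 0) {
    $violations | ForEach-Object { Write-Error $_ }
    exit 1   # Fail the pipeline so the change cannot be merged or deployed.
}
Write-Output 'Configuration passed all policy checks.'
```

Running such a check in CI on every pull request means a non-compliant configuration is rejected automatically, before any human reviewer or live system is involved.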
Partitioning responsibilities
A core design principle of this approach was to partition responsibilities, ensuring that individual roles and duties were rigidly enforced. This was done via Azure identity and access management, using fine-grained access controls on the individual repositories. As an example, an operator performing a deployment could not access the configuration repository, since this was not the operator's responsibility.
Upon a successful deployment, a repository representing the deployment would be created from the configuration, validation, and deployment repositories. This repository would be assigned to the end user requesting the resource. This ensured a hard enforcement of the principle of separation of control—only the resource owner had access to the resource.
GitOps is all about operations via Git. Here's how an operational change could be made using this solution. Each deployed instance had an associated repository containing full details of the instance. To modify an instance—for example, to add a new database—a change could be made to the configuration code, which could be checked in and reviewed in a pull request.
Upon a successful merge, a continuous integration (CI) process could run the automation to bring the end state into accordance with the new desired state. Precisely the promise of GitOps!
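A rough sketch of what such a post-merge CI step could run is below; the repository layout and the script names (Test-Policy.ps1, Invoke-Deployment.ps1) are hypothetical:

```powershell
# Hypothetical post-merge step: find which instance configurations changed in the
# merged commit, re-validate them, then converge each affected instance.
$changedConfigs = git diff --name-only HEAD~1 HEAD -- 'config/' |
    Where-Object { $_ -like '*.json' }

foreach ($configFile in $changedConfigs) {
    # Re-check the change against IT policy before touching the live instance.
    .\validation\Test-Policy.ps1 -ConfigPath $configFile

    # Apply the new desired state; because the deployment scripts are idempotent,
    # re-running them only changes what differs from the declared configuration.
    .\deployment\Invoke-Deployment.ps1 -ConfigPath $configFile
}
```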
Key benefits for your team
This relatively novel approach to solving a traditional enterprise pain point using GitOps brings with it a number of key benefits. First, all changes are fully declarative based on code or configuration; gone are the run books of yore. Probably the main benefit is that Git is the underlying system of record, so there is a very strong and auditable record of change.
All changes to the target deployment are made via changes to Git and, as such, every change is held in a Git changeset (underpinned by the mathematics of a Merkle tree). Other interesting areas for enhancement include the ability to automate pull request reviews, using algorithms to determine the impact of the change. Think of this as an automated change advisory board!
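As a sketch of how such an automated review might begin (this is purely illustrative of the idea, not something delivered as part of the project), a CI step could score a pull request by the parts of the configuration it touches and escalate only the high-impact ones:

```powershell
# Hypothetical impact scoring for a pull request, based on which files it changes.
$changedFiles = git diff --name-only origin/main...HEAD

$impact = 0
foreach ($file in $changedFiles) {
    if ($file -like 'config/production/*') { $impact += 10 }   # production changes weigh most
    if ($file -like '*database*')          { $impact += 5 }    # data-affecting changes
    if ($file -like '*.ps1')               { $impact += 3 }    # changes to the automation itself
}

if ($impact -ge 10) {
    Write-Output "High-impact change (score $impact): require a human reviewer."
} else {
    Write-Output "Low-impact change (score $impact): eligible for automated approval."
}
```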
Since every stage of the deployment is performed by a step in a CI process, one can perform a post-mortem on a failed deployment to determine at which stage the failure occurred and what remedial action is needed. This eliminates the all too familiar finger-pointing that occurs in a manual process by enforcing a perfect partition of responsibility.
Most important for the enterprise, however, is the rigid enforcement of separation of control by virtue of identity and access controls using Git repositories as the enforcement boundaries. It is no longer possible for operators to directly access deployments; rather, such access must be attained via a relevant changeset in a repository.
This article was originally published on TechBeacon.