Here at Deloitte Platform Engineering, Continuous Delivery has become a central part of the way we work. Naturally, we were very excited by the news that ThoughtWorks – who introduced Continuous Delivery (CD) to the world – have recently open-sourced their CD server software, ‘Go’.
To realise value from a solution you need to deploy it. Deploying any kind of distributed application manually is hard enough. Complex service-oriented solutions, with their inherently diverse mix of technologies and many inter-dependent points of configuration, offer an entirely new level of provisioning pain. In these environments, having a single Big Red Button™ labelled ‘Deploy All The Things’ is worth its weight in gold.
We’ve used Go to implement our latest Big Red Button™ and we thought we’d share our (mostly very positive) experiences.
Installation - 0 to Go in 60 Seconds
Like most modern build servers, Go comprises a central server component and one or more build agents. For our purposes we ran the server and two build agents on a single host. Go is distributed in .deb and .rpm packages, making basic installation on a Linux server very simple. However, basic installation is not enough, particularly if you want to run multiple agents on a single machine. To make our lives easier we used Opscode’s Chef to install and configure the software, taking advantage of the official go-cookbook provided by ThoughtWorks. The Chef cookbook takes care of creating OS services to control each component, as well as registering agents with the server.
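To give a feel for how little glue this needs, here is a sketch of the kind of wrapper recipe we used. The recipe and attribute names are assumptions based on the go-cookbook’s conventions – check the cookbook’s README for the names it actually exposes:

```ruby
# Hypothetical Chef wrapper recipe. The 'go::server' / 'go::agent' recipe
# names and the agent-count attribute are assumptions about the go-cookbook,
# not verified against its README.
node.default['go']['agent']['count'] = 2   # run two agents on this host

include_recipe 'go::server'   # installs the .deb/.rpm and creates an OS service
include_recipe 'go::agent'    # installs the agents and registers them with the server
```

With the cookbook doing the heavy lifting, rebuilding the whole build environment from scratch becomes a one-command operation.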
We configured Go to run behind an Apache reverse proxy over HTTPS. Go provides explicit configuration support for this scenario ensuring all URLs resolve correctly through the proxy.
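For reference, our proxy setup looked roughly like the following. Hostnames, certificate paths and the backend port are illustrative – 8153 was Go’s default HTTP port in our install, but verify against your own configuration:

```apache
# Illustrative Apache vhost (mod_ssl + mod_proxy). Adjust ServerName,
# certificate paths and the backend port to match your environment.
<VirtualHost *:443>
    ServerName go.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/go.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/go.example.com.key

    ProxyPreserveHost On
    ProxyPass        / http://localhost:8153/
    ProxyPassReverse / http://localhost:8153/
</VirtualHost>
```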
Overall, Go’s administration UI is quite polished. A particularly nice feature is the ability to edit the entire configuration of the server directly in its native XML format within the admin UI. This is great for making bulk changes – much faster than editing build jobs individually. The web editor validates your config changes before committing them, ensuring that you can’t corrupt your entire server simply by forgetting to close an XML tag.
Defining Build Pipelines
Go is organised around the concept of pipelines. Pipelines are constructed from sequential stages, each containing parallel jobs made up of sequential tasks or commands. Coming from the Jenkins CI server with its flat ‘build job’ concept, this hierarchy took some getting used to.
Pipelines have inputs, known as materials. Materials can be source code repositories, binary package repositories or other pipelines. SCM support is limited but useful – Git, Subversion and Perforce are included. We haven’t tried the package repository materials but it’s easy to see the value they might bring particularly if you’re integrating artifacts from external project teams.
Go does not provide dedicated integration with many build tools – only Ant, NAnt and Rake. It does, however, let you execute any shell command as a build step. This CLI-level integration is flexible enough, but we missed deeper integration such as Jenkins’ first-class support for Maven projects.
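To make the pipeline/stage/job/task nesting concrete, here is a hand-written sketch of how it appears in Go’s XML config. The pipeline, stage and job names and the repository URL are invented for illustration, and we have trimmed attributes Go would normally fill in for you:

```xml
<pipeline name="example-app">                        <!-- name invented for illustration -->
  <materials>
    <git url="git://example.com/example-app.git" />  <!-- an SCM material -->
  </materials>
  <stage name="build">                               <!-- stages run sequentially -->
    <jobs>
      <job name="compile">                           <!-- jobs within a stage run in parallel -->
        <tasks>
          <exec command="ant">                       <!-- tasks within a job run sequentially -->
            <arg>package</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```

The `exec` task above is the general-purpose escape hatch mentioned earlier: any command available on the agent can be run as a build step.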
The Value Stream
We’ve tried building delivery pipelines in other tools (mainly Jenkins) and frankly the end result has always felt like a bit of a compromise. We could get the deployment behaviour we wanted but only through a mish-mash of plugins, custom scripting and a small cup of wishful thinking. If something went wrong during a deployment it was hard to recover, especially if you wanted to maintain traceability from code changes into production.
Go doesn’t feel like a compromise. Dependencies between pipelines form a graph that models your solution’s value stream – the stages a change needs to progress through (build, QA etc.) before it can deliver business value. When it comes to managing and reporting on the value stream, Go is very capable and very opinionated.
…and we mostly agree with its opinions.
Versions vs Re-Builds
Go separates a version of a pipeline from builds of that version. A pipeline version is determined solely by the versions of its input materials – exactly what you want for traceability. For any number of reasons, though, you might need to rebuild some or all of a pipeline. Perhaps a flaky Maven repository timed out, or you mistyped the deployment password. Rebuilds don’t change the version, which is exactly how most of us want rebuilds to behave.
The Perfect Build Server?
Go is not without its limitations and annoyances. Most obvious to us was the way all pipelines are triggered from the same polling cycle (1 minute by default). For long pipelines with short build jobs, you spend most of the time waiting for the next job to be triggered. On Jenkins, by comparison, downstream build jobs are triggered immediately, making the end-to-end delivery process feel much snappier.
Go’s “Fan-In” behaviour also caused some issues. In theory, fan-in is supposed to coordinate a set of SCM changes through multiple pipelines to avoid wasted or duplicate builds (e.g. if the build of a shared library fails, there is no point in continuing to build dependent projects). Such behaviour relies on all the changes coming from a single SCM repository, though. This works fine for monolithic SCM repositories like Subversion but doesn’t align well with the Git practice of maintaining multiple smaller repositories. Fan-in doesn’t activate in this situation, resulting in wasted and duplicate builds anyway. What we really wanted was Jenkins’ ability to tell a build job not to build while upstream projects are building.
The source code for Go is now fully available, so hopefully annoyances like these can be addressed by the community. With a few more features and a bit more speed, Go could become the tool of choice for Continuous Delivery.