1. Get the number of the build that you wish to deploy from the Puppet job on CI (e.g. 18295, this will be tagged as
2. Get the tag of the last deployed build from the release app (e.g.
3. Compare the two build tags to see what you are deploying.

   NB: make sure you have the older release first, otherwise you won’t see a diff.

4. Deploy the tag to staging using the Deploy Puppet job. You will need to configure the build by setting the ‘TAG’ value to the successful build you selected earlier (e.g.
5. Either wait 30 minutes or read about convergence below, then keep an eye on Icinga and Smokey and test anything you’re concerned about.
6. Repeat the last step to deploy to production.
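The tag comparison in step 3 can be sketched with plain git. The tag names below are hypothetical, and this assumes a local clone of the Puppet repository with tags already fetched:

```shell
# List the commits the new build adds on top of the deployed one
# (older tag first, otherwise the range is empty).
git log --oneline release_18294..release_18295

# Summarise which files the deployment will change.
git diff --stat release_18294 release_18295
```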
## Build tags and release branches
Puppet deployments are different from most other application deployments in GOV.UK in that we only ever deploy to Staging and Production from the release branch. Build tags are promoted (merged) into that branch prior to deployment.
This is in part because configuration management tools like Puppet manage only a subset of the complete system, which doesn’t lend itself well to flipping backwards and forwards through build history. Think of it like database migrations: you can’t undo the creation of a new column unless you write a new migration to explicitly remove it.
The deployment only pushes the new code to the Puppet master. Each node runs a Puppet agent every 30 minutes (via cron), so it may be some time before the release has taken effect. This affects how quickly you can go from Staging to Production.
If you would like to know which version of Puppet is running where on a specific environment, there is a script in the fabric-scripts repository to help.
To run it, create a GitHub Access Token and run the following inside the fabric-scripts repository:
The script will prompt you for an environment (integration, staging or production) and it will query all servers in that environment for the version of Puppet and the last time the Puppet agent ran.
If you’d rather not wait, and you can safely determine from the diff which classes of machines the change will affect (or which ones are still on an older version of Puppet, using the script above), you can use Fabric to force a run of Puppet. For example:

```
fab $environment class:frontend_lb class:backend_lb puppet
```
This runs serially across the nodes, which reduces the chance of downtime caused by a service restarting on all nodes of a given class/tier at the same time. You should still be careful, though, because some services take longer to restart than others.
## Preventing service restarts
It may occasionally be necessary to trick Puppet into not restarting a service, if it is a single point of failure and restarting it would cause a brief outage, e.g. MySQL.
This is not a “normal” procedure. You should only do this if you need to, and you MUST have a plan for restarting the service in the near future so that it’s not inconsistent with its configuration.
Disable normal Puppet runs on the affected nodes:

```
fab $environment class:mysql_master puppet.disable:'Preventing service restart'
```
Change the file content to match what Puppet wants it to be. If it’s a plain file you can probably apply the diff from git using `sudo patch dest source.diff`. If it’s a template then you may need to refer to an existing environment or figure it out yourself.
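As a sketch of the plain-file case (file names here are illustrative, not from the real manifests): generate a unified diff between the live file and the desired content, then apply it with `patch`, which takes the original file first and the patch file second:

```shell
# Unified diff between the live file and what Puppet wants it to be.
# Note: diff exits non-zero when the files differ, hence the || true.
diff -u live.cnf desired.cnf > change.diff || true

# Apply the diff to the live file in place
# (original file first, then the patch file).
sudo patch live.cnf change.diff
```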
Verify that Puppet won’t change the file or notify the service by running it in noop mode. You will need to provide a different lock path to bypass the disable:

```
govuk_puppet -v --noop --agent_disabled_lockfile /tmp/puppet.noop
```
If you’re happy with the results then re-enable Puppet and run it again:

```
fab $environment class:mysql_master puppet.enable puppet
```
Schedule a time to actually restart the service if necessary.