Use Terraboard to monitor Terraform state
We use Terraboard for monitoring Terraform state across all environments in AWS.
Terraboard displays both summary and detailed information about Terraform state: when each state file was last modified, which version of Terraform modified it, and its history. Terraboard runs on each environment’s monitoring machine, in AWS only.
Docker containers and networking
The Terraboard suite of apps consists of three Docker containers. One runs
Terraboard itself, one runs a PostgreSQL database instance used to cache
Terraform state, and one runs OAuth2 Proxy, an app that authenticates users
using GitHub before proxying requests to Terraboard. Only this last Docker
container is exposed outside the host machine. The three containers
communicate with each other over a private bridge network.
Terraboard and the PostgreSQL database instance are configured using
environment variables, whereas OAuth2 Proxy uses a config file located in
/opt/terraboard/conf on the host machine, which is mounted inside the container.
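The container layout above could be sketched as a Docker Compose file. This is illustrative only: the service names, image tags, bucket name and environment variable values are assumptions, not the real deployment (though Terraboard's upstream image does read settings such as AWS_BUCKET and DB_HOST from the environment).

```yaml
services:
  oauth2-proxy:
    image: example/oauth2-proxy          # hypothetical custom image name
    ports:
      - "4180:7920"                      # the only port exposed on the host
    volumes:
      # config file kept on the host, mounted into the container
      - /opt/terraboard/conf:/etc/oauth2-proxy:ro
    networks: [terraboard]

  terraboard:
    image: camptocamp/terraboard
    environment:
      AWS_BUCKET: example-tfstate-bucket # placeholder bucket name
      DB_HOST: db
      DB_PASSWD: example                 # placeholder credential
    networks: [terraboard]

  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example         # placeholder credential
    networks: [terraboard]

networks:
  terraboard:
    driver: bridge   # private bridge network; not reachable from outside the host
```

Only oauth2-proxy publishes a port; terraboard and db are reachable solely over the private bridge network.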
Terraform state file caching
Terraboard runs a task every minute which fetches all Terraform state files
from the configured S3 bucket and caches their content in the PostgreSQL
database. This reduces the amount of S3 traffic and makes the app faster.
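The caching behaviour can be illustrated with a small sketch. The S3 bucket and the PostgreSQL cache are stubbed with in-memory stand-ins, and all names here are illustrative, not Terraboard's actual implementation:

```python
import time

# Stand-in for the S3 bucket: object key -> state file content.
# In the real setup this would be calls to the AWS API.
FAKE_BUCKET = {
    "env1/terraform.tfstate": '{"version": 4}',
    "env2/terraform.tfstate": '{"version": 4}',
}

# Stand-in for the PostgreSQL cache.
state_cache = {}


def refresh_cache(bucket, cache):
    """Fetch every state file and cache its content, as Terraboard's
    periodic task does against S3 and PostgreSQL."""
    for key, content in bucket.items():
        cache[key] = content
    return len(cache)


def run_forever(interval=60):
    # Terraboard runs its refresh task roughly once a minute.
    while True:
        refresh_cache(FAKE_BUCKET, state_cache)
        time.sleep(interval)


# One refresh pass caches both state files:
print(refresh_cache(FAKE_BUCKET, state_cache))  # prints 2
```

Because reads are served from the cache, browsing Terraboard never hits S3 directly; only the once-a-minute refresh does.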
Authentication
OAuth2 Proxy is configured to authenticate users using GitHub. A GitHub OAuth
app exists for each environment, owned by alphagov; this app provides the
OAuth credentials required for authentication.
Only users who are members of the alphagov organisation and the relevant
GitHub team are granted access. Access checks are refreshed every hour.
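A minimal OAuth2 Proxy config for this might look like the following. The values are placeholders (the real config lives in /opt/terraboard/conf and the real team is configured per environment), but the option names — provider, github_org, github_team, cookie_refresh, http_address, upstreams — are genuine OAuth2 Proxy settings:

```
provider = "github"
github_org = "alphagov"
github_team = "example-team"        # placeholder; set per environment
client_id = "..."                    # from the per-environment GitHub OAuth app
client_secret = "..."
cookie_refresh = "1h"                # re-check org/team membership after an hour
http_address = "0.0.0.0:7920"        # port OAuth2 Proxy listens on in the container
upstreams = ["http://terraboard:8080"]  # where authenticated requests are sent
```

The cookie_refresh setting is what causes access checks to be re-run after an hour: once the session cookie is older than that, membership is re-validated against GitHub.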
nginx proxies all requests to OAuth2 Proxy, which is exposed on port 4180 on the host machine. This port corresponds to port 7920 in the container. OAuth2 Proxy then proxies all authenticated requests on to port 8080 of the Terraboard container.
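The front of that proxy chain might look like this in nginx config (the server name is a placeholder, and TLS details are omitted):

```nginx
server {
    listen 443 ssl;
    server_name terraboard.example;   # placeholder hostname

    location / {
        # Hand every request to OAuth2 Proxy on the host.
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

nginx itself does no authentication here; it simply forwards to OAuth2 Proxy, which enforces the GitHub checks before anything reaches Terraboard.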
Docker image for OAuth2 Proxy
Since there is no official Docker image for OAuth2 Proxy, the repository
contains a Dockerfile used to build a custom image from the original source.
This image is then pushed to Docker Hub, from where it is downloaded and run.
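A multi-stage Dockerfile for this could look roughly like the following. The Go version, upstream repository URL and paths are assumptions; the actual Dockerfile lives in the repository mentioned above:

```dockerfile
# Build stage: compile OAuth2 Proxy from the upstream source.
FROM golang:1.21 AS build
RUN git clone https://github.com/oauth2-proxy/oauth2-proxy /src
WORKDIR /src
# Static binary so it runs on a minimal base image.
RUN CGO_ENABLED=0 go build -o /oauth2-proxy .

# Runtime stage: minimal image containing just the binary.
FROM alpine
COPY --from=build /oauth2-proxy /usr/local/bin/oauth2-proxy
EXPOSE 7920
ENTRYPOINT ["oauth2-proxy"]
```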
There are instructions on how to update, build and push new versions of the image in the README.