Last updated: 6 Aug 2025

How logging works on GOV.UK

To view logs, see View GOV.UK logs in Logit.

Overview

GOV.UK sends its application logs, origin HTTP request logs, and Kubernetes events to managed ELK stacks hosted by Logit.io, a software-as-a-service provider. Each environment has its own ELK stack in Logit.

Fastly CDN request logs are handled by a separate system because of the much higher data rates involved.

Architecture of application log delivery

[Block diagram: data flow from containers in Kubernetes through to the managed ELK stack]
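The application log path described in the components below can be sketched in the same flowchart notation as the Kubernetes Events diagram (node names are illustrative, not the real component names):

flowchart LR
subgraph EKS["EKS Cluster"]
  app["application container"] --stdout/stderr--> files["/var/log/containers"]
  filebeat --tail--> files
end
style EKS stroke-dasharray: 5 5

subgraph ELK["logit.io managed ELK stack"]
  logstash
  elasticsearch
  kibana
end
style ELK stroke-dasharray: 5 5

filebeat --send--> logstash
logstash --load--> elasticsearch
kibana --read--> elasticsearch
user((user)) --> kibana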

Architecture of Kubernetes Events log delivery


flowchart LR
subgraph EKS["EKS Cluster"]
  kubernetes-events-shipper --read--> events-api
end
style EKS stroke-dasharray: 5 5

subgraph ELK["logit.io managed ELK stack"]
  direction BT
  kibana
  elasticsearch
end
style ELK stroke-dasharray: 5 5

kubernetes-events-shipper --write--> elasticsearch
kibana --read--> elasticsearch
user((user)) --> kibana

Components in the logging path for applications

Application container

The application container, or a sidecar/adapter container such as the nginx reverse proxy, writes log lines to stdout or stderr.

Log lines can be structured (JSON) or unstructured (arbitrary text).

Application workloads in GOV.UK’s Kubernetes clusters run with readOnlyRootFilesystem: true and should not write logs anywhere other than stdout/stderr. Only logs written to stdout/stderr will be collected by the logging system.
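A minimal sketch of what this looks like from an application's point of view: emit one JSON object per line on stdout and let the platform collect it. The field names here are illustrative, not a schema GOV.UK applications are required to use.

```python
import datetime
import json
import sys


def log_line(level, message, **fields):
    """Serialise one structured log record and write it to stdout,
    the only destination the logging system collects."""
    record = {
        "@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "level": level,
        "message": message,
        **fields,
    }
    line = json.dumps(record)
    # stdout/stderr only: the root filesystem is read-only, so there is
    # nowhere else to write, and nothing else would be collected anyway.
    sys.stdout.write(line + "\n")
    return line


log_line("info", "request handled", status=200, path="/healthcheck")
```

Writing one complete JSON document per line keeps each log message parseable downstream without any multi-line reassembly.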

Kubernetes worker node

The container runtime (containerd) and kubelet on the worker node are responsible for:

  • writing the workload containers’ stdout/stderr streams to files in /var/log/containers
  • rotating those log files so as not to run out of space
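The files containerd writes use the CRI logging format: each captured stdout/stderr line is prefixed with a timestamp, the stream name, and a partial/full tag. A rough parser, to illustrate what Filebeat has to deal with:

```python
def parse_cri_line(line):
    """Parse one line of the CRI log format that containerd writes under
    /var/log/containers: '<RFC3339 timestamp> <stdout|stderr> <P|F> <message>'.
    The third field marks the line as partial (P) or full (F); very long
    application lines get split into several partial records.
    """
    timestamp, stream, tag, message = line.rstrip("\n").split(" ", 3)
    return {
        "timestamp": timestamp,
        "stream": stream,
        "partial": tag == "P",
        "message": message,
    }


parsed = parse_cri_line(
    '2025-08-06T10:15:04.123456789Z stdout F {"level":"info","message":"hello"}\n'
)
```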

Filebeat

The Filebeat daemon runs on each Kubernetes worker node and is responsible for:

  • finding and tailing log files on the local filesystem
  • applying some transformations such as parsing JSON-formatted log lines and dropping some unwanted fields
  • sending logs to Logstash (at Logit)

Filebeat runs as a daemonset in the cluster-services namespace.

As of mid-2023, Filebeat is installed by a Helm chart via Terraform and its configuration is also managed using Terraform. In the future, Argo CD could replace this usage of Terraform as part of ongoing efforts to reduce toil.
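An illustrative sketch of the kind of Filebeat configuration this implies; the real configuration lives in Terraform and the hostnames here are hypothetical:

```yaml
# Illustrative sketch only - not the real GOV.UK config.
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      - decode_json_fields:       # parse JSON-formatted log lines
          fields: ["message"]
          target: ""
      - drop_fields:              # drop some unwanted fields
          fields: ["agent.ephemeral_id"]
          ignore_missing: true

output.logstash:
  hosts: ["your-stack.logit.io:5044"]   # hypothetical Logit endpoint
  ssl.enabled: true                      # TLS over the Internet to Logit
```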

Logstash

Logstash is a log ingestion pipeline, essentially an ETL system for logs. Logstash and the remaining components in the logging path are hosted by Logit.io.

Logstash is responsible for:

  • receiving streams of log messages from each node’s Filebeat process via TLS over the Internet
  • parsing the semi-structured log messages from Filebeat and transforming them where necessary, for example to fit the Elastic Common Schema
  • loading the logs into Elasticsearch for storage and indexing
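Those three responsibilities map onto Logstash's input, filter, and output stages. A hypothetical pipeline in that shape (the real pipelines are managed within Logit, and the field rename is a made-up ECS example):

```
input {
  beats {
    port => 5044
    ssl  => true            # receive from Filebeat via TLS
  }
}

filter {
  # Transform fields to fit the Elastic Common Schema where necessary,
  # for example (hypothetical rename):
  mutate {
    rename => { "status" => "http.response.status_code" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```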

Vulnerability Scanners

Occasionally you may see some logs that look suspicious. For example, see the controller label in the following made-up example:

http_request_duration_seconds_count{action="1", container="app", controller="../../../../../etc/passwd", endpoint="metrics", job="govuk", namespace="apps", pod="static-abc123-de7f"}

This sort of thing often originates from vulnerability scanners and isn’t necessarily something to worry about. However, you should consider how such activity ended up leaking into your logs, and whether or not there’s anything you could/should do to patch it up. Read our internal guidance on the issue.

Components in the logging path for Kubernetes Events

Kubernetes Events API

This is exposed by the Kubernetes API server. It’s part of the control plane for Kubernetes, which is managed by AWS as part of the Elastic Kubernetes Service (EKS).

It exposes all of the Kubernetes events that occur within the EKS cluster.

Fluentbit

The Fluentbit daemon is responsible for:

  • watching the Kubernetes events API
  • keeping track of which events have been sent to Elasticsearch
  • sending the events to Elasticsearch (at Logit)

Fluentbit runs as a StatefulSet in the cluster-services namespace.

Fluentbit is deployed by the kubernetes-events-shipper Helm chart, which is itself applied via the cluster-services Terraform.

Fluentbit's configuration is stored in a ConfigMap in the kubernetes-events-shipper Helm chart.
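A rough sketch of the kind of configuration that ConfigMap holds, assuming Fluent Bit's kubernetes_events input and es output plugins; the hostnames and tags are hypothetical:

```
# Illustrative sketch only - not the real ConfigMap contents.
[INPUT]
    Name    kubernetes_events
    Tag     k8s_events
    # the input tracks the last event seen, so a restart does not resend
    # everything - hence the StatefulSet, which gives it stable storage

[OUTPUT]
    Name    es
    Match   k8s_events
    Host    your-stack.logit.io     # hypothetical Logit endpoint
    Port    9200
    tls     On
```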

Components common to all logging paths

Elasticsearch

Elasticsearch is a search engine and storage/retrieval system. It is responsible for:

  • storing the log data
  • indexing the stored logs for efficient search and retrieval
  • running queries and returning the results to the user interface (Kibana)
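For illustration, the queries Kibana runs are ordinary Elasticsearch search requests; a hypothetical example (the index pattern is an assumption, not the real GOV.UK index name):

```
GET logstash-*/_search
{
  "query": {
    "match": { "message": "timeout" }
  },
  "size": 10
}
```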

Kibana

Kibana is the user interface for viewing logs. It is responsible for:

  • rendering the web UI
  • parsing user queries written in Lucene or KQL syntax into Elasticsearch queries
  • querying the Elasticsearch indices
  • displaying the results
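For example, KQL queries in Kibana's search bar look like the following; the field names here are hypothetical and depend on what the pipelines actually index:

```
kubernetes.namespace: "apps" and http.response.status_code >= 500
message: "timeout" and not kubernetes.container.name: "nginx"
```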