> Introduced in GitLab 9.0.
GitLab offers powerful integration with Prometheus for monitoring your apps. Metrics are retrieved from the configured Prometheus server, and then displayed within the GitLab interface.
Each project can be configured with its own specific Prometheus server; see the configuration section for more details. If you have a single Prometheus server which monitors all of your infrastructure, you can pre-fill the settings page with a default template. To configure the template, see the Services templates document.
Integration with Prometheus requires the following:
- GitLab 9.0 or higher
- The Kubernetes integration must be enabled on your project
- Your app must be deployed on Kubernetes
- Prometheus must be configured to collect Kubernetes metrics
- Each metric must have a label to indicate the environment
- GitLab must have network connectivity to the Prometheus server
There are a few steps necessary to set up integration between Prometheus and GitLab.
## Configuring Prometheus to collect Kubernetes metrics
In order for Prometheus to collect Kubernetes metrics, you first must have a Prometheus server up and running. You have two options here:
- If you installed Omnibus GitLab inside of Kubernetes, you can simply use the bundled version of Prometheus. In that case, follow the info in the Omnibus GitLab section below.
- If you are using GitLab.com or installed GitLab outside of Kubernetes, you will likely need to run a Prometheus server within the Kubernetes cluster. Once installed, the easiest way to monitor Kubernetes is to simply use Prometheus' support for Kubernetes Service Discovery. In that case, follow the instructions on configuring your own Prometheus server within Kubernetes.
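As a sketch of what the service-discovery half of such a configuration looks like, the fragment below uses Prometheus' Kubernetes service discovery to find every node in the cluster. The job name and file path are placeholders, and a real server config needs more (for example, TLS and authorization settings for talking to the Kubernetes API):

```shell
# Write a minimal scrape_configs fragment that relies on Prometheus'
# built-in Kubernetes service discovery. Path and job name are
# placeholders, not values from any provided template.
cat > /tmp/prometheus-scrape.yml <<'EOF'
scrape_configs:
  - job_name: kubernetes-nodes   # hypothetical job name
    kubernetes_sd_configs:
      - role: node               # discover targets via the Kubernetes API
EOF

cat /tmp/prometheus-scrape.yml
```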
### Configuring Omnibus GitLab Prometheus to monitor Kubernetes
With Omnibus GitLab running inside of Kubernetes, you can leverage the bundled version of Prometheus to collect the required metrics.
- Read how to configure the bundled Prometheus server in the Administration guide.
- Now that Prometheus is configured, proceed with configuring the Prometheus project service in GitLab.
### Configuring your own Prometheus server within Kubernetes
Setting up and configuring Prometheus within Kubernetes is quick and painless. The Prometheus project provides an official Docker image which we can use as a starting point.
To get started quickly, we have provided a sample YML file that can be used as a template. This file will create a Service, Deployment, and ConfigMap in Kubernetes. You can upload this file to the Kubernetes dashboard using + Create at the top right, or apply it with kubectl:

```shell
kubectl apply -f path/to/prometheus.yml
```
Once deployed, you should see the Prometheus service, deployment, and pod start within the `prometheus` namespace. The server will begin to collect metrics from each Kubernetes Node in the cluster, based on the configuration provided in the template.
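A single command can confirm that the Service, Deployment, and Pod from the template all exist. It is printed here for copying, since actually running it requires a live cluster and a configured `kubectl`:

```shell
# Build the verification command; run the printed line against your
# cluster to list everything created in the prometheus namespace.
CHECK_CMD="kubectl get services,deployments,pods --namespace prometheus"
echo "${CHECK_CMD}"
```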
Since GitLab is not running within Kubernetes, the template provides external network access via a `NodePort` on port 30090. This method allows access to be controlled using provider firewall rules, such as within Google Compute Engine.
A `NodePort` does not automatically have firewall rules created for it; one will need to be created manually to allow access. In GCP/GKE, you will want to confirm which Node the Prometheus pod is running on. This can be done either by looking at the Pod in the Kubernetes dashboard, or by running:
```shell
kubectl describe pods -n prometheus
```
Next, on GKE, we need to get the tag of the Node or VM Instance so we can create an accurate firewall rule. The easiest way to do this is to go into the Google Cloud Platform Compute console and select the VM instance that matches the name of the Node gathered from the step above. In this case, the node tag is `gke-prometheus-demo-5d5ada10-node`. Also make a note of the External IP, which will be the IP address the Prometheus server is reachable on.
Armed with the proper Node tag, the firewall rule can now be created specifically for this node. To create the firewall rule, open the Google Cloud Platform Networking console, and select Firewall Rules.
Create a new rule:
- Specify the source IP range to match your desired access list, which should include your GitLab server. A sample of GitLab.com's IP address range is available in this issue, but note that GitLab.com's IPs are subject to change without prior notification.
- Allowed protocol and port should be `tcp:30090`.
- The target tags should match the Node tag identified earlier in this step.
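The steps above can be sketched as a single `gcloud` command, printed for review rather than executed. The rule name and source range are assumptions, and the target tag is the example node tag from earlier:

```shell
# Assemble the firewall-rule command from the values gathered above.
# SOURCE_RANGE is a placeholder; use your GitLab server's address range.
NODE_TAG="gke-prometheus-demo-5d5ada10-node"
SOURCE_RANGE="203.0.113.0/24"

FIREWALL_CMD="gcloud compute firewall-rules create prometheus-nodeport \
  --allow tcp:30090 \
  --source-ranges ${SOURCE_RANGE} \
  --target-tags ${NODE_TAG}"
echo "${FIREWALL_CMD}"
```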
Now that Prometheus is configured, proceed to configure the Prometheus project service in GitLab.
## Configuration in GitLab
The actual configuration of Prometheus integration within GitLab is very simple. All you will need is the DNS or IP address of the Prometheus server you'd like to integrate with.
- Navigate to the Integrations page
- Click the Prometheus service
- Provide the base URL of your server, for example `http://prometheus.example.com/`. The Test Settings button can be used to confirm connectivity from GitLab to the Prometheus server.
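Connectivity can also be checked from the GitLab host with a plain HTTP request; a sketch, using the example base URL from above and the standard Prometheus `api/v1/query` endpoint:

```shell
# Build the query URL; `up` is a metric every Prometheus server exposes
# about its own scrape targets, so it makes a cheap connectivity probe.
BASE_URL="http://prometheus.example.com"
QUERY_URL="${BASE_URL}/api/v1/query?query=up"

# Printed for copying; running it requires network access to the server.
echo "curl --silent '${QUERY_URL}'"
```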
## Metrics and Labels
GitLab retrieves performance data from two metrics, `container_cpu_usage_seconds_total` and `container_memory_usage_bytes`. These metrics are collected from the Kubernetes pods via Prometheus, and report CPU and Memory utilization of each container or Pod running in the cluster.
In order to isolate and only display relevant metrics for a given environment, however, GitLab needs a method to detect which pods are associated. To do that, GitLab will specifically request metrics that have an `environment` label matching the `$CI_ENVIRONMENT_SLUG`.

If you are using GitLab Auto-Deploy and one of the methods of configuring Prometheus above, the `environment` label will be automatically added.
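For example, a request for the memory metric scoped to a single environment looks like this (the slug `review-feature-x` is a hypothetical value of `$CI_ENVIRONMENT_SLUG`):

```shell
# Build a PromQL selector: the metric name plus an environment label
# matcher, so only one environment's series are returned.
SLUG="review-feature-x"
PROMQL="container_memory_usage_bytes{environment=\"${SLUG}\"}"
echo "${PROMQL}"
```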
## GitLab Prometheus queries
The queries utilized by GitLab are shown in the following table.
| Metric | Query |
| ------ | ----- |
| Average Memory (MB) | |
| Average CPU Utilization (%) | |
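As an illustration of the general shape such queries take, not the exact expressions GitLab ships, the following averages the memory metric across an environment's series and converts bytes to MB (the averaging and unit conversion are assumptions):

```shell
# Average memory across all series for one environment, in MB.
SLUG="review-feature-x"   # hypothetical $CI_ENVIRONMENT_SLUG value
PROMQL="avg(container_memory_usage_bytes{environment=\"${SLUG}\"}) / 1024 / 1024"
echo "${PROMQL}"
```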
## Monitoring CI/CD Environments
Once configured, GitLab will attempt to retrieve performance metrics for any environment which has had a successful deployment.
### Determining the performance impact of a merge
Developers can view the performance impact of their changes within the merge request workflow. When a source branch has been deployed to an environment, a sparkline and numeric comparison of the average memory consumption will appear. On the sparkline, a dot indicates when the current changes were deployed, with up to 30 minutes of performance data displayed before and after. The comparison shows the difference between the 30 minute average before and after the deployment. This information is updated after each commit has been deployed.
Once merged and the target branch has been redeployed, the metrics will switch to show the new environments this revision has been deployed to.
Performance data will be available for the duration it is persisted on the Prometheus server.
If the "Attempting to load performance data" screen continues to appear, it could be due to:
- No successful deployments have occurred to this environment.
- Prometheus does not have performance data for this environment, or the metrics are not labeled correctly. To test this, connect to the Prometheus server and run a query, replacing `$CI_ENVIRONMENT_SLUG` with the name of your environment.
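A hypothetical debug query for that check counts how many memory series carry the environment label at all; the slug value below is a placeholder:

```shell
# If the printed query returns zero or no result on your Prometheus
# server, either no data exists for the slug or the environment label
# is not being applied to the metrics.
SLUG="production"   # replace with your environment's $CI_ENVIRONMENT_SLUG
DEBUG_QUERY="count(container_memory_usage_bytes{environment=\"${SLUG}\"})"
echo "${DEBUG_QUERY}"
```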