Configure Prometheus Exporters to Scrape Data

What is a Prometheus Exporter?

Prometheus provides a wide variety of exporters to expose metrics for different use cases. These exporters save time and effort by exposing the required metrics for us. Below, we will work through some of the most common and widely used ones.

If you have not yet installed Prometheus, please follow Setup Prometheus for Monitoring to get started. 

Today's Agenda

In this post, we will learn how to configure some of the most useful Prometheus Exporters. If you have not yet installed any of these exporters, or are facing any issues, please follow the links mentioned to install them.

Prerequisites

This post has been prepared for readers who:
  1. Have access to a Linux-based system (the steps are similar on other platforms such as CentOS or macOS).
  2. Have a basic understanding of Linux-based systems and their commands.
  3. Have Prometheus already installed, to which we will add the exporter configuration. If not already done, please follow Setup Prometheus for Monitoring.

Let's get started

In this post, we will configure the following Prometheus Exporters to use them to the fullest:
  1. Node Exporter
  2. Blackbox Exporter
  3. Script Exporter
  4. Kafka Exporter

1.  Configure Node Exporter

Once the Node Exporter is installed and running on a port, it's time to add its configuration to your Prometheus.

Step 1: Add a block of code to the prometheus.yml file that was created when we installed Prometheus.

Open the prometheus.yml file (path: /etc/prometheus/prometheus.yml) in edit mode.
$ sudo vi /etc/prometheus/prometheus.yml

Add the following code block to your configuration file:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node-exporter-a'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.1.2:9100']

  - job_name: 'node-exporter-b'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.1.2:9100', '192.168.1.3:9100']

 

Note: 

  • Targets include the addresses of the machines on which Node Exporter is installed, along with the port.
  • We can add multiple job_name blocks for different types of machines.
  • scrape_interval is the time interval at which Prometheus asks the target for new data.
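
Before restarting Prometheus, it is worth validating the edited file and confirming the exporter is reachable. The commands below are a minimal sketch, assuming the example paths and addresses used above, that promtool was installed alongside Prometheus, and that Prometheus runs as a systemd service named prometheus:

# Check the syntax of the edited configuration file.
$ promtool check config /etc/prometheus/prometheus.yml

# Confirm that the Node Exporter target answers on port 9100.
$ curl -s http://192.168.1.2:9100/metrics | head

# Apply the new configuration by restarting Prometheus.
$ sudo systemctl restart prometheus

Once Prometheus is back up, the new jobs should appear as UP under Status > Targets in the Prometheus web UI.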

2.  Configure Blackbox Exporter

Once the Blackbox Exporter is installed and running on a port, it's time to add its configuration to your Prometheus.

Step 1: Add a block of code to the prometheus.yml file that was created when we installed Prometheus.

Open the prometheus.yml file (path: /etc/prometheus/prometheus.yml) in edit mode.
$ sudo vi /etc/prometheus/prometheus.yml

Add the following code block to your configuration file:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - http://example.com/index.html
          - https://myabc.com/
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: localhost:9115
        # Blackbox Exporter running on the same machine as Prometheus.

  - job_name: 'URLMonitor'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - http://192.168.2.3:8080/login?from=%2F
          - http://192.168.4.9:5601/app/kibana
          - http://192.168.7.0:9000/about
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 192.168.2.3:9115
        # Blackbox Exporter running on a different machine from which the above targets are accessible.

 

Note:

  • Targets can include URLs as well as endpoints that are reachable from the server on which the Blackbox Exporter is installed.
  • We can add multiple job_name blocks for different types of endpoints.
  • Source labels such as __address__ are generated automatically; the relabel_configs section copies them into target labels (for example, setting instance to the probed URL).
  • To monitor ports (such as 8080 for Jenkins, 5601 for Kibana, 9000 for SonarQube), make sure the URLs you provide return the expected status code, which you can verify with the curl command.
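
To verify that the Blackbox Exporter can actually reach a target before Prometheus starts probing it, you can call its /probe endpoint directly. This is a minimal sketch, assuming the exporter is listening on localhost:9115 as in the first job above:

# Ask the Blackbox Exporter to probe a URL with the http_2xx module.
# probe_success 1 means the probe passed; probe_http_status_code shows the returned HTTP status.
$ curl -s "http://localhost:9115/probe?module=http_2xx&target=https://myabc.com/" | grep -E "probe_success|probe_http_status_code"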

3.  Configure Script Exporter

Once the Script Exporter is installed and running on a port, it's time to add its configuration to your Prometheus.

Step 1: Add a block of code to the prometheus.yml file that was created when we installed Prometheus.

Open the prometheus.yml file (path: /etc/prometheus/prometheus.yml) in edit mode.
$ sudo vi /etc/prometheus/prometheus.yml

Add the following code block to your configuration file:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'script3'
    scrape_interval: 1m
    scrape_timeout: 30s
    metrics_path: /probe
    params:
      script: [rdsaw]
    static_configs:
      - targets:
          - localhost:9469
          # Script Exporter running on the same machine as Prometheus.

  - job_name: 'randomName'
    scrape_interval: 1m
    scrape_timeout: 30s
    metrics_path: /probe
    params:
      script: [rdsar]
    static_configs:
      - targets:
          - 192.168.2.3:9469
          # Script Exporter running on a different machine, where the above script is present.

 

Note:
  • Targets include the host and port on which the Script Exporter is installed.
  • The name of the script to run is given under params > script.
  • We can add multiple job_name blocks for different scripts.
  • The script's output is exposed at the /probe endpoint specified under metrics_path.
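
As with the Blackbox Exporter, you can hit the Script Exporter's probe endpoint directly to confirm that the script runs and returns metrics. This is a sketch under the assumptions of the configuration above, i.e. the exporter listens on port 9469 and serves a script named rdsaw at /probe:

# Run the 'rdsaw' script through the Script Exporter and print whatever metrics it emits.
$ curl -s "http://localhost:9469/probe?script=rdsaw"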

4.  Configure Kafka Exporter

Once the Kafka Exporter is installed and running on a port, it's time to add its configuration to your Prometheus.

Step 1: Add a block of code to the prometheus.yml file that was created when we installed Prometheus.

Open the prometheus.yml file (path: /etc/prometheus/prometheus.yml) in edit mode.
$ sudo vi /etc/prometheus/prometheus.yml

Add the following code block to your configuration file:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'kafka_exporter'
    dns_sd_configs:
      - names:
          - 'tasks.kafka_exporter'
        type: 'A'
        port: 9308
    static_configs:
      - targets:
          - 192.168.19.85:9308
          - 192.168.19.86:9308
          # Kafka Exporter running on the machines where Kafka is installed.

 

Note:
  • Targets include the host and port on which the Kafka Exporter is installed.
  • We can add multiple job_name blocks.
  • Visit the exporter's GitHub repo if you need clarification or information about more parameters.
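
Once Prometheus is scraping the Kafka Exporter, a quick way to confirm that metrics are flowing is to fetch the exporter's metrics page directly. This is a minimal sketch, assuming the exporter from the configuration above is reachable on port 9308 and that its metric names are prefixed with kafka_ (the exact names depend on the exporter build you installed):

# Fetch the Kafka Exporter's metrics and keep only the Kafka-specific series,
# such as broker, topic, and consumer-group statistics.
$ curl -s http://192.168.19.85:9308/metrics | grep "^kafka_"

The same metric names can then be queried from the Prometheus expression browser to build dashboards and alerts.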
 





