====== Basic usage of prometheus sql_exporter ======
We chose https://github.com/burningalchemist/sql_exporter, mainly because it is a bit more actively maintained.
Download the release you need, then start configuring:
===== Main configuration for SQL collection =====
Main configuration file: sql_exporter.yml
<code yaml>
global:
  scrape_timeout: 10s
  scrape_timeout_offset: 500ms
  min_interval: 0s
  max_connections: 3
  max_idle_connections: 3
  max_connection_lifetime: 5m

jobs:
  - job_name: gpu
    collectors: [gpu_count]
    enable_ping: true
    static_configs:
      - targets:
          pythonapi: 'mysql://readonly:readonly@127.0.0.1:3306/database'

collector_files:
  - "sql_config/*.yml"
  - "sql_config/*.yaml"
</code>
Collector file included by the above: sql_config/python_api.yml
<code yaml>
collector_name: gpu_count
metrics:
  - metric_name: gpu_container_total
    type: gauge
    help: 'Total number of records in the gpu_container table'
    values: [row_count]
    query: |
      SELECT COUNT(*) AS row_count FROM gpu_container;
</code>
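Once the exporter is running, the quickest sanity check for this collector is to ask the metrics endpoint directly; a small sketch (the exact labels attached to the sample depend on the job and target names defined above):
<code bash>
# Fetch the exporter's metrics and keep only the gauge defined in this collector.
# Assumes the exporter is already listening on 9399 (started with the script below).
curl -s http://127.0.0.1:9399/metrics | grep gpu_container_total
</code>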
Restart script: restart_exporter.sh
<code bash>
#!/bin/bash
EXPORTER_BIN="./sql_exporter"
CONFIG_FILE="./sql_exporter.yml"

# Stop any running instance, give it a moment to exit, then start a fresh one.
pkill -f sql_exporter
sleep 2

nohup "$EXPORTER_BIN" -config.file="$CONFIG_FILE" -web.listen-address=0.0.0.0:9399 > run.log 2>&1 &
echo "Restarted. port 9399 PID: $!"
</code>
===== Configuring the Prometheus scrape =====
<code yaml>
- job_name: 'sql_exporter_gpu'
  scrape_interval: 30m
  static_configs:
    - targets: ['10.10.10.10:9399']   # the sql_exporter address (9399 is its default port)
  params:
    jobs[]:
      - gpu
</code>
The jobs parameter filters the scrape down to the matching job, so the exporter does not return metrics from unrelated jobs.
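The same filtered request that Prometheus sends can be reproduced by hand to see what will actually be ingested; a minimal sketch against the example target above (curl's -g flag disables URL globbing so the literal jobs[] parameter is passed through):
<code bash>
# Only metrics belonging to the "gpu" job should come back.
curl -s -g 'http://10.10.10.10:9399/metrics?jobs[]=gpu' | grep -v '^#' | head
</code>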