Deploying postgres_exporter


Contents

  • - Download
  • - Configuring the environment variable
  • - Startup
    • vim ./start.sh
    • vim ./stop.sh
      • queries.yaml
  • - Configuring Prometheus


- Download

https://github.com/prometheus-community/postgres_exporter/releases
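As a rough sketch, fetching and unpacking a Linux release looks like the following. The version number is an assumption for illustration; pick the current one from the releases page.

```shell
# Hypothetical version; check the releases page for the latest.
VERSION="0.15.0"
TARBALL="postgres_exporter-${VERSION}.linux-amd64.tar.gz"
URL="https://github.com/prometheus-community/postgres_exporter/releases/download/v${VERSION}/${TARBALL}"
echo "Fetching ${URL}"
# Download and extract; fall back to a message if the network is unavailable.
wget -q "${URL}" && tar -xzf "${TARBALL}" || echo "download failed; fetch the tarball manually"
```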

- Configuring the environment variable

- Startup

The DATA_SOURCE_NAME environment variable must be set before the exporter is started.
Run the following:

  • export DATA_SOURCE_NAME=postgresql://user:userpwd@0.0.0.0:5432/db?sslmode=disable
  • ./postgres_exporter --extend.query-path queries.yaml --log.level debug

To make the exporter easier to identify, I changed its listen port and wrote two scripts, ./start.sh and ./stop.sh, to simplify starting and stopping it later:

vim ./start.sh

#!/bin/sh
export DATA_SOURCE_NAME=postgresql://user:userpwd@0.0.0.0:5432/db?sslmode=disable
# Append --extend.query-path queries.yaml to load the custom queries below.
nohup ./postgres_exporter --log.level debug --web.listen-address=:15432 > ./startlog.log 2>&1 &
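After running ./start.sh, it is worth confirming the exporter actually answers on the new port. A minimal check, assuming the port 15432 from the script:

```shell
PORT=15432  # must match --web.listen-address in start.sh
# Scrape the metrics endpoint and look for any pg_* metric line.
if curl -sf --max-time 2 "http://127.0.0.1:${PORT}/metrics" | grep -q '^pg_'; then
  STATUS=up
else
  STATUS=down
fi
echo "exporter status: ${STATUS}"
```

If the status is down, check ./startlog.log for errors (a bad DATA_SOURCE_NAME is the usual cause).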

vim ./stop.sh

#!/bin/sh
PID=$(ps -ef | grep postgres_exporter | grep -v grep | awk '{ print $2 }')
if [ "${PID}" ]
then
    echo 'Application is stopping...'
    kill $PID
    echo "kill $PID DONE"
else
    echo 'Application is already stopped...'
fi

Inside the ./start.sh startup script:
--web.listen-address sets the port the exporter listens on
--extend.query-path points to the custom-query file

queries.yaml

If you do not need custom queries, this section can be skipped.

pg_replication:
  query: "SELECT CASE WHEN NOT pg_is_in_recovery() THEN 0 ELSE GREATEST (0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))) END AS lag"
  master: true
  metrics:
    - lag:
        usage: "GAUGE"
        description: "Replication lag behind master in seconds"

pg_postmaster:
  query: "SELECT pg_postmaster_start_time as start_time_seconds from pg_postmaster_start_time()"
  master: true
  metrics:
    - start_time_seconds:
        usage: "GAUGE"
        description: "Time at which postmaster started"

pg_stat_user_tables:
  query: |
   SELECT
     current_database() datname,
     schemaname,
     relname,
     seq_scan,
     seq_tup_read,
     idx_scan,
     idx_tup_fetch,
     n_tup_ins,
     n_tup_upd,
     n_tup_del,
     n_tup_hot_upd,
     n_live_tup,
     n_dead_tup,
     n_mod_since_analyze,
     COALESCE(last_vacuum, '1970-01-01Z') as last_vacuum,
     COALESCE(last_autovacuum, '1970-01-01Z') as last_autovacuum,
     COALESCE(last_analyze, '1970-01-01Z') as last_analyze,
     COALESCE(last_autoanalyze, '1970-01-01Z') as last_autoanalyze,
     vacuum_count,
     autovacuum_count,
     analyze_count,
     autoanalyze_count
   FROM
     pg_stat_user_tables
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of current database"
    - schemaname:
        usage: "LABEL"
        description: "Name of the schema that this table is in"
    - relname:
        usage: "LABEL"
        description: "Name of this table"
    - seq_scan:
        usage: "COUNTER"
        description: "Number of sequential scans initiated on this table"
    - seq_tup_read:
        usage: "COUNTER"
        description: "Number of live rows fetched by sequential scans"
    - idx_scan:
        usage: "COUNTER"
        description: "Number of index scans initiated on this table"
    - idx_tup_fetch:
        usage: "COUNTER"
        description: "Number of live rows fetched by index scans"
    - n_tup_ins:
        usage: "COUNTER"
        description: "Number of rows inserted"
    - n_tup_upd:
        usage: "COUNTER"
        description: "Number of rows updated"
    - n_tup_del:
        usage: "COUNTER"
        description: "Number of rows deleted"
    - n_tup_hot_upd:
        usage: "COUNTER"
        description: "Number of rows HOT updated (i.e., with no separate index update required)"
    - n_live_tup:
        usage: "GAUGE"
        description: "Estimated number of live rows"
    - n_dead_tup:
        usage: "GAUGE"
        description: "Estimated number of dead rows"
    - n_mod_since_analyze:
        usage: "GAUGE"
        description: "Estimated number of rows changed since last analyze"
    - last_vacuum:
        usage: "GAUGE"
        description: "Last time at which this table was manually vacuumed (not counting VACUUM FULL)"
    - last_autovacuum:
        usage: "GAUGE"
        description: "Last time at which this table was vacuumed by the autovacuum daemon"
    - last_analyze:
        usage: "GAUGE"
        description: "Last time at which this table was manually analyzed"
    - last_autoanalyze:
        usage: "GAUGE"
        description: "Last time at which this table was analyzed by the autovacuum daemon"
    - vacuum_count:
        usage: "COUNTER"
        description: "Number of times this table has been manually vacuumed (not counting VACUUM FULL)"
    - autovacuum_count:
        usage: "COUNTER"
        description: "Number of times this table has been vacuumed by the autovacuum daemon"
    - analyze_count:
        usage: "COUNTER"
        description: "Number of times this table has been manually analyzed"
    - autoanalyze_count:
        usage: "COUNTER"
        description: "Number of times this table has been analyzed by the autovacuum daemon"

pg_statio_user_tables:
  query: "SELECT current_database() datname, schemaname, relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit, toast_blks_read, toast_blks_hit, tidx_blks_read, tidx_blks_hit FROM pg_statio_user_tables"
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of current database"
    - schemaname:
        usage: "LABEL"
        description: "Name of the schema that this table is in"
    - relname:
        usage: "LABEL"
        description: "Name of this table"
    - heap_blks_read:
        usage: "COUNTER"
        description: "Number of disk blocks read from this table"
    - heap_blks_hit:
        usage: "COUNTER"
        description: "Number of buffer hits in this table"
    - idx_blks_read:
        usage: "COUNTER"
        description: "Number of disk blocks read from all indexes on this table"
    - idx_blks_hit:
        usage: "COUNTER"
        description: "Number of buffer hits in all indexes on this table"
    - toast_blks_read:
        usage: "COUNTER"
        description: "Number of disk blocks read from this table's TOAST table (if any)"
    - toast_blks_hit:
        usage: "COUNTER"
        description: "Number of buffer hits in this table's TOAST table (if any)"
    - tidx_blks_read:
        usage: "COUNTER"
        description: "Number of disk blocks read from this table's TOAST table indexes (if any)"
    - tidx_blks_hit:
        usage: "COUNTER"
        description: "Number of buffer hits in this table's TOAST table indexes (if any)"

# WARNING: This set of metrics can be very expensive on a busy server as every unique query executed will create an additional time series
pg_stat_statements:
  query: "SELECT t2.rolname, t3.datname, queryid, calls, total_time / 1000 as total_time_seconds, min_time / 1000 as min_time_seconds, max_time / 1000 as max_time_seconds, mean_time / 1000 as mean_time_seconds, stddev_time / 1000 as stddev_time_seconds, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time / 1000 as blk_read_time_seconds, blk_write_time / 1000 as blk_write_time_seconds FROM pg_stat_statements t1 JOIN pg_roles t2 ON (t1.userid=t2.oid) JOIN pg_database t3 ON (t1.dbid=t3.oid) WHERE t2.rolname != 'rdsadmin'"
  master: true
  metrics:
    - rolname:
        usage: "LABEL"
        description: "Name of user"
    - datname:
        usage: "LABEL"
        description: "Name of database"
    - queryid:
        usage: "LABEL"
        description: "Query ID"
    - calls:
        usage: "COUNTER"
        description: "Number of times executed"
    - total_time_seconds:
        usage: "COUNTER"
        description: "Total time spent in the statement, in seconds"
    - min_time_seconds:
        usage: "GAUGE"
        description: "Minimum time spent in the statement, in seconds"
    - max_time_seconds:
        usage: "GAUGE"
        description: "Maximum time spent in the statement, in seconds"
    - mean_time_seconds:
        usage: "GAUGE"
        description: "Mean time spent in the statement, in seconds"
    - stddev_time_seconds:
        usage: "GAUGE"
        description: "Population standard deviation of time spent in the statement, in seconds"
    - rows:
        usage: "COUNTER"
        description: "Total number of rows retrieved or affected by the statement"
    - shared_blks_hit:
        usage: "COUNTER"
        description: "Total number of shared block cache hits by the statement"
    - shared_blks_read:
        usage: "COUNTER"
        description: "Total number of shared blocks read by the statement"
    - shared_blks_dirtied:
        usage: "COUNTER"
        description: "Total number of shared blocks dirtied by the statement"
    - shared_blks_written:
        usage: "COUNTER"
        description: "Total number of shared blocks written by the statement"
    - local_blks_hit:
        usage: "COUNTER"
        description: "Total number of local block cache hits by the statement"
    - local_blks_read:
        usage: "COUNTER"
        description: "Total number of local blocks read by the statement"
    - local_blks_dirtied:
        usage: "COUNTER"
        description: "Total number of local blocks dirtied by the statement"
    - local_blks_written:
        usage: "COUNTER"
        description: "Total number of local blocks written by the statement"
    - temp_blks_read:
        usage: "COUNTER"
        description: "Total number of temp blocks read by the statement"
    - temp_blks_written:
        usage: "COUNTER"
        description: "Total number of temp blocks written by the statement"
    - blk_read_time_seconds:
        usage: "COUNTER"
        description: "Total time the statement spent reading blocks, in seconds (if track_io_timing is enabled, otherwise zero)"
    - blk_write_time_seconds:
        usage: "COUNTER"
        description: "Total time the statement spent writing blocks, in seconds (if track_io_timing is enabled, otherwise zero)"

pg_process_idle:
  query: |
    WITH
      metrics AS (
        SELECT
          application_name,
          SUM(EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - state_change))::bigint)::float AS process_idle_seconds_sum,
          COUNT(*) AS process_idle_seconds_count
        FROM pg_stat_activity
        WHERE state = 'idle'
        GROUP BY application_name
      ),
      buckets AS (
        SELECT
          application_name,
          le,
          SUM(
            CASE WHEN EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - state_change)) <= le
              THEN 1
              ELSE 0
            END
          )::bigint AS bucket
        FROM
          pg_stat_activity,
          UNNEST(ARRAY[1, 2, 5, 15, 30, 60, 90, 120, 300]) AS le
        GROUP BY application_name, le
        ORDER BY application_name, le
      )
    SELECT
      application_name,
      process_idle_seconds_sum as seconds_sum,
      process_idle_seconds_count as seconds_count,
      ARRAY_AGG(le) AS seconds,
      ARRAY_AGG(bucket) AS seconds_bucket
    FROM metrics JOIN buckets USING (application_name)
    GROUP BY 1, 2, 3
  metrics:
    - application_name:
        usage: "LABEL"
        description: "Application Name"
    - seconds:
        usage: "HISTOGRAM"
        description: "Idle time of server processes"
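Once the exporter is restarted with --extend.query-path queries.yaml, each query above should surface as metrics named with its top-level key as a prefix (e.g. pg_replication_lag from the pg_replication block). A quick grep against the metrics endpoint, assuming the port 15432 from start.sh, serves as a sanity check:

```shell
METRIC="pg_replication_lag"  # top-level key + metric name from queries.yaml
# Count matching metric lines; FOUND stays 0 if the exporter is unreachable.
FOUND=$(curl -sf --max-time 2 "http://127.0.0.1:15432/metrics" | grep -c "^${METRIC}" || true)
if [ "${FOUND:-0}" -gt 0 ]; then
  echo "custom metric ${METRIC} is exported"
else
  echo "metric ${METRIC} not found; is the exporter running with --extend.query-path?"
fi
```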

- Configuring Prometheus

scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    static_configs:
      - targets: ["127.0.0.1:9090"]
        #labels:
          #instance: prometheus

  - job_name: postgresql
    static_configs:
      - targets: ['127.0.0.1:15432']
        #labels:
          #instance: postgresql
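Before reloading Prometheus, the config can be linted with promtool, which ships with the Prometheus distribution. The config path below is an assumption; adjust it to your install:

```shell
CONFIG="/etc/prometheus/prometheus.yml"  # hypothetical path; adjust to your setup
if command -v promtool >/dev/null 2>&1; then
  promtool check config "${CONFIG}" || echo "config check failed for ${CONFIG}"
else
  echo "promtool not found; skipping config lint"
fi
```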

