- This example uses the Prometheus node exporter to collect host metrics and displays them on a Grafana dashboard.
- All components used here are current releases; exact versions are listed below. The Linux environment is CentOS.
- The example has four parts: deploying Prometheus, deploying Grafana, deploying node exporter, and integrating the three to monitor Linux.
- This article only shows how the three components are used together; it does not introduce each component in detail and assumes the reader already has a basic understanding of them.
Note: this example only demonstrates how the three components work together, so clustered deployment and production-grade setup are not covered. Everything except node exporter runs on server2; node exporter collects metrics from all four machines.
The article is long, so it is split into two parts:
[Ops monitoring] prometheus + node exporter + grafana: monitoring Linux machines (1)
[Ops monitoring] prometheus + node exporter + grafana: monitoring Linux machines (2)
[Ops monitoring] prometheus + node exporter + grafana: monitoring Linux machines (complete version)
I. Deploying Prometheus
1. Deployment
1) Download
Download page: https://prometheus.io/download/
Version used: prometheus-2.54.0.linux-amd64.tar.gz
2) Extract
tar xf prometheus-2.54.0.linux-amd64.tar.gz -C /usr/local/bigdata
cd /usr/local/bigdata/prometheus-2.54.0.linux-amd64
3) Start
[alanchan@server2 prometheus-2.54.0.linux-amd64]$ ./prometheus
ts=2024-08-28T00:44:34.721Z caller=main.go:601 level=info msg="No time or size retention was set so using the default time retention" duration=15d
ts=2024-08-28T00:44:34.721Z caller=main.go:645 level=info msg="Starting Prometheus Server" mode=server version="(version=2.54.0, branch=HEAD, revision=5354e87a70d3eb26b81b601b286d66ff983990f6)"
ts=2024-08-28T00:44:34.721Z caller=main.go:650 level=info build_context="(go=go1.22.6, platform=linux/amd64, user=root@68a9e2472a68, date=20240809-11:36:32, tags=netgo,builtinassets,stringlabels)"
ts=2024-08-28T00:44:34.721Z caller=main.go:651 level=info host_details="(Linux 2.6.32-754.35.1.el6.x86_64 #1 SMP Sat Nov 7 12:42:14 UTC 2020 x86_64 server2 (none))"
ts=2024-08-28T00:44:34.721Z caller=main.go:652 level=info fd_limits="(soft=131072, hard=131072)"
ts=2024-08-28T00:44:34.721Z caller=main.go:653 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2024-08-28T00:44:34.725Z caller=web.go:571 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
ts=2024-08-28T00:44:34.725Z caller=main.go:1160 level=info msg="Starting TSDB ..."
ts=2024-08-28T00:44:34.727Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
ts=2024-08-28T00:44:34.727Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
ts=2024-08-28T00:44:34.730Z caller=head.go:626 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
ts=2024-08-28T00:44:34.730Z caller=head.go:713 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=10.811µs
ts=2024-08-28T00:44:34.730Z caller=head.go:721 level=info component=tsdb msg="Replaying WAL, this may take a while"
ts=2024-08-28T00:44:34.730Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
ts=2024-08-28T00:44:34.730Z caller=head.go:830 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=49.241µs wal_replay_duration=495.341µs wbl_replay_duration=179ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=10.811µs total_replay_duration=587.433µs
ts=2024-08-28T00:44:34.732Z caller=main.go:1181 level=info fs_type=EXT4_SUPER_MAGIC
ts=2024-08-28T00:44:34.732Z caller=main.go:1184 level=info msg="TSDB started"
ts=2024-08-28T00:44:34.732Z caller=main.go:1367 level=info msg="Loading configuration file" filename=prometheus.yml
ts=2024-08-28T00:44:34.733Z caller=main.go:1404 level=info msg="updated GOGC" old=100 new=75
ts=2024-08-28T00:44:34.733Z caller=main.go:1415 level=info msg="Completed loading of configuration file" filename=prometheus.yml totalDuration=797.737µs db_storage=7.507µs remote_storage=14.22µs web_handler=348ns query_engine=4.314µs scrape=328.176µs scrape_sd=31.039µs notify=44.584µs notify_sd=11.801µs rules=4.957µs tracing=21.4µs
ts=2024-08-28T00:44:34.733Z caller=main.go:1145 level=info msg="Server is ready to receive web requests."
ts=2024-08-28T00:44:34.733Z caller=manager.go:164 level=info component="rule manager" msg="Starting rule manager..."
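Running ./prometheus like this keeps it in the foreground and tied to the terminal. For day-to-day use you would typically run it in the background (or as a systemd service); a minimal sketch, using the install path from this article and Prometheus's standard flags for the config file, data directory and listen address (the values shown are simply the defaults made explicit):
cd /usr/local/bigdata/prometheus-2.54.0.linux-amd64
# run detached, logging to prometheus.log
nohup ./prometheus --config.file=./prometheus.yml --storage.tsdb.path=./data --web.listen-address=0.0.0.0:9090 > prometheus.log 2>&1 &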
2. Verification
1) Verify startup
You can verify either by checking the process or through a browser; this example uses the browser, i.e. the web UI.
Open the following address in a browser: http://server2:9090/
If the page shown below appears, the deployment and startup succeeded.
2) Verify functionality
Open Prometheus's own metrics endpoint in the browser at the following address.
http://server2:9090/metrics
You can also query the values of individual metrics on the web page started above, as shown in the figure below.
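The same kind of query can also be issued from the shell against Prometheus's HTTP API; for example, querying one of Prometheus's built-in self-metrics (the hostname follows this article's environment):
curl 'http://server2:9090/api/v1/query?query=prometheus_http_requests_total'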
At this point the default Prometheus deployment is complete; out of the box it only scrapes its own metrics on the local machine.
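That self-scraping comes from the prometheus.yml shipped in the tarball; the relevant part of the default config looks roughly like this (the node exporter targets are only added later, when the three components are integrated):
global:
  scrape_interval: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]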
II. Deploying Grafana
1. Deployment
1) Download
Download page: https://grafana.com/grafana/download
Version used: grafana-11.1.4 (package: https://dl.grafana.com/oss/release/grafana-11.1.4.linux-amd64.tar.gz)
2) Extract
tar -zxvf grafana-11.1.4.linux-amd64.tar.gz -C /usr/local/bigdata
cd /usr/local/bigdata/grafana-v11.1.4/bin
3) Start
The server can be started with either of the following two commands.
./grafana-server
or, recommended:
./grafana server
[alanchan@server2 bin]$ ./grafana-server
Deprecation warning: The standalone 'grafana-server' program is deprecated and will be removed in the future. Please update all uses of 'grafana-server' to 'grafana server'
INFO [08-28|00:55:36] Starting Grafana logger=settings version=11.1.4 commit=2355de00c61fdd6609a67f35ab506fae87f09a84 branch=HEAD compiled=2024-08-28T00:55:36Z
INFO [08-28|00:55:36] Config loaded from logger=settings file=/usr/local/bigdata/grafana-v11.1.4/conf/defaults.ini
INFO [08-28|00:55:36] Target logger=settings target=[all]
INFO [08-28|00:55:36] Path Home logger=settings path=/usr/local/bigdata/grafana-v11.1.4
INFO [08-28|00:55:36] Path Data logger=settings path=/usr/local/bigdata/grafana-v11.1.4/data
INFO [08-28|00:55:36] Path Logs logger=settings path=/usr/local/bigdata/grafana-v11.1.4/data/log
INFO [08-28|00:55:36] Path Plugins logger=settings path=/usr/local/bigdata/grafana-v11.1.4/data/plugins
INFO [08-28|00:55:36] Path Provisioning logger=settings path=/usr/local/bigdata/grafana-v11.1.4/conf/provisioning
INFO [08-28|00:55:36] App mode production logger=settings
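As with Prometheus, you would normally run Grafana in the background rather than in the foreground. Note that Grafana expects to be started from its home directory (not from bin/) so that it can locate conf/ and data/; a minimal sketch, using the install path from this article:
cd /usr/local/bigdata/grafana-v11.1.4
# run detached, logging to grafana.log
nohup ./bin/grafana server > grafana.log 2>&1 &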
2. Verification
You can verify either by checking the process or through a browser; this example uses the browser, i.e. the web UI.
Open the following address in a browser: http://server2:3000/login
If the page shown below appears, the deployment and startup succeeded.
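You can also check from the shell: Grafana exposes a health endpoint that returns a small JSON document with the database status and version (hostname per this article's environment):
curl -s http://server2:3000/api/health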
The default credentials are admin/admin; after changing the password they are admin/xxxxxx.
After logging in, the page looks like the figure below.
That completes the Grafana deployment.
III. Deploying node exporter
This section only shows the deployment on server2 as an example; in practice node exporter is deployed on all four machines, server1 through server4 (a sketch of distributing it to the other hosts follows the startup log below).
1. Deployment
1) Download
Download node_exporter-1.8.2.linux-amd64.tar.gz from the Prometheus download page (https://prometheus.io/download/).
2) Extract
tar xf node_exporter-1.8.2.linux-amd64.tar.gz -C /usr/local/bigdata
3) Start
[alanchan@server2 node_exporter-1.8.2.linux-amd64]$ pwd
/usr/local/bigdata/node_exporter-1.8.2.linux-amd64
[alanchan@server2 node_exporter-1.8.2.linux-amd64]$ ll
total 20040
-rw-r--r-- 1 alanchan root 11357 Jul 14 11:57 LICENSE
-rwxr-xr-x 1 alanchan root 20500541 Jul 14 11:54 node_exporter
-rw-r--r-- 1 alanchan root 463 Jul 14 11:57 NOTICE
[alanchan@server2 node_exporter-1.8.2.linux-amd64]$ ./node_exporter
ts=2024-09-02T01:22:36.497Z caller=node_exporter.go:193 level=info msg="Starting node_exporter" version="(version=1.8.2, branch=HEAD, revision=f1e0e8360aa60b6cb5e5cc1560bed348fc2c1895)"
ts=2024-09-02T01:22:36.498Z caller=node_exporter.go:194 level=info msg="Build context" build_context="(go=go1.22.5, platform=linux/amd64, user=root@03d440803209, date=20240714-11:53:45, tags=unknown)"
ts=2024-09-02T01:22:36.498Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(z?ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
ts=2024-09-02T01:22:36.499Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
ts=2024-09-02T01:22:36.499Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
ts=2024-09-02T01:22:36.499Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:111 level=info msg="Enabled collectors"
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=arp
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=bcache
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=bonding
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=btrfs
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=conntrack
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=cpu
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=cpufreq
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=diskstats
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=dmi
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=edac
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=entropy
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=fibrechannel
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=filefd
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=filesystem
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=hwmon
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=infiniband
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=ipvs
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=loadavg
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=mdadm
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=meminfo
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=netclass
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=netdev
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=netstat
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=nfs
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=nfsd
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=nvme
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=os
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=powersupplyclass
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=pressure
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=rapl
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=schedstat
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=selinux
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=sockstat
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=softnet
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=stat
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=tapestats
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=textfile
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=thermal_zone
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=time
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=timex
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=udp_queues
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=uname
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=vmstat
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=watchdog
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=xfs
ts=2024-09-02T01:22:36.500Z caller=node_exporter.go:118 level=info collector=zfs
ts=2024-09-02T01:22:36.501Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9100
ts=2024-09-02T01:22:36.501Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9100
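The foreground start above is fine for a quick test. Since this example ultimately runs node exporter on server1 through server4, here is a rough sketch of running it in the background and pushing the same build to the other hosts (hostnames and the shared /usr/local/bigdata path follow this article's environment):
# run node_exporter in the background; it listens on :9100 by default
nohup ./node_exporter > node_exporter.log 2>&1 &
# copy the same build to the other three machines and start it there
for h in server1 server3 server4; do
  scp -r /usr/local/bigdata/node_exporter-1.8.2.linux-amd64 ${h}:/usr/local/bigdata/
  ssh ${h} "cd /usr/local/bigdata/node_exporter-1.8.2.linux-amd64 && nohup ./node_exporter > node_exporter.log 2>&1 &"
done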
2. Verification
You can verify either by checking the process or through a browser; this example uses the browser, i.e. the web UI.
Open the following address in a browser: http://server2:9100/metrics
If the page shown below appears, the deployment and startup succeeded.
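The same check can also be scripted from the shell, for example by grepping a couple of well-known node metrics out of the endpoint (hostname per this environment):
curl -s http://server2:9100/metrics | grep -E '^node_(load1|memory_MemAvailable_bytes) '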
That completes the deployment, startup and verification of node exporter.