Deploying the TIG Monitoring Stack with Docker


Preface

  1. The TIG stack consists of three services: InfluxDB, Grafana, and Telegraf.
  2. This architecture is leaner than the traditional Prometheus stack. Although the open-source edition of InfluxDB has no clustered deployment, for small and medium monitoring workloads this approach is simple and efficient.
  3. This article uses docker-compose to demonstrate how to stand up the stack quickly and what the result looks like.

Deployment

docker-compose.yaml

version: '3'
networks:
  monitor:
    driver: bridge
#services
services:
  # PrometheusAlert: forwards Grafana alert webhooks to channels such as Feishu
  # default username/password: prometheusalert / prometheusalert
  prometheusalert:
    image: feiyu563/prometheus-alert
    container_name: prometheusalert
    hostname: prometheusalert
    restart: always
    ports:
      - 8087:8080
    networks:
      - monitor
    volumes:
      - ./docker/prometheusalert/conf:/app/conf
      - ./docker/prometheusalert/db:/app/db
    environment:
      - PA_LOGIN_USER=prometheusalert
      - PA_LOGIN_PASSWORD=prometheusalert
      - PA_TITLE=PrometheusAlert
      - PA_OPEN_FEISHU=1
  # Grafana UI; default username/password: admin / admin
  grafana:
    image: grafana/grafana
    container_name: grafana
    hostname: grafana
    restart: always
    volumes:
      - ./docker/grafana/data/grafana:/var/lib/grafana
    ports:
      - "3000:3000"
    networks:
      - monitor
     
  # InfluxDB v2 (ships with its own web UI)
  # username/password: root / root (created on first login)
  influxdb:
    image: influxdb
    container_name: influxdb
    environment:
      INFLUX_DB: test                   # likely ignored: the v2 image initializes via DOCKER_INFLUXDB_INIT_* variables
      INFLUXDB_USER: root               # likely ignored
      INFLUXDB_USER_PASSWORD: root      # likely ignored
    ports:
      - "8086:8086"
    restart: always
    volumes:
      - ./docker/influxdb/:/var/lib/influxdb
    networks:
      - monitor
  # InfluxDB v1.x (alternative, kept commented out)
  #influxdb1x: 
  #  image: influxdb:1.8
  #  container_name: influxdb1.8
  #  environment:
  #    INFLUXDB_DB: test
  #    INFLUXDB_ADMIN_ENABLED: true
  #    INFLUXDB_ADMIN_USER: root
  #    INFLUXDB_ADMIN_PASSWORD: root
  #  ports:
  #    - "8098:8086"
  #  restart: always
  #  volumes:
  #    - ./docker/influxdb1x/influxdb1x.conf:/etc/influxdb/influxdb.conf
  #    - ./docker/influxdb1x/:/var/lib/influxdb
  #  networks:
  #    - monitor
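
With the compose file in place, a quick sketch for bringing the stack up and sanity-checking it (the /health endpoint is built into InfluxDB v2):

# Start all services in the background
docker compose up -d
# Check that the containers are running
docker compose ps
# Verify InfluxDB is answering
curl http://localhost:8086/health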

Telegraf installation (official docs)

# Telegraf is the collection agent and runs at the source of the monitored data.
# Detailed install steps are in the official docs; a Linux (RHEL/CentOS family) server is used as the example below.
# Add the yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/influxdb.repo
[influxdb]
name = InfluxData Repository - Stable
baseurl = https://repos.influxdata.com/stable/\$basearch/main
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdata-archive_compat.key
EOF

# Install
sudo yum install telegraf

# Verify the installation
telegraf --help
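
Optionally, instead of copying the full sample below, Telegraf can generate a config trimmed to just the plugins you need (a convenience sketch; the plugin names must match those enabled later):

# Generate a config containing only the tail input and influxdb_v2 output
telegraf --input-filter tail --output-filter influxdb_v2 config > telegraf.conf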

Usage

  1. Log in to InfluxDB at http://localhost:8086
    • The first login walks you through creating an account
    • An org is a logical partition (tenant)
    • A bucket is roughly the equivalent of a database
  2. Configure the Telegraf collection (ETL) flow (see the official Telegraf input plugin docs)
    • Create a token for accessing InfluxDB data
    • Write the Telegraf configuration file
    • Start Telegraf on the collection host to run the ETL
      a. Create the token in the InfluxDB UI. b. You can either let the platform generate the configuration file (it is then offered for download over HTTP) or maintain the file yourself, as shown below.
      The configuration below uses an nginx access log as the example.
      telegraf.conf
# Configuration for telegraf agent
# Everything in the [agent] section is left at its defaults
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

  ## Maximum number of unwritten metrics per output.  Increasing this value
  ## allows for longer periods of output downtime without dropping metrics at the
  ## cost of higher maximum memory usage.
  metric_buffer_limit = 10000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. Maximum flush_interval will be
  ## flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
  precision = ""

  ## Log at debug level.
  # debug = false
  ## Log only error level messages.
  # quiet = false

  ## Log target controls the destination for logs and can be one of "file",
  ## "stderr" or, on Windows, "eventlog".  When set to "file", the output file
  ## is determined by the "logfile" setting.
  # logtarget = "file"

  ## Name of the file to be logged to when using the "file" logtarget.  If set to
  ## the empty string then logs are written to stderr.
  # logfile = ""

  ## The logfile will be rotated after the time interval specified.  When set
  ## to 0 no time based rotation is performed.  Logs are rotated only when
  ## written to, if there is no log activity rotation may be delayed.
  # logfile_rotation_interval = "0d"

  ## The logfile will be rotated when it becomes larger than the specified
  ## size.  When set to 0 no size based rotation is performed.
  # logfile_rotation_max_size = "0MB"

  ## Maximum number of rotated archives to keep, any older logs are deleted.
  ## If set to -1, no archives are removed.
  # logfile_rotation_max_archives = 5

  ## Pick a timezone to use when logging or type 'local' for local time.
  ## Example: America/Chicago
  # log_with_timezone = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do no set the "host" tag in the telegraf agent.
  omit_hostname = false

# influxdb_v2 output plugin. The settings that must be filled in:
# urls (database address), organization (org), bucket, and token (auth token)
[[outputs.influxdb_v2]]
  ## The URLs of the InfluxDB cluster nodes.
  ##
  ## Multiple URLs can be specified for a single cluster, only ONE of the
  ## urls will be written to each interval.
  ##   ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
  urls = ["http://localhost:8086"] 

  ## Token for authentication.
  token = "<token created in the previous step>"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "test"  # the org created earlier

  ## Destination bucket to write into.
  bucket = "test"  # the bucket created earlier

  ## The value of this tag will be used to determine the bucket.  If this
  ## tag is not set the 'bucket' option is used as the default.
  # bucket_tag = ""

  ## If true, the bucket tag will not be added to the metric.
  # exclude_bucket_tag = false

  ## Timeout for HTTP messages.
  # timeout = "5s"

  ## Additional HTTP headers
  # http_headers = {"X-Special-Header" = "Special-Value"}

  ## HTTP Proxy override, if unset values the standard proxy environment
  ## variables are consulted to determine which proxy, if any, should be used.
  # http_proxy = "http://corporate.proxy:3128"

  ## HTTP User-Agent
  # user_agent = "telegraf"

  ## Content-Encoding for write request body, can be set to "gzip" to
  ## compress body or "identity" to apply no encoding.
  # content_encoding = "gzip"

  ## Enable or disable uint support for writing uints influxdb 2.0.
  # influx_uint_support = false

  ## Optional TLS Config for use on HTTP connections.
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

# Parse the new lines appended to a file
# tail input plugin, configured here to follow an nginx access log. Set:
# files: location of the file(s) to watch
# grok_patterns: the grok expression that extracts the monitored fields from each nginx line (grok syntax itself is not covered here)
# name_override: the measurement name the nginx metrics are stored under
[[inputs.tail]]
  ## File names or a pattern to tail.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   "/var/log/**.log"  -> recursively find all .log files in /var/log
  ##   "/var/log/*/*.log" -> find all .log files with a parent dir in /var/log
  ##   "/var/log/apache.log" -> just tail the apache log file
  ##   "/var/log/log[!1-2]*" -> tail files without 1-2
  ##   "/var/log/log[^1-2]*" -> identical behavior as above
  ## See https://github.com/gobwas/glob for more examples
  ##
  files = ["/logs/nginx/access_main.log"]

  ## Read file from beginning.
  #from_beginning = false

  ## Whether file is a named pipe
  # pipe = false

  ## Method used to watch for file updates.  Can be either "inotify" or "poll".
  # watch_method = "inotify"

  ## Maximum lines of the file to process that have not yet be written by the
  ## output.  For best throughput set based on the number of metrics on each
  ## line and the size of the output's metric_batch_size.
  # max_undelivered_lines = 1000

  ## Character encoding to use when interpreting the file contents.  Invalid
  ## characters are replaced using the unicode replacement character.  When set
  ## to the empty string the data is not decoded to text.
  ##   ex: character_encoding = "utf-8"
  ##       character_encoding = "utf-16le"
  ##       character_encoding = "utf-16be"
  ##       character_encoding = ""
  # character_encoding = ""

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  grok_patterns = ["%{NGINX_ACCESS_LOG}"]
  name_override = "nginx_access_log"
  #grok_custom_pattern_files = []
  #grok_custom_patterns = '''
  #NGINX_ACCESS_LOG %{IP:remote_addr} - (-|%{WORD:remote_user}) [%{HTTPDATE:time_local}] %{BASE10NUM:request_time:float} (-|%{BASE10NUM:upstream_response_time:float}) %{IPORHOST:host} %{QS:request} %{NUMBER:status:int} %{NUMBER:body_bytes_sent:int} %{QS:referrer} %{QS:agent} %{IPORHOST:xforwardedfor}
  #'''
  grok_custom_patterns = '''
  NGINX_ACCESS_LOG %{IP:remote_addr} - (-|%{WORD:remote_user:drop}) \[%{HTTPDATE:ts:ts}\] %{BASE10NUM:request_time:float} %{BASE10NUM:upstream_response_time:float} %{IPORHOST:host:tag} "(?:%{WORD:verb:drop} %{NOTSPACE:request:tag}(?: HTTP/%{NUMBER:http_version:drop})?|%{DATA:rawrequest})" %{NUMBER:status:tag} (?:%{NUMBER:resp_bytes}|-)  %{QS:referrer:drop} %{QS:agent:drop} %{QS:xforwardedfor:drop}
  '''

  grok_timezone = "Local"
  data_format = "grok"

  ## Set the tag that will contain the path of the tailed file. If you don't want this tag, set it to an empty string.
  # path_tag = "path"

  ## multiline parser/codec
  ## https://www.elastic.co/guide/en/logstash/2.4/plugins-filters-multiline.html
  #[inputs.tail.multiline]
    ## The pattern should be a regexp which matches what you believe to be an
	## indicator that the field is part of an event consisting of multiple lines of log data.
    #pattern = "^\s"

    ## This field must be either "previous" or "next".
	## If a line matches the pattern, "previous" indicates that it belongs to the previous line,
	## whereas "next" indicates that the line belongs to the next one.
    #match_which_line = "previous"

    ## The invert_match field can be true or false (defaults to false).
    ## If true, a message not matching the pattern will constitute a match of the multiline
	## filter and the what will be applied. (vice-versa is also true)
    #invert_match = false

    ## After the specified timeout, this plugin sends a multiline event even if no new pattern
	## is found to start a new event. The default timeout is 5s.
    #timeout = 5s
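
Before wiring the agent into InfluxDB, a dry run catches most config and grok mistakes (a quick sketch, assuming telegraf.conf is in the current directory):

# Gather once and print the resulting metrics to stdout without writing them anywhere
telegraf --config telegraf.conf --test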

Grok pattern example

# nginx log_format definition
'$remote_addr - $remote_user [$time_local] $request_time $upstream_response_time $host "$request" '
                      '$status $body_bytes_sent  "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'
 # Corresponding grok pattern
  NGINX_ACCESS_LOG %{IP:remote_addr} - (-|%{WORD:remote_user:drop}) \[%{HTTPDATE:ts:ts}\] %{BASE10NUM:request_time:float} %{BASE10NUM:upstream_response_time:float} %{IPORHOST:host:tag} "(?:%{WORD:verb:drop} %{NOTSPACE:request:tag}(?: HTTP/%{NUMBER:http_version:drop})?|%{DATA:rawrequest})" %{NUMBER:status:tag} (?:%{NUMBER:resp_bytes}|-)  %{QS:referrer:drop} %{QS:agent:drop} %{QS:xforwardedfor:drop}

# Sample nginx log line
1.1.1.2 - - [30/Jan/2023:02:27:24 +0000] 0.075 0.075 xxx.xxx.xxx "POST /api/xxx/xxx/xxx HTTP/1.1" 200 69  "https://xxx.xxx.xxx/" "Mozilla/5.0 (iPhone; CPU iPhone OS 16_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148" "1.1.1.1"

# Grok extracts the variables below; the selected ones are then turned into InfluxDB line protocol
{
  "NGINX_ACCESS_LOG": [
    [
      "1.1.1.2 - - [30/Jan/2023:02:27:24 +0000] 0.075 0.075 xxx.xxx.xxx \"POST /api/xxx/xxx/xxx HTTP/1.1\" 200 69  \"https://xxx.xxx.xxx/\" \"Mozilla/5.0 (iPhone; CPU iPhone OS 16_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148\" \"1.1.1.1\""
    ]
  ],
  "remote_addr": [
    [
      "1.1.1.2"
    ]
  ],
  "IPV6": [
    [
      null,
      null
    ]
  ],
  "IPV4": [
    [
      "1.1.1.2",
      null
    ]
  ],
  "remote_user": [
    [
      null
    ]
  ],
  "ts": [
    [
      "30/Jan/2023:02:27:24 +0000"
    ]
  ],
  "MONTHDAY": [
    [
      "30"
    ]
  ],
  "MONTH": [
    [
      "Jan"
    ]
  ],
  "YEAR": [
    [
      "2023"
    ]
  ],
  "TIME": [
    [
      "02:27:24"
    ]
  ],
  "HOUR": [
    [
      "02"
    ]
  ],
  "MINUTE": [
    [
      "27"
    ]
  ],
  "SECOND": [
    [
      "24"
    ]
  ],
  "INT": [
    [
      "+0000"
    ]
  ],
  "request_time": [
    [
      "0.075"
    ]
  ],
  "upstream_response_time": [
    [
      "0.075"
    ]
  ],
  "host": [
    [
      "xxx.xxx.xxx"
    ]
  ],
  "HOSTNAME": [
    [
      "xxx.xxx.xxx"
    ]
  ],
  "IP": [
    [
      null
    ]
  ],
  "verb": [
    [
      "POST"
    ]
  ],
  "request": [
    [
      "/api/xxx/xxx/xxx"
    ]
  ],
  "http_version": [
    [
      "1.1"
    ]
  ],
  "BASE10NUM": [
    [
      "1.1",
      "200",
      "69"
    ]
  ],
  "rawrequest": [
    [
      null
    ]
  ],
  "status": [
    [
      "200"
    ]
  ],
  "resp_bytes": [
    [
      "69"
    ]
  ],
  "referrer": [
    [
      "\"https://xxx.xxx.xxx/\""
    ]
  ],
  "QUOTEDSTRING": [
    [
      "\"https://xxx.xxx.xxx/\"",
      "\"Mozilla/5.0 (iPhone; CPU iPhone OS 16_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148\"",
      "\"1.1.1.1\""
    ]
  ],
  "agent": [
    [
      "\"Mozilla/5.0 (iPhone; CPU iPhone OS 16_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148\""
    ]
  ],
  "xforwardedfor": [
    [
      "\"1.1.1.1\""
    ]
  ]
}
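
From the captures above, Telegraf keeps host, request, and status as tags and the remaining kept captures as fields, so the sample line would be emitted roughly as the following line protocol (an illustrative sketch, not captured output; remote_addr and resp_bytes stay string fields because their captures carry no type modifier, and default tags such as path are omitted for brevity):

nginx_access_log,host=xxx.xxx.xxx,request=/api/xxx/xxx/xxx,status=200 remote_addr="1.1.1.2",request_time=0.075,upstream_response_time=0.075,resp_bytes="69" 1675045644000000000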

Start Telegraf

# Test run in the foreground with debug output
telegraf --config telegraf.conf --debug
# Exit the test run
Ctrl+C
# Run as a background process
nohup telegraf --config telegraf.conf >/dev/null 2>&1 &
# Stop the background process
ps aux | grep telegraf
kill -9 <pid>
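
Since the yum package also installs a systemd unit, a more robust alternative to nohup is the following sketch (assuming the config is moved to the package's default path):

# Run Telegraf under systemd instead of nohup
sudo cp telegraf.conf /etc/telegraf/telegraf.conf
sudo systemctl enable --now telegraf
# Follow the service logs
journalctl -u telegraf -f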

3. Configure Grafana
  a. Log in to Grafana at http://localhost:3000/login with admin / admin. You must change the password on first login.
  b. Add InfluxDB as a data source.
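
As a starting point for a dashboard panel against this data source, a sketch of a Flux query (assuming the org and bucket named test from earlier; v.timeRangeStart and v.timeRangeStop are Grafana's built-in time-range variables):

// Average nginx request time per minute within the dashboard's time range
from(bucket: "test")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "nginx_access_log")
  |> filter(fn: (r) => r._field == "request_time")
  |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)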
At this point the whole TIG pipeline is up and running.

