Deploying Sentry with Docker Compose in a Debian Environment


  • Sentry Overview
    • What is Sentry?
    • Development Language and Supported SDKs
    • Sentry Functional Architecture
  • Prerequisites
    • Hardware Specifications
    • Docker Desktop Installation
    • WSL2/Debian 11 Environment Preparation
  • Sentry Installation Steps
    • Deploying Sentry with Docker
    • Walkthrough
  • Summary

Sentry Overview

What is Sentry?

  • Official description:

Sentry is a developer-first error tracking and performance monitoring platform that helps developers see what actually matters, solve quicker, and learn continuously about their applications.


  • In plain terms

Sentry is a real-time event logging and aggregation platform (officially positioned as error monitoring). It is designed to monitor errors and capture all the context needed to take the right follow-up action, so you no longer have to rely on user feedback loops to locate problems.

Development Language and Supported SDKs

Sentry is written in Python (Django) and is very feature-rich. Compared with ExceptionLess (the usual choice on the .NET Core/.NET platform) it is considerably heavier, but its platform coverage is excellent: SDKs exist for essentially every mainstream language and framework (see the platform list in the official docs).

  • Official docs: https://docs.sentry.io/
  • GitHub repository: https://github.com/getsentry/sentry

Sentry Functional Architecture

Sentry ships with an out-of-the-box feature set and comes in two editions: the commercial SaaS offering and an open-source edition that you can deploy and operate yourself. Let's look at Sentry's functional architecture and how events flow through the system:

  • Functional architecture

(Figure: Sentry functional architecture)

  • Event flow

(Figure: Sentry event flow)

For the differences between the open-source edition and the commercial SaaS edition, see: https://blog.csdn.net/o__cc/article/details/122445341

Prerequisites

Note: before deploying Sentry, make sure the following environments are already in place so that the deployment goes smoothly.

Hardware Specifications

Note: this walkthrough uses Windows 10 (22H2) + WSL2/Debian 11 + Docker Desktop (v4.20.1).
(Figure: deployment environment specifications)
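
Sentry self-hosted is resource-hungry, and install.sh runs a minimum-requirements check (install/check-minimum-requirements.sh) before doing anything else. A quick way to see what the WSL2/Debian distribution actually has available, using standard Linux commands:

# CPU cores, memory and free disk space visible to WSL2/Debian
nproc
free -h
df -h /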

Docker Desktop Installation

Note: installing Docker Desktop with WSL2/Debian 11 is not covered here; please refer to the relevant documentation.

Once Docker Desktop is installed, open Settings and add the following to [Docker Engine]:

(Figure: Docker Engine settings)

The JSON configuration is shown below. Note that the Docker Engine configuration must be valid JSON, so the registry mirrors are listed without inline comments (they are, in order: the Docker China official mirror, USTC, NetEase, and Tsinghua):

{
  "builder": {
    "gc": {
      "defaultKeepStorage": "40GB",
      "enabled": true
    }
  },
  "experimental": true,
  "features": {
    "buildkit": true
  },
  "fixed-cidr-v6": "fd00:dead:beef:c0::/80",
  "ip6tables": true,
  "ipv6": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-file": "3",
    "max-size": "20m"
  },
  "registry-mirrors": [
    "https://registry.docker-cn.com", // Docker中国区官方
    "https://docker.mirrors.ustc.edu.cn", // 中国科学技术大学
    "http://hub-mirror.c.163.com", // 网易
    "https://mirrors.tuna.tsinghua.edu.cn" // 清华
  ]
}
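
After the daemon restarts you can confirm from the WSL2/Debian shell that it picked up the new configuration. The exact output layout varies between Docker versions, so a simple grep is used here:

# Verify that the mirror and experimental settings took effect
docker info | grep -iA 5 "Registry Mirrors"
docker info | grep -i "Experimental"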

Next, go to [Resources] and enable WSL integration for the Debian distribution, as shown below:

(Figure: Docker Desktop Resources settings)

After completing the steps above, click the [Apply & restart] button in the lower-right corner so that the configuration takes effect in Docker.

Related article:

  • Installing Docker and Docker Compose on Debian 12 / Ubuntu 22.04: https://u.sb/debian-install-docker/
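
Optionally, since all of Sentry's containers will run inside the WSL2 VM, it can help to raise the WSL2 resource limits up front. A minimal sketch of %UserProfile%\.wslconfig on the Windows side (the values are only examples; adjust them to your hardware and run wsl --shutdown afterwards for them to take effect):

[wsl2]
memory=12GB
processors=4
swap=8GB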

WSL2/Debian 11 Environment Preparation

Log in to the WSL2/Debian environment with Windows Terminal and perform the following steps:

  • Add repositories to /etc/apt/sources.list
jeff@master-jeff:/$ cat /etc/apt/sources.list
deb http://deb.debian.org/debian bullseye main
deb http://deb.debian.org/debian bullseye-updates main
deb http://security.debian.org/debian-security bullseye-security main
deb http://ftp.debian.org/debian bullseye-backports main

Debian 11 (bullseye) mirror sources for mainland China: https://www.cnblogs.com/liuguanglin/p/debian11_repo.html

  • Install git with apt
# Install git if it is not already present (we need it later to clone the repository)
sudo apt update && sudo apt install -y git
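
It is also worth confirming that git and the Docker CLI (including the Compose v2 plugin that Docker Desktop provides) are reachable from inside Debian before continuing:

git --version
docker --version
docker compose version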

Sentry Installation Steps

With the environment prepared, we can now move on to installing Sentry. The steps are as follows:

Deploying Sentry with Docker

  1. Clone the sentry self-hosted repository with git
# Clone Sentry's self-hosted repository (default branch)
git clone https://github.com/getsentry/self-hosted.git

# Or pin a specific release (23.6.1 was the latest release at the time of writing);
# a release tarball cannot be cloned with git, so check out the tag instead (see the tarball alternative below)
git clone --depth 1 --branch 23.6.1 https://github.com/getsentry/self-hosted.git
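
If git access to GitHub is unreliable, the same pinned version can also be fetched as a tarball (a sketch; the extracted directory name follows GitHub's <repo>-<tag> convention):

wget https://github.com/getsentry/self-hosted/archive/refs/tags/23.6.1.tar.gz
tar -xzf 23.6.1.tar.gz
cd self-hosted-23.6.1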
  2. cd into the self-hosted directory and run install.sh

This step installs, inside the Linux environment, all of the dependencies Sentry needs.

# Make install.sh executable
chmod +x install.sh

# Run the installer
sudo ./install.sh
# If the connection to GitHub is unstable, the commit check can be skipped
sudo ./install.sh --skip-commit-check
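
Optionally, environment overrides (event retention, mail host, and so on) can be kept in a separate env file instead of editing .env in place. The compose command used later in this article passes --env-file .env.custom, so that file needs to exist; a minimal sketch (SENTRY_EVENT_RETENTION_DAYS is a variable docker-compose.yml already reads):

# Keep local overrides in .env.custom, which is passed via --env-file later
cp .env .env.custom
echo "SENTRY_EVENT_RETENTION_DAYS=30" >> .env.custom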

Here is what the install.sh script looks like:

#!/usr/bin/env bash
set -eE

# Pre-pre-flight? 🤷
if [[ -n "$MSYSTEM" ]]; then
  echo "Seems like you are using an MSYS2-based system (such as Git Bash) which is not supported. Please use WSL instead."
  exit 1
fi

source install/_lib.sh

# Pre-flight. No impact yet.
source install/parse-cli.sh
source install/detect-platform.sh
source install/dc-detect-version.sh
source install/error-handling.sh
# We set the trap at the top level so that we get better tracebacks.
trap_with_arg cleanup ERR INT TERM EXIT
source install/check-latest-commit.sh
source install/check-minimum-requirements.sh

# Let's go! Start impacting things.
source install/turn-things-off.sh
source install/create-docker-volumes.sh
source install/ensure-files-from-examples.sh
source install/ensure-relay-credentials.sh
source install/generate-secret-key.sh
source install/update-docker-images.sh
source install/build-docker-images.sh
source install/install-wal2json.sh
source install/bootstrap-snuba.sh
source install/create-kafka-topics.sh
source install/upgrade-postgres.sh
source install/set-up-and-migrate-database.sh
source install/geoip.sh
source install/wrap-up.sh

And the docker-compose.yml file:

x-restart-policy: &restart_policy
  restart: unless-stopped
x-depends_on-healthy: &depends_on-healthy
  condition: service_healthy
x-depends_on-default: &depends_on-default
  condition: service_started
x-healthcheck-defaults: &healthcheck_defaults
  # Avoid setting the interval too small, as docker uses much more CPU than one would expect.
  # Related issues:
  # https://github.com/moby/moby/issues/39102
  # https://github.com/moby/moby/issues/39388
  # https://github.com/getsentry/self-hosted/issues/1000
  interval: "$HEALTHCHECK_INTERVAL"
  timeout: "$HEALTHCHECK_TIMEOUT"
  retries: $HEALTHCHECK_RETRIES
  start_period: 10s
x-sentry-defaults: &sentry_defaults
  <<: *restart_policy
  image: sentry-self-hosted-local
  # Set the platform to build for linux/arm64 when needed on Apple silicon Macs.
  platform: ${DOCKER_PLATFORM:-}
  build:
    context: ./sentry
    args:
      - SENTRY_IMAGE
  depends_on:
    redis:
      <<: *depends_on-healthy
    kafka:
      <<: *depends_on-healthy
    postgres:
      <<: *depends_on-healthy
    memcached:
      <<: *depends_on-default
    smtp:
      <<: *depends_on-default
    snuba-api:
      <<: *depends_on-default
    snuba-consumer:
      <<: *depends_on-default
    snuba-outcomes-consumer:
      <<: *depends_on-default
    snuba-sessions-consumer:
      <<: *depends_on-default
    snuba-transactions-consumer:
      <<: *depends_on-default
    snuba-subscription-consumer-events:
      <<: *depends_on-default
    snuba-subscription-consumer-transactions:
      <<: *depends_on-default
    snuba-replacer:
      <<: *depends_on-default
    symbolicator:
      <<: *depends_on-default
    vroom:
      <<: *depends_on-default
  entrypoint: "/etc/sentry/entrypoint.sh"
  command: ["run", "web"]
  environment:
    PYTHONUSERBASE: "/data/custom-packages"
    SENTRY_CONF: "/etc/sentry"
    SNUBA: "http://snuba-api:1218"
    VROOM: "http://vroom:8085"
    # Force everything to use the system CA bundle
    # This is mostly needed to support installing custom CA certs
    # This one is used by botocore
    DEFAULT_CA_BUNDLE: &ca_bundle "/etc/ssl/certs/ca-certificates.crt"
    # This one is used by requests
    REQUESTS_CA_BUNDLE: *ca_bundle
    # This one is used by grpc/google modules
    GRPC_DEFAULT_SSL_ROOTS_FILE_PATH_ENV_VAR: *ca_bundle
    # Leaving the value empty to just pass whatever is set
    # on the host system (or in the .env file)
    SENTRY_EVENT_RETENTION_DAYS:
    SENTRY_MAIL_HOST:
    SENTRY_MAX_EXTERNAL_SOURCEMAP_SIZE:
    OPENAI_API_KEY:
  volumes:
    - "sentry-data:/data"
    - "./sentry:/etc/sentry"
    - "./geoip:/geoip:ro"
    - "./certificates:/usr/local/share/ca-certificates:ro"
x-snuba-defaults: &snuba_defaults
  <<: *restart_policy
  depends_on:
    clickhouse:
      <<: *depends_on-healthy
    kafka:
      <<: *depends_on-healthy
    redis:
      <<: *depends_on-healthy
  image: "$SNUBA_IMAGE"
  environment:
    SNUBA_SETTINGS: self_hosted
    CLICKHOUSE_HOST: clickhouse
    DEFAULT_BROKERS: "kafka:9092"
    REDIS_HOST: redis
    UWSGI_MAX_REQUESTS: "10000"
    UWSGI_DISABLE_LOGGING: "true"
    # Leaving the value empty to just pass whatever is set
    # on the host system (or in the .env file)
    SENTRY_EVENT_RETENTION_DAYS:
services:
  smtp:
    <<: *restart_policy
    image: tianon/exim4
    hostname: "${SENTRY_MAIL_HOST:-}"
    volumes:
      - "sentry-smtp:/var/spool/exim4"
      - "sentry-smtp-log:/var/log/exim4"
  memcached:
    <<: *restart_policy
    image: "memcached:1.6.21-alpine"
    command: ["-I", "${SENTRY_MAX_EXTERNAL_SOURCEMAP_SIZE:-1M}"]
    healthcheck:
      <<: *healthcheck_defaults
      # From: https://stackoverflow.com/a/31877626/5155484
      test: echo stats | nc 127.0.0.1 11211
  redis:
    <<: *restart_policy
    image: "redis:6.2.12-alpine"
    healthcheck:
      <<: *healthcheck_defaults
      test: redis-cli ping
    volumes:
      - "sentry-redis:/data"
    ulimits:
      nofile:
        soft: 10032
        hard: 10032
  postgres:
    <<: *restart_policy
    # Using the same postgres version as Sentry dev for consistency purposes
    image: "postgres:14.5"
    healthcheck:
      <<: *healthcheck_defaults
      # Using default user "postgres" from sentry/sentry.conf.example.py or value of POSTGRES_USER if provided
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
    command:
      [
        "postgres",
        "-c",
        "wal_level=logical",
        "-c",
        "max_replication_slots=1",
        "-c",
        "max_wal_senders=1",
      ]
    environment:
      POSTGRES_HOST_AUTH_METHOD: "trust"
    entrypoint: /opt/sentry/postgres-entrypoint.sh
    volumes:
      - "sentry-postgres:/var/lib/postgresql/data"
      - type: bind
        read_only: true
        source: ./postgres/
        target: /opt/sentry/
  zookeeper:
    <<: *restart_policy
    image: "confluentinc/cp-zookeeper:5.5.7"
    environment:
      ZOOKEEPER_CLIENT_PORT: "2181"
      CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: "WARN"
      ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL: "WARN"
      KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=ruok"
    ulimits:
      nofile:
        soft: 4096
        hard: 4096
    volumes:
      - "sentry-zookeeper:/var/lib/zookeeper/data"
      - "sentry-zookeeper-log:/var/lib/zookeeper/log"
      - "sentry-secrets:/etc/zookeeper/secrets"
    healthcheck:
      <<: *healthcheck_defaults
      test:
        ["CMD-SHELL", 'echo "ruok" | nc -w 2 -q 2 localhost 2181 | grep imok']
  kafka:
    <<: *restart_policy
    depends_on:
      zookeeper:
        <<: *depends_on-healthy
    image: "confluentinc/cp-kafka:5.5.7"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
      KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: "1"
      KAFKA_LOG_RETENTION_HOURS: "24"
      KAFKA_MESSAGE_MAX_BYTES: "50000000" #50MB or bust
      KAFKA_MAX_REQUEST_SIZE: "50000000" #50MB on requests apparently too
      CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      KAFKA_LOG4J_LOGGERS: "kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN"
      KAFKA_LOG4J_ROOT_LOGLEVEL: "WARN"
      KAFKA_TOOLS_LOG4J_LOGLEVEL: "WARN"
    ulimits:
      nofile:
        soft: 4096
        hard: 4096
    volumes:
      - "sentry-kafka:/var/lib/kafka/data"
      - "sentry-kafka-log:/var/lib/kafka/log"
      - "sentry-secrets:/etc/kafka/secrets"
    healthcheck:
      <<: *healthcheck_defaults
      test: ["CMD-SHELL", "nc -z localhost 9092"]
      interval: 10s
      timeout: 10s
      retries: 30
  clickhouse:
    <<: *restart_policy
    image: clickhouse-self-hosted-local
    build:
      context: ./clickhouse
      args:
        BASE_IMAGE: "${CLICKHOUSE_IMAGE:-}"
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - "sentry-clickhouse:/var/lib/clickhouse"
      - "sentry-clickhouse-log:/var/log/clickhouse-server"
      - type: bind
        read_only: true
        source: ./clickhouse/config.xml
        target: /etc/clickhouse-server/config.d/sentry.xml
    environment:
      # This limits Clickhouse's memory to 30% of the host memory
      # If you have high volume and your search return incomplete results
      # You might want to change this to a higher value (and ensure your host has enough memory)
      MAX_MEMORY_USAGE_RATIO: 0.3
    healthcheck:
      test: [
          "CMD-SHELL",
          # Manually override any http_proxy envvar that might be set, because
          # this wget does not support no_proxy. See:
          # https://github.com/getsentry/self-hosted/issues/1537
          "http_proxy='' wget -nv -t1 --spider 'http://localhost:8123/' || exit 1",
        ]
      interval: 10s
      timeout: 10s
      retries: 30
  geoipupdate:
    image: "maxmindinc/geoipupdate:v4.7.1"
    # Override the entrypoint in order to avoid using envvars for config.
    # Futz with settings so we can keep mmdb and conf in same dir on host
    # (image looks for them in separate dirs by default).
    entrypoint:
      ["/usr/bin/geoipupdate", "-d", "/sentry", "-f", "/sentry/GeoIP.conf"]
    volumes:
      - "./geoip:/sentry"
  snuba-api:
    <<: *snuba_defaults
  # Kafka consumer responsible for feeding events into Clickhouse
  snuba-consumer:
    <<: *snuba_defaults
    command: consumer --storage errors --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
  # Kafka consumer responsible for feeding outcomes into Clickhouse
  # Use --auto-offset-reset=earliest to recover up to 7 days of TSDB data
  # since we did not do a proper migration
  snuba-outcomes-consumer:
    <<: *snuba_defaults
    command: consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750 --no-strict-offset-reset
  # Kafka consumer responsible for feeding session data into Clickhouse
  snuba-sessions-consumer:
    <<: *snuba_defaults
    command: consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
  # Kafka consumer responsible for feeding transactions data into Clickhouse
  snuba-transactions-consumer:
    <<: *snuba_defaults
    command: consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
  snuba-replays-consumer:
    <<: *snuba_defaults
    command: consumer --storage replays --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
  snuba-replacer:
    <<: *snuba_defaults
    command: replacer --storage errors --auto-offset-reset=latest --no-strict-offset-reset
  snuba-subscription-consumer-events:
    <<: *snuba_defaults
    command: subscriptions-scheduler-executor --dataset events --entity events --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-events-subscriptions-consumers --followed-consumer-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60 --stale-threshold-seconds=900
  snuba-subscription-consumer-sessions:
    <<: *snuba_defaults
    command: subscriptions-scheduler-executor --dataset sessions --entity sessions --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-sessions-subscriptions-consumers --followed-consumer-group=sessions-group --delay-seconds=60 --schedule-ttl=60 --stale-threshold-seconds=900
  snuba-subscription-consumer-transactions:
    <<: *snuba_defaults
    command: subscriptions-scheduler-executor --dataset transactions --entity transactions --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-transactions-subscriptions-consumers --followed-consumer-group=transactions_group --delay-seconds=60 --schedule-ttl=60 --stale-threshold-seconds=900
  snuba-profiling-profiles-consumer:
    <<: *snuba_defaults
    command: consumer --storage profiles --auto-offset-reset=latest --max-batch-time-ms 1000 --no-strict-offset-reset
  snuba-profiling-functions-consumer:
    <<: *snuba_defaults
    command: consumer --storage functions_raw --auto-offset-reset=latest --max-batch-time-ms 1000 --no-strict-offset-reset
  symbolicator:
    <<: *restart_policy
    image: "$SYMBOLICATOR_IMAGE"
    volumes:
      - "sentry-symbolicator:/data"
      - type: bind
        read_only: true
        source: ./symbolicator
        target: /etc/symbolicator
    command: run -c /etc/symbolicator/config.yml
  symbolicator-cleanup:
    <<: *restart_policy
    image: symbolicator-cleanup-self-hosted-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: "$SYMBOLICATOR_IMAGE"
    command: '"55 23 * * * gosu symbolicator symbolicator cleanup"'
    volumes:
      - "sentry-symbolicator:/data"
  web:
    <<: *sentry_defaults
    ulimits:
      nofile:
        soft: 4096
        hard: 4096
    healthcheck:
      <<: *healthcheck_defaults
      test:
        - "CMD"
        - "/bin/bash"
        - "-c"
        # Courtesy of https://unix.stackexchange.com/a/234089/108960
        - 'exec 3<>/dev/tcp/127.0.0.1/9000 && echo -e "GET /_health/ HTTP/1.1\r\nhost: 127.0.0.1\r\n\r\n" >&3 && grep ok -s -m 1 <&3'
  cron:
    <<: *sentry_defaults
    command: run cron
  worker:
    <<: *sentry_defaults
    command: run worker
  events-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-events --consumer-group ingest-consumer
  attachments-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-attachments --consumer-group ingest-consumer
  transactions-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-transactions --consumer-group ingest-consumer
  ingest-replay-recordings:
    <<: *sentry_defaults
    command: run consumer ingest-replay-recordings --consumer-group ingest-replay-recordings
  ingest-profiles:
    <<: *sentry_defaults
    command: run consumer --no-strict-offset-reset ingest-profiles --consumer-group ingest-profiles
  post-process-forwarder-errors:
    <<: *sentry_defaults
    command: run consumer post-process-forwarder-errors --consumer-group post-process-forwarder --synchronize-commit-log-topic=snuba-commit-log --synchronize-commit-group=snuba-consumers
  post-process-forwarder-transactions:
    <<: *sentry_defaults
    command: run consumer post-process-forwarder-transactions --consumer-group post-process-forwarder --synchronize-commit-log-topic=snuba-transactions-commit-log --synchronize-commit-group transactions_group
  subscription-consumer-events:
    <<: *sentry_defaults
    command: run consumer events-subscription-results --consumer-group query-subscription-consumer
  subscription-consumer-transactions:
    <<: *sentry_defaults
    command: run consumer transactions-subscription-results --consumer-group query-subscription-consumer
  sentry-cleanup:
    <<: *sentry_defaults
    image: sentry-cleanup-self-hosted-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: sentry-self-hosted-local
    entrypoint: "/entrypoint.sh"
    command: '"0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"'
  nginx:
    <<: *restart_policy
    ports:
      - "$SENTRY_BIND:80/tcp"
    image: "nginx:1.22.0-alpine"
    volumes:
      - type: bind
        read_only: true
        source: ./nginx
        target: /etc/nginx
      - sentry-nginx-cache:/var/cache/nginx
    depends_on:
      - web
      - relay
  relay:
    <<: *restart_policy
    image: "$RELAY_IMAGE"
    volumes:
      - type: bind
        read_only: true
        source: ./relay
        target: /work/.relay
      - type: bind
        read_only: true
        source: ./geoip
        target: /geoip
    depends_on:
      kafka:
        <<: *depends_on-healthy
      redis:
        <<: *depends_on-healthy
      web:
        <<: *depends_on-healthy
  vroom:
    <<: *restart_policy
    image: "$VROOM_IMAGE"
    environment:
      SENTRY_KAFKA_BROKERS_PROFILING: "kafka:9092"
      SENTRY_KAFKA_BROKERS_OCCURRENCES: "kafka:9092"
      SENTRY_BUCKET_PROFILES: file://localhost//var/lib/sentry-profiles
      SENTRY_SNUBA_HOST: "http://snuba-api:1218"
    volumes:
      - sentry-vroom:/var/lib/sentry-profiles
    depends_on:
      kafka:
        <<: *depends_on-healthy
volumes:
  # These store application data that should persist across restarts.
  sentry-data:
    external: true
  sentry-postgres:
    external: true
  sentry-redis:
    external: true
  sentry-zookeeper:
    external: true
  sentry-kafka:
    external: true
  sentry-clickhouse:
    external: true
  sentry-symbolicator:
    external: true
  # This volume stores profiles and should be persisted.
  # Not being external will still persist data across restarts.
  # It won't persist if someone does a docker compose down -v.
  sentry-vroom:
  # These store ephemeral data that needn't persist across restarts.
  # That said, volumes will be persisted across restarts until they are deleted.
  sentry-secrets:
  sentry-smtp:
  sentry-nginx-cache:
  sentry-zookeeper-log:
  sentry-kafka-log:
  sentry-smtp-log:
  sentry-clickhouse-log:
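
One thing worth noting in the compose file above is that nginx is the only service that publishes a port ("$SENTRY_BIND:80/tcp"), so the listen address and port are controlled by the SENTRY_BIND variable in the env file rather than by editing docker-compose.yml. A sketch of overriding it in .env.custom (the stock .env defaults to port 9000; the values below are only examples):

# .env.custom: change the published port / bind address
SENTRY_BIND=9000
# or bind explicitly to a single interface:
# SENTRY_BIND=127.0.0.1:9000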

A small aside: on Linux, ls colors directory entries differently depending on the file type.

(Figure: colored listing of the self-hosted directory)

  • blue: directories;
  • green: executable files and programs;
  • red: compressed archives and package files;
  • light blue: symbolic links;
  • grey: other files.
  3. Run docker compose

From the self-hosted directory, start the stack with docker compose:

sudo docker compose --env-file .env.custom up -d

Barring surprises, once everything is up you can simply open http://127.0.0.1:9000/ in a browser; WSL2 shares its ports with Windows, so localhost works from the Windows side as well.
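
Before opening the browser, you can check from the WSL shell that the services have come up. The /_health/ path below is taken from the web service's healthcheck in docker-compose.yml; whether nginx exposes it externally depends on the bundled nginx config, so treat this as a sketch:

docker compose ps
curl -sf http://127.0.0.1:9000/_health/ && echo "web is up"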

(Figure: Sentry login page)

The installation process prompts you to create an account (email and password); log in here with those credentials.
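
After logging in, create a project and copy its DSN (project settings, Client Keys). As a quick smoke test you can push an event without any SDK by POSTing to the store endpoint; the project id and public key below are placeholders that must be replaced with the values from your own DSN:

curl -X POST "http://127.0.0.1:9000/api/<project_id>/store/" \
  -H "Content-Type: application/json" \
  -H "X-Sentry-Auth: Sentry sentry_version=7, sentry_client=curl/1.0, sentry_key=<public_key>" \
  -d '{"message": "hello from curl", "level": "error"}'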

Note: installing Sentry takes a while. Pulling dependencies and initializing and starting the images is affected by network conditions and other factors, so be patient.

For installation in other environments, see these articles:

  • https://www.jb51.net/article/256519.htm
  • Deploying Sentry on Ubuntu: https://www.cnblogs.com/Du704/p/15184228.html

Walkthrough

At this point we can list the images that Sentry needs in Docker.

jeff@master-jeff:/mnt/c/Users/Jeffery.Chai$ docker image ls
REPOSITORY                               TAG             IMAGE ID       CREATED         SIZE
symbolicator-cleanup-self-hosted-local   latest          0a78e379e527   2 days ago      132MB
sentry-cleanup-self-hosted-local         latest          4e4186d222ea   2 days ago      949MB
<none>                                   <none>          1c5c191621d5   2 days ago      947MB
<none>                                   <none>          2e56488d9ca7   2 days ago      947MB
<none>                                   <none>          6c3a38538842   2 days ago      947MB
<none>                                   <none>          f610c450eb14   2 days ago      947MB
<none>                                   <none>          4fe6a3a4125e   2 days ago      947MB
<none>                                   <none>          896fd940c3b7   2 days ago      947MB
<none>                                   <none>          0a0db21d131c   2 days ago      947MB
<none>                                   <none>          07b4e8a187ba   2 days ago      947MB
<none>                                   <none>          04b4e5dc16be   2 days ago      947MB
sentry-self-hosted-local                 latest          075c3b95d316   2 days ago      947MB
<none>                                   <none>          9c05ee347871   2 days ago      947MB
<none>                                   <none>          6f0d3e316ecf   2 days ago      947MB
sentry-self-hosted-jq-local              latest          90ad6f6a6eb6   2 days ago      82.5MB
getsentry/sentry                         nightly         db241453686e   2 days ago      947MB
getsentry/relay                          nightly         363ed39f2234   2 days ago      254MB
getsentry/snuba                          nightly         e0fd19143e62   2 days ago      993MB
getsentry/symbolicator                   nightly         cb9fde9f635f   2 days ago      131MB
getsentry/vroom                          nightly         f44c0da3f4a9   3 days ago      42MB
busybox                                  latest          5242710cbd55   4 days ago      4.26MB
memcached                                1.6.21-alpine   1f7da6310656   11 days ago     9.7MB
redis                                    6.2.12-alpine   b9cad9a5aff9   2 weeks ago     27.4MB
tianon/exim4                             latest          6de8b48bcaf0   2 weeks ago     158MB
postgres                                 14.5            cefd1c9e490c   8 months ago    376MB
nginx                                    1.22.0-alpine   5685937b6bc1   8 months ago    23.5MB
confluentinc/cp-kafka                    5.5.7           b362671f2bc0   17 months ago   737MB
confluentinc/cp-zookeeper                5.5.7           22b646e1afd0   17 months ago   737MB
curlimages/curl                          7.77.0          e062233fb4a9   2 years ago     8.26MB
maxmindinc/geoipupdate                   v4.7.1          8ec32cc727c7   2 years ago     10.6MB
clickhouse-self-hosted-local             latest          3e6108f87619   3 years ago     497MB

Then cd into the self-hosted directory and run ls; among the files there is an installation log named sentry_install_log-xxx.txt.

Viewing it with cat, the end of the log shows the following output:

-----------------------------------------------------------------

You're all done! Run the following command to get Sentry running:

  docker compose up -d

-----------------------------------------------------------------

This confirms that step 2 above has completed, and the containerized deployment can now be started. Running docker compose up -d produces the following output:

(Figure: output of docker compose up -d)

Use docker compose ls to see the running Compose project.

jeff@master-jeff:~/self-hosted$ docker compose ls
NAME                 STATUS              CONFIG FILES
sentry-self-hosted   running(37)         /home/jeff/self-hosted/docker-compose.yml
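
For day-to-day checks, the usual Compose sub-commands apply from the self-hosted directory (standard Docker Compose v2 commands, nothing Sentry-specific):

docker compose ps                        # per-service status and health
docker compose logs -f --tail=100 web    # follow the logs of the web service
docker compose restart worker            # restart a single service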

In a follow-up article we will look at how to connect an application to Sentry using the .NET SDK; stay tuned.

Summary

When installing Sentry, make sure the prerequisites are fully in place and that the machine meets the required specifications. Sentry starts a large number of services and is a relatively heavyweight product, so adequate runtime resources are a must: ./install.sh begins with an environment check, and if the machine falls short it will refuse to continue with the remaining steps.

(Figure: runtime resource usage)
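
Once the stack is running, the actual per-container resource consumption can be checked with docker stats, which is also a useful sanity check when sizing the host:

docker stats --no-stream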
