Harbor Deployment


Building a Harbor image registry, version v2.10.3

Table of Contents

  • I. Installing Harbor with Docker
    • 1. Configure Harbor for HTTP access
      • 1.1 Download the Harbor offline installer
      • 1.2 Modify the configuration file
      • 1.3 Run
      • 1.4 Access
    • 2. [Optional] Configure Harbor for HTTPS access
      • 2.1 Self-signed certificates
      • 2.2 Modify the configuration file
      • 2.3 Edit the hosts file
      • 2.4 Run
      • 2.5 Access
  • II. Installing Harbor on Kubernetes
    • 1. Install NFS
    • 2. Install Helm
    • 3. Install Ingress-nginx
      • 3.1 Download the chart
      • 3.2 Modify the configuration file
      • 3.3 Pull the images
      • 3.4 Install
    • 4. Install Harbor (on Kubernetes)
      • 4.1 Download the chart
      • 4.2 On the NFS server
      • 4.3 Create the PVs
      • 4.4 Full configuration file
      • 4.5 Install
      • 4.6 Check that the containers are running

Environment Overview

Note: the NFS service will be installed on 192.168.100.150. NFS provides storage for the Kubernetes cluster, and the Docker-based deployment (Docker, docker-compose) also runs on this machine.

|                        | Node 1                              | Node 2                              | Node 3                              | Node 4                              |
| ---------------------- | ----------------------------------- | ----------------------------------- | ----------------------------------- | ----------------------------------- |
| OS                     | CentOS 7                            | CentOS 7                            | CentOS 7                            | CentOS 7                            |
| Kernel                 | Linux 3.10.0-1160.119.1.el7.x86_64  | Linux 3.10.0-1160.119.1.el7.x86_64  | Linux 3.10.0-1160.119.1.el7.x86_64  | Linux 3.10.0-1160.119.1.el7.x86_64  |
| IP                     | 192.168.100.100                     | 192.168.100.200                     | 192.168.100.250                     | 192.168.100.150                     |
| Docker version         | —                                   | —                                   | —                                   | 20.10.15                            |
| docker-compose version | —                                   | —                                   | —                                   | 1.28.2                              |
| Kubernetes version     | 1.28.10                             | 1.28.10                             | 1.28.10                             | —                                   |

This document does not cover installing Kubernetes, Docker, or docker-compose.

Installing Docker and Docker-Compose on a Linux (CentOS) system

Kubernetes installation guide, v1.28.10

I. Installing Harbor with Docker

1. Configure Harbor for HTTP access

1.1 Download the Harbor offline installer

# Download the offline installer package
wget https://github.com/goharbor/harbor/releases/download/v2.10.3/harbor-offline-installer-v2.10.3.tgz

# Extract
mkdir -p /opt/software

tar -zxvf harbor-offline-installer-v2.10.3.tgz -C /opt/software

1.2 Modify the configuration file

cd /opt/software/harbor

# Copy the template configuration file
cp harbor.yml.tmpl harbor.yml

# Only a few settings need to change

hostname: # set to this machine's IP address

# For HTTP-only access, comment out the https section (port, certificate, private_key)
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /data/cert/harbor/harbor.dongdong.com.cert
  private_key: /data/cert/harbor/harbor.dongdong.com.key
  # enable strong ssl ciphers (default: false)
  # strong_ssl_ciphers: false

# Port 80 by default
external_url: # http://<this machine's IP>:<port>

# Optionally change it, or keep the default
harbor_admin_password: # Harbor12345

# That is all that needs to change

1.3 Run

# Enter the directory
cd /opt/software/harbor

# Run the installation script
sh install.sh
# Wait for it to finish
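
Once install.sh finishes, the stack is managed by docker-compose from the same directory. A quick sanity check (a sketch, assuming docker-compose 1.x as listed in the environment table):

# Check that every Harbor component came up
cd /opt/software/harbor
docker-compose ps
# Containers such as harbor-core, harbor-portal, registry, redis, harbor-db, harbor-jobservice, nginx and harbor-log should all show Up (healthy)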

1.4 Access

# In a browser, open http://<this machine's IP>:<port>

http://192.168.100.150

Default username / password:
admin / Harbor12345


# On the arm64 architecture:
# change the owner and group of the common directory
# chmod 777 -R common
# chown 1001.1002 -R common
# and change the owner and group of the passwd and root.crt files under common/config/registry
# chown 10000.10000 passwd root.crt
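
For Docker clients to push to this HTTP-only registry, the Docker daemon must be told to treat it as an insecure registry. A minimal sketch (assumes the registry listens on 192.168.100.150 port 80 and uses Harbor's default library project; merge the setting into any existing daemon.json rather than overwriting it):

# On each Docker client
cat > /etc/docker/daemon.json <<EOF
{
  "insecure-registries": ["192.168.100.150"]
}
EOF
systemctl restart docker

# Log in and push a test image
docker login 192.168.100.150 -u admin -p Harbor12345
docker pull busybox:latest
docker tag busybox:latest 192.168.100.150/library/busybox:latest
docker push 192.168.100.150/library/busybox:latest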

2. [Optional] Configure Harbor for HTTPS access

2.1 Self-signed certificates

mkdir -p /data/cert/harbor

cd /data/cert/harbor

# Generate the CA
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -sha512 -days 3650 -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=harbor.dongdong.com" -key ca.key -out ca.crt


# Generate the certificate signing request (CSR)
openssl genrsa -out harbor.dongdong.com.key 4096
openssl req -sha512 -new -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=harbor.dongdong.com" -key harbor.dongdong.com.key -out harbor.dongdong.com.csr


cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.dongdong.com
EOF


# Sign the certificate and convert it to the .cert file Harbor expects
openssl x509 -req -sha512 -days 3650 -extfile v3.ext -CA ca.crt -CAkey ca.key -CAcreateserial -in harbor.dongdong.com.csr -out harbor.dongdong.com.crt
openssl x509 -inform PEM -in harbor.dongdong.com.crt -out harbor.dongdong.com.cert


cp harbor.dongdong.com.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust
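
Before wiring the certificate into Harbor, it is worth confirming that the generated files chain correctly; a quick check using the paths above:

cd /data/cert/harbor
# The server certificate should verify against the CA
openssl verify -CAfile ca.crt harbor.dongdong.com.crt
# Inspect the SAN that clients will match against (should contain harbor.dongdong.com)
openssl x509 -in harbor.dongdong.com.crt -noout -text | grep -A1 'Subject Alternative Name'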

2.2 Modify the configuration file

hostname: harbor.dongdong.com

# Edit the harbor.yml file
# Enable (uncomment) the following settings
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /data/cert/harbor/harbor.dongdong.com.cert
  private_key: /data/cert/harbor/harbor.dongdong.com.key
  # enable strong ssl ciphers (default: false)
  # strong_ssl_ciphers: false
external_url: https://harbor.dongdong.com

2.3 Edit the hosts file

# Map the IP address to the domain name
vi /etc/hosts

192.168.100.150 harbor.dongdong.com

2.4 Run

# Enter the directory
cd /opt/software/harbor

# Run the installation script
sh install.sh
# Wait for it to finish

2.5 Access

# In a browser, open https://<the configured domain>
https://harbor.dongdong.com

Default username / password:
admin / Harbor12345
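
For docker login and docker push to work against the self-signed HTTPS endpoint, each Docker client needs to trust the CA and resolve the domain (the hosts entry from section 2.3). A minimal sketch (assumes ca.crt from section 2.1 can be copied over, e.g. with scp):

# On each Docker client
mkdir -p /etc/docker/certs.d/harbor.dongdong.com
scp 192.168.100.150:/data/cert/harbor/ca.crt /etc/docker/certs.d/harbor.dongdong.com/ca.crt
echo "192.168.100.150 harbor.dongdong.com" >> /etc/hosts

docker login harbor.dongdong.com -u admin -p Harbor12345
docker pull busybox:latest
docker tag busybox:latest harbor.dongdong.com/library/busybox:latest
docker push harbor.dongdong.com/library/busybox:latest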

II. Installing Harbor on Kubernetes

1. Install NFS

# Install on 192.168.100.150

# Main package, provides the NFS file system
yum -y install nfs-utils
# Provides the RPC protocol
yum -y install rpcbind

# Start the services
systemctl restart rpcbind
systemctl restart nfs

systemctl enable nfs
systemctl enable rpcbind

# Check the exports
exportfs -v

# Show the shares on the storage server
showmount -e localhost

2. Install Helm

# Installing on the master node is enough
# Download helm: https://get.helm.sh/helm-v3.15.2-linux-amd64.tar.gz
# If wget is missing: yum install -y wget
wget https://get.helm.sh/helm-v3.15.2-linux-amd64.tar.gz

# Extract
tar -zxvf helm-v3.15.2-linux-amd64.tar.gz

mv linux-amd64/helm /usr/bin

# Print the version info
helm version

version.BuildInfo{Version:"v3.15.2", GitCommit:"1a500d5625419a524fdae4b33de351cc4f58ec35", GitTreeState:"clean", GoVersion:"go1.22.4"}

3. Install Ingress-nginx

3.1 Download the chart

# Add the official repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Create a directory
mkdir -p /opt/software/


# Download the chart package
cd /opt/software/ && wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.10.1/ingress-nginx-4.10.1.tgz

# Extract
tar -zxvf ingress-nginx-4.10.1.tgz

3.2 Modify the configuration file


# Enter the chart directory to edit it
cd /opt/software/ingress-nginx


### Modified configuration file (values.yaml), shown in full below
### (the required images are pulled ahead of time in section 3.3)
## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/index.md
##

## Overrides for generated resource names
# See templates/_helpers.tpl
# nameOverride:
# fullnameOverride:

# -- Override the deployment namespace; defaults to .Release.Namespace
namespaceOverride: "ingress-nginx"
## Labels to apply to all resources
##
commonLabels: {}
# scmhash: abc123
# myLabel: aakkmd

controller:
  name: controller
  enableAnnotationValidations: false
  image:
    ## Keep false as default for now!
    chroot: false
    registry: registry.k8s.io
    image: ingress-nginx/controller
    ## for backwards compatibility consider setting the full image url via the repository value below
    ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
    ## repository:
    tag: "v1.10.1"
    #digest: sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e # commented out
    #digestChroot: sha256:c155954116b397163c88afcb3252462771bd7867017e8a17623e83601bab7ac7 # commented out
    pullPolicy: IfNotPresent
    runAsNonRoot: true
    # www-data -> uid 101
    runAsUser: 101
    allowPrivilegeEscalation: false
    seccompProfile:
      type: RuntimeDefault
    readOnlyRootFilesystem: false
  # -- Use an existing PSP instead of creating one
  existingPsp: ""
  # -- Configures the controller container name
  containerName: controller
  # -- Configures the ports that the nginx-controller listens on
  containerPort:
    http: 80
    https: 443
  # -- Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
  config: {}
  # -- Annotations to be added to the controller config configuration configmap.
  configAnnotations: {}
  # -- Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headers
  proxySetHeaders: {}
  # -- Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
  addHeaders: {}
  # -- Optionally customize the pod dnsConfig.
  dnsConfig: {}
  # -- Optionally customize the pod hostAliases.
  hostAliases: []
  # - ip: 127.0.0.1
  #   hostnames:
  #   - foo.local
  #   - bar.local
  # - ip: 10.1.2.3
  #   hostnames:
  #   - foo.remote
  #   - bar.remote
  # -- Optionally customize the pod hostname.
  hostname: {}
  # -- Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'.
  # By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller
  # to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.
  dnsPolicy: ClusterFirstWithHostNet
  # -- Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
  # Ingress status was blank because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply
  reportNodeInternalIp: false
  # -- Process Ingress objects without ingressClass annotation/ingressClassName field
  # Overrides value for --watch-ingress-without-class flag of the controller binary
  # Defaults to false
  watchIngressWithoutClass: false
  # -- Process IngressClass per name (additionally as per spec.controller).
  ingressClassByName: false
  # -- This configuration enables Topology Aware Routing feature, used together with service annotation service.kubernetes.io/topology-mode="auto"
  # Defaults to false
  enableTopologyAwareRouting: false
  # -- This configuration defines if Ingress Controller should allow users to set
  # their own *-snippet annotations, otherwise this is forbidden / dropped
  # when users add those annotations.
  # Global snippets in ConfigMap are still respected
  allowSnippetAnnotations: false
  # -- Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
  # since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
  # is merged
  hostNetwork: true
  ## Use host ports 80 and 443
  ## Disabled by default
  hostPort:
    # -- Enable 'hostPort' or not
    enabled: false
    ports:
      # -- 'hostPort' http port
      http: 80
      # -- 'hostPort' https port
      https: 443
  # NetworkPolicy for controller component.
  networkPolicy:
    # -- Enable 'networkPolicy' or not
    enabled: false
  # -- Election ID to use for status update, by default it uses the controller name combined with a suffix of 'leader'
  electionID: ""
  # -- This section refers to the creation of the IngressClass resource.
  # IngressClasses are immutable and cannot be changed after creation.
  # We do not support namespaced IngressClasses, yet, so a ClusterRole and a ClusterRoleBinding is required.
  ingressClassResource:
    # -- Name of the IngressClass
    name: nginx
    # -- Create the IngressClass or not
    enabled: true
    # -- If true, Ingresses without `ingressClassName` get assigned to this IngressClass on creation.
    # Ingress creation gets rejected if there are multiple default IngressClasses.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#default-ingress-class
    default: false
    # -- Controller of the IngressClass. An Ingress Controller looks for IngressClasses it should reconcile by this value.
    # This value is also being set as the `--controller-class` argument of this Ingress Controller.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class
    controllerValue: k8s.io/ingress-nginx
    # -- A link to a custom resource containing additional configuration for the controller.
    # This is optional if the controller consuming this IngressClass does not require additional parameters.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class
    parameters: {}
    # parameters:
    #   apiGroup: k8s.example.com
    #   kind: IngressParameters
    #   name: external-lb
  # -- For backwards compatibility with ingress.class annotation, use ingressClass.
  # Algorithm is as follows, first ingressClassName is considered, if not present, controller looks for ingress.class annotation
  ingressClass: nginx
  # -- Labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  # -- Security context for controller pods
  podSecurityContext: {}
  # -- sysctls for controller pods
  ## Ref: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
  sysctls: {}
  # sysctls:
  #   "net.core.somaxconn": "8192"
  # -- Security context for controller containers
  containerSecurityContext: {}
  # -- Allows customization of the source of the IP address or FQDN to report
  # in the ingress status field. By default, it reads the information provided
  # by the service. If disable, the status field reports the IP address of the
  # node or nodes where an ingress controller pod is running.
  publishService:
    # -- Enable 'publishService' or not
    enabled: true
    # -- Allows overriding of the publish service to bind to
    # Must be <namespace>/<service_name>
    pathOverride: ""
  # Limit the scope of the controller to a specific namespace
  scope:
    # -- Enable 'scope' or not
    enabled: false
    # -- Namespace to limit the controller to; defaults to $(POD_NAMESPACE)
    namespace: ""
    # -- When scope.enabled == false, instead of watching all namespaces, we watching namespaces whose labels
    # only match with namespaceSelector. Format like foo=bar. Defaults to empty, means watching all namespaces.
    namespaceSelector: ""
  # -- Allows customization of the configmap / nginx-configmap namespace; defaults to $(POD_NAMESPACE)
  configMapNamespace: ""
  tcp:
    # -- Allows customization of the tcp-services-configmap; defaults to $(POD_NAMESPACE)
    configMapNamespace: ""
    # -- Annotations to be added to the tcp config configmap
    annotations: {}
  udp:
    # -- Allows customization of the udp-services-configmap; defaults to $(POD_NAMESPACE)
    configMapNamespace: ""
    # -- Annotations to be added to the udp config configmap
    annotations: {}
  # -- Maxmind license key to download GeoLite2 Databases.
  ## https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases
  maxmindLicenseKey: ""
  # -- Additional command line arguments to pass to Ingress-Nginx Controller
  # E.g. to specify the default SSL certificate you can use
  extraArgs: {}
  ## extraArgs:
  ##   default-ssl-certificate: "<namespace>/<secret_name>"
  ##   time-buckets: "0.005,0.01,0.025,0.05,0.1,0.25,0.5,1,2.5,5,10"
  ##   length-buckets: "10,20,30,40,50,60,70,80,90,100"
  ##   size-buckets: "10,100,1000,10000,100000,1e+06,1e+07"

  # -- Additional environment variables to set
  extraEnvs: []
  # extraEnvs:
  #   - name: FOO
  #     valueFrom:
  #       secretKeyRef:
  #         key: FOO
  #         name: secret-resource

  # -- Use a `DaemonSet` or `Deployment`
  kind: Deployment
  # -- Annotations to be added to the controller Deployment or DaemonSet
  ##
  annotations: {}
  #  keel.sh/pollSchedule: "@every 60m"

  # -- Labels to be added to the controller Deployment or DaemonSet and other resources that do not have option to specify labels
  ##
  labels: {}
  #  keel.sh/policy: patch
  #  keel.sh/trigger: poll

  # -- The update strategy to apply to the Deployment or DaemonSet
  ##
  updateStrategy: {}
  #  rollingUpdate:
  #    maxUnavailable: 1
  #  type: RollingUpdate

  # -- `minReadySeconds` to avoid killing pods before we are ready
  ##
  minReadySeconds: 0
  # -- Node tolerations for server scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  tolerations: []
  #  - key: "key"
  #    operator: "Equal|Exists"
  #    value: "value"
  #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

  # -- Affinity and anti-affinity rules for server scheduling to nodes
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
  # # An example of preferred pod anti-affinity, weight is in the range 1-100
  # podAntiAffinity:
  #   preferredDuringSchedulingIgnoredDuringExecution:
  #   - weight: 100
  #     podAffinityTerm:
  #       labelSelector:
  #         matchExpressions:
  #         - key: app.kubernetes.io/name
  #           operator: In
  #           values:
  #           - ingress-nginx
  #         - key: app.kubernetes.io/instance
  #           operator: In
  #           values:
  #           - ingress-nginx
  #         - key: app.kubernetes.io/component
  #           operator: In
  #           values:
  #           - controller
  #       topologyKey: kubernetes.io/hostname

  # # An example of required pod anti-affinity
  # podAntiAffinity:
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #   - labelSelector:
  #       matchExpressions:
  #       - key: app.kubernetes.io/name
  #         operator: In
  #         values:
  #         - ingress-nginx
  #       - key: app.kubernetes.io/instance
  #         operator: In
  #         values:
  #         - ingress-nginx
  #       - key: app.kubernetes.io/component
  #         operator: In
  #         values:
  #         - controller
  #     topologyKey: "kubernetes.io/hostname"

  # -- Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in.
  ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
  ##
  topologySpreadConstraints: []
  # - labelSelector:
  #     matchLabels:
  #       app.kubernetes.io/name: '{{ include "ingress-nginx.name" . }}'
  #       app.kubernetes.io/instance: '{{ .Release.Name }}'
  #       app.kubernetes.io/component: controller
  #   topologyKey: topology.kubernetes.io/zone
  #   maxSkew: 1
  #   whenUnsatisfiable: ScheduleAnyway
  # - labelSelector:
  #     matchLabels:
  #       app.kubernetes.io/name: '{{ include "ingress-nginx.name" . }}'
  #       app.kubernetes.io/instance: '{{ .Release.Name }}'
  #       app.kubernetes.io/component: controller
  #   topologyKey: kubernetes.io/hostname
  #   maxSkew: 1
  #   whenUnsatisfiable: ScheduleAnyway

  # -- `terminationGracePeriodSeconds` to avoid killing pods before we are ready
  ## wait up to five minutes for the drain of connections
  ##
  terminationGracePeriodSeconds: 300
  # -- Node labels for controller pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
  ##
  nodeSelector:
    kubernetes.io/os: linux
  ## Liveness and readiness probe values
  ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  ## startupProbe:
  ##   httpGet:
  ##     # should match container.healthCheckPath
  ##     path: "/healthz"
  ##     port: 10254
  ##     scheme: HTTP
  ##   initialDelaySeconds: 5
  ##   periodSeconds: 5
  ##   timeoutSeconds: 2
  ##   successThreshold: 1
  ##   failureThreshold: 5
  livenessProbe:
    httpGet:
      # should match container.healthCheckPath
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 5
  readinessProbe:
    httpGet:
      # should match container.healthCheckPath
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  # -- Path of the health check endpoint. All requests received on the port defined by
  # the healthz-port parameter are forwarded internally to this path.
  healthCheckPath: "/healthz"
  # -- Address to bind the health check endpoint.
  # It is better to set this option to the internal node address
  # if the Ingress-Nginx Controller is running in the `hostNetwork: true` mode.
  healthCheckHost: ""
  # -- Annotations to be added to controller pods
  ##
  podAnnotations: {}
  replicaCount: 1
  # -- Minimum available pods set in PodDisruptionBudget.
  # Define either 'minAvailable' or 'maxUnavailable', never both.
  minAvailable: 1
  # -- Maximum unavailable pods set in PodDisruptionBudget. If set, 'minAvailable' is ignored.
  # maxUnavailable: 1

  ## Define requests resources to avoid probe issues due to CPU utilization in busy nodes
  ## ref: https://github.com/kubernetes/ingress-nginx/issues/4735#issuecomment-551204903
  ## Ideally, there should be no limits.
  ## https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/
  resources:
    ##  limits:
    ##    cpu: 100m
    ##    memory: 90Mi
    requests:
      cpu: 100m
      memory: 90Mi
  # Mutually exclusive with keda autoscaling
  autoscaling:
    enabled: false
    annotations: {}
    minReplicas: 1
    maxReplicas: 11
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
    behavior: {}
    # scaleDown:
    #   stabilizationWindowSeconds: 300
    #   policies:
    #   - type: Pods
    #     value: 1
    #     periodSeconds: 180
    # scaleUp:
    #   stabilizationWindowSeconds: 300
    #   policies:
    #   - type: Pods
    #     value: 2
    #     periodSeconds: 60
  autoscalingTemplate: []
  # Custom or additional autoscaling metrics
  # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
  # - type: Pods
  #   pods:
  #     metric:
  #       name: nginx_ingress_controller_nginx_process_requests_total
  #     target:
  #       type: AverageValue
  #       averageValue: 10000m

  # Mutually exclusive with hpa autoscaling
  keda:
    apiVersion: "keda.sh/v1alpha1"
    ## apiVersion changes with keda 1.x vs 2.x
    ## 2.x = keda.sh/v1alpha1
    ## 1.x = keda.k8s.io/v1alpha1
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    pollingInterval: 30
    cooldownPeriod: 300
    # fallback:
    #   failureThreshold: 3
    #   replicas: 11
    restoreToOriginalReplicaCount: false
    scaledObject:
      annotations: {}
      # Custom annotations for ScaledObject resource
      #  annotations:
      # key: value
    triggers: []
    # - type: prometheus
    #   metadata:
    #     serverAddress: http://<prometheus-host>:9090
    #     metricName: http_requests_total
    #     threshold: '100'
    #     query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))

    behavior: {}
    # scaleDown:
    #   stabilizationWindowSeconds: 300
    #   policies:
    #   - type: Pods
    #     value: 1
    #     periodSeconds: 180
    # scaleUp:
    #   stabilizationWindowSeconds: 300
    #   policies:
    #   - type: Pods
    #     value: 2
    #     periodSeconds: 60
  # -- Enable mimalloc as a drop-in replacement for malloc.
  ## ref: https://github.com/microsoft/mimalloc
  ##
  enableMimalloc: true
  ## Override NGINX template
  customTemplate:
    configMapName: ""
    configMapKey: ""
  service:
    # -- Enable controller services or not. This does not influence the creation of either the admission webhook or the metrics service.
    enabled: true
    external:
      # -- Enable the external controller service or not. Useful for internal-only deployments.
      enabled: true
    # -- Annotations to be added to the external controller service. See `controller.service.internal.annotations` for annotations to be added to the internal controller service.
    annotations: {}
    # -- Labels to be added to both controller services.
    labels: {}
    # -- Type of the external controller service.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
    type: LoadBalancer
    # -- Pre-defined cluster internal IP address of the external controller service. Take care of collisions with existing services.
    # This value is immutable. Set once, it can not be changed without deleting and re-creating the service.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address
    clusterIP: ""
    # -- List of node IP addresses at which the external controller service is available.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
    externalIPs: []
    # -- Deprecated: Pre-defined IP address of the external controller service. Used by cloud providers to connect the resulting load balancer service to a pre-existing static IP.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
    loadBalancerIP: ""
    # -- Restrict access to the external controller service. Values must be CIDRs. Allows any source address by default.
    loadBalancerSourceRanges: []
    # -- Load balancer class of the external controller service. Used by cloud providers to select a load balancer implementation other than the cloud provider default.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-class
    loadBalancerClass: ""
    # -- Enable node port allocation for the external controller service or not. Applies to type `LoadBalancer` only.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-nodeport-allocation
    # allocateLoadBalancerNodePorts: true

    # -- External traffic policy of the external controller service. Set to "Local" to preserve source IP on providers supporting it.
    # Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
    externalTrafficPolicy: ""
    # -- Session affinity of the external controller service. Must be either "None" or "ClientIP" if set. Defaults to "None".
    # Ref: https://kubernetes.io/docs/reference/networking/virtual-ips/#session-affinity
    sessionAffinity: ""
    # -- Specifies the health check node port (numeric port number) for the external controller service.
    # If not specified, the service controller allocates a port from your cluster's node port range.
    # Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
    # healthCheckNodePort: 0

    # -- Represents the dual-stack capabilities of the external controller service. Possible values are SingleStack, PreferDualStack or RequireDualStack.
    # Fields `ipFamilies` and `clusterIP` depend on the value of this field.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services
    ipFamilyPolicy: SingleStack
    # -- List of IP families (e.g. IPv4, IPv6) assigned to the external controller service. This field is usually assigned automatically based on cluster configuration and the `ipFamilyPolicy` field.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services
    ipFamilies:
      - IPv4
    # -- Enable the HTTP listener on both controller services or not.
    enableHttp: true
    # -- Enable the HTTPS listener on both controller services or not.
    enableHttps: true
    ports:
      # -- Port the external HTTP listener is published with.
      http: 80
      # -- Port the external HTTPS listener is published with.
      https: 443
    targetPorts:
      # -- Port of the ingress controller the external HTTP listener is mapped to.
      http: http
      # -- Port of the ingress controller the external HTTPS listener is mapped to.
      https: https
    # -- Declare the app protocol of the external HTTP and HTTPS listeners or not. Supersedes provider-specific annotations for declaring the backend protocol.
    # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#application-protocol
    appProtocol: true
    nodePorts:
      # -- Node port allocated for the external HTTP listener. If left empty, the service controller allocates one from the configured node port range.
      http: ""
      # -- Node port allocated for the external HTTPS listener. If left empty, the service controller allocates one from the configured node port range.
      https: ""
      # -- Node port mapping for external TCP listeners. If left empty, the service controller allocates them from the configured node port range.
      # Example:
      # tcp:
      #   8080: 30080
      tcp: {}
      # -- Node port mapping for external UDP listeners. If left empty, the service controller allocates them from the configured node port range.
      # Example:
      # udp:
      #   53: 30053
      udp: {}
    internal:
      # -- Enable the internal controller service or not. Remember to configure `controller.service.internal.annotations` when enabling this.
      enabled: false
      # -- Annotations to be added to the internal controller service. Mandatory for the internal controller service to be created. Varies with the cloud service.
      # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
      annotations: {}
      # -- Type of the internal controller service.
      # Defaults to the value of `controller.service.type`.
      # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
      type: "NodePort"
      # -- Pre-defined cluster internal IP address of the internal controller service. Take care of collisions with existing services.
      # This value is immutable. Set once, it can not be changed without deleting and re-creating the service.
      # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address
      clusterIP: ""
      # -- List of node IP addresses at which the internal controller service is available.
      # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
      externalIPs: []
      # -- Deprecated: Pre-defined IP address of the internal controller service. Used by cloud providers to connect the resulting load balancer service to a pre-existing static IP.
      # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
      loadBalancerIP: ""
      # -- Restrict access to the internal controller service. Values must be CIDRs. Allows any source address by default.
      loadBalancerSourceRanges: []
      # -- Load balancer class of the internal controller service. Used by cloud providers to select a load balancer implementation other than the cloud provider default.
      # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-class
      loadBalancerClass: ""
      # -- Enable node port allocation for the internal controller service or not. Applies to type `LoadBalancer` only.
      # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-nodeport-allocation
      # allocateLoadBalancerNodePorts: true

      # -- External traffic policy of the internal controller service. Set to "Local" to preserve source IP on providers supporting it.
      # Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
      externalTrafficPolicy: ""
      # -- Session affinity of the internal controller service. Must be either "None" or "ClientIP" if set. Defaults to "None".
      # Ref: https://kubernetes.io/docs/reference/networking/virtual-ips/#session-affinity
      sessionAffinity: ""
      # -- Specifies the health check node port (numeric port number) for the internal controller service.
      # If not specified, the service controller allocates a port from your cluster's node port range.
      # Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
      # healthCheckNodePort: 0

      # -- Represents the dual-stack capabilities of the internal controller service. Possible values are SingleStack, PreferDualStack or RequireDualStack.
      # Fields `ipFamilies` and `clusterIP` depend on the value of this field.
      # Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services
      ipFamilyPolicy: SingleStack
      # -- List of IP families (e.g. IPv4, IPv6) assigned to the internal controller service. This field is usually assigned automatically based on cluster configuration and the `ipFamilyPolicy` field.
      # Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services
      ipFamilies:
        - IPv4
      ports: {}
      # -- Port the internal HTTP listener is published with.
      # Defaults to the value of `controller.service.ports.http`.
      # http: 80
      # -- Port the internal HTTPS listener is published with.
      # Defaults to the value of `controller.service.ports.https`.
      # https: 443

      targetPorts: {}
      # -- Port of the ingress controller the internal HTTP listener is mapped to.
      # Defaults to the value of `controller.service.targetPorts.http`.
      # http: http
      # -- Port of the ingress controller the internal HTTPS listener is mapped to.
      # Defaults to the value of `controller.service.targetPorts.https`.
      # https: https

      # -- Declare the app protocol of the internal HTTP and HTTPS listeners or not. Supersedes provider-specific annotations for declaring the backend protocol.
      # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#application-protocol
      appProtocol: true
      nodePorts:
        # -- Node port allocated for the internal HTTP listener. If left empty, the service controller allocates one from the configured node port range.
        http: ""
        # -- Node port allocated for the internal HTTPS listener. If left empty, the service controller allocates one from the configured node port range.
        https: ""
        # -- Node port mapping for internal TCP listeners. If left empty, the service controller allocates them from the configured node port range.
        # Example:
        # tcp:
        #   8080: 30080
        tcp: {}
        # -- Node port mapping for internal UDP listeners. If left empty, the service controller allocates them from the configured node port range.
        # Example:
        # udp:
        #   53: 30053
        udp: {}
  # shareProcessNamespace enables process namespace sharing within the pod.
  # This can be used for example to signal log rotation using `kill -USR1` from a sidecar.
  shareProcessNamespace: false
  # -- Additional containers to be added to the controller pod.
  # See https://github.com/lemonldap-ng-controller/lemonldap-ng-controller as example.
  extraContainers: []
  #  - name: my-sidecar
  #    image: nginx:latest
  #  - name: lemonldap-ng-controller
  #    image: lemonldapng/lemonldap-ng-controller:0.2.0
  #    args:
  #      - /lemonldap-ng-controller
  #      - --alsologtostderr
  #      - --configmap=$(POD_NAMESPACE)/lemonldap-ng-configuration
  #    env:
  #      - name: POD_NAME
  #        valueFrom:
  #          fieldRef:
  #            fieldPath: metadata.name
  #      - name: POD_NAMESPACE
  #        valueFrom:
  #          fieldRef:
  #            fieldPath: metadata.namespace
  #    volumeMounts:
  #    - name: copy-portal-skins
  #      mountPath: /srv/var/lib/lemonldap-ng/portal/skins

  # -- Additional volumeMounts to the controller main container.
  extraVolumeMounts: []
  #  - name: copy-portal-skins
  #   mountPath: /var/lib/lemonldap-ng/portal/skins

  # -- Additional volumes to the controller pod.
  extraVolumes: []
  #  - name: copy-portal-skins
  #    emptyDir: {}

  # -- Containers, which are run before the app containers are started.
  extraInitContainers: []
  # - name: init-myservice
  #   image: busybox
  #   command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']

  # -- Modules, which are mounted into the core nginx image. See values.yaml for a sample to add opentelemetry module
  extraModules: []
  # - name: mytestmodule
  #   image:
  #     registry: registry.k8s.io
  #     image: ingress-nginx/mytestmodule
  #     ## for backwards compatibility consider setting the full image url via the repository value below
  #     ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
  #     ## repository:
  #     tag: "v1.0.0"
  #     digest: ""
  #     distroless: false
  #   containerSecurityContext:
  #     runAsNonRoot: true
  #     runAsUser: <user-id>
  #     allowPrivilegeEscalation: false
  #     seccompProfile:
  #       type: RuntimeDefault
  #     capabilities:
  #       drop:
  #       - ALL
  #     readOnlyRootFilesystem: true
  #   resources: {}
  #
  # The image must contain a `/usr/local/bin/init_module.sh` executable, which
  # will be executed as initContainers, to move its config files within the
  # mounted volume.

  opentelemetry:
    enabled: false
    name: opentelemetry
    image:
      registry: registry.k8s.io
      image: ingress-nginx/opentelemetry
      ## for backwards compatibility consider setting the full image url via the repository value below
      ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
      ## repository:
      tag: "v20230721-3e2062ee5"
      digest: sha256:13bee3f5223883d3ca62fee7309ad02d22ec00ff0d7033e3e9aca7a9f60fd472
      distroless: true
    containerSecurityContext:
      runAsNonRoot: true
      # -- The image's default user, inherited from its base image `cgr.dev/chainguard/static`.
      runAsUser: 65532
      allowPrivilegeEscalation: false
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        drop:
          - ALL
      readOnlyRootFilesystem: true
    resources: {}
  admissionWebhooks:
    name: admission
    annotations: {}
    # ignore-check.kube-linter.io/no-read-only-rootfs: "This deployment needs write access to root filesystem".

    ## Additional annotations to the admission webhooks.
    ## These annotations will be added to the ValidatingWebhookConfiguration and
    ## the Jobs Spec of the admission webhooks.
    enabled: true
    # -- Additional environment variables to set
    extraEnvs: []
    # extraEnvs:
    #   - name: FOO
    #     valueFrom:
    #       secretKeyRef:
    #         key: FOO
    #         name: secret-resource
    # -- Admission Webhook failure policy to use
    failurePolicy: Fail
    # timeoutSeconds: 10
    port: 8443
    certificate: "/usr/local/certificates/cert"
    key: "/usr/local/certificates/key"
    namespaceSelector: {}
    objectSelector: {}
    # -- Labels to be added to admission webhooks
    labels: {}
    # -- Use an existing PSP instead of creating one
    existingPsp: ""
    service:
      annotations: {}
      # clusterIP: ""
      externalIPs: []
      # loadBalancerIP: ""
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP
    createSecretJob:
      name: create
      # -- Security context for secret creation containers
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532
        allowPrivilegeEscalation: false
        seccompProfile:
          type: RuntimeDefault
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
      resources: {}
      # limits:
      #   cpu: 10m
      #   memory: 20Mi
      # requests:
      #   cpu: 10m
      #   memory: 20Mi
    patchWebhookJob:
      name: patch
      # -- Security context for webhook patch containers
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532
        allowPrivilegeEscalation: false
        seccompProfile:
          type: RuntimeDefault
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
      resources: {}
    patch:
      enabled: true
      image:
        registry: registry.k8s.io
        image: ingress-nginx/kube-webhook-certgen
        ## for backwards compatibility consider setting the full image url via the repository value below
        ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
        ## repository:
        tag: v1.4.1
        #digest: sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366 # commented out
        pullPolicy: IfNotPresent
      # -- Provide a priority class name to the webhook patching job
      ##
      priorityClassName: ""
      podAnnotations: {}
      # NetworkPolicy for webhook patch
      networkPolicy:
        # -- Enable 'networkPolicy' or not
        enabled: false
      nodeSelector:
        kubernetes.io/os: linux
      tolerations: []
      # -- Labels to be added to patch job resources
      labels: {}
      # -- Security context for secret creation & webhook patch pods
      securityContext: {}
    # Use certmanager to generate webhook certs
    certManager:
      enabled: false
      # self-signed root certificate
      rootCert:
        # default to be 5y
        duration: ""
      admissionCert:
        # default to be 1y
        duration: ""
        # issuerRef:
        #   name: "issuer"
        #   kind: "ClusterIssuer"
  metrics:
    port: 10254
    portName: metrics
    # if this port is changed, change healthz-port: in extraArgs: accordingly
    enabled: false
    service:
      annotations: {}
      # prometheus.io/scrape: "true"
      # prometheus.io/port: "10254"
      # -- Labels to be added to the metrics service resource
      labels: {}
      # clusterIP: ""

      # -- List of IP addresses at which the stats-exporter service is available
      ## Ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
      ##
      externalIPs: []
      # loadBalancerIP: ""
      loadBalancerSourceRanges: []
      servicePort: 10254
      type: ClusterIP
      # externalTrafficPolicy: ""
      # nodePort: ""
    serviceMonitor:
      enabled: false
      additionalLabels: {}
      annotations: {}
      ## The label to use to retrieve the job name from.
      ## jobLabel: "app.kubernetes.io/name"
      namespace: ""
      namespaceSelector: {}
      ## Default: scrape .Release.Namespace or namespaceOverride only
      ## To scrape all, use the following:
      ## namespaceSelector:
      ##   any: true
      scrapeInterval: 30s
      # honorLabels: true
      targetLabels: []
      relabelings: []
      metricRelabelings: []
    prometheusRule:
      enabled: false
      additionalLabels: {}
      # namespace: ""
      rules: []
      # # These are just examples rules, please adapt them to your needs
      # - alert: NGINXConfigFailed
      #   expr: count(nginx_ingress_controller_config_last_reload_successful == 0) > 0
      #   for: 1s
      #   labels:
      #     severity: critical
      #   annotations:
      #     description: bad ingress config - nginx config test failed
      #     summary: uninstall the latest ingress changes to allow config reloads to resume
      # # By default a fake self-signed certificate is generated as default and
      # # it is fine if it expires. If `--default-ssl-certificate` flag is used
      # # and a valid certificate passed please do not filter for `host` label!
      # # (i.e. delete `{host!="_"}` so also the default SSL certificate is
      # # checked for expiration)
      # - alert: NGINXCertificateExpiry
      #   expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds{host!="_"}) by (host) - time()) < 604800
      #   for: 1s
      #   labels:
      #     severity: critical
      #   annotations:
      #     description: ssl certificate(s) will expire in less then a week
      #     summary: renew expiring certificates to avoid downtime
      # - alert: NGINXTooMany500s
      #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
      #   for: 1m
      #   labels:
      #     severity: warning
      #   annotations:
      #     description: Too many 5XXs
      #     summary: More than 5% of all requests returned 5XX, this requires your attention
      # - alert: NGINXTooMany400s
      #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"4.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
      #   for: 1m
      #   labels:
      #     severity: warning
      #   annotations:
      #     description: Too many 4XXs
      #     summary: More than 5% of all requests returned 4XX, this requires your attention
  # -- Improve connection draining when ingress controller pod is deleted using a lifecycle hook:
  # With this new hook, we increased the default terminationGracePeriodSeconds from 30 seconds
  # to 300, allowing the draining of connections up to five minutes.
  # If the active connections end before that, the pod will terminate gracefully at that time.
  # To effectively take advantage of this feature, the Configmap feature
  # worker-shutdown-timeout new value is 240s instead of 10s.
  ##
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown
  priorityClassName: ""
# -- Rollback limit
##
revisionHistoryLimit: 10
## Default 404 backend
##
defaultBackend:
  ##
  enabled: false
  name: defaultbackend
  image:
    registry: registry.k8s.io
    image: defaultbackend-amd64
    ## for backwards compatibility consider setting the full image url via the repository value below
    ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
    ## repository:
    tag: "1.5"
    pullPolicy: IfNotPresent
    runAsNonRoot: true
    # nobody user -> uid 65534
    runAsUser: 65534
    allowPrivilegeEscalation: false
    seccompProfile:
      type: RuntimeDefault
    readOnlyRootFilesystem: true
  # -- Use an existing PSP instead of creating one
  existingPsp: ""
  extraArgs: {}
  serviceAccount:
    create: true
    name: ""
    automountServiceAccountToken: true
  # -- Additional environment variables to set for defaultBackend pods
  extraEnvs: []
  port: 8080
  ## Readiness and liveness probes for default backend
  ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
  ##
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 0
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5
  # -- The update strategy to apply to the Deployment or DaemonSet
  ##
  updateStrategy: {}
  #  rollingUpdate:
  #    maxUnavailable: 1
  #  type: RollingUpdate

  # -- `minReadySeconds` to avoid killing pods before we are ready
  ##
  minReadySeconds: 0
  # -- Node tolerations for server scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  tolerations: []
  #  - key: "key"
  #    operator: "Equal|Exists"
  #    value: "value"
  #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

  affinity: {}
  # -- Security context for default backend pods
  podSecurityContext: {}
  # -- Security context for default backend containers
  containerSecurityContext: {}
  # -- Labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  # -- Node labels for default backend pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
  ##
  nodeSelector:
    kubernetes.io/os: linux
  # -- Annotations to be added to default backend pods
  ##
  podAnnotations: {}
  replicaCount: 1
  minAvailable: 1
  resources: {}
  # limits:
  #   cpu: 10m
  #   memory: 20Mi
  # requests:
  #   cpu: 10m
  #   memory: 20Mi

  extraVolumeMounts: []
  ## Additional volumeMounts to the default backend container.
  #  - name: copy-portal-skins
  #   mountPath: /var/lib/lemonldap-ng/portal/skins

  extraVolumes: []
  ## Additional volumes to the default backend pod.
  #  - name: copy-portal-skins
  #    emptyDir: {}

  extraConfigMaps: []
  ## Additional configmaps to the default backend pod.
  #  - name: my-extra-configmap-1
  #    labels:
  #      type: config-1
  #    data:
  #      extra_file_1.html: |
  #        <!-- Extra HTML content for ConfigMap 1 -->
  #  - name: my-extra-configmap-2
  #    labels:
  #      type: config-2
  #    data:
  #      extra_file_2.html: |
  #        <!-- Extra HTML content for ConfigMap 2 -->

  autoscaling:
    annotations: {}
    enabled: false
    minReplicas: 1
    maxReplicas: 2
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  # NetworkPolicy for default backend component.
  networkPolicy:
    # -- Enable 'networkPolicy' or not
    enabled: false
  service:
    annotations: {}
    # clusterIP: ""

    # -- List of IP addresses at which the default backend service is available
    ## Ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
    ##
    externalIPs: []
    # loadBalancerIP: ""
    loadBalancerSourceRanges: []
    servicePort: 80
    type: ClusterIP
  priorityClassName: ""
  # -- Labels to be added to the default backend resources
  labels: {}
## Enable RBAC as per https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/rbac.md and https://github.com/kubernetes/ingress-nginx/issues/266
rbac:
  create: true
  scope: false
## If true, create & use Pod Security Policy resources
## https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
  enabled: false
serviceAccount:
  create: true
  name: ""
  automountServiceAccountToken: true
  # -- Annotations for the controller service account
  annotations: {}
# -- Optional array of imagePullSecrets containing private registry credentials
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# - name: secretName

# -- TCP service key-value pairs
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
tcp: {}
#  "8080": "default/example-tcp-svc:9000"

# -- UDP service key-value pairs
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
udp: {}
#  "53": "kube-system/kube-dns:53"

# -- Prefix for TCP and UDP ports names in ingress controller service
## Some cloud providers, like Yandex Cloud may have a requirements for a port name regex to support cloud load balancer integration
portNamePrefix: ""
# -- (string) A base64-encoded Diffie-Hellman parameter.
# This can be generated with: `openssl dhparam 4096 2> /dev/null | base64`
## Ref: https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/ssl-dh-param
dhParam: ""

3.3 Pull the images

# Pre-pull the images (via a Huawei Cloud mirror) and re-tag them to the registry.k8s.io names
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/ingress-nginx/controller:v1.10.1

ctr -n k8s.io images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/ingress-nginx/controller:v1.10.1  registry.k8s.io/ingress-nginx/controller:v1.10.1

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1

ctr -n k8s.io images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1  registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
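
A quick way to confirm the re-tagged images exist in containerd's k8s.io namespace before installing the chart (run on each node that may schedule the controller):

ctr -n k8s.io images ls | grep -E 'registry.k8s.io/ingress-nginx/(controller|kube-webhook-certgen)'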

3.4 Install

# Create the namespace
kubectl create ns ingress-nginx

# From inside the ingress-nginx chart directory
helm install ingress-nginx . -f values.yaml -n ingress-nginx

# Successful run
[root@kube-master ~]# kubectl get pod -n ingress-nginx 
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-79578949b-vbgn7   1/1     Running   0          25s
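
Beyond the pod status, it can help to confirm that the IngressClass exists (the Harbor ingress will reference nginx) and that, with hostNetwork: true set in values.yaml, the controller is bound to ports 80/443 on its node. A sketch:

# IngressClass created by the chart
kubectl get ingressclass
# Service and pod placement
kubectl get svc,pod -n ingress-nginx -o wide
# On the node running the controller, ports 80/443 should be listening (hostNetwork mode)
ss -lntp | grep -E ':(80|443) '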

4. Install Harbor (on Kubernetes)

4.1 Download the chart

# Add the official Harbor helm repository
helm repo add harbor https://helm.goharbor.io

# Create a directory
mkdir -p /opt/software/

# Download the chart package
cd /opt/software/ && wget https://helm.goharbor.io/harbor-1.14.3.tgz

# Extract
tar -zxvf harbor-1.14.3.tgz

4.2 On the NFS server

# NFS export configuration, on 192.168.100.150
# Append the exports
cat >> /etc/exports <<EOF
/data/nfs/harbor/registry *(rw,no_root_squash,sync,insecure)
/data/nfs/harbor/jobservice *(rw,no_root_squash,sync,insecure)
/data/nfs/harbor/database *(rw,no_root_squash,sync,insecure)
/data/nfs/harbor/redis *(rw,no_root_squash,sync,insecure)
/data/nfs/harbor/trivy *(rw,no_root_squash,sync,insecure)
EOF

mkdir -p /data/nfs/harbor/registry /data/nfs/harbor/jobservice /data/nfs/harbor/database /data/nfs/harbor/redis /data/nfs/harbor/trivy

# Grant permissions
chmod 777 /data/nfs/harbor/*

# Restart the services
systemctl restart rpcbind
systemctl restart nfs
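
Before creating the PVs, it is worth checking that the exports are reachable from the cluster; a quick mount test from any k8s node (assumes nfs-utils is installed on that node):

# On a k8s node
showmount -e 192.168.100.150
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.100.150:/data/nfs/harbor/registry /mnt/nfs-test
touch /mnt/nfs-test/write-test && rm -f /mnt/nfs-test/write-test
umount /mnt/nfs-test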

4.3 Create the PVs

# Write the PV manifest; run this on the k8s master node
cat >> /opt/software/harbor/harbor-pv.yaml <<EOF
# First PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-registry
  namespace: harbor
  labels:
    app: harbor-registry
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-nfs
  nfs:
    path: /data/nfs/harbor/registry
    server: 192.168.100.150
    
# Second PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-jobservice
  namespace: harbor
  labels:
    app: harbor-jobservice
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-nfs
  nfs:
    path: /data/nfs/harbor/jobservice
    server: 192.168.100.150

# Third PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-database
  namespace: harbor
  labels:
    app: harbor-database
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-nfs
  nfs:
    path: /data/nfs/harbor/database
    server: 192.168.100.150
    
# Fourth PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-redis
  namespace: harbor
  labels:
    app: harbor-redis
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-nfs
  nfs:
    path:  /data/nfs/harbor/redis
    server: 192.168.100.150
    
# Fifth PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-trivy
  namespace: harbor
  labels:
    app: harbor-trivy
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-nfs
  nfs:
    path:  /data/nfs/harbor/trivy
    server: 192.168.100.150  
EOF    
# Create the PVs
kubectl apply -f /opt/software/harbor/harbor-pv.yaml
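
After applying the manifest, the five volumes should be listed as Available; they are bound once the Harbor chart's PVCs request the harbor-nfs storage class (assuming the persistence section of the chart's values.yaml points at it). A quick check:

kubectl get pv
# Expect harbor-registry, harbor-jobservice, harbor-database, harbor-redis and harbor-trivy, all Available, storage class harbor-nfs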

4.4 Full configuration file

expose:
  # Set how to expose the service. Set the type as "ingress", "clusterIP", "nodePort" or "loadBalancer"
  # and fill the information in the corresponding section
  type: ingress
  tls:
    # Enable TLS or not.
    # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress"
    # Note: if the "expose.type" is "ingress" and TLS is disabled,
    # the port must be included in the command when pulling/pushing images.
    # Refer to https://github.com/goharbor/harbor/issues/5291 for details.
    enabled: true
    # The source of the tls certificate. Set as "auto", "secret"
    # or "none" and fill the information in the corresponding section
    # 1) auto: generate the tls certificate automatically
    # 2) secret: read the tls certificate from the specified secret.
    # The tls certificate can be generated manually or by cert manager
    # 3) none: configure no tls certificate for the ingress. If the default
    # tls certificate is configured in the ingress controller, choose this option
    certSource: auto
    auto:
      # The common name used to generate the certificate, it's necessary
      # when the type isn't "ingress"
      commonName: ""
    secret:
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      secretName: ""
  ingress:
    hosts:
      core: harbor.dongdong.com # change this to your own domain
    # set to the type of ingress controller if it has specific requirements.
    # leave as `default` for most ingress controllers.
    # set to `gce` if using the GCE ingress controller
    # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
    # set to `alb` if using the ALB ingress controller
    # set to `f5-bigip` if using the F5 BIG-IP ingress controller
    controller: default
    ## Allow .Capabilities.KubeVersion.Version to be overridden while creating ingress
    kubeVersionOverride: ""
    className: ""
    annotations:
      # note different ingress controllers may require a different ssl-redirect annotation
      # for Envoy, use ingress.kubernetes.io/force-ssl-redirect: "true" and remove the nginx lines below
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
    harbor:
      # harbor ingress-specific annotations
      annotations: {}
      # harbor ingress-specific labels
      labels: {}
  clusterIP:
    # The name of ClusterIP service
    name: harbor
    # The ip address of the ClusterIP service (leave empty for acquiring dynamic ip)
    staticClusterIP: ""
    # Annotations on the ClusterIP service
    annotations: {}
    ports:
      # The service port Harbor listens on when serving HTTP
      httpPort: 80
      # The service port Harbor listens on when serving HTTPS
      httpsPort: 443
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving HTTP
        port: 80
        # The node port Harbor listens on when serving HTTP
        nodePort: 30002
      https:
        # The service port Harbor listens on when serving HTTPS
        port: 443
        # The node port Harbor listens on when serving HTTPS
        nodePort: 30003
  loadBalancer:
    # The name of LoadBalancer service
    name: harbor
    # Set the IP if the LoadBalancer supports assigning IP
    IP: ""
    ports:
      # The service port Harbor listens on when serving HTTP
      httpPort: 80
      # The service port Harbor listens on when serving HTTPS
      httpsPort: 443
    annotations: {}
    sourceRanges: []

# The external URL for Harbor core service. It is used to
# 1) populate the docker/helm commands showed on portal
# 2) populate the token service URL returned to docker client
#
# Format: protocol://domain[:port]. Usually:
# 1) if "expose.type" is "ingress", the "domain" should be
# the value of "expose.ingress.hosts.core"
# 2) if "expose.type" is "clusterIP", the "domain" should be
# the value of "expose.clusterIP.name"
# 3) if "expose.type" is "nodePort", the "domain" should be
# the IP address of k8s node
#
# If Harbor is deployed behind the proxy, set it as the URL of proxy
externalURL: https://harbor.dongdong.com # change to your own domain

# The internal TLS used for harbor components secure communicating. In order to enable https
# in each component tls cert files need to provided in advance.
internalTLS:
  # If internal TLS enabled
  enabled: false
  # enable strong ssl ciphers (default: false)
  strong_ssl_ciphers: false
  # There are three ways to provide tls
  # 1) "auto" will generate cert automatically
  # 2) "manual" need provide cert file manually in following value
  # 3) "secret" internal certificates from secret
  certSource: "auto"
  # The content of trust ca, only available when `certSource` is "manual"
  trustCa: ""
  # core related cert configuration
  core:
    # secret name for core's tls certs
    secretName: ""
    # Content of core's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of core's TLS key file, only available when `certSource` is "manual"
    key: ""
  # jobservice related cert configuration
  jobservice:
    # secret name for jobservice's tls certs
    secretName: ""
    # Content of jobservice's TLS key file, only available when `certSource` is "manual"
    crt: ""
    # Content of jobservice's TLS key file, only available when `certSource` is "manual"
    key: ""
  # registry related cert configuration
  registry:
    # secret name for registry's tls certs
    secretName: ""
    # Content of registry's TLS key file, only available when `certSource` is "manual"
    crt: ""
    # Content of registry's TLS key file, only available when `certSource` is "manual"
    key: ""
  # portal related cert configuration
  portal:
    # secret name for portal's tls certs
    secretName: ""
    # Content of portal's TLS key file, only available when `certSource` is "manual"
    crt: ""
    # Content of portal's TLS key file, only available when `certSource` is "manual"
    key: ""
  # trivy related cert configuration
  trivy:
    # secret name for trivy's tls certs
    secretName: ""
    # Content of trivy's TLS key file, only available when `certSource` is "manual"
    crt: ""
    # Content of trivy's TLS key file, only available when `certSource` is "manual"
    key: ""

ipFamily:
  # ipv6Enabled set to true if ipv6 is enabled in cluster, currently it affected the nginx related component
  ipv6:
    enabled: true
  # ipv4Enabled set to true if ipv4 is enabled in cluster, currently it affected the nginx related component
  ipv4:
    enabled: true

# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you already have existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3",
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Setting it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart deleted
  # (this does not apply for PVCs that are created for internal database
  # and redis components, i.e. they are never deleted automatically)
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used (the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "harbor-nfs" # 改成自己的
      subPath: ""
      accessMode: ReadWriteOnce
      size: 50Gi
      annotations: {}
    jobservice:
      jobLog:
        existingClaim: ""
        storageClass: "harbor-nfs"
        subPath: ""
        accessMode: ReadWriteOnce
        size: 10Gi
        annotations: {}
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: ""
      storageClass: "harbor-nfs"  # 一定要和pv中的storageClassName 对应
      subPath: ""
      accessMode: ReadWriteOnce
      size: 10Gi   # must match the PV's storage size, otherwise the PVC cannot bind
      annotations: {}
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: ""
      storageClass: "harbor-nfs"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 10Gi
      annotations: {}
    trivy:
      existingClaim: ""
      storageClass: "harbor-nfs"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 10Gi
      annotations: {}
  # Define which storage backend is used for registry to store
  # images and charts. Refer to
  # https://github.com/distribution/distribution/blob/main/docs/configuration.md#storage
  # for the detail.
  imageChartStorage:
    # Specify whether to disable `redirect` for images and chart storage, for
    # backends which not supported it (such as using minio for `s3` storage type), please disable
    # it. To disable redirects, simply set `disableredirect` to `true` instead.
    # Refer to
    # https://github.com/distribution/distribution/blob/main/docs/configuration.md#redirect
    # for the detail.
    disableredirect: false
    # Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
    # The secret must contain keys named "ca.crt" which will be injected into the trust store
    # of registry's containers.
    # caBundleSecretName:

    # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
    # "oss" and fill the information needed in the corresponding section. The type
    # must be "filesystem" if you want to use persistent volumes for registry
    type: filesystem
    filesystem:
      rootdirectory: /storage
      #maxthreads: 100
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      #realm: core.windows.net
      # To use existing secret, the key must be AZURE_STORAGE_ACCESS_KEY
      existingSecret: ""
    gcs:
      bucket: bucketname
      # The base64 encoded json file which contains the key
      encodedkey: base64-encoded-json-key-file
      #rootdirectory: /gcs/object/name/prefix
      #chunksize: "5242880"
      # To use existing secret, the key must be GCS_KEY_DATA
      existingSecret: ""
      useWorkloadIdentity: false
    s3:
      # Set an existing secret for S3 accesskey and secretkey
      # keys in the secret should be REGISTRY_STORAGE_S3_ACCESSKEY and REGISTRY_STORAGE_S3_SECRETKEY for registry
      #existingSecret: ""
      region: us-west-1
      bucket: bucketname
      #accesskey: awsaccesskey
      #secretkey: awssecretkey
      #regionendpoint: http://myobjects.local
      #encrypt: false
      #keyid: mykeyid
      #secure: true
      #skipverify: false
      #v4auth: true
      #chunksize: "5242880"
      #rootdirectory: /s3/object/name/prefix
      #storageclass: STANDARD
      #multipartcopychunksize: "33554432"
      #multipartcopymaxconcurrency: 100
      #multipartcopythresholdsize: "33554432"
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      # keys in existing secret must be REGISTRY_STORAGE_SWIFT_PASSWORD, REGISTRY_STORAGE_SWIFT_SECRETKEY, REGISTRY_STORAGE_SWIFT_ACCESSKEY
      existingSecret: ""
      #region: fr
      #tenant: tenantname
      #tenantid: tenantid
      #domain: domainname
      #domainid: domainid
      #trustid: trustid
      #insecureskipverify: false
      #chunksize: 5M
      #prefix:
      #secretkey: secretkey
      #accesskey: accesskey
      #authversion: 3
      #endpointtype: public
      #tempurlcontainerkey: false
      #tempurlmethods:
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      # key in existingSecret must be REGISTRY_STORAGE_OSS_ACCESSKEYSECRET
      existingSecret: ""
      #endpoint: endpoint
      #internal: false
      #encrypt: false
      #secure: true
      #chunksize: 10M
      #rootdirectory: rootdirectory

imagePullPolicy: IfNotPresent

# Use this set to assign a list of default pullSecrets
imagePullSecrets:
#  - name: docker-registry-secret
#  - name: internal-registry-secret

# The update strategy for deployments with persistent volumes(jobservice, registry): "RollingUpdate" or "Recreate"
# Set it as "Recreate" when "RWM" for volumes isn't supported
updateStrategy:
  type: RollingUpdate

# debug, info, warning, error or fatal
logLevel: info

# The initial password of Harbor admin. Change it from portal after launching Harbor
# or give an existing secret for it
# key in secret is given via (default to HARBOR_ADMIN_PASSWORD)
# existingSecretAdminPassword:
existingSecretAdminPasswordKey: HARBOR_ADMIN_PASSWORD
harborAdminPassword: "Harbor12345"

# The name of the secret which contains key named "ca.crt". Setting this enables the
# download link on portal to download the CA certificate when the certificate isn't
# generated automatically
caSecretName: ""

# The secret key used for encryption. Must be a string of 16 chars.
secretKey: "not-a-secure-key"
# If using existingSecretSecretKey, the key must be secretKey
existingSecretSecretKey: ""

# The proxy settings for updating trivy vulnerabilities from the Internet and replicating
# artifacts from/to the registries that cannot be reached directly
proxy:
  httpProxy:
  httpsProxy:
  noProxy: 127.0.0.1,localhost,.local,.internal
  components:
    - core
    - jobservice
    - trivy

# Run the migration job via helm hook
enableMigrateHelmHook: false

# The custom ca bundle secret, the secret must contain key named "ca.crt"
# which will be injected into the trust store for core, jobservice, registry, trivy components
# caBundleSecretName: ""

## UAA Authentication Options
# If you're using UAA for authentication behind a self-signed
# certificate you will need to provide the CA Cert.
# Set uaaSecretName below to provide a pre-created secret that
# contains a base64 encoded CA Certificate named `ca.crt`.
# uaaSecretName:

# If service exposed via "ingress", the Nginx will not be used
nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v2.10.3
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  extraEnvVars: []
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Spread Pods across failure-domains like regions, availability zones or nodes
  topologySpreadConstraints: []
  # - maxSkew: 1
  #   topologyKey: topology.kubernetes.io/zone
  #   nodeTaintsPolicy: Honor
  #   whenUnsatisfiable: DoNotSchedule
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}
  ## The priority class to run the pod as
  priorityClassName:

portal:
  image:
    repository: goharbor/harbor-portal
    tag: v2.10.3
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  extraEnvVars: []
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Spread Pods across failure-domains like regions, availability zones or nodes
  topologySpreadConstraints: []
  # - maxSkew: 1
  #   topologyKey: topology.kubernetes.io/zone
  #   nodeTaintsPolicy: Honor
  #   whenUnsatisfiable: DoNotSchedule
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}
  ## Additional service annotations
  serviceAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:

core:
  image:
    repository: goharbor/harbor-core
    tag: v2.10.3
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  ## Startup probe values
  startupProbe:
    enabled: true
    initialDelaySeconds: 10
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  extraEnvVars: []
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Spread Pods across failure-domains like regions, availability zones or nodes
  topologySpreadConstraints: []
  # - maxSkew: 1
  #   topologyKey: topology.kubernetes.io/zone
  #   nodeTaintsPolicy: Honor
  #   whenUnsatisfiable: DoNotSchedule
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}
  ## Additional service annotations
  serviceAnnotations: {}
  ## User settings configuration json string
  configureUserSettings:
  # The provider for updating project quota(usage), there are 2 options, redis or db.
  # By default it is implemented by db but you can configure it to redis which
  # can improve the performance of high concurrent pushing to the same project,
  # and reduce the database connections spike and occupies.
  # Using redis will bring up some delay for quota usage updation for display, so only
  # suggest switch provider to redis if you were ran into the db connections spike around
  # the scenario of high concurrent pushing to same project, no improvment for other scenes.
  quotaUpdateProvider: db # Or redis
  # Secret is used when core server communicates with other components.
  # If a secret key is not specified, Helm will generate one. Alternatively set existingSecret to use an existing secret
  # Must be a string of 16 chars.
  secret: ""
  # Fill in the name of a kubernetes secret if you want to use your own
  # If using existingSecret, the key must be secret
  existingSecret: ""
  # Fill the name of a kubernetes secret if you want to use your own
  # TLS certificate and private key for token encryption/decryption.
  # The secret must contain keys named:
  # "tls.key" - the private key
  # "tls.crt" - the certificate
  secretName: ""
  # If not specifying a preexisting secret, a secret can be created from tokenKey and tokenCert and used instead.
  # If none of secretName, tokenKey, and tokenCert are specified, an ephemeral key and certificate will be autogenerated.
  # tokenKey and tokenCert must BOTH be set or BOTH unset.
  # The tokenKey value is formatted as a multiline string containing a PEM-encoded RSA key, indented one more than tokenKey on the following line.
  tokenKey: |
  # If tokenKey is set, the value of tokenCert must be set as a PEM-encoded certificate signed by tokenKey, and supplied as a multiline string, indented one more than tokenCert on the following line.
  tokenCert: |
  # The XSRF key. Will be generated automatically if it isn't specified
  xsrfKey: ""
  # If using existingSecret, the key is defined by core.existingXsrfSecretKey
  existingXsrfSecret: ""
  # If using existingSecret, the key
  existingXsrfSecretKey: CSRF_KEY
  ## The priority class to run the pod as
  priorityClassName:
  # The time duration for async update artifact pull_time and repository
  # pull_count, the unit is second. Will be 10 seconds if it isn't set.
  # eg. artifactPullAsyncFlushDuration: 10
  artifactPullAsyncFlushDuration:
  gdpr:
    deleteUser: false
    auditLogsCompliant: false

jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v2.10.3
  replicas: 1
  revisionHistoryLimit: 10
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  maxJobWorkers: 10
  # The logger for jobs: "file", "database" or "stdout"
  jobLoggers:
    - file
    # - database
    # - stdout
  # The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
  loggerSweeperDuration: 14 #days
  notification:
    webhook_job_max_retry: 3
    webhook_job_http_client_timeout: 3 # in seconds
  reaper:
    # the max time to wait for a task to finish, if unfinished after max_update_hours, the task will be mark as error, but the task will continue to run, default value is 24
    max_update_hours: 24
    # the max time for execution in running state without new task created
    max_dangling_hours: 168

  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 100m
  extraEnvVars: []
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Spread Pods across failure-domains like regions, availability zones or nodes
  topologySpreadConstraints:
  # - maxSkew: 1
  #   topologyKey: topology.kubernetes.io/zone
  #   nodeTaintsPolicy: Honor
  #   whenUnsatisfiable: DoNotSchedule
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}
  # Secret is used when job service communicates with other components.
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # Use an existing secret resource
  existingSecret: ""
  # Key within the existing secret for the job service secret
  existingSecretKey: JOBSERVICE_SECRET
  ## The priority class to run the pod as
  priorityClassName:

registry:
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.10.3
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    extraEnvVars: []
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v2.10.3

    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    extraEnvVars: []
  replicas: 1
  revisionHistoryLimit: 10
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Spread Pods across failure-domains like regions, availability zones or nodes
  topologySpreadConstraints: []
  # - maxSkew: 1
  #   topologyKey: topology.kubernetes.io/zone
  #   nodeTaintsPolicy: Honor
  #   whenUnsatisfiable: DoNotSchedule
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}
  ## The priority class to run the pod as
  priorityClassName:
  # Secret is used to secure the upload state from client
  # and registry storage backend.
  # See: https://github.com/distribution/distribution/blob/main/docs/configuration.md#http
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # Use an existing secret resource
  existingSecret: ""
  # Key within the existing secret for the registry service secret
  existingSecretKey: REGISTRY_HTTP_SECRET
  # If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL.
  relativeurls: false
  credentials:
    username: "harbor_registry_user"
    password: "harbor_registry_password"
    # If using existingSecret, the key must be REGISTRY_PASSWD and REGISTRY_HTPASSWD
    existingSecret: ""
    # Login and password in htpasswd string format. Excludes `registry.credentials.username`  and `registry.credentials.password`. May come in handy when integrating with tools like argocd or flux. This allows the same line to be generated each time the template is rendered, instead of the `htpasswd` function from helm, which generates different lines each time because of the salt.
    # htpasswdString: $apr1$XLefHzeG$Xl4.s00sMSCCcMyJljSZb0 # example string
    htpasswdString: ""
  middleware:
    enabled: false
    type: cloudFront
    cloudFront:
      baseurl: example.cloudfront.net
      keypairid: KEYPAIRID
      duration: 3000s
      ipfilteredby: none
      # The secret key that should be present is CLOUDFRONT_KEY_DATA, which should be the encoded private key
      # that allows access to CloudFront
      privateKeySecret: "my-secret"
  # enable purge _upload directories
  upload_purging:
    enabled: true
    # remove files in _upload directories which exist for a period of time, default is one week.
    age: 168h
    # the interval of the purge operations
    interval: 24h
    dryrun: false

trivy:
  # enabled the flag to enable Trivy scanner
  enabled: true
  image:
    # repository the repository for Trivy adapter image
    repository: goharbor/trivy-adapter-photon
    # tag the tag for Trivy adapter image
    tag: v2.10.3
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  # replicas the number of Pod replicas
  replicas: 1
  # debugMode the flag to enable Trivy debug mode with more verbose scanning log
  debugMode: false
  # vulnType a comma-separated list of vulnerability types. Possible values are `os` and `library`.
  vulnType: "os,library"
  # severity a comma-separated list of severities to be checked
  severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
  # ignoreUnfixed the flag to display only fixed vulnerabilities
  ignoreUnfixed: false
  # insecure the flag to skip verifying registry certificate
  insecure: false
  # gitHubToken the GitHub access token to download Trivy DB
  #
  # Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
  # It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
  # in the local file system (`/home/scanner/.cache/trivy/db/trivy.db`). In addition, the database contains the update
  # timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one.
  # Currently, the database is updated every 12 hours and published as a new release to GitHub.
  #
  # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
  # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
  # https://developer.github.com/v3/#rate-limiting
  #
  # You can create a GitHub token by following the instructions in
  # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  gitHubToken: ""
  # skipUpdate the flag to disable Trivy DB downloads from GitHub
  #
  # You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
  # `/home/scanner/.cache/trivy/db/trivy.db` path.
  skipUpdate: false
  # skipJavaDBUpdate If the flag is enabled you have to manually download the `trivy-java.db` file and mount it in the
  # `/home/scanner/.cache/trivy/java-db/trivy-java.db` path
  #
  skipJavaDBUpdate: false  
  # The offlineScan option prevents Trivy from sending API requests to identify dependencies.
  #
  # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.
  # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't
  # exist in the local repositories. It means a number of detected vulnerabilities might be fewer in offline mode.
  # It would work if all the dependencies are in local.
  # This option doesn’t affect DB download. You need to specify skipUpdate as well as offlineScan in an air-gapped environment.
  offlineScan: false
  # Comma-separated list of what security issues to detect. Possible values are `vuln`, `config` and `secret`. Defaults to `vuln`.
  securityCheck: "vuln"
  # The duration to wait for scan completion
  timeout: 5m0s
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1
      memory: 1Gi
  extraEnvVars: []
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Spread Pods across failure-domains like regions, availability zones or nodes
  topologySpreadConstraints: []
  # - maxSkew: 1
  #   topologyKey: topology.kubernetes.io/zone
  #   nodeTaintsPolicy: Honor
  #   whenUnsatisfiable: DoNotSchedule
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}
  ## The priority class to run the pod as
  priorityClassName:

database:
  # if external database is used, set "type" to "external"
  # and fill the connection information in "external" section
  type: internal
  internal:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    image:
      repository: goharbor/harbor-db
      tag: v2.10.3
    # The initial superuser password for internal database
    password: "changeit"
    # The size limit for Shared memory, pgSQL use it for shared_buffer
    # More details see:
    # https://github.com/goharbor/harbor/issues/15034
    shmSizeLimit: 512Mi
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    # The timeout used in livenessProbe; 1 to 5 seconds
    livenessProbe:
      timeoutSeconds: 1
    # The timeout used in readinessProbe; 1 to 5 seconds
    readinessProbe:
      timeoutSeconds: 1
    extraEnvVars: []
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## The priority class to run the pod as
    priorityClassName:
    initContainer:
      migrator: {}
      # resources:
      #  requests:
      #    memory: 128Mi
      #    cpu: 100m
      permissions: {}
      # resources:
      #  requests:
      #    memory: 128Mi
      #    cpu: 100m
  external:
    host: "192.168.0.1"
    port: "5432"
    username: "user"
    password: "password"
    coreDatabase: "registry"
    # if using existing secret, the key must be "password"
    existingSecret: ""
    # "disable" - No SSL
    # "require" - Always SSL (skip verification)
    # "verify-ca" - Always SSL (verify that the certificate presented by the
    # server was signed by a trusted CA)
    # "verify-full" - Always SSL (verify that the certification presented by the
    # server was signed by a trusted CA and the server host name matches the one
    # in the certificate)
    sslmode: "disable"
  # The maximum number of connections in the idle connection pool per pod (core+exporter).
  # If it <=0, no idle connections are retained.
  maxIdleConns: 100
  # The maximum number of open connections to the database per pod (core+exporter).
  # If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgre of harbor.
  maxOpenConns: 900
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}

redis:
  # if external Redis is used, set "type" to "external"
  # and fill the connection information in "external" section
  type: internal
  internal:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    image:
      repository: goharbor/redis-photon
      tag: v2.10.3
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    extraEnvVars: []
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## The priority class to run the pod as
    priorityClassName:
    # # jobserviceDatabaseIndex defaults to "1"
    # # registryDatabaseIndex defaults to "2"
    # # trivyAdapterIndex defaults to "5"
    # # harborDatabaseIndex defaults to "0", but it can be configured to "6", this config is optional
    # # cacheLayerDatabaseIndex defaults to "0", but it can be configured to "7", this config is optional
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    trivyAdapterIndex: "5"
    # harborDatabaseIndex: "6"
    # cacheLayerDatabaseIndex: "7"
  external:
    # support redis, redis+sentinel
    # addr for redis: <host_redis>:<port_redis>
    # addr for redis+sentinel: <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
    addr: "192.168.0.2:6379"
    # The name of the set of Redis instances to monitor, it must be set to support redis+sentinel
    sentinelMasterSet: ""
    # The "coreDatabaseIndex" must be "0" as the library Harbor
    # used doesn't support configuring it
    # harborDatabaseIndex defaults to "0", but it can be configured to "6", this config is optional
    # cacheLayerDatabaseIndex defaults to "0", but it can be configured to "7", this config is optional
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    trivyAdapterIndex: "5"
    # harborDatabaseIndex: "6"
    # cacheLayerDatabaseIndex: "7"
    # username field can be an empty string, and it will be authenticated against the default user
    username: ""
    password: ""
    # If using existingSecret, the key must be REDIS_PASSWORD
    existingSecret: ""
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}

exporter:
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  extraEnvVars: []
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  image:
    repository: goharbor/harbor-exporter
    tag: v2.10.3
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Spread Pods across failure-domains like regions, availability zones or nodes
  topologySpreadConstraints: []
  # - maxSkew: 1
  #   topologyKey: topology.kubernetes.io/zone
  #   nodeTaintsPolicy: Honor
  #   whenUnsatisfiable: DoNotSchedule
  cacheDuration: 23
  cacheCleanInterval: 14400
  ## The priority class to run the pod as
  priorityClassName:

metrics:
  enabled: false
  core:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  jobservice:
    path: /metrics
    port: 8001
  exporter:
    path: /metrics
    port: 8001
  ## Create prometheus serviceMonitor to scrape harbor metrics.
  ## This requires the monitoring.coreos.com/v1 CRD. Please see
  ## https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md
  ##
  serviceMonitor:
    enabled: false
    additionalLabels: {}
    # Scrape interval. If not set, the Prometheus default scrape interval is used.
    interval: ""
    # Metric relabel configs to apply to samples before ingestion.
    metricRelabelings:
      []
      # - action: keep
      #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
      #   sourceLabels: [__name__]
    # Relabel configs to apply to samples before ingestion.
    relabelings:
      []
      # - sourceLabels: [__meta_kubernetes_pod_node_name]
      #   separator: ;
      #   regex: ^(.*)$
      #   targetLabel: nodename
      #   replacement: $1
      #   action: replace

trace:
  enabled: false
  # trace provider: jaeger or otel
  # jaeger should be 1.26+
  provider: jaeger
  # set sample_rate to 1 if you wanna sampling 100% of trace data; set 0.5 if you wanna sampling 50% of trace data, and so forth
  sample_rate: 1
  # namespace used to differentiate different harbor services
  # namespace:
  # attributes is a key value dict contains user defined attributes used to initialize trace provider
  # attributes:
  #   application: harbor
  jaeger:
    # jaeger supports two modes:
    #   collector mode(uncomment endpoint and uncomment username, password if needed)
    #   agent mode(uncomment agent_host and agent_port)
    endpoint: http://hostname:14268/api/traces
    # username:
    # password:
    # agent_host: hostname
    # export trace data by jaeger.thrift in compact mode
    # agent_port: 6831
  otel:
    endpoint: hostname:4318
    url_path: /v1/traces
    compression: false
    insecure: true
    # timeout is in seconds
    timeout: 10

# cache layer configurations
# if this feature enabled, harbor will cache the resource
# `project/project_metadata/repository/artifact/manifest` in the redis
# which help to improve the performance of high concurrent pulling manifest.
cache:
  # default is not enabled.
  enabled: false
  # default keep cache for one day.
  expireHours: 24
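
Before running the install, it can help to render the chart locally with the edited values to catch indentation or template errors early; a minimal sketch, assuming the configuration above is saved as values.yaml inside the harbor chart directory:

# Lint the chart against the edited values
helm lint . -f values.yaml

# Render the manifests locally without installing anything
helm template harbor . -f values.yaml -n harbor | less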

4.4 Installation

# Settings that need to be changed
expose:
  type: ingress
  tls:
    enabled: true
  ingress:
    hosts:
      core: harbor.dongdong.com

externalURL: https://harbor.dongdong.com

persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      storageClass: "harbor-nfs" # 一定要和pv中的storageClassName 对应
      size: 50Gi # 一定要和pv中的storage 对应 不然绑定不上
    jobservice:
      storageClass: "harbor-nfs"
      size: 10Gi
    database:
      storageClass: "harbor-nfs"
      size: 10Gi
    redis:
      storageClass: "harbor-nfs"
      size: 10Gi
    trivy:
      storageClass: "harbor-nfs"
      size: 10Gi
      
      
# Create the namespace
kubectl create ns harbor
# Run from inside the harbor chart directory, passing the values file edited above
helm install harbor . -f values.yaml -n harbor

# Output indicating a successful install
[root@kube-master harbor]# helm install harbor . -f values.yaml -n harbor
NAME: harbor
LAST DEPLOYED: Thu Sep 12 20:30:37 2024
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at harbor.dongdong.com
For more details, please visit https://github.com/goharbor/harbor
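
Since the service is exposed through an ingress, the client machine needs to resolve harbor.dongdong.com to a node where ingress-nginx is listening. A minimal check and hosts entry, assuming 192.168.100.100 is a reachable cluster node in this environment (adjust the IP to your own setup):

# Confirm the ingress was created and shows the expected host
kubectl get ingress -n harbor

# On the client machine, map the Harbor domain to a cluster node IP
echo "192.168.100.100 harbor.dongdong.com" >> /etc/hosts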

4.5 Verify that the Pods are running

# After a while, two of the Pods are still not ready

[root@kube-master harbor]# kubectl get pod -n harbor 
NAME                                 READY   STATUS             RESTARTS      AGE
harbor-core-6969ffd694-xd8h4         0/1     Running            4 (58s ago)   6m13s
harbor-database-0                    1/1     Running            0             6m13s
harbor-jobservice-79cfc6f696-pzpqj   0/1     CrashLoopBackOff   4 (73s ago)   6m13s
harbor-portal-675d4c5858-sgxw7       1/1     Running            0             6m13s
harbor-redis-0                       1/1     Running            0             6m13s
harbor-registry-5859f958cf-vx9fg     2/2     Running            0             6m13s
harbor-trivy-0                       1/1     Running            0             6m13s


 kubectl describe pod harbor-core-6969ffd694-xd8h4 -n harbor
 
 # The error message is:
2024-09-12T13:16:36Z [ERROR] [/pkg/config/rest/rest.go:50]: Failed on load rest config err:Get "http://harbor-core:80/api/v2.0/internalconfig": dial tcp 10.103.74.3:80: connect: connection refused, url:http://harbor-core:80/api/v2.0/internalconfig
panic: failed to load configuration, error: failed to load rest config

 
 # Restart coredns so that the harbor-core service name resolves correctly again
 [root@kube-master harbor]# kubectl get pod -n kube-system 
NAME                                  READY   STATUS    RESTARTS        AGE
coredns-66f779496c-m4qjx              1/1     Running   4 (5h52m ago)   34d
coredns-66f779496c-s8lxx              1/1     Running   4 (5h52m ago)   34d
etcd-kube-master                      1/1     Running   4 (5h52m ago)   34d
kube-apiserver-kube-master            1/1     Running   4 (5h52m ago)   34d
kube-controller-manager-kube-master   1/1     Running   5 (5h52m ago)   34d
kube-proxy-2hrzj                      1/1     Running   3 (5h52m ago)   34d
kube-proxy-h6sft                      1/1     Running   4 (5h52m ago)   34d
kube-proxy-vcdzq                      1/1     Running   3 (5h52m ago)   34d
kube-scheduler-kube-master            1/1     Running   5 (5h52m ago)   34d

[root@kube-master harbor]# kubectl delete pod coredns-66f779496c-m4qjx coredns-66f779496c-s8lxx -n kube-system 
pod "coredns-66f779496c-m4qjx" deleted
pod "coredns-66f779496c-s8lxx" deleted
