【1】Deploying KubeSphere on K8s


Contents

1. Environment initialization

【1】Set hostnames and configure name resolution

【2】Time synchronization

【3】Disable the iptables and firewalld services (on all three nodes)

【4】Disable SELinux (on all three nodes)

【5】Disable the swap partition

【6】Tune the Linux kernel parameters

2. Install Docker

【1】Install Docker dependencies

【2】Configure the Docker repository mirror address

【3】List the Docker versions the mirror provides

【4】Install Docker

【5】Enable Docker at boot

【6】Configure a registry mirror (create your own on Alibaba Cloud)

3. Install the Kubernetes components

【1】Switch to a domestic mirror, since the official Kubernetes package source is hosted abroad and slow

【2】Install kubeadm, kubelet, and kubectl

【3】Enable kubelet at boot

【4】Verify the installation

4. Cluster initialization

【1】Pre-pull the images

【2】Initialize the cluster and join the nodes (run on master)

Note down the output:

5. Install the network plugin (master node)

【1】Download the Calico manifest

【2】Join the nodes to the master

【3】After a few patient minutes, check the cluster

6. Install NFS storage

【1】Install NFS on all nodes

【2】Then on the master node:

【3】On the worker nodes

7. Configure the default storage

8. Install metrics-server

9. Install KubeSphere

10. Finally, access port 30880 on any node


1. Environment initialization

【1】Set hostnames and configure name resolution

# Give each machine its own name

hostnamectl set-hostname master
hostnamectl set-hostname node01
hostnamectl set-hostname node02

# To let the cluster nodes reach each other by name later, configure hostname resolution: edit /etc/hosts on all three servers and append the following

172.31.0.2  master
172.31.0.3  node01
172.31.0.4  node02

Verify that every host can ping the others by name.
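For example, from any node (using the hostnames configured above):

# each name should resolve via /etc/hosts and reply
ping -c 3 master
ping -c 3 node01
ping -c 3 node02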

【2】Time synchronization

Kubernetes requires the clocks of all cluster nodes to be precisely in sync. Here we simply use the chronyd service to synchronize time over the network; in production, an internal time server is recommended.

 systemctl start chronyd
 systemctl enable chronyd
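To confirm that chronyd is actually syncing, the chronyc client can be queried (an optional sanity check):

chronyc sources     # list the NTP sources chronyd is using
chronyc tracking    # show the current offset from the reference clock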

【3】Disable the iptables and firewalld services (on all three nodes)

Kubernetes and Docker generate a large number of iptables rules at runtime. To keep the system's own rules from getting tangled up with them, disable the system firewall outright.

 systemctl stop firewalld
 systemctl disable firewalld

【4】Disable SELinux (on all three nodes)

SELinux is a Linux security service; if it is not turned off, all sorts of strange problems crop up during cluster installation.

 setenforce 0

Edit /etc/selinux/config and set SELINUX to disabled

# Note: a reboot is required after this change

SELINUX=disabled
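The same edit can be scripted across the nodes (a sketch, assuming the file still contains the default SELINUX=enforcing line):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
getenforce    # reports Permissive until the reboot, Disabled afterwards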

【5】Disable the swap partition

A swap partition is virtual memory: once physical RAM is exhausted, disk space stands in for it. Enabling a swap device has a very negative effect on performance, so Kubernetes requires every node to disable swap. If swap really cannot be turned off for some reason, this must be declared explicitly with configuration parameters during cluster installation.

Edit the partition configuration file /etc/fstab and comment out the swap line

# Note: a reboot is required after this change

# /etc/fstab
# Created by anaconda on Wed Dec 29 17:42:21 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=8d3c5312-ab8f-425f-8a0e-87aca3402d91 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
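If you prefer not to reboot right away, swap can also be switched off for the running session; the fstab edit above keeps it off permanently:

swapoff -a    # disable all active swap devices immediately
free -h       # the Swap line should now read 0B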

【6】Tune the Linux kernel parameters

Adjust the Linux kernel parameters to enable bridge filtering and IP forwarding.

# Edit /etc/sysctl.d/kubernetes.conf and add the following settings

vi /etc/sysctl.d/kubernetes.conf 

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

# Then apply the settings
sysctl --system    # reloads all sysctl configuration files

Load the bridge filter module and check that it loaded successfully

[root@node01 ~]# modprobe br_netfilter
[root@node01 ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
[root@node01 ~]# 
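Note that modprobe alone does not survive a reboot. One way to load the module automatically at boot (using systemd's modules-load.d mechanism; the file name is arbitrary):

echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf    # loaded at every boot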

2. Install Docker

【1】Install Docker dependencies

yum install -y yum-utils

【2】Configure the Docker repository mirror address

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

【3】List the Docker versions the mirror provides

yum list docker-ce --showduplicates

【4】Install Docker

yum install docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6 -y

【5】Enable Docker at boot

systemctl enable docker && systemctl start docker

【6】Configure a registry mirror (create your own on Alibaba Cloud)

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://oihu2r88.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
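To confirm the mirror configuration took effect, check the daemon info:

docker info | grep -A 1 "Registry Mirrors"    # should print the mirror URL configured above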

3. Install the Kubernetes components

【1】Switch to a domestic mirror, since the official Kubernetes package source is hosted abroad and slow

 

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

【2】Install kubeadm, kubelet, and kubectl

yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes 

【3】Enable kubelet at boot

systemctl enable --now kubelet

【4】Verify the installation

yum list installed | grep kubelet
yum list installed | grep kubeadm
yum list installed | grep kubectl
# Check the installed version: kubelet --version

4. Cluster initialization

【1】Pre-pull the images

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
    kube-apiserver:v1.20.9
    kube-controller-manager:v1.20.9
    kube-scheduler:v1.20.9
    kube-proxy:v1.20.9
    pause:3.2
    etcd:3.4.13-0
    coredns:1.7.0
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
EOF

Make the script executable and run it

chmod  +x ./images.sh && ./images.sh
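Note: this script pulls from registry.cn-hangzhou.aliyuncs.com/google_containers, while kubeadm init below points at registry.aliyuncs.com/google_containers. Both mirrors generally work, but if you want kubeadm to reuse the images pulled above instead of downloading them again, one option is to retag them (a sketch under that assumption):

#!/bin/bash
# Retag the pre-pulled images into the repository kubeadm init will look in
images=(
    kube-apiserver:v1.20.9
    kube-controller-manager:v1.20.9
    kube-scheduler:v1.20.9
    kube-proxy:v1.20.9
    pause:3.2
    etcd:3.4.13-0
    coredns:1.7.0
)
for imageName in ${images[@]} ; do
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName \
               registry.aliyuncs.com/google_containers/$imageName
done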

【2】Initialize the cluster and join the nodes (run on master)

The following operation only needs to be run on the master node (172.31.0.2 is the master).

kubeadm init  \
--apiserver-advertise-address=172.31.0.2  \
--control-plane-endpoint=master  \
--image-repository registry.aliyuncs.com/google_containers  \
--kubernetes-version v1.20.9  \
--service-cidr=10.96.0.0/16  --pod-network-cidr=192.168.0.0/16

Note down the output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join master:6443 --token 8f21i4.47o0wzoivzsyaaqw \
    --discovery-token-ca-cert-hash sha256:e1946aac342d570b78bd48a5718f4eac43b0f58eae17cc7511a1ec4d3a414826 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master:6443 --token 8f21i4.47o0wzoivzsyaaqw \
    --discovery-token-ca-cert-hash sha256:e1946aac342d570b78bd48a5718f4eac43b0f58eae17cc7511a1ec4d3a414826 

Then reboot.
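Note that the join token in the output above expires after 24 hours by default. If a node needs to join later than that, generate a fresh join command on the master:

kubeadm token create --print-join-command    # prints a ready-to-run kubeadm join line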

5. Install the network plugin (master node)

【1】Download the Calico manifest

curl https://docs.projectcalico.org/v3.18/manifests/calico.yaml -O

Then apply it

kubectl apply -f calico.yaml 
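The Calico pods take a while to pull and start; they can be watched coming up with:

kubectl get pods -n kube-system -w    # press Ctrl+C to stop watching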

【2】Join the nodes to the master

To join the other nodes to this master, run the join command from the master's output (on node01 and node02)

kubeadm join master:6443 --token 8f21i4.47o0wzoivzsyaaqw \
    --discovery-token-ca-cert-hash sha256:e1946aac342d570b78bd48a5718f4eac43b0f58eae17cc7511a1ec4d3a414826 

Then wait a while — really, give it a few minutes.

【3】After a few patient minutes, check the cluster

[root@master ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56c7cdffc6-trxkj   1/1     Running   0          26m
kube-system   calico-node-8zsdw                          1/1     Running   0          4m50s
kube-system   calico-node-khp6k                          1/1     Running   0          4m57s
kube-system   calico-node-vkqhc                          1/1     Running   0          26m
kube-system   coredns-7f89b7bc75-65jnr                   1/1     Running   0          39m
kube-system   coredns-7f89b7bc75-lrtqq                   1/1     Running   0          39m
kube-system   etcd-master                                1/1     Running   0          40m
kube-system   kube-apiserver-master                      1/1     Running   0          40m
kube-system   kube-controller-manager-master             1/1     Running   0          40m
kube-system   kube-proxy-9k8ps                           1/1     Running   0          39m
kube-system   kube-proxy-nvzmj                           1/1     Running   0          4m57s
kube-system   kube-proxy-zph7s                           1/1     Running   0          4m50s
kube-system   kube-scheduler-master                      1/1     Running   0          4

Once everything is Running:

[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   43m     v1.20.9
node01   Ready    <none>                 7m55s   v1.20.9
node02   Ready    <none>                 7m48s   v1.20.9

6. Install NFS storage

【1】Install NFS on all nodes

yum install -y nfs-utils

【2】Then on the master node:

# On the NFS master node
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

mkdir -p /nfs/data
systemctl enable rpcbind --now
systemctl enable nfs-server --now

# Make the export configuration take effect
exportfs -r 

Example:

[root@master ~]# echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
[root@master ~]# mkdir -p /nfs/data
[root@master ~]# systemctl enable rpcbind --now
[root@master ~]# systemctl enable nfs-server --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@master ~]# exportfs -r 
[root@master ~]# 
[root@master ~]# exportfs 
/nfs/data         <world>

【3】On the worker nodes

# Check which directories the remote server exports
showmount -e masterIP

# First create a local directory /nfs/data
# Then mount the remote (master) server's /nfs/data onto the local /nfs/data
mkdir -p /nfs/data
mount -t nfs masterIP:/nfs/data /nfs/data

# Write a test file
echo "hello nfs server" > /nfs/data/test.txt

Example:

[root@node02 ~]# showmount -e 172.31.0.2
Export list for 172.31.0.2:
/nfs/data *
[root@node02 ~]# mkdir -p /nfs/data
[root@node02 ~]# mount -t nfs 172.31.0.2:/nfs/data /nfs/data
[root@node02 ~]# ls /nfs/data/
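A mount made this way does not survive a reboot. To remount the export automatically at boot, one option (assuming the same paths) is an /etc/fstab entry on each worker:

172.31.0.2:/nfs/data  /nfs/data  nfs  defaults  0 0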

7. Configure the default storage

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive (back up) the PV's contents when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.0.2 ## your own NFS server address
            - name: NFS_PATH
              value: /nfs/data  ## the directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.0.2
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply the YAML file

[root@master ~]# vi sc.yaml
[root@master ~]# 
[root@master ~]# 
[root@master ~]# kubectl apply -f sc.yaml 
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

Create a PVC (via YAML)

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes: 
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
[root@master ~]# vi pvc.yaml

[root@master ~]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/nginx-pvc created
[root@master ~]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-d5301783-c63f-4998-b14a-e9b3657e1470   200Mi      RWX            nfs-storage    32s
[root@master ~]# 
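To verify end to end that the default StorageClass really provisions volumes, a throwaway pod can mount the claim (a minimal sketch; the pod name and image are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pvc-test
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data                   # mount the provisioned volume into the web root
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nginx-pvc           # the PVC created above

Files the pod writes under /usr/share/nginx/html should appear in a pvc-... subdirectory of /nfs/data on the master.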

8. Install metrics-server

The cluster metrics monitoring component (YAML)

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-insecure-tls
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: docker.io/bitnami/metrics-server:0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
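Save the manifest above to a file and apply it (the file name metrics.yaml is just a placeholder); after a minute or so the metrics API starts answering:

kubectl apply -f metrics.yaml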

[root@master ~]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   169m         8%     1313Mi          69%       
node01   82m          4%     726Mi           38%       
node02   70m          3%     739Mi           38%       

9. Install KubeSphere

First download the two manifests

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

kubesphere-installer.yaml

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
    - cc

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kubeedge.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v3.1.1
        imagePullPolicy: "Always"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time

cluster-configuration.yaml


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
  etcd:
    monitoring: true        # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 172.31.0.2 # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi     # Redis PVC size.
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi # openldap PVC size.
    minioVolumeSize: 20Gi   # Minio PVC size.
    monitoring:
      # type: external      # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
    es:                     # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # The total number of master nodes. Even numbers are not allowed.
      # elasticsearchDataReplicas: 1     # The total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
      logMaxAge: 7          # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash   # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
    port: 30880
  alerting:                 # (2 cores, 1G) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true           # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: true           # Enable or disable the KubeSphere Auditing Log System.
  devops:                   # (2 cores, 2G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true           # Enable or disable the KubeSphere DevOps System.
    jenkinsMemoryLim: 2Gi   # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi # Jenkins memory request.
    jenkinsVolumeSize: 8Gi  # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true           # Enable or disable the KubeSphere Events System.
    ruler:
      enabled: true
      replicas: 2
  logging:                  # (2 cores, 4G) Flexible logging functions are provided for log query, collection and management in a unified console.
    enabled: true           # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:           # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false          # Enable or disable metrics-server.
  monitoring:
    storageClass: ""        # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    prometheusMemoryRequest: 400Mi # Prometheus request memory.
    prometheusVolumeSize: 20Gi     # Prometheus PVC size.
    # alertmanagerReplicas: 1      # AlertManager Replicas.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy:          # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      enabled: true         # Enable or disable network policies.
    ippool:                 # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: calico          # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology:               # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none            # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true    # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: true   # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Install

[root@master ~]# kubectl apply -f kubesphere-installer.yaml 
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
namespace/kubesphere-system created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
[root@master ~]# kubectl apply -f cluster-configuration.yaml 
clusterconfiguration.installer.kubesphere.io/ks-installer created


Check the installation progress

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Fix the problem of the etcd monitoring certificates not being found

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
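Once the installer log ends with the welcome banner, the console Service should be exposed on NodePort 30880, which can be confirmed with:

kubectl get svc ks-console -n kubesphere-system    # TYPE NodePort, PORT(S) 80:30880/TCP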

10. Finally, access port 30880 on any node

Account: admin

Password: P@88w0rd
