Installing and Deploying Kubernetes with kubeadm


1. Each node needs at least 2 CPU cores and 2 GB of RAM.

2. First, install Docker:

sudo yum install -y docker-ce docker-ce-cli containerd.io


# The command above installs the latest version. The pinned versions below are
# the ones this Kubernetes setup was tested with; keep them consistent across nodes:
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6



sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
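A malformed `/etc/docker/daemon.json` will keep Docker from coming back up after the restart below, so it is worth validating the JSON first. A minimal sketch, shown against a local copy of the config so it runs anywhere; on the node, point `json.tool` at `/etc/docker/daemon.json` instead:

```shell
# Write a copy of the config and validate it; python3 -m json.tool exits
# non-zero on any JSON syntax error, so the OK message only prints when
# the file parses cleanly.
cat > daemon-check.json <<'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool daemon-check.json > /dev/null && echo "daemon.json OK"
```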

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker


# Set a unique hostname on each machine (one line per machine)
hostnamectl set-hostname k8s1

hostnamectl set-hostname k8s2

hostnamectl set-hostname k8s3

# Run the following on every machine

systemctl stop firewalld
systemctl disable firewalld

# Optional
systemctl stop iptables
systemctl disable iptables


# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
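A quick check that swap is really off, a sketch: `/proc/swaps` prints a header line followed by one line per active swap device, so only the header should remain after `swapoff -a`.

```shell
# Count the active swap devices listed below the /proc/swaps header line;
# on a correctly prepared Kubernetes node this should report 0.
active_swaps=$(($(wc -l < /proc/swaps) - 1))
echo "active swap devices: $active_swaps"
```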

# Let iptables see bridged traffic
# (net.ipv4.ip_forward = 1 can also be added to the sysctl file below)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
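To confirm the settings took effect after `sysctl --system`, read the values back. A sketch; only `ip_forward` is shown because the two `bridge-nf` keys exist only once the `br_netfilter` module is actually loaded:

```shell
# On a configured node all three parameters should read 1. ip_forward is
# checked here; read net.bridge.bridge-nf-call-iptables and
# net.bridge.bridge-nf-call-ip6tables the same way after br_netfilter loads.
cat /proc/sys/net/ipv4/ip_forward
```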

# Run the following on every machine to add the Kubernetes yum repository

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF


sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

# Run the following on every machine to download the required images

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
   
chmod +x ./images.sh && ./images.sh

# On every machine, add a hosts entry for the master; change the IP below to your own master's address
echo "192.168.179.51  cluster-endpoint" >> /etc/hosts
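Note that the `>>` append above adds a duplicate line every time it is re-run. A guarded variant, sketched against a local demo file so it is safe to try; on the nodes the target is `/etc/hosts`:

```shell
# Append the mapping only when it is not present yet, keeping re-runs idempotent.
hosts_file=hosts.demo   # stand-in for /etc/hosts in this demo
: > "$hosts_file"
add_mapping() {
  grep -q 'cluster-endpoint' "$1" || echo '192.168.179.51  cluster-endpoint' >> "$1"
}
add_mapping "$hosts_file"
add_mapping "$hosts_file"   # second call adds nothing
grep -c 'cluster-endpoint' "$hosts_file"   # prints 1
```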

Initialize the control plane (run the following on the master node only):



# Control-plane initialization
kubeadm init \
--apiserver-advertise-address=192.168.179.51 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.169.0.0/16

When it finishes, output like the following is printed:


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3

Of the two generated `kubeadm join` commands, the first (with `--control-plane`) joins a machine as an additional control-plane node; the second joins it as a worker node.

On the master, first run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can then inspect the cluster (only the master has been initialized so far):

# List all cluster nodes
kubectl get nodes

# Create resources in the cluster from a manifest file
kubectl apply -f xxxx.yaml

# See which applications the cluster is running.
# kubectl get pods -A is roughly the cluster-wide analogue of docker ps:
# what Docker calls a container, Kubernetes wraps in a Pod.
kubectl get pods -A
[root@k8s1 ~]# curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  198k  100  198k    0     0   255k      0 --:--:-- --:--:-- --:--:--  254k
[root@k8s1 ~]# cat calico.yaml | grep 192.168
            #   value: "192.168.0.0/16"

vim calico.yaml

Uncomment these two fields and set the value to the `--pod-network-cidr` passed to kubeadm init (192.169.0.0/16 here). Keep the YAML indentation aligned!
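The CALICO_IPV4POOL_CIDR edit can also be done non-interactively with `sed`. A sketch, demonstrated on a two-line snippet of the manifest so the transform is visible; against the real file, run the same `sed` on `calico.yaml`. Stripping the leading `# ` is what keeps the YAML indentation aligned:

```shell
# Reproduce the two commented lines from calico.yaml in a local snippet.
cat > snippet.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment both lines and point the value at the --pod-network-cidr used
# during kubeadm init (192.169.0.0/16 in this guide).
sed -i \
  -e 's|# \(- name: CALICO_IPV4POOL_CIDR\)|\1|' \
  -e 's|#   \(value:\) "192.168.0.0/16"|  \1 "192.169.0.0/16"|' \
  snippet.yaml
cat snippet.yaml
```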

kubectl apply -f calico.yaml

# Print a fresh join command (the token printed by kubeadm init expires
# after 24 hours by default; --ttl 0 creates one that never expires)

kubeadm token create --print-join-command --ttl 0

Run the join command on each worker node, then check from the master:

kubectl get nodes

kubectl get pods -A

Wait until every node's status is Ready and every pod's status is Running; the cluster is then fully up.
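The wait for nodes to become Ready can be scripted. The filter is shown here against sample `kubectl get nodes --no-headers` output (hypothetical node names) so it runs without a cluster; on the master, pipe the real command through the same `awk`:

```shell
# Count nodes whose STATUS column is not "Ready"; 0 means the cluster is up.
sample='k8s1   Ready      control-plane,master   5m    v1.20.9
k8s2   Ready      <none>                 2m    v1.20.9
k8s3   NotReady   <none>                 30s   v1.20.9'
not_ready=$(printf '%s\n' "$sample" | awk '$2 != "Ready"' | wc -l)
echo "nodes not yet Ready: $not_ready"   # 1 node (k8s3) is not Ready in this sample
```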

Kubernetes is now installed; many companies run it exactly like this, in its bare form. If the raw CLI makes the cluster's workings hard to visualize, add a graphical dashboard as follows.

Deploy the Dashboard

Run `vim recommended.yaml`, enter `:set paste` and press Enter to switch vim into paste mode, then paste in the manifest below:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

kubectl apply -f recommended.yaml

kubectl get pods -A

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change the service type to NodePort so the Dashboard is reachable from outside the cluster: in the editor, type /type and press Enter to jump to the type field, change ClusterIP to NodePort, and save.

`kubectl get svc -n kubernetes-dashboard` then shows the port the Dashboard is exposed on (30441 in this run).

If the browser blocks the page because of the self-signed certificate, type thisisunsafe directly on the warning page (a Chrome quirk) to proceed.

The login page then asks for a token. Prepare a manifest for an admin access account:

vim dash-user.yaml

# Create an access account (dash-user.yaml)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dash-user.yaml
 

Get the login token:

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

Copy the token it prints and paste it into the Dashboard login page.

If you get a 401 error, copy the token again and re-paste it.
