K8s Cluster Deployment - Detailed Steps


Some steps are still brief; I will expand them when I have time.

Installation

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
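
A quick way to confirm the firewall is really off:

systemctl is-active firewalld   # expect "inactive"
systemctl is-enabled firewalld  # expect "disabled"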

Disable swap and SELinux

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl reboot  # reboot for the changes to take effect
free -m           # the swap row should show all zeros if swap is fully off

Add host entries on each node:

cat >> /etc/hosts << EOF
192.168.201.100 master-node
192.168.201.101 work-node-1
192.168.201.102 work-node-2
EOF
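
To confirm the entries resolve, a quick check from any node:

ping -c 1 master-node
ping -c 1 work-node-1
ping -c 1 work-node-2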

Pass bridged IPv4 traffic to iptables:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl --system
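
These keys only exist while the br_netfilter kernel module is loaded; if sysctl --system reports them as missing, load the module first and re-check:

modprobe br_netfilter
lsmod | grep br_netfilter                   # confirm the module is loaded
sysctl net.bridge.bridge-nf-call-iptables   # expect: net.bridge.bridge-nf-call-iptables = 1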

Set up time synchronization

yum install ntpdate -y
ntpdate time.windows.com
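
ntpdate performs a one-shot sync, so clocks will drift again over time; one option is to schedule it periodically (a sketch assuming ntpdate is installed at /usr/sbin/ntpdate):

(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1') | crontab -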

Add the Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
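
Before pinning a version, it can help to list what the repository actually provides:

yum makecache
yum list kubeadm --showduplicates | sort -r | head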

Install a specific version

yum install kubeadm-1.23.1-0 kubectl-1.23.1-0 kubelet-1.23.1-0 -y

Enable and start kubelet (it will crash-loop until kubeadm init runs; that is expected)

systemctl enable kubelet
systemctl start kubelet

Verify the installation

yum list installed | grep kubelet
yum list installed | grep kubeadm
yum list installed | grep kubectl
kubelet --version

Initialization

Initialize the control plane (run on the master node):

kubeadm init --apiserver-advertise-address=192.168.201.100 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

Replace --apiserver-advertise-address with your own master IP. Note that copying the command from a web page can silently turn the double hyphens into Unicode dashes, which kubeadm rejects:

[root@master-node ~]# kubeadm init ‐‐apiserver‐advertise‐address=192.168.201.100 ‐‐image‐repository registry.aliyuncs.com/google_containers ‐‐kubernetes‐version v1.23.1 ‐‐service‐cidr=10.96.0.0/12 ‐‐pod‐network‐cidr=10.244.0.0/16
unknown command "‐‐apiserver‐advertise‐address=192.168.201.100" for "kubeadm init"
To see the stack trace of this error execute with --v=5 or higher

The pasted dashes are wrong; retype the flags by hand.

Successful initialization:

[root@master-node ~]# kubeadm init --apiserver-advertise-address=192.168.201.100 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 192.168.201.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [192.168.201.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [192.168.201.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.503459 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-node as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 0h3sva.210pheplmd2rsotf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.201.100:6443 --token 0h3sva.210pheplmd2rsotf \
        --discovery-token-ca-cert-hash sha256:d50a7454ce983ced7878dd3d55b3052addc394ffd519283796f6e9240decf61c

Then set up kubectl access on the master node:
[root@master-node ~]# mkdir -p $HOME/.kube
[root@master-node ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-node ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master-node ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
master-node   NotReady   control-plane,master   3m48s   v1.23.1

Join the worker nodes

On each worker node, run (as root) the kubeadm join command printed at the end of kubeadm init:

kubeadm join 192.168.201.100:6443 --token 0h3sva.210pheplmd2rsotf \
        --discovery-token-ca-cert-hash sha256:d50a7454ce983ced7878dd3d55b3052addc394ffd519283796f6e9240decf61c

Deploy the pod network (run on the master)


Calico is a pure layer-3 data center networking solution and currently one of the mainstream network plugins for Kubernetes. Download the YAML:

wget https://docs.projectcalico.org/manifests/calico.yaml

After downloading, edit the pod network CIDR it defines (CALICO_IPV4POOL_CIDR) to match the --pod-network-cidr passed to kubeadm init earlier (10.244.0.0/16 in this guide):

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

If the plugin manifest fails to download, the YAML content can be copied from the following link:

calico.yaml_傅华涛Fu的博客-CSDN博客

[root@master-node ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
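
The nodes stay NotReady until the Calico pods are up; one way to watch their progress (the k8s-app label comes from the stock manifest):

kubectl get pods -n kube-system -l k8s-app=calico-node -w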

Check the nodes:


[root@master-node ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
master-node   NotReady   control-plane,master   16m     v1.23.1
work-node-1   NotReady   <none>                 5m8s    v1.23.1
work-node-2   NotReady   <none>                 4m48s   v1.23.1

After a short wait, all nodes become Ready:

[root@master-node ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
master-node   Ready    control-plane,master   29m   v1.23.1
work-node-1   Ready    <none>                 17m   v1.23.1
work-node-2   Ready    <none>                 17m   v1.23.1

By default the join token is valid for 24 hours; once it expires it can no longer be used and a new one must be created. Generate a fresh join command directly with:

kubeadm token create --print-join-command
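
It prints a ready-to-run join line; the token and hash below are placeholders for whatever your cluster generates:

kubeadm join 192.168.201.100:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<ca-cert-hash>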

Deploy Nginx

[root@master-node ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master-node ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master-node ~]# kubectl get pod,svc -o wide
NAME                         READY   STATUS             RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
pod/nginx-85b98978db-qmmfh   0/1     ImagePullBackOff   0          90s   10.244.154.1   work-node-1   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        43m   <none>
service/nginx        NodePort    10.100.52.211   <none>        80:30742/TCP   53s   app=nginx
[root@master-node ~]#

If the pod shows ImagePullBackOff, the image pull failed and will be retried automatically; it normally turns Running once the nginx image downloads. The service can then be reached on any node's IP at the NodePort, e.g. 192.168.201.100:30742.
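
A quick check from a shell (the 30742 NodePort comes from the kubectl output above and will differ per cluster):

curl -I http://192.168.201.100:30742   # expect an HTTP/1.1 200 OK header from nginx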

 

Troubleshooting:

1. [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

sysctl -w net.bridge.bridge-nf-call-iptables=1

[知识讲解篇-159] k8s 中为什么要开启bridge-nf-call-iptables - 知乎
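
Note that sysctl -w does not survive a reboot, and the key disappears if br_netfilter is unloaded; one way to make both persistent:

cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
# the /etc/sysctl.d/k8s.conf created earlier is re-applied at boot by sysctl --system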

2. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

[root@master-node ~]# kubeadm init --apiserver-advertise-address=192.168.201.100 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 192.168.201.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [192.168.201.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [192.168.201.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

 

A common cause is a cgroup driver mismatch: from v1.22 on, kubelet defaults to the systemd cgroup driver, while Docker defaults to cgroupfs. Point Docker at systemd in /etc/docker/daemon.json:

{
	"exec-opts": [ "native.cgroupdriver=systemd" ]
}
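
After restarting Docker (next step), the driver change can be verified with:

docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd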

解决K8s安装中节点初始化时 [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ 问题._鹰KING的博客-CSDN博客

Then reload daemons, restart Docker and kubelet, and reset the failed init before re-running kubeadm init:

[root@master-node ~]# systemctl daemon-reload
[root@master-node ~]# systemctl restart docker
[root@master-node ~]# systemctl restart kubelet
[root@master-node ~]# sudo kubeadm reset

