[Complete k8s Hands-on Tutorial 3] Deploying KubeSphere on a k8s Cluster

Series note: this series is complete. If it helps you, please like, bookmark, and comment.
A word to readers: even the smallest sail can voyage far!

  1. [Complete k8s Hands-on Tutorial 0] Preface
  2. [Complete k8s Hands-on Tutorial 1] Source Code Management with Coding
  3. [Complete k8s Hands-on Tutorial 2] Building a Managed k8s Cluster on Tencent Cloud
  4. [Complete k8s Hands-on Tutorial 3] Deploying KubeSphere on a k8s Cluster
  5. [Complete k8s Hands-on Tutorial 4] Deploying a Project to k8s with KubeSphere
  6. [Complete k8s Hands-on Tutorial 5] Network Service Configuration (NodePort/LoadBalancer/Ingress)
  7. [Complete k8s Hands-on Tutorial 6] Full Practice: Deploying a federated_download Project

The KubeSphere official website provides installation documentation.

1 Cluster Configuration

KubeSphere 3.3 requires Internet access to the cluster to be enabled. This step was not part of the earlier practice, so it had not been configured yet.

Enable it on the cluster's settings page and click Save; public access to the cluster is now turned on.

The cluster connection can be verified with the kubectl command-line tool:

ubuntu@VM-1-13-ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.6-tke.27", GitCommit:"9921bde307511509f8cbdf2391339b33a1207ba7", GitTreeState:"clean", BuildDate:"2022-10-08T04:36:47Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.6-tke.27", GitCommit:"9921bde307511509f8cbdf2391339b33a1207ba7", GitTreeState:"clean", BuildDate:"2022-10-08T04:02:09Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
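
As an optional extra check (not part of the original walkthrough), you can also list the worker nodes to confirm that kubectl can reach the API server; this assumes the kubeconfig downloaded from the TKE console is already configured locally:

# Optional: list the nodes together with their internal and external IPs
ubuntu@VM-1-13-ubuntu:~$ kubectl get nodes -o wide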

2 Logging in to a Cluster Node from cmd

Note: when the password was set during cluster creation, the username for the cluster nodes was ubuntu.

C:\Users\17211>ssh root@10.0.1.8
ssh: connect to host 10.0.1.8 port 22: Connection timed out

C:\Users\17211>ssh root@192.144.150.57
The authenticity of host '192.144.150.57 (192.144.150.57)' can't be established.
ECDSA key fingerprint is SHA256:3qfu7eftcXL+h9vK/O2G2NSYhjXEnSVw5t9+VZ6RKbc.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.144.150.57' (ECDSA) to the list of known hosts.
root@192.144.150.57's password:
Permission denied, please try again.

C:\Users\17211>ssh ubuntu@192.144.150.57
ubuntu@192.144.150.57's password:
Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-180-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Tue Oct 18 10:56:46 CST 2022

  System load:  0.08               Users logged in:        0
  Usage of /:   12.5% of 49.15GB   IP address for eth0:    10.0.1.8
  Memory usage: 24%                IP address for docker0: 169.254.32.1
  Swap usage:   0%                 IP address for cbr0:    172.16.0.65
  Processes:    144

 * Super-optimized for small spaces - read how we shrank the memory
   footprint of MicroK8s to make it the smallest full K8s around.

   https://ubuntu.com/blog/microk8s-memory-optimisation

 * Canonical Livepatch is available for installation.
   - Reduce system reboots and improve kernel security. Activate at:
     https://ubuntu.com/livepatch

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.'

ubuntu@VM-1-8-ubuntu:~$

From this we can see:

  1. You cannot log in through a node's private IP; you must use its public IP.
  2. The username is ubuntu, not root.

3 Installation

You can use the ks-installer project on GitHub to deploy KubeSphere on an existing Kubernetes cluster:

ubuntu@VM-1-8-ubuntu:~$ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
namespace/kubesphere-system created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created

The installer has been applied.
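
To confirm that the installer pod has actually started (an extra check, not part of the original steps), you can look at the kubesphere-system namespace:

# Check that the ks-installer pod is up and running
ubuntu@VM-1-8-ubuntu:~$ kubectl get pods -n kubesphere-system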

4 Download, Modify, and Apply the Configuration File

  1. Download the configuration file:
 ubuntu@VM-1-8-ubuntu:~$ wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
--2022-10-18 11:02:28--  https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/196956614/9cb8209d-7f4e-48d3-bf46-494d607fa991?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221018%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221018T030228Z&X-Amz-Expires=300&X-Amz-Signature=7514391e069f5db43a08250e65ce4661d2337a20363fe730251c0cbbe75b5d61&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=196956614&response-content-disposition=attachment%3B%20filename%3Dcluster-configuration.yaml&response-content-type=application%2Foctet-stream [following]
--2022-10-18 11:02:29--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/196956614/9cb8209d-7f4e-48d3-bf46-494d607fa991?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221018%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221018T030228Z&X-Amz-Expires=300&X-Amz-Signature=7514391e069f5db43a08250e65ce4661d2337a20363fe730251c0cbbe75b5d61&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=196956614&response-content-disposition=attachment%3B%20filename%3Dcluster-configuration.yaml&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.111.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10021 (9.8K) [application/octet-stream]
Saving to: ‘cluster-configuration.yaml’

cluster-configuration.yaml    100%[=================================================>]   9.79K  --.-KB/s    in 0.003s

2022-10-18 11:02:29 (2.79 MB/s) - ‘cluster-configuration.yaml’ saved [10021/10021]

  2. Modify the configuration file: the PVC sizes need to be changed to multiples of 10.
 ubuntu@VM-1-8-ubuntu:~$ vim cluster-configuration.yaml   
--------------------------------------------------------------------
    redis:
      enabled: false
      enableHA: false
      volumeSize: 20Gi   # Redis PVC size.
    openldap:
      enabled: false
      volumeSize: 20Gi   # openldap PVC size.
    minio:
      volumeSize: 20Gi   # Minio PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
--------------------------------
In this section, only jenkinsVolumeSize needs to be changed (to 10Gi):
    # resources: {}
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 10Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 1200m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 1600m
    jenkinsJavaOpts_MaxRAM: 2g

Finally, just save and exit.
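
Before applying the file, a quick check (my own suggestion, not from the original post) is to list every PVC size field and make sure each value is a multiple of 10:

# List all volumeSize settings in the configuration file (case-insensitive match)
ubuntu@VM-1-8-ubuntu:~$ grep -in "volumesize" cluster-configuration.yaml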

  3. Apply the configuration file:
 ubuntu@VM-1-8-ubuntu:~$ kubectl apply -f cluster-configuration.yaml
clusterconfiguration.installer.kubesphere.io/ks-installer created

  4. Watch the installer logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

The installation takes a while to finish, so be patient.
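
While waiting, another way to gauge progress (an optional extra, not part of the original text) is to check whether the pods in the KubeSphere namespaces have reached the Running state:

# List pods across all namespaces; wait until the KubeSphere pods are Running or Completed
ubuntu@VM-1-8-ubuntu:~$ kubectl get pods -A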

5 Log in Once the Installation Finishes

# Access the web console (replace the IP with your own node's public IP)
http://82.157.47.240:30880
# Username: admin   Password: P@88w0rd

Success!
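
If you are not sure which NodePort the console is exposed on, it can be looked up from the ks-console service in the kubesphere-system namespace (by default it listens on NodePort 30880):

# Find the NodePort of the KubeSphere web console
ubuntu@VM-1-8-ubuntu:~$ kubectl get svc ks-console -n kubesphere-system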

6 Uninstall

Uninstall script:

#!/usr/bin/env bash

function delete_sure(){
  cat << eof
$(echo -e "\033[1;36mNote:\033[0m")
Delete the KubeSphere cluster, including the module kubesphere-system kubesphere-devops-system kubesphere-monitoring-system kubesphere-logging-system openpitrix-system.
eof

read -p "Please reconfirm that you want to delete the KubeSphere cluster.  (yes/no) " ans
while [[ "x"$ans != "xyes" && "x"$ans != "xno" ]]; do
    read -p "Please reconfirm that you want to delete the KubeSphere cluster.  (yes/no) " ans
done

if [[ "x"$ans == "xno" ]]; then
    exit
fi
}


delete_sure

# delete ks-install
kubectl delete deploy ks-installer -n kubesphere-system 2>/dev/null

# delete helm
for namespaces in kubesphere-system kubesphere-devops-system kubesphere-monitoring-system kubesphere-logging-system openpitrix-system kubesphere-monitoring-federated
do
  helm list -n $namespaces | grep -v NAME | awk '{print $1}' | sort -u | xargs -r -L1 helm uninstall -n $namespaces 2>/dev/null
done

# delete kubefed
kubectl get cc -n kubesphere-system ks-installer -o jsonpath="{.status.multicluster}" | grep enable
if [[ $? -eq 0 ]]; then
  helm uninstall -n kube-federation-system kubefed 2>/dev/null
  #kubectl delete ns kube-federation-system 2>/dev/null
fi


helm uninstall -n kube-system snapshot-controller 2>/dev/null

# delete kubesphere deployment
kubectl delete deployment -n kubesphere-system `kubectl get deployment -n kubesphere-system -o jsonpath="{.items[*].metadata.name}"` 2>/dev/null

# delete monitor statefulset
kubectl delete prometheus -n kubesphere-monitoring-system k8s 2>/dev/null
kubectl delete statefulset -n kubesphere-monitoring-system `kubectl get statefulset -n kubesphere-monitoring-system -o jsonpath="{.items[*].metadata.name}"` 2>/dev/null
# delete grafana
kubectl delete deployment -n kubesphere-monitoring-system grafana 2>/dev/null
kubectl --no-headers=true get pvc -n kubesphere-monitoring-system -o custom-columns=:metadata.namespace,:metadata.name | grep -E kubesphere-monitoring-system | xargs -n2 kubectl delete pvc -n 2>/dev/null

# delete pvc
pvcs="kubesphere-system|openpitrix-system|kubesphere-devops-system|kubesphere-logging-system"
kubectl --no-headers=true get pvc --all-namespaces -o custom-columns=:metadata.namespace,:metadata.name | grep -E $pvcs | xargs -n2 kubectl delete pvc -n 2>/dev/null


# delete rolebindings
delete_role_bindings() {
  for rolebinding in `kubectl -n $1 get rolebindings -l iam.kubesphere.io/user-ref -o jsonpath="{.items[*].metadata.name}"`
  do
    kubectl -n $1 delete rolebinding $rolebinding 2>/dev/null
  done
}

# delete roles
delete_roles() {
  kubectl -n $1 delete role admin 2>/dev/null
  kubectl -n $1 delete role operator 2>/dev/null
  kubectl -n $1 delete role viewer 2>/dev/null
  for role in `kubectl -n $1 get roles -l iam.kubesphere.io/role-template -o jsonpath="{.items[*].metadata.name}"`
  do
    kubectl -n $1 delete role $role 2>/dev/null
  done
}

# remove useless labels and finalizers
for ns in `kubectl get ns -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl label ns $ns kubesphere.io/workspace-
  kubectl label ns $ns kubesphere.io/namespace-
  kubectl patch ns $ns -p '{"metadata":{"finalizers":null,"ownerReferences":null}}'
  delete_role_bindings $ns
  delete_roles $ns
done

# delete clusters
for cluster in `kubectl get clusters -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch cluster $cluster -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete clusters --all 2>/dev/null

# delete workspaces
for ws in `kubectl get workspaces -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch workspace $ws -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete workspaces --all 2>/dev/null

# delete devopsprojects
for devopsproject in `kubectl get devopsprojects -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch devopsprojects $devopsproject -p '{"metadata":{"finalizers":null}}' --type=merge
done

for pip in `kubectl get pipeline -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch pipeline $pip -n `kubectl get pipeline -A | grep $pip | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2ibinaries in `kubectl get s2ibinaries -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2ibinaries $s2ibinaries -n `kubectl get s2ibinaries -A | grep $s2ibinaries | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2ibuilders in `kubectl get s2ibuilders -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2ibuilders $s2ibuilders -n `kubectl get s2ibuilders -A | grep $s2ibuilders | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2ibuildertemplates in `kubectl get s2ibuildertemplates -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2ibuildertemplates $s2ibuildertemplates -n `kubectl get s2ibuildertemplates -A | grep $s2ibuildertemplates | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2iruns in `kubectl get s2iruns -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2iruns $s2iruns -n `kubectl get s2iruns -A | grep $s2iruns | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

kubectl delete devopsprojects --all 2>/dev/null


# delete validatingwebhookconfigurations
for webhook in ks-events-admission-validate users.iam.kubesphere.io network.kubesphere.io validating-webhook-configuration
do
  kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io $webhook 2>/dev/null
done

# delete mutatingwebhookconfigurations
for webhook in ks-events-admission-mutate logsidecar-injector-admission-mutate mutating-webhook-configuration
do
  kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io $webhook 2>/dev/null
done

# delete users
for user in `kubectl get users -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch user $user -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete users --all 2>/dev/null


# delete helm resources
for resource_type in `echo helmcategories helmapplications helmapplicationversions helmrepos helmreleases`; do
  for resource_name in `kubectl get ${resource_type}.application.kubesphere.io -o jsonpath="{.items[*].metadata.name}"`; do
    kubectl patch ${resource_type}.application.kubesphere.io ${resource_name} -p '{"metadata":{"finalizers":null}}' --type=merge
  done
  kubectl delete ${resource_type}.application.kubesphere.io --all 2>/dev/null
done

# delete workspacetemplates
for workspacetemplate in `kubectl get workspacetemplates.tenant.kubesphere.io -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch workspacetemplates.tenant.kubesphere.io $workspacetemplate -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete workspacetemplates.tenant.kubesphere.io --all 2>/dev/null

# delete federatednamespaces in namespace kubesphere-monitoring-federated
for resource in $(kubectl get federatednamespaces.types.kubefed.io -n kubesphere-monitoring-federated -oname); do
  kubectl patch "${resource}" -p '{"metadata":{"finalizers":null}}' --type=merge -n kubesphere-monitoring-federated
done

# delete crds
for crd in `kubectl get crds -o jsonpath="{.items[*].metadata.name}"`
do
  if [[ $crd == *kubesphere.io ]]; then kubectl delete crd $crd 2>/dev/null; fi
done

# delete relevance ns
for ns in kubesphere-alerting-system kubesphere-controls-system kubesphere-devops-system kubesphere-logging-system kubesphere-monitoring-system kubesphere-monitoring-federated openpitrix-system kubesphere-system
do
  kubectl delete ns $ns 2>/dev/null
done

Create the script:
ubuntu@VM-1-8-ubuntu:~$ vim kubesphere-delete.sh
Make it executable:
ubuntu@VM-1-8-ubuntu:~$ sudo chmod u+x kubesphere-delete.sh
Run the script:
ubuntu@VM-1-8-ubuntu:~$ ./kubesphere-delete.sh
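
After the script finishes, you can verify the cleanup (an extra check of my own) by confirming that the KubeSphere-related namespaces are gone:

# The command should return nothing once the uninstall has completed
ubuntu@VM-1-8-ubuntu:~$ kubectl get ns | grep -E "kubesphere|openpitrix"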
