DolphinScheduler: Enabling the Resource Center with MinIO (S3 Support)

DolphinScheduler 3.0.5 and earlier simply cannot speak the S3 protocol.

If you configure it per the documentation and start it up, both the master and worker nodes fail with missing-dependency (missing JAR) errors.
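A quick way to confirm the missing-JAR suspicion is to look inside a service pod. This is a sketch, not from the original post; the pod label and lib path are assumptions:

```bash
# Hypothetical diagnostic (pod label and lib path are assumptions):
# check whether the hadoop-aws / AWS SDK jars are present in a master pod.
MASTER_POD=$(kubectl get pods -n dolphinscheduler \
  -l app.kubernetes.io/component=master -o name | head -n 1)
kubectl exec -n dolphinscheduler "$MASTER_POD" -- \
  sh -c 'ls /opt/dolphinscheduler/libs | grep -E "hadoop-aws|aws-java-sdk" || echo "S3A jars missing"'
```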

So in which version was this fixed?

3.0.6...

Can 3.0.6, configured as the documentation describes, actually start the resource center?

The answer is no, because the configuration keys in the documentation simply do not match the code. I dug through the source for about an hour before I understood the changes the version iterations had introduced.

I deploy mainly on Kubernetes, so the configuration files have to follow the K8s deployment layout; the official documentation covers the basic requirements.
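For reference, a minimal Helm deployment sketch against the chart shipped in the source tree (the release name and namespace are assumptions):

```bash
# Minimal deployment sketch using the chart from the source tree.
# Release name, namespace, and chart path are assumptions.
helm install dolphinscheduler ./deploy/kubernetes/dolphinscheduler \
  --namespace dolphinscheduler --create-namespace \
  -f values.yaml
```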

For the resource center, the old documentation gives a configuration that looks like this:

Core configuration for 3.0.5 and earlier:

```yaml
resource.storage.type: S3
fs.defaultFS: s3a://dolphinscheduler
aws.access.key.id: flink_minio_root
aws.secret.access.key: flink_minio_123456
aws.region: us-east-1
aws.endpoint: http://10.233.7.78:9000
```

```yaml
conf:
  common:
    # user data local directory path, please make sure the directory exists and have read write permissions
    data.basedir.path: /tmp/dolphinscheduler

    # resource storage type: HDFS, S3, NONE
    resource.storage.type: S3

    # resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration,
    # please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
    resource.upload.path: /dolphinscheduler

    # whether to startup kerberos
    hadoop.security.authentication.startup.state: false

    # java.security.krb5.conf path
    java.security.krb5.conf.path: /opt/krb5.conf

    # login user from keytab username
    login.user.keytab.username: hdfs-mycluster@ESZ.COM

    # login user from keytab path
    login.user.keytab.path: /opt/hdfs.headless.keytab

    # kerberos expire time, the unit is hour
    kerberos.expire.time: 2

    # resource view suffixs
    #resource.view.suffixs: txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js

    # if resource.storage.type: HDFS, the user must have the permission to create directories under the HDFS root path
    hdfs.root.user: hdfs

    # if resource.storage.type: S3, the value like: s3a://dolphinscheduler;
    # if resource.storage.type: HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
    fs.defaultFS: s3a://dolphinscheduler
    aws.access.key.id: flink_minio_root
    aws.secret.access.key: flink_minio_123456
    aws.region: us-east-1
    aws.endpoint: http://10.233.7.78:9000
```

And the corresponding S3 entries in the common (configmap) section:

```yaml
RESOURCE_STORAGE_TYPE: "S3"
RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
FS_DEFAULT_FS: "s3a://dolphinscheduler"
FS_S3A_ENDPOINT: "http://10.233.7.78:9000"
FS_S3A_ACCESS_KEY: "flink_minio_root"
FS_S3A_SECRET_KEY: "flink_minio_123456"
```

The full block in values.yaml:

```yaml
common:
  ## Configmap
  configmap:
    DOLPHINSCHEDULER_OPTS: ""
    DATA_BASEDIR_PATH: "/tmp/dolphinscheduler"
    RESOURCE_STORAGE_TYPE: "S3"
    RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
    FS_DEFAULT_FS: "s3a://dolphinscheduler"
    FS_S3A_ENDPOINT: "http://10.233.7.78:9000"
    FS_S3A_ACCESS_KEY: "flink_minio_root"
    FS_S3A_SECRET_KEY: "flink_minio_123456"
    HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE: "false"
    JAVA_SECURITY_KRB5_CONF_PATH: "/opt/krb5.conf"
    LOGIN_USER_KEYTAB_USERNAME: "hdfs@HADOOP.COM"
    LOGIN_USER_KEYTAB_PATH: "/opt/hdfs.keytab"
    KERBEROS_EXPIRE_TIME: "2"
    HDFS_ROOT_USER: "hdfs"
    RESOURCE_MANAGER_HTTPADDRESS_PORT: "8088"
    YARN_RESOURCEMANAGER_HA_RM_IDS: ""
    YARN_APPLICATION_STATUS_ADDRESS: "http://ds1:%s/ws/v1/cluster/apps/%s"
    YARN_JOB_HISTORY_STATUS_ADDRESS: "http://ds1:19888/ws/v1/history/mapreduce/jobs/%s"
    DATASOURCE_ENCRYPTION_ENABLE: "false"
    DATASOURCE_ENCRYPTION_SALT: "!@#$%^&*"
    SUDO_ENABLE: "true"
    # dolphinscheduler env
    HADOOP_HOME: "/opt/soft/hadoop"
    HADOOP_CONF_DIR: "/opt/soft/hadoop/etc/hadoop"
    SPARK_HOME1: "/opt/soft/spark1"
    SPARK_HOME2: "/opt/soft/spark2"
    PYTHON_HOME: "/usr/bin/python"
    JAVA_HOME: "/usr/local/openjdk-8"
    HIVE_HOME: "/opt/soft/hive"
    FLINK_HOME: "/opt/soft/flink"
    DATAX_HOME: "/opt/soft/datax/bin/datax.py"
```

If you compare the configuration above with the values.yaml in the 3.0.6 source, you will find it is exactly the same!

But once you deploy and start up, it errors out: the resource center service cannot start because of a NullPointerException...

At this point I was baffled. Everything was configured per the documentation, so why a null pointer?
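Before blaming the code, it is worth confirming that the rendered ConfigMap really carries what you set. A hedged sketch; the ConfigMap name and namespace are assumptions tied to your Helm release:

```bash
# Verify the rendered ConfigMap actually carries the values you set.
# ConfigMap name and namespace are assumptions tied to the release name.
kubectl get configmap dolphinscheduler-common -n dolphinscheduler -o yaml \
  | grep -E 'RESOURCE_|FS_S3A|FS_DEFAULT'
```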

OK, what follows is how I worked through the source.

I started with the constants class.

Constants file path:

dolphinscheduler/dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/TaskConstants.java

Key definitions:

```java
/**
 * aws config
 */
public static final String AWS_ACCESS_KEY_ID = "resource.aws.access.key.id";
public static final String AWS_SECRET_ACCESS_KEY = "resource.aws.secret.access.key";
public static final String AWS_REGION = "resource.aws.region";
```

These do not match the documentation at all; the property names have changed. Now what?
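This presumably also explains the NullPointerException: the code looks up `resource.aws.*` keys, while the documented configuration only supplies the old `aws.*` names, so the lookup comes back null. A minimal, self-contained sketch of that failure mode (illustrative only, not DolphinScheduler's actual lookup code):

```java
import java.util.Properties;

// Illustrative sketch only: not DolphinScheduler's actual lookup code.
public class S3KeyMismatchSketch {

    public static void main(String[] args) {
        Properties conf = new Properties();

        // The documented (old) configuration supplies this key...
        conf.setProperty("aws.access.key.id", "flink_minio_root");

        // ...but the 3.0.6 code reads the new TaskConstants key, so this is null.
        String accessKey = conf.getProperty("resource.aws.access.key.id");

        // Any use of the missing value then fails, matching the
        // "resource center cannot start: NullPointerException" symptom.
        System.out.println(accessKey.trim()); // throws NullPointerException
    }
}
```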

With no better option, I kept hunting through the docs.

In the new-style configuration it is easy to see that a whole set of storage backends is now supported:

Source:

https://github.com/apache/dolphinscheduler/blob/7973324229826d1b9c7db81e14c89c8b5d621c28/deploy/kubernetes/dolphinscheduler/values.yaml#L152

  • S3
  • OSS
  • GCS
  • ABS

```yaml
conf:
  common:
    # user data local directory path, please make sure the directory exists and have read write permissions
    data.basedir.path: /tmp/dolphinscheduler

    # resource storage type: HDFS, S3, OSS, GCS, ABS, NONE
    resource.storage.type: S3

    # resource store on HDFS/S3 path, resource file will store to this base path, self configuration,
    # please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
    resource.storage.upload.base.path: /dolphinscheduler

    # The AWS access key. if resource.storage.type=S3 or use EMR-Task, This configuration is required
    resource.aws.access.key.id: minioadmin

    # The AWS secret access key. if resource.storage.type=S3 or use EMR-Task, This configuration is required
    resource.aws.secret.access.key: minioadmin

    # The AWS Region to use. if resource.storage.type=S3 or use EMR-Task, This configuration is required
    resource.aws.region: ca-central-1

    # The name of the bucket. You need to create them by yourself. Otherwise, the system cannot start.
    # All buckets in Amazon S3 share a single namespace; ensure the bucket is given a unique name.
    resource.aws.s3.bucket.name: dolphinscheduler

    # You need to set this parameter when private cloud s3. If S3 uses public cloud, you only need to set
    # resource.aws.region or set to the endpoint of a public cloud such as S3.cn-north-1.amazonaws.com.cn
    resource.aws.s3.endpoint: http://minio:9000

    # alibaba cloud access key id, required if you set resource.storage.type=OSS
    resource.alibaba.cloud.access.key.id: <your-access-key-id>

    # alibaba cloud access key secret, required if you set resource.storage.type=OSS
    resource.alibaba.cloud.access.key.secret: <your-access-key-secret>

    # alibaba cloud region, required if you set resource.storage.type=OSS
    resource.alibaba.cloud.region: cn-hangzhou

    # oss bucket name, required if you set resource.storage.type=OSS
    resource.alibaba.cloud.oss.bucket.name: dolphinscheduler

    # oss bucket endpoint, required if you set resource.storage.type=OSS
    resource.alibaba.cloud.oss.endpoint: https://oss-cn-hangzhou.aliyuncs.com

    # if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
    resource.hdfs.root.user: hdfs

    # if resource.storage.type=S3, the value like: s3a://dolphinscheduler;
    # if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
    resource.hdfs.fs.defaultFS: hdfs://mycluster:8020

    # whether to startup kerberos
    hadoop.security.authentication.startup.state: false

    # java.security.krb5.conf path
    java.security.krb5.conf.path: /opt/krb5.conf

    # login user from keytab username
    login.user.keytab.username: hdfs-mycluster@ESZ.COM

    # login user from keytab path
    login.user.keytab.path: /opt/hdfs.headless.keytab

    # kerberos expire time, the unit is hour
    kerberos.expire.time: 2

    # resourcemanager port, the default value is 8088 if not specified
    resource.manager.httpaddress.port: 8088

    # if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
    yarn.resourcemanager.ha.rm.ids: 192.168.xx.xx,192.168.xx.xx

    # if resourcemanager HA is enabled or not use resourcemanager, please keep the default value;
    # If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
    yarn.application.status.address: http://ds1:%s/ws/v1/cluster/apps/%s

    # job history status url when application number threshold is reached(default 10000, maybe it was set to 1000)
    yarn.job.history.status.address: http://ds1:19888/ws/v1/history/mapreduce/jobs/%s

    # datasource encryption enable
    datasource.encryption.enable: false

    # datasource encryption salt
    datasource.encryption.salt: '!@#$%^&*'

    # data quality option
    data-quality.jar.name: dolphinscheduler-data-quality-dev-SNAPSHOT.jar

    # Whether hive SQL is executed in the same session
    support.hive.oneSession: false

    # use sudo or not, if set true, executing user is tenant user and deploy user needs sudo permissions;
    # if set false, executing user is the deploy user and doesn't need sudo permissions
    sudo.enable: true

    # development state
    development.state: false

    # rpc port
    alert.rpc.port: 50052

    # set path of conda.sh
    conda.path: /opt/anaconda3/etc/profile.d/conda.sh

    # Task resource limit state
    task.resource.limit.state: false

    # mlflow task plugin preset repository
    ml.mlflow.preset_repository: https://github.com/apache/dolphinscheduler-mlflow

    # mlflow task plugin preset repository version
    ml.mlflow.preset_repository_version: "main"

    # way to collect applicationId: log, aop
    appId.collect: log
```
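Note the comment above: the S3 bucket must exist before the system starts. With MinIO, a typical way to pre-create it is the `mc` client. This is a sketch; the alias name is arbitrary, and the endpoint/credentials reuse the values from this post:

```bash
# Pre-create the bucket on MinIO; the alias name is arbitrary, and the
# endpoint/credentials reuse the values used elsewhere in this post.
mc alias set ds-minio http://10.233.7.78:9000 flink_minio_root flink_minio_123456
mc mb ds-minio/dolphinscheduler
mc ls ds-minio
```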

There is also an as-yet-unreleased documentation fragment:

https://github.com/apache/dolphinscheduler/blob/7973324229826d1b9c7db81e14c89c8b5d621c28/docs/docs/zh/guide/resource/configuration.md?plain=1#L40

 

None of the above means that 3.0.6 supports these keys; it only shows what the newer versions can support. I kept searching for the parameters this particular version accepts, and finally found them in the configuration reference:

https://github.com/apache/dolphinscheduler/blob/3.0.6-release/docs/docs/zh/architecture/configuration.md?plain=1
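When the docs and the code disagree, the release branch itself is the final arbiter. A quick sketch for checking which `resource.aws.*` keys a given release actually defines (the grep scope is an assumption):

```bash
# Check which resource.aws.* keys the 3.0.6 release branch actually defines.
git clone --depth 1 -b 3.0.6-release https://github.com/apache/dolphinscheduler.git
cd dolphinscheduler
grep -rn "resource.aws" --include="*.java" --include="*.properties" . | head
```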

 

Layering everything found above together, the end result looks like this:

```yaml
conf:
  common:
    # user data local directory path, please make sure the directory exists and have read write permissions
    data.basedir.path: /dolphinscheduler/tmp

    # resource storage type: HDFS, S3, NONE
    resource.storage.type: S3

    # resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration,
    # please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
    resource.upload.path: /dolphinscheduler

    # whether to startup kerberos
    hadoop.security.authentication.startup.state: false

    # java.security.krb5.conf path
    java.security.krb5.conf.path: /opt/krb5.conf

    # login user from keytab username
    login.user.keytab.username: hdfs-mycluster@ESZ.COM

    # login user from keytab path
    login.user.keytab.path: /opt/hdfs.headless.keytab

    # kerberos expire time, the unit is hour
    kerberos.expire.time: 2

    # resource view suffixs
    #resource.view.suffixs: txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js

    # if resource.storage.type: HDFS, the user must have the permission to create directories under the HDFS root path
    hdfs.root.user: hdfs

    # if resource.storage.type: S3, the value like: s3a://dolphinscheduler;
    # if resource.storage.type: HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
    resource.storage.upload.base.path: /dolphinscheduler
    fs.defaultFS: s3a://dolphinscheduler
    resource.aws.access.key.id: flink_minio_root
    resource.aws.secret.access.key: flink_minio_123456
    resource.aws.region: us-east-1
    resource.aws.s3.bucket.name: dolphinscheduler
    resource.aws.s3.endpoint: http://10.233.7.78:9000

    # resourcemanager port, the default value is 8088 if not specified
    resource.manager.httpaddress.port: 8088

    # if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
    yarn.resourcemanager.ha.rm.ids: 192.168.xx.xx,192.168.xx.xx

    # if resourcemanager HA is enabled or not use resourcemanager, please keep the default value;
    # If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
    yarn.application.status.address: http://ds1:%s/ws/v1/cluster/apps/%s

    # job history status url when application number threshold is reached(default 10000, maybe it was set to 1000)
    yarn.job.history.status.address: http://ds1:19888/ws/v1/history/mapreduce/jobs/%s

    # datasource encryption enable
    datasource.encryption.enable: false

    # datasource encryption salt
    datasource.encryption.salt: '!@#$%^&*'

    # data quality option
    data-quality.jar.name: dolphinscheduler-data-quality-3.0.6.jar

    #data-quality.error.output.path: /tmp/data-quality-error-data
```

And the matching configmap section:

```yaml
common:
  ## Configmap
  configmap:
    DOLPHINSCHEDULER_OPTS: ""
    DATA_BASEDIR_PATH: "/dolphinscheduler/tmp"
    RESOURCE_STORAGE_TYPE: "S3"
    RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
    FS_DEFAULT_FS: "s3a://dolphinscheduler"
    FS_S3A_ENDPOINT: "http://10.233.7.78:9000"
    FS_S3A_ACCESS_KEY: "flink_minio_root"
    FS_S3A_SECRET_KEY: "flink_minio_123456"
    HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE: "false"
    JAVA_SECURITY_KRB5_CONF_PATH: "/opt/krb5.conf"
    LOGIN_USER_KEYTAB_USERNAME: "hdfs@HADOOP.COM"
    LOGIN_USER_KEYTAB_PATH: "/opt/hdfs.keytab"
    KERBEROS_EXPIRE_TIME: "2"
    HDFS_ROOT_USER: "hdfs"
    RESOURCE_MANAGER_HTTPADDRESS_PORT: "8088"
    YARN_RESOURCEMANAGER_HA_RM_IDS: ""
    YARN_APPLICATION_STATUS_ADDRESS: "http://ds1:%s/ws/v1/cluster/apps/%s"
    YARN_JOB_HISTORY_STATUS_ADDRESS: "http://ds1:19888/ws/v1/history/mapreduce/jobs/%s"
    DATASOURCE_ENCRYPTION_ENABLE: "false"
    DATASOURCE_ENCRYPTION_SALT: "!@#$%^&*"
    SUDO_ENABLE: "true"
    # dolphinscheduler env
    HADOOP_HOME: "/opt/soft/hadoop"
    HADOOP_CONF_DIR: "/opt/soft/hadoop/etc/hadoop"
    SPARK_HOME1: "/opt/soft/spark1"
    SPARK_HOME2: "/opt/soft/spark2"
    PYTHON_HOME: "/usr/bin/python"
    JAVA_HOME: "/usr/local/openjdk-8"
    HIVE_HOME: "/opt/soft/hive"
    FLINK_HOME: "/opt/soft/flink"
    DATAX_HOME: "/opt/soft/datax"
```

OK. With all of the above in place, the resource center can finally be enabled, and startup no longer throws errors. It was not easy.
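As a final check (a sketch reusing the `mc` alias from the earlier snippet; the object layout simply follows the configured upload base path), upload a small file through the Resource Center UI and confirm it lands in the bucket:

```bash
# After uploading a test file via the Resource Center UI, confirm it
# landed under the configured base path (alias from the earlier sketch).
mc ls --recursive ds-minio/dolphinscheduler
```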

I will just say that DolphinScheduler's documentation and code are managed rather loosely here, and the PMC folks do not seem to have much bandwidth for it. But contributors and beginners alike want reliable documentation that saves everyone some detours...

I hope the DolphinScheduler open source team strengthens this area and introduces mechanisms that create a healthy feedback loop to solve the problem.
