How does Apache DolphinScheduler implement automated packaging and standalone/cluster deployment?


Apache DolphinScheduler is an open-source distributed task scheduling system designed to help users automate the scheduling and management of complex workflows. DolphinScheduler supports many task types and can run in either standalone or cluster mode. The following sections describe how to automate packaging and how to deploy DolphinScheduler standalone or as a cluster.

Automated Packaging

Required environment: Maven, JDK
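
Before running the script, a quick check that the toolchain is actually on the PATH avoids a half-finished build. A minimal sketch (the exact JDK/Maven versions required depend on the branch you build):

# fail fast if the build prerequisites are missing
command -v java >/dev/null || { echo "java not found"; exit 1; }
command -v mvn >/dev/null || { echo "mvn not found"; exit 1; }
java -version 2>&1 | head -n 1
mvn -version | head -n 1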

Run the following shell script to pull the code and build the package. The package is produced at: /opt/action/dolphinscheduler/dolphinscheduler-dist/target/apache-dolphinscheduler-dev-SNAPSHOT-bin.tar.gz

# Pull the code and switch to the dev branch
clone(){
sudo su - root <<EOF
mkdir -p /opt/action
cd /opt/action
# the SSH URL requires a configured GitHub key; use https://github.com/apache/dolphinscheduler.git otherwise
git clone git@github.com:apache/dolphinscheduler.git
cd dolphinscheduler
git fetch origin dev
git checkout -b dev origin/dev
#git log --oneline
EOF
}

# Build the package
build(){
sudo su - root <<EOF
cd /opt/action/dolphinscheduler
mvn -B clean install -Prelease -Dmaven.test.skip=true -Dcheckstyle.skip=true -Dmaven.javadoc.skip=true
EOF
}
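
When the build finishes, it is worth confirming the distribution tarball was actually produced before deploying; a quick sanity check against the path quoted above:

# verify the binary distribution exists
ls -lh /opt/action/dolphinscheduler/dolphinscheduler-dist/target/apache-dolphinscheduler-dev-SNAPSHOT-bin.tar.gz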

Standalone Deployment

1. Environment required to run DolphinScheduler

Required environment: JDK, ZooKeeper, MySQL

Initialize the ZooKeeper environment (v3.8 or later is recommended)

Official download page: https://zookeeper.apache.org/

# Initialize ZooKeeper
init_zookeeper(){
# quote the heredoc delimiter so $2 in the awk command below is not expanded by the outer shell
sudo su - root <<'EOF'
# enter /opt (choose your own install directory)
cd /opt
# extract the archive
tar -xvf apache-zookeeper-3.8.0-bin.tar.gz
# rename the directory
mv apache-zookeeper-3.8.0-bin zookeeper
# enter the zookeeper directory
cd zookeeper/
# create the zkData directory under /opt/zookeeper to hold ZooKeeper's data files
mkdir zkData
# enter the conf directory
cd conf/
# copy zoo_sample.cfg to zoo.cfg, since ZooKeeper only reads the zoo.cfg configuration file
cp zoo_sample.cfg zoo.cfg
# point dataDir in zoo.cfg at the zkData directory created above
sed -i 's/\/tmp\/zookeeper/\/opt\/zookeeper\/zkData/g' zoo.cfg
# stop any previously running ZooKeeper instance
ps -ef|grep QuorumPeerMain|grep -v grep|awk '{print "kill -9 " $2}' |sh
# start the ZooKeeper server
sh /opt/zookeeper/bin/zkServer.sh start
EOF
}
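
Once started, a quick status check confirms ZooKeeper is actually serving (assumes the default client port 2181; the ruok probe additionally requires nc and, on ZooKeeper 3.5+, that the command is whitelisted via 4lw.commands.whitelist):

sh /opt/zookeeper/bin/zkServer.sh status
# "imok" in the reply means the server is healthy
echo ruok | nc localhost 2181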

JDK and MySQL installation are not covered in detail here.
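
One piece of MySQL preparation is silently assumed by the init_mysql step later: a dolphinscheduler database that the configured account can reach. A minimal sketch (the database name and credentials simply mirror the values used later in this article):

mysql -uroot -p <<'SQL'
-- create the metadata database DolphinScheduler will use
CREATE DATABASE IF NOT EXISTS dolphinscheduler DEFAULT CHARACTER SET utf8mb4;
SQL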

2. Initial configuration

2.1 Configuration file initialization

The initialization files must be placed in a fixed directory (this article uses /opt/action/tool as the example).

  • 2.1.1 Create the directories
    mkdir -p /opt/action/tool
    mkdir -p /opt/Dsrelease
  • 2.1.2 Create the initialization file common.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# user data local directory path, please make sure the directory exists and have read write permissions
data.basedir.path=/tmp/dolphinscheduler

# resource storage type: HDFS, S3, NONE
resource.storage.type=HDFS

# resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
resource.upload.path=/dolphinscheduler

# whether to startup kerberos
hadoop.security.authentication.startup.state=false

# java.security.krb5.conf path
java.security.krb5.conf.path=/opt/krb5.conf

# login user from keytab username
login.user.keytab.username=hdfs-mycluster@ESZ.COM

# login user from keytab path
login.user.keytab.path=/opt/hdfs.headless.keytab

# kerberos expire time, the unit is hour
kerberos.expire.time=2
# resource view suffixs
#resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
# if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
hdfs.root.user=root
# if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
fs.defaultFS=file:///
aws.access.key.id=minioadmin
aws.secret.access.key=minioadmin
aws.region=us-east-1
aws.endpoint=http://localhost:9000
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=8088
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace aws2 to actual resourcemanager hostname
yarn.application.status.address=http://aws2:%s/ws/v1/cluster/apps/%s
# job history status url when application number threshold is reached(default 10000, maybe it was set to 1000)
yarn.job.history.status.address=http://aws2:19888/ws/v1/history/mapreduce/jobs/%s

# datasource encryption enable
datasource.encryption.enable=false

# datasource encryption salt
datasource.encryption.salt=!@#$%^&*

# data quality option
data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar

#data-quality.error.output.path=/tmp/data-quality-error-data

# Network IP gets priority, default inner outer

# Whether hive SQL is executed in the same session
support.hive.oneSession=false

# use sudo or not, if set true, executing user is tenant user and deploy user needs sudo permissions; if set false, executing user is the deploy user and doesn't need sudo permissions
sudo.enable=true

# network interface preferred like eth0, default: empty
#dolphin.scheduler.network.interface.preferred=

# network IP gets priority, default: inner outer
#dolphin.scheduler.network.priority.strategy=default

# system env path
#dolphinscheduler.env.path=dolphinscheduler_env.sh

# development state
development.state=false

# rpc port
alert.rpc.port=50052

# Url endpoint for zeppelin RESTful API
zeppelin.rest.url=http://localhost:8080
  • 2.1.3 Create the initialization file core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://aws1</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>aws1:2181</value>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
</configuration>
  • 2.1.4 Create the initialization file hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/bigdata/hadoop/ha/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/bigdata/hadoop/ha/dfs/data</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>aws2:50090</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>/opt/bigdata/hadoop/ha/dfs/secondary</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>aws1</value>
</property>
<property>
<name>dfs.ha.namenodes.aws1</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.aws1.nn1</name>
<value>aws1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.aws1.nn2</name>
<value>aws2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.aws1.nn1</name>
<value>aws1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.aws1.nn2</name>
<value>aws2:50070</value>
</property>
<property>
<name>dfs.datanode.address</name>
<value>aws1:50010</value>
</property>
<property>
<name>dfs.datanode.ipc.address</name>
<value>aws1:50020</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>aws1:50075</value>
</property>
<property>
<name>dfs.datanode.https.address</name>
<value>aws1:50475</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://aws1:8485;aws2:8485;aws3:8485/mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/bigdata/hadoop/ha/dfs/jn</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.aws1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_dsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
  • 2.1.5 Upload the initialization jar mysql-connector-java-8.0.16.jar
  • 2.1.6 Upload the initialization jar ojdbc8.jar
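
With all five artifacts in place, a quick inventory check against the /opt/action/tool layout used above can save a failed deployment later:

for f in common.properties core-site.xml hdfs-site.xml mysql-connector-java-8.0.16.jar ojdbc8.jar; do
  # every initialization file from section 2.1 should be present
  [ -f /opt/action/tool/$f ] && echo "OK  $f" || echo "MISSING  $f"
done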

2.2 Replacing the initialization files

# Deploy the freshly built package (assumes the tarball has already been copied to /opt/release)
replace_init_files(){

cd /opt/Dsrelease
sudo rm -rf $today/

echo "rm -r $today"

cd /opt/release

cp $packge_tar /opt/Dsrelease

cd /opt/Dsrelease

tar -zxvf $packge_tar
mv $packge  $today
p_api_lib=/opt/Dsrelease/$today/api-server/libs/
p_master_lib=/opt/Dsrelease/$today/master-server/libs/
p_worker_lib=/opt/Dsrelease/$today/worker-server/libs/
p_alert_lib=/opt/Dsrelease/$today/alert-server/libs/
p_tools_lib=/opt/Dsrelease/$today/tools/libs/
p_st_lib=/opt/Dsrelease/$today/standalone-server/libs/


p_api_conf=/opt/Dsrelease/$today/api-server/conf/
p_master_conf=/opt/Dsrelease/$today/master-server/conf/
p_worker_conf=/opt/Dsrelease/$today/worker-server/conf/
p_alert_conf=/opt/Dsrelease/$today/alert-server/conf/
p_tools_conf=/opt/Dsrelease/$today/tools/conf/
p_st_conf=/opt/Dsrelease/$today/standalone-server/conf/


cp $p0 $p4 $p_api_lib
cp $p0 $p4 $p_master_lib
cp $p0 $p4 $p_worker_lib
cp $p0 $p4 $p_alert_lib
cp $p0 $p4 $p_tools_lib
cp $p0 $p4 $p_st_lib

echo "cp $p0 $p_api_lib"

cp $p1 $p2 $p3 $p_api_conf
cp $p1 $p2 $p3 $p_master_conf
cp $p1 $p2 $p3 $p_worker_conf
cp $p1 $p2 $p3 $p_alert_conf
cp $p1 $p2 $p3 $p_tools_conf
cp $p1 $p2 $p3 $p_st_conf



echo "cp $p1 $p2 $p3 $p_api_conf"
}

define_param(){

packge_tar=apache-dolphinscheduler-dev-SNAPSHOT-bin.tar.gz
packge=apache-dolphinscheduler-dev-SNAPSHOT-bin
p0=/opt/action/tool/mysql-connector-java-8.0.16.jar
p1=/opt/action/tool/common.properties
p2=/opt/action/tool/core-site.xml
p3=/opt/action/tool/hdfs-site.xml
p4=/opt/action/tool/ojdbc8.jar

today=`date +%m%d`


}
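
Nothing in the article shows the call site for these functions; a sketch of a driver placed at the very end of the full script, using the function names as they appear in this article (including those defined in the sections below):

define_param
clone
build
init_zookeeper
replace_init_files
replace_config
remove_hdfs_config
init_mysql
stop_all_server
run_all_server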

2.3 Replacing configuration file contents

replace_config(){

sed -i 's/spark2/spark/g' /opt/Dsrelease/$today/worker-server/conf/dolphinscheduler_env.sh

cd /opt/Dsrelease/$today/bin/env/
sed -i '$a\export SPRING_PROFILES_ACTIVE=permission_shiro' dolphinscheduler_env.sh
sed -i '$a\export DATABASE="mysql"' dolphinscheduler_env.sh
# Connector/J 8.x prefers com.mysql.cj.jdbc.Driver; the legacy class name below still works but logs a deprecation warning
sed -i '$a\export SPRING_DATASOURCE_DRIVER_CLASS_NAME="com.mysql.jdbc.Driver"' dolphinscheduler_env.sh
# customize the MySQL connection settings here
sed -i '$a\export SPRING_DATASOURCE_URL="jdbc:mysql://ctyun6:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&allowPublicKeyRetrieval=true"' dolphinscheduler_env.sh
sed -i '$a\export SPRING_DATASOURCE_USERNAME="root"' dolphinscheduler_env.sh
sed -i '$a\export SPRING_DATASOURCE_PASSWORD="root@123"' dolphinscheduler_env.sh
echo "JDBC configuration replaced"
# customize the ZooKeeper connection settings here
sed -i '$a\export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}' dolphinscheduler_env.sh
sed -i '$a\export REGISTRY_ZOOKEEPER_CONNECT_STRING=${REGISTRY_ZOOKEEPER_CONNECT_STRING:-ctyun6:2181}' dolphinscheduler_env.sh
echo "ZooKeeper configuration replaced"
sed -i 's/resource.storage.type=HDFS/resource.storage.type=NONE/' /opt/Dsrelease/$today/master-server/conf/common.properties
sed -i 's/resource.storage.type=HDFS/resource.storage.type=NONE/' /opt/Dsrelease/$today/worker-server/conf/common.properties
sed -i 's/resource.storage.type=HDFS/resource.storage.type=NONE/' /opt/Dsrelease/$today/alert-server/conf/common.properties
sed -i 's/resource.storage.type=HDFS/resource.storage.type=NONE/' /opt/Dsrelease/$today/api-server/conf/common.properties
sed -i 's/hdfs.root.user=root/resource.hdfs.root.user=root/' /opt/Dsrelease/$today/master-server/conf/common.properties
sed -i 's/hdfs.root.user=root/resource.hdfs.root.user=root/' /opt/Dsrelease/$today/worker-server/conf/common.properties
sed -i 's/hdfs.root.user=root/resource.hdfs.root.user=root/' /opt/Dsrelease/$today/alert-server/conf/common.properties
sed -i 's/hdfs.root.user=root/resource.hdfs.root.user=root/' /opt/Dsrelease/$today/api-server/conf/common.properties
sed -i 's/fs.defaultFS=file:/resource.fs.defaultFS=file:/' /opt/Dsrelease/$today/master-server/conf/common.properties
sed -i 's/fs.defaultFS=file:/resource.fs.defaultFS=file:/' /opt/Dsrelease/$today/worker-server/conf/common.properties
sed -i 's/fs.defaultFS=file:/resource.fs.defaultFS=file:/' /opt/Dsrelease/$today/alert-server/conf/common.properties
sed -i 's/fs.defaultFS=file:/resource.fs.defaultFS=file:/' /opt/Dsrelease/$today/api-server/conf/common.properties
sed -i '$a\resource.hdfs.fs.defaultFS=file:///' /opt/Dsrelease/$today/api-server/conf/common.properties
echo "替换common.properties配置成功"
# 替换master worker内存 api alert也可进行修改,具体根据当前服务器硬件配置而定,但要遵循Xms=Xmx=2Xmn的规律
cd /opt/Dsrelease/$today/
sed -i 's/Xms4g/Xms2g/g' worker-server/bin/start.sh
sed -i 's/Xmx4g/Xmx2g/g' worker-server/bin/start.sh
sed -i 's/Xmn2g/Xmn1g/g' worker-server/bin/start.sh
sed -i 's/Xms4g/Xms2g/g' master-server/bin/start.sh
sed -i 's/Xmx4g/Xmx2g/g' master-server/bin/start.sh
sed -i 's/Xmn2g/Xmn1g/g' master-server/bin/start.sh
echo "master worker内存修改完成"

}
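
A quick grep over the files just edited confirms the replacements took effect, for example:

grep -E 'SPRING_DATASOURCE_URL|REGISTRY_ZOOKEEPER_CONNECT_STRING' /opt/Dsrelease/$today/bin/env/dolphinscheduler_env.sh
grep 'resource.storage.type' /opt/Dsrelease/$today/api-server/conf/common.properties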

3. Remove the HDFS configuration

echo "开始删除hdfs配置"
sudo rm /opt/Dsrelease/$today/api-server/conf/core-site.xml
sudo rm /opt/Dsrelease/$today/api-server/conf/hdfs-site.xml
sudo rm /opt/Dsrelease/$today/worker-server/conf/core-site.xml
sudo rm /opt/Dsrelease/$today/worker-server/conf/hdfs-site.xml
sudo rm /opt/Dsrelease/$today/master-server/conf/core-site.xml
sudo rm /opt/Dsrelease/$today/master-server/conf/hdfs-site.xml
sudo rm /opt/Dsrelease/$today/alert-server/conf/core-site.xml
sudo rm /opt/Dsrelease/$today/alert-server/conf/hdfs-site.xml
echo "结束删除hdfs配置"
}
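
Since resource.storage.type was switched to NONE in section 2.3, no core-site.xml or hdfs-site.xml should remain in any conf directory; an empty result from this check confirms it:

# list any leftover Hadoop config files under the release
find /opt/Dsrelease/$today \( -name core-site.xml -o -name hdfs-site.xml \)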

4. Initialize MySQL

init_mysql(){

sql_path="/opt/Dsrelease/$today/tools/sql/sql/dolphinscheduler_mysql.sql"
sourceCommand="source $sql_path"
echo $sourceCommand
echo "running source:"
# assumes the dolphinscheduler database already exists (see the MySQL preparation sketch above)
mysql -hlocalhost -uroot -proot@123 -D "dolphinscheduler" -e "$sourceCommand"
echo "source finished"
}
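
To confirm the schema import succeeded, list a few of the freshly created tables (reusing the credentials above):

mysql -hlocalhost -uroot -proot@123 -D dolphinscheduler -e "SHOW TABLES;" | head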

5. Start the DolphinScheduler services

stop_all_server(){
cd /opt/Dsrelease/$today
./bin/dolphinscheduler-daemon.sh stop api-server
./bin/dolphinscheduler-daemon.sh stop master-server
./bin/dolphinscheduler-daemon.sh stop worker-server
./bin/dolphinscheduler-daemon.sh stop alert-server
ps -ef|grep api-server|grep -v grep|awk '{print "kill -9 " $2}' |sh
ps -ef|grep master-server |grep -v grep|awk '{print "kill -9 " $2}' |sh
ps -ef|grep worker-server |grep -v grep|awk '{print "kill -9 " $2}' |sh
ps -ef|grep alert-server |grep -v grep|awk '{print "kill -9 " $2}' |sh
}

run_all_server(){
cd /opt/Dsrelease/$today
./bin/dolphinscheduler-daemon.sh start api-server
./bin/dolphinscheduler-daemon.sh start master-server
./bin/dolphinscheduler-daemon.sh start worker-server
./bin/dolphinscheduler-daemon.sh start alert-server
}
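
Once run_all_server returns, the web UI should be reachable through the api-server. In recent dev/3.x builds the default address is http://<host>:12345/dolphinscheduler/ui with the default login admin / dolphinscheduler123 (verify both against your build); a quick probe:

# 200 (or a redirect code) indicates the api-server is serving the UI
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:12345/dolphinscheduler/ui/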

Cluster Deployment

1. Open the MySQL and ZooKeeper ports to the other nodes

2. Deploy and start the cluster

Copy the fully initialized directory to each target server and start the required services there; this completes the cluster deployment. All nodes must connect to the same ZooKeeper and MySQL.
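
A minimal sketch of fanning the prepared release out to additional nodes (node2 and node3 are placeholder hostnames; assumes passwordless SSH as root and that each node starts only the roles it should run):

for host in node2 node3; do
  # copy the fully initialized release, then start only the worker on that node
  rsync -a /opt/Dsrelease/$today/ root@$host:/opt/Dsrelease/$today/
  ssh root@$host "cd /opt/Dsrelease/$today && ./bin/dolphinscheduler-daemon.sh start worker-server"
done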

This article is published with the support of WhaleOps (白鲸开源科技).
