Hue 4.11 Containerized Deployment, Integrated with Hive and Hadoop


Best read together with "Troubleshooting Errors During Hue Deployment".

Official connector configuration docs:
https://docs.gethue.com/administrator/configuration/connectors/
Official hue.ini reference:
https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini

Docker Deployment

Notes

  • For the first deployment, comment out the hue.ini volume mapping; once the container is up, copy the config file into the target directory, then uncomment the mapping and redeploy.
    Alternatively (recommended), copy the default hue.ini from the following address into /data/hue/ (see the fetch sketch after this list): https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini
  • If Hive and Hadoop are already deployed, complete the volume mappings in the compose file before starting the Hue container.
  • If Hive and Hadoop are not deployed yet, ignore this for now; when needed, come back, update the configuration, and redeploy the Hue container.
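A quick way to create /data/hue/hue.ini is to fetch the default file directly (a sketch; assumes the host can reach GitHub, and uses the raw URL of the file linked above):

# Fetch the default hue.ini into the directory that will be volume-mapped
mkdir -p /data/hue
curl -L -o /data/hue/hue.ini https://raw.githubusercontent.com/cloudera/hue/master/desktop/conf.dist/hue.ini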
version: '3.3'  # Docker Compose file format version
services:  # Service definitions
  hue:  # Service name
    image: gethue/hue:4.11.0  # Image and tag to use
    container_name: hue  # Container name
    restart: always  # Always restart the container
    privileged: true  # Privileged mode, grants extended permissions
    hostname: hue  # Container hostname
    ports:  # Port mappings
      - "9898:8888"  # Map container port 8888 to host port 9898
    environment:  # Environment variables
      - TZ=Asia/Shanghai  # Set the time zone to Asia/Shanghai
    volumes:  # Volume mappings
      - /data/hue/hue.ini:/usr/share/hue/desktop/conf/hue.ini  # Map the host's hue.ini into the container
      - /etc/localtime:/etc/localtime  # Share the host's local time so container and host clocks agree
      - /opt/hive-3.1.3:/opt/hive-3.1.3 # Map the host's Hive directory into the container
      - /opt/hadoop-3.3.0:/opt/hadoop-3.3.0 # Map the host's Hadoop directory into the container
    networks:
      hue_default:
        ipv4_address: 172.15.0.2  # Static IP address

networks:  # Network definitions
  hue_default:
    driver: bridge  # Use the bridge driver
    ipam:
      config:
        - subnet: 172.15.0.0/16  # Subnet (note: 172.15.0.0/16 is outside the RFC 1918 private ranges; a subnet inside 172.16.0.0/12 would be safer)
# 1. Create the Hue deployment file
vi docker-compose-hue.yml

# 2. Paste the deployment content above into docker-compose-hue.yml
# 3. Deploy Hue with Docker Compose
docker compose -f docker-compose-hue.yml up -d

# 4. Verify the deployment by opening Hue in a browser on the same LAN
# Since this is your first login, choose any username and password. Be sure to
# remember them, as they will become your Hue superuser credentials.
http://<host-ip>:9898
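You can also verify from the command line that the web server is answering (a minimal sketch; replace <host-ip> with your server's address):

# Expect an HTTP response; Hue redirects to its login page
curl -I http://<host-ip>:9898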

Hue Initialization

  1. Modify the hue.ini configuration file
vi /data/hue/hue.ini
# Web server listen address
# (note: inside a container, binding to 0.0.0.0 is usually required for the
# published port to be reachable; adjust this if the UI is inaccessible)
 http_host=127.0.0.1

# Time zone
 time_zone=Asia/Shanghai

# Hue metadata database (stores Hue's own state)
[[database]]
 engine=mysql
 host=192.168.10.75
 port=3306
 user=root
 password=HoMf@123
 name=hue
 
# Database connector so Hue can query MySQL
[[interpreters]]
# Define the name and how to connect and execute the language.
# https://docs.gethue.com/administrator/configuration/editor/

 [[[mysql]]]
   name = MySQL
   interface=sqlalchemy
#   ## https://docs.sqlalchemy.org/en/latest/dialects/mysql.html
   # Special characters in the password must be percent-encoded in the
   # SQLAlchemy URL, so the '@' in HoMf@123 becomes %40 here
   options='{"url": "mysql://root:HoMf%40123@192.168.10.75:3306/hue_meta"}'
#   ## options='{"url": "mysql://${USER}:${PASSWORD}@localhost:3306/hue"}'
  2. Create the databases
# Log in to MySQL
mysql -uroot -pHoMf@123

# Create the hue database (Hue's own metadata)
mysql> create database `hue` default character set utf8mb4 default collate utf8mb4_general_ci;
Query OK, 1 row affected (0.00 sec)

# Create the hue_meta database (the one queried through the MySQL connector)
mysql> create database `hue_meta` default character set utf8mb4 default collate utf8mb4_general_ci;
Query OK, 1 row affected (0.00 sec)
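If the Hue container cannot log in to MySQL (root access is often restricted to localhost), you may need to allow remote connections. A hedged sketch, assuming MySQL 8.x; opening root to all hosts may be too permissive for your environment, so adjust the account and scope to your security needs:

# Allow connections from the Hue container, then reload privileges
mysql -uroot -pHoMf@123 -e "CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED BY 'HoMf@123'; GRANT ALL ON hue.* TO 'root'@'%'; GRANT ALL ON hue_meta.* TO 'root'@'%'; FLUSH PRIVILEGES;"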
  3. Apply the configuration file: restart the container, enter it, and initialize the database
# Restart the hue container
docker restart hue

# Enter the hue container
docker exec -it hue bash

# Run the database initialization
/usr/share/hue/build/env/bin/hue syncdb
/usr/share/hue/build/env/bin/hue migrate

# Exit the container with Ctrl+D, or run
exit

  • Detailed database-initialization commands and their output
hue@hue:/usr/share/hue$ /usr/share/hue/build/env/bin/hue syncdb
[22/Nov/2024 15:38:07 +0800] settings     INFO     Welcome to Hue 4.11.0
[22/Nov/2024 15:38:08 +0800] conf         WARNING  enable_extract_uploaded_archive is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:08 +0800] conf         WARNING  enable_new_create_table is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:08 +0800] conf         WARNING  force_hs2_metadata is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:08 +0800] conf         WARNING  show_table_erd is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[21/Nov/2024 23:38:08 -0800] backend      WARNING  mozilla_django_oidc module not found
[21/Nov/2024 23:38:09 -0800] apps         INFO     AXES: BEGIN LOG
[21/Nov/2024 23:38:09 -0800] apps         INFO     AXES: Using django-axes version 5.13.0
[21/Nov/2024 23:38:09 -0800] apps         INFO     AXES: blocking by IP only.
[21/Nov/2024 23:38:09 -0800] api3         WARNING  simple_salesforce module not found
[21/Nov/2024 23:38:09 -0800] jdbc         WARNING  Failed to import py4j
[21/Nov/2024 23:38:10 -0800] schemas      INFO     Resource 'XMLSchema.xsd' is already loaded
No changes detected
hue@hue:/usr/share/hue$ /usr/share/hue/build/env/bin/hue migrate
[22/Nov/2024 15:38:33 +0800] settings     INFO     Welcome to Hue 4.11.0
[22/Nov/2024 15:38:33 +0800] conf         WARNING  enable_extract_uploaded_archive is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:33 +0800] conf         WARNING  enable_new_create_table is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:33 +0800] conf         WARNING  force_hs2_metadata is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:33 +0800] conf         WARNING  show_table_erd is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[21/Nov/2024 23:38:33 -0800] backend      WARNING  mozilla_django_oidc module not found
[21/Nov/2024 23:38:34 -0800] apps         INFO     AXES: BEGIN LOG
[21/Nov/2024 23:38:34 -0800] apps         INFO     AXES: Using django-axes version 5.13.0
[21/Nov/2024 23:38:34 -0800] apps         INFO     AXES: blocking by IP only.
[21/Nov/2024 23:38:34 -0800] api3         WARNING  simple_salesforce module not found
[21/Nov/2024 23:38:34 -0800] jdbc         WARNING  Failed to import py4j
[21/Nov/2024 23:38:35 -0800] schemas      INFO     Resource 'XMLSchema.xsd' is already loaded
Operations to perform:
  Apply all migrations: auth, authtoken, axes, beeswax, contenttypes, desktop, jobsub, oozie, pig, sessions, sites, useradmin
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying auth.0012_alter_user_first_name_max_length... OK
  Applying authtoken.0001_initial... OK
  Applying authtoken.0002_auto_20160226_1747... OK
  Applying authtoken.0003_tokenproxy... OK
  Applying axes.0001_initial... OK
  Applying axes.0002_auto_20151217_2044... OK
  Applying axes.0003_auto_20160322_0929... OK
  Applying axes.0004_auto_20181024_1538... OK
  Applying axes.0005_remove_accessattempt_trusted... OK
  Applying axes.0006_remove_accesslog_trusted... OK
  Applying axes.0007_add_accesslog_trusted... OK
  Applying axes.0008_remove_accesslog_trusted... OK
  Applying beeswax.0001_initial... OK
  Applying beeswax.0002_auto_20200320_0746... OK
  Applying beeswax.0003_compute_namespace... OK
  Applying desktop.0001_initial... OK
  Applying desktop.0002_initial... OK
  Applying desktop.0003_initial... OK
  Applying desktop.0004_initial... OK
  Applying desktop.0005_initial... OK
  Applying desktop.0006_initial... OK
  Applying desktop.0007_initial... OK
  Applying desktop.0008_auto_20191031_0704... OK
  Applying desktop.0009_auto_20191202_1056... OK
  Applying desktop.0010_auto_20200115_0908... OK
  Applying desktop.0011_document2_connector... OK
  Applying desktop.0012_connector_interface... OK
  Applying desktop.0013_alter_document2_is_trashed... OK
  Applying jobsub.0001_initial... OK
  Applying oozie.0001_initial... OK
  Applying oozie.0002_initial... OK
  Applying oozie.0003_initial... OK
  Applying oozie.0004_initial... OK
  Applying oozie.0005_initial... OK
  Applying oozie.0006_auto_20200714_1204... OK
  Applying oozie.0007_auto_20210126_2113... OK
  Applying oozie.0008_auto_20210216_0216... OK
  Applying pig.0001_initial... OK
  Applying pig.0002_auto_20200714_1204... OK
  Applying sessions.0001_initial... OK
  Applying sites.0001_initial... OK
  Applying sites.0002_alter_domain_unique... OK
  Applying useradmin.0001_initial... OK
  Applying useradmin.0002_userprofile_json_data... OK
  Applying useradmin.0003_auto_20200203_0802... OK
  Applying useradmin.0004_userprofile_hostname... OK
[21/Nov/2024 23:38:42 -0800] models       INFO     HuePermissions: 34 added, 0 updated, 0 up to date, 0 stale, 0 deleted
  4. Check that the database tables were created, to confirm the initialization completed
# Log in to MySQL
mysql -uroot -pHoMf@123

# Switch to the hue database
mysql> use hue;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed

# List the tables in the hue database
mysql> show tables;
+--------------------------------+
| Tables_in_hue                  |
+--------------------------------+
| auth_group                     |
| auth_group_permissions         |
| auth_permission                |
| auth_user                      |
| auth_user_groups               |
| auth_user_user_permissions     |
| authtoken_token                |
| axes_accessattempt             |
| axes_accesslog                 |
| beeswax_compute                |
| beeswax_metainstall            |
| beeswax_namespace              |
| beeswax_queryhistory           |
| beeswax_savedquery             |
| beeswax_session                |
| defaultconfiguration_groups    |
| desktop_connector              |
| desktop_defaultconfiguration   |
| desktop_document               |
| desktop_document2              |
| desktop_document2_dependencies |
| desktop_document2permission    |
| desktop_document_tags          |
| desktop_documentpermission     |
| desktop_documenttag            |
| desktop_settings               |
| desktop_userpreferences        |
| django_content_type            |
| django_migrations              |
| django_session                 |
| django_site                    |
| documentpermission2_groups     |
| documentpermission2_users      |
| documentpermission_groups      |
| documentpermission_users       |
| jobsub_checkforsetup           |
| jobsub_jobdesign               |
| jobsub_jobhistory              |
| jobsub_oozieaction             |
| jobsub_ooziedesign             |
| jobsub_ooziejavaaction         |
| jobsub_ooziemapreduceaction    |
| jobsub_ooziestreamingaction    |
| oozie_bundle                   |
| oozie_bundledcoordinator       |
| oozie_coordinator              |
| oozie_datainput                |
| oozie_dataoutput               |
| oozie_dataset                  |
| oozie_decision                 |
| oozie_decisionend              |
| oozie_distcp                   |
| oozie_email                    |
| oozie_end                      |
| oozie_fork                     |
| oozie_fs                       |
| oozie_generic                  |
| oozie_history                  |
| oozie_hive                     |
| oozie_java                     |
| oozie_job                      |
| oozie_join                     |
| oozie_kill                     |
| oozie_link                     |
| oozie_mapreduce                |
| oozie_node                     |
| oozie_pig                      |
| oozie_shell                    |
| oozie_sqoop                    |
| oozie_ssh                      |
| oozie_start                    |
| oozie_streaming                |
| oozie_subworkflow              |
| oozie_workflow                 |
| pig_document                   |
| pig_pigscript                  |
| useradmin_grouppermission      |
| useradmin_huepermission        |
| useradmin_ldapgroup            |
| useradmin_userprofile          |
+--------------------------------+
80 rows in set (0.01 sec)

Deployment Verification

At this point the Hue UI should load normally, and MySQL should appear in the Sources list.

Configuring Hive

Add the volume mappings first, so the Hive configuration directory is also accessible from inside the container.
Modify the hue.ini configuration file, then restart the container.
Once configured, Hive will appear in the Sources list of the Hue UI.

# Modify the Hue configuration file
vi /data/hue/hue.ini
# Uncomment the following
 [[[hive]]]
   name=Hive
   interface=hiveserver2

[beeswax]

# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
# Hive host address
 hive_server_host=192.168.10.75

# Binary thrift port for HiveServer2.
# Must match hive.server2.thrift.port in hive-site.xml
# (note: the sample hive-site.xml below sets it to 11000, so align one side with the other)
 hive_server_port=10000

 # Hive configuration directory, where hive-site.xml is located
 # For a containerized Hue, this directory must be volume-mapped into the container
 hive_conf_dir=/opt/hive-3.1.3/conf

# Timeout in seconds for thrift calls to Hive service
 server_conn_timeout=120

 # Override the default desktop username and password of the hue user used for authentications with other services.
# e.g. Used for LDAP/PAM pass-through authentication.
 auth_username=hue
 auth_password=root

 [metastore]
# Flag to turn on the new version of the create table wizard.
 enable_new_create_table=true
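Before restarting Hue, you can check that the HiveServer2 thrift port is reachable from inside the container (a sketch using bash's built-in /dev/tcp; substitute the port actually configured in your hive-site.xml):

# Prints "reachable" if the thrift port answers
docker exec hue bash -c 'timeout 3 bash -c "cat < /dev/null > /dev/tcp/192.168.10.75/10000" && echo reachable || echo unreachable'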
  • My hive-site.xml configuration, for reference
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>
        <property>
                <name>hive.metastore.warehouse.dir</name>
                <value>/user/hive-3.1.3/warehouse</value>
                <description/>
        </property>

        <property>
                <name>javax.jdo.option.ConnectionURL</name>
                <value>jdbc:mysql://Linux-Master:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</value>
                <description>Metastore database connection</description>
        </property>

        <property>
                <name>javax.jdo.option.ConnectionDriverName</name>
                <value>com.mysql.jdbc.Driver</value>
                <description/>
        </property>

        <property>
                <name>javax.jdo.option.ConnectionUserName</name>
                <value>root</value>
                <description/>
        </property>

        <property>
                <name>javax.jdo.option.ConnectionPassword</name>
                <value>HoMf@123</value>
                <description/>
        </property>

        <property>
                <name>hive.querylog.location</name>
                <value>/home/hadoop/logs/hive-3.1.3/job-logs/${user.name}</value>
                <description>Location of Hive run time structured log file</description>
        </property>

        <property>
                <name>hive.exec.scratchdir</name>
                <value>/user/hive-3.1.3/tmp</value>
        </property>
       
        <property>
                <name>hive.server2.thrift.port</name>
                <value>11000</value>
        </property>
</configuration>
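With HiveServer2 running, a quick beeline test from the Hive host can rule out Hue-side issues (a sketch; note this hive-site.xml sets hive.server2.thrift.port to 11000, so hive_server_port in hue.ini must match whichever value you use):

# Connect to HiveServer2 and run a trivial query
beeline -u jdbc:hive2://192.168.10.75:11000 -n hue -e "show databases;"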

Configuring Hadoop

Configuring HDFS

Modify hue.ini

# Modify the Hue configuration file
vi /data/hue/hue.ini
# HDFS cluster configuration
[[hdfs_clusters]]
  # HA support by using HttpFs

[[[default]]]
 fs_defaultfs=hdfs://192.168.10.75:9000
 webhdfs_url=http://192.168.10.75:9870/webhdfs/v1
 hadoop_conf_dir=/opt/hadoop-3.3.0/etc/hadoop


# YARN cluster configuration
[[yarn_clusters]]

[[[default]]]
# Enter the host on which you are running the ResourceManager
 resourcemanager_host=192.168.10.75

# The port where the ResourceManager IPC listens on
 resourcemanager_port=8032

# Whether to submit jobs to this cluster
 submit_to=True

# Resource Manager logical name (required for HA)
## logical_name=

# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false

# URL of the ResourceManager API
 resourcemanager_api_url=http://192.168.10.75:8088

# URL of the ProxyServer API
 proxy_api_url=http://192.168.10.75:8088

# URL of the HistoryServer API
# (note: inside the container, localhost is the Hue container itself; the
# Hadoop host's address, e.g. http://192.168.10.75:19888, is likely intended)
 history_server_api_url=http://localhost:19888
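Before restarting Hue, you can confirm the webhdfs_url works (a sketch; assumes WebHDFS is enabled on port 9870 and the hue proxy user is configured as in the core-site.xml below):

# List the HDFS root via the WebHDFS REST API
curl "http://192.168.10.75:9870/webhdfs/v1/?op=LISTSTATUS&user.name=hue"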

Modify Hadoop's core-site.xml

<!-- Hosts allowed to access HDFS through HttpFS/WebHDFS proxying -->
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<!-- User groups allowed to access HDFS through HttpFS/WebHDFS proxying -->
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
  • My core-site.xml, for reference
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/opt/hadoop-3.3.0/tmp</value>
                <description>A base for other temporary directories.</description>
        </property>

        <property>
                <name>fs.defaultFS</name>
                <!-- The IP address of the master node -->
                <value>hdfs://192.168.10.75:9000</value>
        </property>

        <property>
                <name>hadoop.proxyuser.root.hosts</name>
                <value>*</value>
        </property>

        <property>
                <name>hadoop.proxyuser.root.groups</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.hue.hosts</name>
                <value>*</value>
        </property>

        <property>
                <name>hadoop.proxyuser.hue.groups</name>
                <value>*</value>
        </property>
</configuration>

Modify Hadoop's hdfs-site.xml

<property>
 <name>dfs.webhdfs.enabled</name>
 <value>true</value>
</property>
  • My hdfs-site.xml, for reference
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <!-- NameNode web UI address -->
        <property>
                <name>dfs.namenode.http-address</name>
                <!-- Change to your master node's hostname -->
                <value>linux-master:9870</value>
        </property>

        <!-- Secondary NameNode web UI address -->
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <!-- Change to your first worker node's hostname -->
                <value>linux-slave01:9868</value>
        </property>

        <property>
                <name>dfs.permissions.enabled</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
</configuration>
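After changing core-site.xml and hdfs-site.xml, restart HDFS on the Hadoop host so the proxy-user and WebHDFS settings take effect (a sketch; assumes $HADOOP_HOME/sbin is on PATH):

# Restart HDFS
stop-dfs.sh && start-dfs.sh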

Configuring the YARN Cluster

  • Configure Hadoop's yarn-site.xml
        <!-- Enable log aggregation -->
        <property>
                <name>yarn.log-aggregation-enable</name>
                <value>true</value>
        </property>

        <!-- Log aggregation server address -->
        <property>
                <name>yarn.log.server.url</name>
                <!-- The IP address of the master node -->
                <value>http://192.168.10.75:19888/jobhistory/logs</value>
        </property>

        <!-- Keep aggregated logs for 7 days -->
        <property>
                <name>yarn.log-aggregation.retain-seconds</name>
                <value>604800</value>
        </property>
  • My yarn-site.xml, for reference
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->

        <!-- Use the shuffle service for MapReduce -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>

        <!-- ResourceManager address -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <!-- Change to your master node's hostname -->
                <value>linux-master</value>
        </property>

        <!-- Environment variables inherited by containers -->
        <property>
                <name>yarn.nodemanager.env-whitelist</name>
                <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
        </property>

        <!-- Enable log aggregation -->
        <property>
                <name>yarn.log-aggregation-enable</name>
                <value>true</value>
        </property>

        <!-- Log aggregation server address -->
        <property>
                <name>yarn.log.server.url</name>
                <!-- The IP address of the master node -->
                <value>http://192.168.10.75:19888/jobhistory/logs</value>
        </property>

        <!-- Keep aggregated logs for 7 days -->
        <property>
                <name>yarn.log-aggregation.retain-seconds</name>
                <value>604800</value>
        </property>
</configuration>
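After changing yarn-site.xml, restart YARN and make sure the JobHistory server is running, since both log aggregation and Hue's history_server_api_url depend on it (a sketch; assumes Hadoop 3.x scripts on PATH on the Hadoop host):

# Restart YARN and start the JobHistory server
stop-yarn.sh && start-yarn.sh
mapred --daemon start historyserver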
