ClickHouse Configuration Reference

Applicable version: 21.3.9.84

config.xml configuration

<?xml version="1.0"?>
<!--
  NOTE: User and query level settings are set up in "users.xml" file.
-->
<yandex>
    <access_control_path>/data/clickhouse/clickhouse-server/access/</access_control_path>
    <logger>
        <!-- Possible levels: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105 -->
        <level>trace</level>
        <log>/data/clickhouse/clickhouse-server/logs/clickhouse-server.log</log>
        <errorlog>/data/clickhouse/clickhouse-server/logs/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>10</count>
        <!-- <console>1</console> -->
        <!-- Default behavior is autodetection (log to console if not daemon mode and is tty) -->
    </logger>
    <!--display_name>production</display_name-->
    <!-- It is the name that will be shown in the client -->
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <mysql_port>9004</mysql_port>
    <!-- For HTTPS and SSL over native protocol. -->
    <!--
    <https_port>8443</https_port>
    <tcp_port_secure>9440</tcp_port_secure>
    -->
    <!-- Port for communication between replicas. Used for data exchange. -->
    <interserver_http_port>9009</interserver_http_port>
    <!-- Hostname that is used by other replicas to request this server.
         If not specified, it is determined analogously to the 'hostname -f' command.
         This setting can be used to switch replication to another network interface.
      -->
    <!--
    <interserver_http_host>example.yandex.ru</interserver_http_host>
    -->
    <!-- Listen on the specified host. Use :: (the wildcard IPv6 address) if you want to accept connections over both IPv4 and IPv6 from everywhere. -->
    <!-- <listen_host>::</listen_host> -->
    <!-- Same for hosts with disabled ipv6: -->
    <!-- <listen_host>0.0.0.0</listen_host> -->
    <!-- Default values - try listen localhost on ipv4 and ipv6: -->
    <!--
    <listen_host>::1</listen_host>
    -->
    <!-- Don't exit if IPv6 or IPv4 is unavailable while a listen_host with that protocol is specified -->
    <!-- <listen_try>0</listen_try> -->
    <!-- Allow listen on same address:port -->
    <!-- <listen_reuse_port>0</listen_reuse_port> -->
    <!-- <listen_backlog>64</listen_backlog> -->
    <max_connections>4096</max_connections>
    <keep_alive_timeout>120</keep_alive_timeout>
    <!-- Maximum number of concurrent queries. -->
    <max_concurrent_queries>100</max_concurrent_queries>
    <!-- Set limit on number of open files (default: maximum). This setting makes sense on Mac OS X because getrlimit() fails to retrieve
         correct maximum value. -->
    <!-- <max_open_files>262144</max_open_files> -->
    <!-- Size of cache of uncompressed blocks of data, used in tables of MergeTree family.
         In bytes. Cache is single for server. Memory is allocated only on demand.
         Cache is used when the 'use_uncompressed_cache' user setting is turned on (off by default).
         Uncompressed cache is advantageous only for very short queries and in rare cases.
      -->
    <uncompressed_cache_size>8589934592</uncompressed_cache_size>
    <!-- Approximate size of mark cache, used in tables of MergeTree family.
         In bytes. Cache is single for server. Memory is allocated only on demand.
         You should not lower this value.
      -->
    <mark_cache_size>5368709120</mark_cache_size>
    <!-- Path to data directory, with trailing slash. -->
    <path>/data/clickhouse/clickhouse-server/</path>
    <!-- Path to temporary data for processing hard queries. -->
    <tmp_path>/data/clickhouse/clickhouse-server/tmp/</tmp_path>
    <!-- Policy from the <storage_configuration> for the temporary files.
         If not set <tmp_path> is used, otherwise <tmp_path> is ignored.

         Notes:
         - move_factor              is ignored
         - keep_free_space_bytes    is ignored
         - max_data_part_size_bytes is ignored
         - you must have exactly one volume in that policy
    -->
    <!-- <tmp_policy>tmp</tmp_policy> -->
    <storage_configuration>
        <disks>
            <default>
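                <!-- keep_free_space_bytes below reserves 10 GiB on this disk that ClickHouse will not fill with data. -->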
                <keep_free_space_bytes>10737418240</keep_free_space_bytes>
            </default>
        </disks>
    </storage_configuration>
    <!-- Directory with user provided files that are accessible by 'file' table function. -->
    <user_files_path>/data/clickhouse/clickhouse-server/user_files/</user_files_path>
    <!-- Path to configuration file with users, access rights, profiles of settings, quotas. -->
    <users_config>users.xml</users_config>
    <!-- Default profile of settings. -->
    <default_profile>default</default_profile>
    <!-- System profile of settings. These settings are used by internal processes (Buffer storage, Distributed DDL worker, and so on). -->
    <!-- <system_profile>default</system_profile> -->
    <!-- Default database. -->
    <default_database>default</default_database>
    <!-- Server time zone could be set here.

         Time zone is used when converting between String and DateTime types,
          when printing DateTime in text formats and parsing DateTime from text,
          it is used in date and time related functions, if specific time zone was not passed as an argument.

         Time zone is specified as identifier from IANA time zone database, like UTC or Africa/Abidjan.
         If not specified, system time zone at server startup is used.

         Please note that the server could display a time zone alias instead of the specified name.
         Example: W-SU is an alias for Europe/Moscow and Zulu is an alias for UTC.
    -->
    <timezone>Asia/Shanghai</timezone>
    <!-- You can specify umask here (see "man umask"). Server will apply it on startup.
         Number is always parsed as octal. Default umask is 027 (other users cannot read logs, data files, etc; group can only read).
    -->
    <!-- <umask>022</umask> -->
    <!-- Perform mlockall after startup to lower the latency of first queries
          and to prevent clickhouse executable from being paged out under high IO load.
         Enabling this option is recommended but will lead to increased startup time for up to a few seconds.
    -->
    <mlock_executable>true</mlock_executable>
    <!-- Configuration of clusters that could be used in Distributed tables.
         https://clickhouse.tech/docs/en/operations/table_engines/distributed/
      -->
    <remote_servers incl="clickhouse_remote_servers"/>
    <zookeeper incl="zookeeper-servers" optional="true"/>
    <!-- Substitutions for parameters of replicated tables.
          Optional. If you don't use replicated tables, you can omit this.

         See https://clickhouse.yandex/docs/en/table_engines/replication/#creating-replicated-tables
      -->
    <macros incl="macros" optional="true"/>
    <!-- Reloading interval for embedded dictionaries, in seconds. Default: 3600. -->
    <builtin_dictionaries_reload_interval>3600</builtin_dictionaries_reload_interval>
    <!-- Maximum session timeout, in seconds. Default: 3600. -->
    <max_session_timeout>3600</max_session_timeout>
    <!-- Default session timeout, in seconds. Default: 60. -->
    <default_session_timeout>60</default_session_timeout>
    <!-- Serve endpoint for Prometheus monitoring. -->
    <!--
        port - port to set up the server. If not defined or 0, then http_port is used
        metrics - send data from table system.metrics
        events - send data from table system.events
        asynchronous_metrics - send data from table system.asynchronous_metrics
    -->
    <prometheus>
        <endpoint>/metrics</endpoint>
        <port>9363</port>
        <metrics>true</metrics>
        <events>true</events>
        <asynchronous_metrics>true</asynchronous_metrics>
    </prometheus>
    <!-- Query log. Used only for queries with setting log_queries = 1. -->
    <query_log>
        <!-- What table to insert data into. If the table does not exist, it will be created.
             When the query log structure changes after a system update,
              the old table will be renamed and a new table will be created automatically.
        -->
        <database>system</database>
        <table>query_log</table>
        <!--
            PARTITION BY expr https://clickhouse.yandex/docs/en/table_engines/custom_partitioning_key/
            Example:
                event_date
                toMonday(event_date)
                toYYYYMM(event_date)
                toStartOfHour(event_time)
        -->
        <partition_by>toYYYYMM(event_date)</partition_by>
        <!-- Instead of partition_by, you can provide full engine expression (starting with ENGINE = ) with parameters,
             Example: <engine>ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024</engine>
          -->
        <!-- Interval of flushing data. -->
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
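        <!-- Table TTL: query_log rows older than 30 days (by event_date) are deleted automatically. -->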
        <ttl>event_date + INTERVAL 30 DAY DELETE</ttl>
    </query_log>
    <!-- Trace log. Stores stack traces collected by query profilers.
         See query_profiler_real_time_period_ns and query_profiler_cpu_time_period_ns settings. -->
    <trace_log>
        <database>system</database>
        <table>trace_log</table>
        <partition_by>toYYYYMM(event_date)</partition_by>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        <ttl>event_date + INTERVAL 30 DAY DELETE</ttl>
    </trace_log>
    <!-- Query thread log. Has information about all threads participated in query execution.
         Used only for queries with setting log_query_threads = 1. -->
    <query_thread_log>
        <database>system</database>
        <table>query_thread_log</table>
        <partition_by>toYYYYMM(event_date)</partition_by>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        <ttl>event_date + INTERVAL 30 DAY DELETE</ttl>
    </query_thread_log>
    <!-- Uncomment to use the part log.
         Part log contains information about all actions with parts in MergeTree tables (creation, deletion, merges, downloads).
    <part_log>
        <database>system</database>
        <table>part_log</table>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </part_log>
    -->
    <compression incl="clickhouse_compression">
        <!--
        <!- - Set of variants. Checked in order. Last matching case wins. If nothing matches, lz4 will be used. - ->
        <case>

            <!- - Conditions. All must be satisfied. Some conditions may be omitted. - ->
            <min_part_size>10000000000</min_part_size>        <!- - Min part size in bytes. - ->
            <min_part_size_ratio>0.01</min_part_size_ratio>   <!- - Min size of part relative to whole table size. - ->

            <!- - What compression method to use. - ->
            <method>zstd</method>
        </case>
    -->
    </compression>
    <!-- Allow to execute distributed DDL queries (CREATE, DROP, ALTER, RENAME) on cluster.
         Works only if ZooKeeper is enabled. Comment it if such functionality isn't required. -->
    <distributed_ddl>
        <!-- Path in ZooKeeper to queue with DDL queries -->
        <path>/clickhouse/task_queue/ddl</path>
        <!-- Settings from this profile will be used to execute DDL queries -->
        <!-- <profile>default</profile> -->
    </distributed_ddl>
    <!-- Settings to fine tune MergeTree tables. See documentation in source code, in MergeTreeSettings.h -->
    <!--
    <merge_tree>
        <max_suspicious_broken_parts>5</max_suspicious_broken_parts>
    </merge_tree>
    -->

    <merge_tree>
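        <!-- Annotations added for this reference (not in the stock config):
             parts_to_delay_insert / parts_to_throw_insert - number of active parts in one partition at which
              INSERTs are first artificially slowed and then rejected with a "Too many parts" error; with both
              thresholds set equal, the delay stage is effectively skipped.
             max_delay_to_insert - upper bound of the artificial INSERT delay, in seconds.
             max_suspicious_broken_parts - refuse automatic removal of broken parts above this count.
             max_parts_in_total - cap on active parts across all partitions of a table. -->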
        <parts_to_delay_insert>5000</parts_to_delay_insert>
        <parts_to_throw_insert>5000</parts_to_throw_insert>
        <max_delay_to_insert>2</max_delay_to_insert>
        <max_suspicious_broken_parts>5</max_suspicious_broken_parts>
        <max_parts_in_total>100000</max_parts_in_total>
    </merge_tree>

    <!-- Protection from accidental DROP.
         If the size of a MergeTree table exceeds max_table_size_to_drop (in bytes), the table cannot be dropped with any DROP query.
         If you want to delete one table without changing the clickhouse-server config, create the special file <clickhouse-path>/flags/force_drop_table and run DROP once.
         By default max_table_size_to_drop is 50GB; max_table_size_to_drop=0 allows DROP of any table.
         The same applies to max_partition_size_to_drop.
         Uncomment to disable protection.
    -->
    <!-- <max_table_size_to_drop>0</max_table_size_to_drop> -->
    <!-- <max_partition_size_to_drop>0</max_partition_size_to_drop> -->
    <!-- Please do not remove this line. -->
    <listen_host>0.0.0.0</listen_host>
    <include_from>/etc/clickhouse-server/metrika.xml</include_from>
    <max_table_size_to_drop>0</max_table_size_to_drop>
</yandex>

metrika.xml configuration

<?xml version="1.0" encoding="UTF-8"?>
<yandex>
    <clickhouse_remote_servers>
        <default_cluster>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>127.0.1.2</host>
                    <port>9000</port>
                    <user>xxx</user>
                    <password>xxx</password>
                </replica>
                <replica>
                    <host>127.0.0.3</host>
                    <port>9000</port>
                    <user>xxx</user>
                    <password>xxx</password>
                </replica>
            </shard>
        </default_cluster>
    </clickhouse_remote_servers>
    <zookeeper-servers>
        <node>
            <host>127.1.1.15</host>
            <port>2181</port>
        </node>
        <node>
            <host>127.1.1.16</host>
            <port>2181</port>
        </node>
        <node>
            <host>127.1.1.17</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
</yandex>
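
config.xml above pulls <macros incl="macros" optional="true"/> from this include file, but metrika.xml as shown defines no <macros> section, so CREATE TABLE statements that use {shard} or {replica} substitutions in a ReplicatedMergeTree path will fail with a missing-macro error. Below is a minimal sketch of the section each node could add; the shard/replica values are hypothetical and must be unique per host, and the top-level element names in this file must match the incl attributes in config.xml.

<yandex>
    <!-- Hypothetical per-node macros; adjust shard/replica on every host. -->
    <macros>
        <shard>01</shard>
        <replica>replica-01-1</replica>
    </macros>
</yandex>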

users.xml configuration

<yandex>
    <!-- Profiles of settings. -->
    <profiles>
        <!-- Default settings. -->
        <default>
            <!-- Maximum memory usage for processing single query, in bytes. -->
            <max_memory_usage>250000000000</max_memory_usage>
            <!--
             <max_memory_usage_for_all_queries>100000000000</max_memory_usage_for_all_queries>
             -->
            <!-- Use cache of uncompressed blocks of data. Meaningful only for processing a large number of very short queries. -->
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <!-- How to choose between replicas during distributed query processing.
                  random - choose random replica from set of replicas with minimum number of errors
                  nearest_hostname - from set of replicas with minimum number of errors, choose replica
                   with minimum number of different symbols between replica's hostname and local hostname
                   (Hamming distance).
                  in_order - first live replica is chosen in specified order.
                  first_or_random - if the first replica has a higher number of errors, pick a random one from the replicas with the minimum number of errors.
             -->
            <load_balancing>random</load_balancing>
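            <!-- Annotations added for this reference (not in the stock users.xml):
                 max_partitions_per_insert_block = 0 removes the limit on partitions touched by one inserted block;
                 background_pool_size - threads for background merges and mutations of MergeTree tables;
                 max_compress_block_size - maximum size, in bytes, of an uncompressed block before compression;
                 min_insert_block_size_rows / min_insert_block_size_bytes - INSERT squashes incoming data into
                  blocks of at least this many rows / bytes. -->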
            <max_partitions_per_insert_block>0</max_partitions_per_insert_block>
            <background_pool_size>32</background_pool_size>
            <max_compress_block_size>10485760</max_compress_block_size>
            <min_insert_block_size_rows>10000000</min_insert_block_size_rows>
            <min_insert_block_size_bytes>1024000000</min_insert_block_size_bytes>
        </default>
        <!-- Profile that allows only read queries. -->
        <readonly>
            <readonly>1</readonly>
        </readonly>
    </profiles>
    <!-- Users and ACL. -->
    <users>
        <!-- If user name was not specified, 'default' user is used. -->
        <root>
            <!-- Password could be specified in plaintext or in SHA256 (in hex format).

                  If you want to specify password in plaintext (not recommended), place it in 'password' element.
                  Example: <password>qwerty</password>.
                  Password could be empty.

                  If you want to specify SHA256, place it in 'password_sha256_hex' element.
                  Example: <password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
                  Restrictions of SHA256: impossibility to connect to ClickHouse using MySQL JS client (as of July 2019).

                  If you want to specify double SHA1, place it in 'password_double_sha1_hex' element.
                  Example: <password_double_sha1_hex>e395796d6546b1b65db9d665cd43f0e858dd4303</password_double_sha1_hex>

                  How to generate decent password:
                  Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
                  The first line will be the password and the second the corresponding SHA256.

                  How to generate double SHA1:
                  Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
                  The first line will be the password and the second the corresponding double SHA1.
             -->
            <password>xxxxx</password>
            <!-- List of networks with open access.

                  To open access from everywhere, specify:
                     <ip>::/0</ip>

                  To open access only from localhost, specify:
                     <ip>::1</ip>
                     <ip>127.0.0.1</ip>

                  Each element of list has one of the following forms:
                  <ip> IP-address or network mask. Examples: 213.180.204.3 or 10.0.0.1/8 or 10.0.0.1/255.255.255.0
                      2a02:6b8::3 or 2a02:6b8::3/64 or 2a02:6b8::3/ffff:ffff:ffff:ffff::.
                  <host> Hostname. Example: server01.yandex.ru.
                      To check access, DNS query is performed, and all received addresses compared to peer address.
                  <host_regexp> Regular expression for host names. Example: ^server\d\d-\d\d-\d\.yandex\.ru$
                      To check access, DNS PTR query is performed for peer address and then regexp is applied.
                      Then, for result of PTR query, another DNS query is performed and all received addresses compared to peer address.
                      It is strongly recommended that the regexp ends with $
                  All results of DNS requests are cached till server restart.
             -->
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
            <!-- Settings profile for user. -->
            <profile>default</profile>
            <!-- Quota for user. -->
            <quota>default</quota>
            <access_management>1</access_management>
        </root>
    </users>
    <!-- Quotas. -->
    <quotas>
        <!-- Name of quota. -->
        <default>
            <!-- Limits for time interval. You could specify many intervals with different limits. -->
            <interval>
                <!-- Length of interval. -->
                <duration>3600</duration>
                <!-- No limits. Just calculate resource usage for time interval. -->
                <queries>0</queries>
                <errors>0</errors>
                <result_rows>0</result_rows>
                <read_rows>0</read_rows>
                <execution_time>0</execution_time>
            </interval>
        </default>
    </quotas>
</yandex>
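
The readonly profile declared in <profiles> above is never attached to an account in this users.xml. A minimal sketch of a read-only user wired to it is shown below; the user name reader, the empty password, and the localhost-only network list are assumptions for illustration, not part of the original config.

<!-- Hypothetical read-only account; goes inside the <users> section. -->
<reader>
    <password></password>
    <networks>
        <ip>::1</ip>
        <ip>127.0.0.1</ip>
    </networks>
    <profile>readonly</profile>
    <quota>default</quota>
</reader>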
