[Iceberg Analysis] Collecting Iceberg Metrics Output with Spark


Collecting Iceberg Metrics Output with Spark

Table of Contents

  • Collecting Iceberg Metrics Output with Spark
    • Iceberg provides two kinds of metrics reports and two kinds of metrics reporters
      • ScanReport
      • CommitReport
    • LoggingMetricsReporter
    • RESTMetricsReporter
    • Verification example
      • Environment setup
      • Result notes

Iceberg provides two kinds of metrics reports and two kinds of metrics reporters

ScanReport

Contains the metrics collected while planning the scan of a given table. Besides general information about the table (such as the snapshot ID or table name), it includes the following metrics:

  • total scan planning duration
  • number of data and delete files included in the result
  • number of data and delete manifests scanned/skipped
  • number of data and delete files scanned/skipped
  • number of equality and positional delete files scanned

CommitReport

Contains the metrics collected after committing changes to a table (i.e., producing a snapshot). Besides general information about the table (such as the snapshot ID or table name), it includes the following metrics (a short sketch after this list shows how these fields can be read programmatically):

  • total commit duration
  • number of attempts required for the commit to succeed
  • number of added/removed data and delete files
  • number of added/removed equality/positional delete files
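Both report types are plain value objects, so their fields can also be consumed in code by a custom reporter that implements Iceberg's MetricsReporter interface. The following is a minimal sketch; the class name ConsoleMetricsReporter is hypothetical, and the accessor names follow the field names visible in the report logs later in this article, so treat them as an assumption rather than verified API:

import org.apache.iceberg.metrics.CommitReport;
import org.apache.iceberg.metrics.MetricsReport;
import org.apache.iceberg.metrics.MetricsReporter;
import org.apache.iceberg.metrics.ScanReport;

public class ConsoleMetricsReporter implements MetricsReporter {

    @Override
    public void report(MetricsReport report) {
        if (report instanceof ScanReport) {
            // emitted after scan planning
            ScanReport scan = (ScanReport) report;
            System.out.println("scan of " + scan.tableName()
                    + ", snapshot=" + scan.snapshotId()
                    + ", planning=" + scan.scanMetrics().totalPlanningDuration());
        } else if (report instanceof CommitReport) {
            // emitted after a snapshot has been committed
            CommitReport commit = (CommitReport) report;
            System.out.println("commit to " + commit.tableName()
                    + ", operation=" + commit.operation()
                    + ", attempts=" + commit.commitMetrics().attempts());
        }
    }
}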

LoggingMetricsReporter

The logging metrics reporter: reports are written to the log file.
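Besides setting the reporter per table (as the Java example later in this article does), it can also be registered once at the catalog level through Iceberg's metrics-reporter-impl catalog property, which Spark forwards via the catalog configuration. The following is a minimal sketch under that assumption; the session settings mirror the demo below and have not been verified against every Iceberg/Spark version:

import org.apache.spark.sql.SparkSession;

public class CatalogLevelReporterExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .master("local")
                .appName("Catalog-level metrics reporter")
                .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
                .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.local.type", "hadoop")
                .config("spark.sql.catalog.local.warehouse", "iceberg_warehouse")
                // assumption: the Iceberg catalog property "metrics-reporter-impl" is passed
                // through the Spark catalog config; every table in this catalog then reports
                // to the LoggingMetricsReporter without a per-table property
                .config("spark.sql.catalog.local.metrics-reporter-impl",
                        "org.apache.iceberg.metrics.LoggingMetricsReporter")
                .getOrCreate();

        // any query against the catalog now produces ScanReport/CommitReport log entries
        spark.sql("SELECT * FROM local.iceberg_db.table2").show();
        spark.close();
    }
}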

RESTMetricsReporter

The REST metrics reporter: reports are sent to a REST service.

This reporter can only be used with a REST catalog (RESTCatalog).
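A minimal sketch of pointing Spark at a REST catalog is shown below. The catalog name rest_cat and the URI http://localhost:8181 are hypothetical placeholders for an actual REST catalog service; when such a catalog is used, the RESTMetricsReporter forwards scan and commit reports to that service:

import org.apache.spark.sql.SparkSession;

public class RestCatalogMetricsExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .master("local")
                .appName("Iceberg REST catalog metrics example")
                .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
                // register a REST catalog; requires a running REST catalog service
                .config("spark.sql.catalog.rest_cat", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.rest_cat.type", "rest")
                .config("spark.sql.catalog.rest_cat.uri", "http://localhost:8181") // hypothetical endpoint
                .getOrCreate();

        // scans and commits against rest_cat tables are reported to the REST service
        spark.sql("SELECT * FROM rest_cat.iceberg_db.table2").show();
        spark.close();
    }
}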

Verification example


Environment setup

The pom.xml of the iceberg-demo project:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.donny.demo</groupId>
    <artifactId>iceberg-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>iceberg-demo</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spark.version>3.4.2</spark.version>
        <iceberg.version>1.6.1</iceberg.version>
        <parquet.version>1.13.1</parquet.version>
        <avro.version>1.11.3</avro.version>
        <parquet.hadoop.bundle.version>1.8.1</parquet.hadoop.bundle.version>
    </properties>

    <dependencies>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>${spark.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.avro</groupId>
                    <artifactId>avro</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>${spark.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.parquet</groupId>
                    <artifactId>parquet-column</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.parquet</groupId>
                    <artifactId>parquet-hadoop-bundle</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.parquet</groupId>
                    <artifactId>parquet-hadoop</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>org.apache.iceberg</groupId>
            <artifactId>iceberg-core</artifactId>
            <version>${iceberg.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.iceberg</groupId>
            <artifactId>iceberg-spark-3.4_2.12</artifactId>
            <version>${iceberg.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.iceberg</groupId>
            <artifactId>iceberg-spark-extensions-3.4_2.12</artifactId>
            <version>${iceberg.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.antlr</groupId>
                    <artifactId>antlr4</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.antlr</groupId>
                    <artifactId>antlr4-runtime</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.parquet</groupId>
            <artifactId>parquet-column</artifactId>
            <version>${parquet.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.parquet</groupId>
            <artifactId>parquet-hadoop</artifactId>
            <version>${parquet.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.parquet</groupId>
            <artifactId>parquet-hadoop-bundle</artifactId>
            <version>${parquet.hadoop.bundle.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.avro</groupId>
            <artifactId>avro</artifactId>
            <version>${avro.version}</version>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Override the logging configuration file log4j2.properties so that the metrics logs are written to a dedicated metrics log file. Spark's default logging configuration ships inside the spark-core jar as org.apache.spark.log4j2-defaults.properties.

# Set everything to be logged to the console
rootLogger.level = info
rootLogger.appenderRef.stdout.ref = console
logger.icebergMetric.appenderRef.file.ref = RollingFile
logger.icebergMetric.appenderRef.stdout.ref = console

appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} %p %c{1}: %m%n%ex

appender.CUSTOM.type = RollingFile
appender.CUSTOM.name = RollingFile
appender.CUSTOM.fileName = logs/iceberg_metrics.log
appender.CUSTOM.filePattern = logs/iceberg_metrics.%d{yyyy-MM-dd}-%i.log.gz
appender.CUSTOM.layout.type = PatternLayout
appender.CUSTOM.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} %-5p %c{1}:%L - %m%n
appender.CUSTOM.strategy.type = DefaultRolloverStrategy
appender.CUSTOM.strategy.delete.type = Delete
appender.CUSTOM.strategy.delete.basePath = logs
appender.CUSTOM.strategy.delete.0.type = IfFileName
appender.CUSTOM.strategy.delete.0.regex = iceberg_metrics.*.log.gz
appender.CUSTOM.strategy.delete.1.type = IfLastModified
appender.CUSTOM.strategy.delete.1.age = P15D
appender.CUSTOM.policy.type = TimeBasedTriggeringPolicy

# Settings to quiet third party logs that are too verbose
logger.jetty.name = org.sparkproject.jetty
logger.jetty.level = warn
logger.jetty2.name = org.sparkproject.jetty.util.component.AbstractLifeCycle
logger.jetty2.level = error
logger.repl1.name = org.apache.spark.repl.SparkIMain$exprTyper
logger.repl1.level = info
logger.repl2.name = org.apache.spark.repl.SparkILoop$SparkILoopInterpreter
logger.repl2.level = info

# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
logger.repl.name = org.apache.spark.repl.Main
logger.repl.level = warn

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs
# in SparkSQL with Hive support
logger.metastore.name = org.apache.hadoop.hive.metastore.RetryingHMSHandler
logger.metastore.level = fatal
logger.hive_functionregistry.name = org.apache.hadoop.hive.ql.exec.FunctionRegistry
logger.hive_functionregistry.level = error

# Parquet related logging
logger.parquet.name = org.apache.parquet.CorruptStatistics
logger.parquet.level = error
logger.parquet2.name = parquet.CorruptStatistics
logger.parquet2.level = error

# Custom logger for your application
logger.icebergMetric.name = org.apache.iceberg.metrics.LoggingMetricsReporter
logger.icebergMetric.level = info
logger.icebergMetric.additivity = false

The main Java class. The key step is setting the metrics reporter as a table property; without it, no metrics are emitted.

package com.donny.demo;

import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.TableScan;
import org.apache.iceberg.io.CloseableIterable;
import org.apache.iceberg.metrics.LoggingMetricsReporter;
import org.apache.iceberg.spark.Spark3Util;
import org.apache.spark.sql.AnalysisException;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import java.io.IOException;


/**
 * @author 1792998761@qq.com
 * @version 1.0
 */
public class IcebergSparkDemo {

    public static void main(String[] args) throws AnalysisException, IOException, InterruptedException {
        SparkSession spark = SparkSession
                .builder()
                .master("local")
                .appName("Iceberg spark example")
                .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
                .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.local.type", "hadoop") //指定catalog 类型
                .config("spark.sql.catalog.local.warehouse", "iceberg_warehouse")
                .getOrCreate();

        spark.sql("CREATE TABLE local.iceberg_db.table2( id bigint, data string, ts timestamp) USING iceberg PARTITIONED BY (day(ts))");
        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (1, 'a', cast(1727601585 as timestamp)),(2, 'b', cast(1724923185 as timestamp)),(3, 'c', cast(1724919585 as timestamp))");
        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (4, 'd', cast(1727605185 as timestamp)),(5, 'e', cast(1725963585 as timestamp)),(6, 'f', cast(1726827585 as timestamp))");
        spark.sql("DELETE FROM local.iceberg_db.table2  where id in (2)");

        org.apache.iceberg.Table table = Spark3Util.loadIcebergTable(spark, "local.iceberg_db.table2");
        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (4, 'd', cast(1724750385 as timestamp)),(5, 'e', cast(1724663985 as timestamp)),(6, 'f', cast(1727342385 as timestamp))");
        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (7, 'h', cast(1727601585 as timestamp)),(8, 'i', cast(1724923185 as timestamp)),(9, 'j', cast(1724836785 as timestamp))");
        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (10, 'k', cast(1727601585 as timestamp)),(11, 'l', cast(1724923185 as timestamp)),(12, 'm', cast(1724836785 as timestamp))");
        // Set the metrics reporter as a table property
        table.updateProperties()
                .set("metrics.reporters", LoggingMetricsReporter.class.getName())
                .commit();
        // Trigger a table scan explicitly so that a ScanReport is emitted
        TableScan tableScan = table.newScan();
        try (CloseableIterable<FileScanTask> fileScanTasks = tableScan.planFiles()) {
            // planning the scan files is enough to produce the report; the tasks are not used here
        }

        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (30, 't', cast(1727605185 as timestamp)),(31, 'y', cast(1725963585 as timestamp)),(32, 'i', cast(1726827585 as timestamp))");

        Dataset<Row> result = spark.sql("SELECT * FROM local.iceberg_db.table2 where ts >= '2024-09-20'");
        result.show();
        spark.close();
    }
}

Result notes

During this verification, the following ScanReport was only produced after explicitly invoking a table scan (actively emitted metrics):

2024-10-07 09:38:11.903 INFO  LoggingMetricsReporter:38 - Received metrics report: ScanReport{
tableName=local.iceberg_db.table2,
snapshotId=3288641599702333945,
filter=true,
schemaId=0,
projectedFieldIds=[1, 2, 3],
projectedFieldNames=[id, data, ts],
scanMetrics=ScanMetricsResult{
	totalPlanningDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.294853952S, count=1}, 
	resultDataFiles=CounterResult{unit=COUNT, value=0},
    resultDeleteFiles=CounterResult{unit=COUNT, value=0},
    totalDataManifests=CounterResult{unit=COUNT, value=6},
    totalDeleteManifests=CounterResult{unit=COUNT, value=0},
    scannedDataManifests=CounterResult{unit=COUNT, value=0},
    skippedDataManifests=CounterResult{unit=COUNT, value=0},
    totalFileSizeInBytes=CounterResult{unit=BYTES, value=0},
    totalDeleteFileSizeInBytes=CounterResult{unit=BYTES, value=0},
    skippedDataFiles=CounterResult{unit=COUNT, value=0},
    skippedDeleteFiles=CounterResult{unit=COUNT, value=0},
    scannedDeleteManifests=CounterResult{unit=COUNT, value=0},
    skippedDeleteManifests=CounterResult{unit=COUNT, value=0},
    indexedDeleteFiles=CounterResult{unit=COUNT, value=0},
    equalityDeleteFiles=CounterResult{unit=COUNT, value=0},
    positionalDeleteFiles=CounterResult{unit=COUNT, value=0}},
metadata={
	engine-version=3.4.2, 
	iceberg-version=Apache Iceberg 1.6.1 (commit 8e9d59d299be42b0bca9461457cd1e95dbaad086), 
	app-id=local-1728265088818, 
	engine-name=spark}}

A DELETE statement also triggers a scan, producing this ScanReport (passively emitted metrics):

2024-10-07 11:15:54.708 INFO  LoggingMetricsReporter:38 - Received metrics report: ScanReport{
tableName=local.iceberg_db.table2,
snapshotId=7181960343136679052,
filter=ref(name="id") == "(1-digit-int)",
schemaId=0,
projectedFieldIds=[1, 2, 3],
projectedFieldNames=[id, data, ts],
scanMetrics=ScanMetricsResult{
	totalPlanningDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.098792497S, count=1},
	resultDataFiles=CounterResult{unit=COUNT, value=1},
    resultDeleteFiles=CounterResult{unit=COUNT, value=0},
    totalDataManifests=CounterResult{unit=COUNT, value=2},
    totalDeleteManifests=CounterResult{unit=COUNT, value=0},
    scannedDataManifests=CounterResult{unit=COUNT, value=2},
    skippedDataManifests=CounterResult{unit=COUNT, value=0},
    totalFileSizeInBytes=CounterResult{unit=BYTES, value=898},
    totalDeleteFileSizeInBytes=CounterResult{unit=BYTES, value=0},
    skippedDataFiles=CounterResult{unit=COUNT, value=4},
    skippedDeleteFiles=CounterResult{unit=COUNT, value=0},
    scannedDeleteManifests=CounterResult{unit=COUNT, value=0},
    skippedDeleteManifests=CounterResult{unit=COUNT, value=0},
    indexedDeleteFiles=CounterResult{unit=COUNT, value=0},
    equalityDeleteFiles=CounterResult{unit=COUNT, value=0},
    positionalDeleteFiles=CounterResult{unit=COUNT, value=0}}, 
metadata={
	engine-version=3.4.2, 
	iceberg-version=Apache Iceberg 1.6.1 (commit 8e9d59d299be42b0bca9461457cd1e95dbaad086), 
	app-id=local-1728270940331, 
	engine-name=spark}}

An INSERT triggers this CommitReport (passively emitted metrics):

2024-10-06 15:48:47 INFO  LoggingMetricsReporter:38 - Received metrics report: 
CommitReport{
	tableName=local.iceberg_db.table2, 
	snapshotId=3288641599702333945, 
	sequenceNumber=6, 
	operation=append, 
	commitMetrics=CommitMetricsResult{
		totalDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.430784537S, count=1}, 
		attempts=CounterResult{unit=COUNT, value=1}, 
		addedDataFiles=CounterResult{unit=COUNT, value=3}, 
		removedDataFiles=null, 
		totalDataFiles=CounterResult{unit=COUNT, value=14}, 
		addedDeleteFiles=null,
        addedEqualityDeleteFiles=null,
        addedPositionalDeleteFiles=null,
        removedDeleteFiles=null,
        removedEqualityDeleteFiles=null,
        removedPositionalDeleteFiles=null,
        totalDeleteFiles=CounterResult{unit=COUNT, value=0}, addedRecords=CounterResult{unit=COUNT, value=3}, 
        removedRecords=null, 
        totalRecords=CounterResult{unit=COUNT, value=14}, 
        addedFilesSizeInBytes=CounterResult{unit=BYTES, value=2646}, 
        removedFilesSizeInBytes=null, 
        totalFilesSizeInBytes=CounterResult{unit=BYTES, value=12376}, 
        addedPositionalDeletes=null,
        removedPositionalDeletes=null, 
        totalPositionalDeletes=CounterResult{unit=COUNT, value=0}, 
        addedEqualityDeletes=null, 
        removedEqualityDeletes=null, 
        totalEqualityDeletes=CounterResult{unit=COUNT, value=0}}, 
    metadata={
        engine-version=3.4.2, 
        app-id=local-1728200916879, 
        engine-name=spark, 
        iceberg-version=Apache Iceberg 1.6.1 (commit 8e9d59d299be42b0bca9461457cd1e95dbaad086)}}
