Custom Serializable Data Types


Contents

  • 1. WritableComparable
    • 1.1 Writable
    • 1.2 Comparable
    • 1.3 IntWritable
  • 2. The Custom Serializable Type RectangleWritable
  • 3. Rectangle Area Computation
    • 3.1 Map
    • 3.2 Reduce
  • 4. Code and Results
    • 4.1 Dependencies in pom.xml
    • 4.2 The util Helper Class
    • 4.3 Rectangle Area Computation
    • 4.4 Results
  • References

  The Apache Hadoop source code quoted in this article is licensed under the Apache License 2.0; see the Apache License 2.0 for details.

1. WritableComparable

  A custom serializable data type needs to implement org.apache.hadoop.io.WritableComparable, whose source code is shown below. As the source shows, WritableComparable has two parent interfaces: org.apache.hadoop.io.Writable and java.lang.Comparable.

/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.hadoop.io;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * A {@link Writable} which is also {@link Comparable}. 
 *
 * <p><code>WritableComparable</code>s can be compared to each other, typically 
 * via <code>Comparator</code>s. Any type which is to be used as a 
 * <code>key</code> in the Hadoop Map-Reduce framework should implement this
 * interface.</p>
 *
 * <p>Note that <code>hashCode()</code> is frequently used in Hadoop to partition
 * keys. It's important that your implementation of hashCode() returns the same 
 * result across different instances of the JVM. Note also that the default 
 * <code>hashCode()</code> implementation in <code>Object</code> does <b>not</b>
 * satisfy this property.</p>
 *  
 * <p>Example:</p>
 * <blockquote><pre>
 *     public class MyWritableComparable implements
 *      WritableComparable{@literal <MyWritableComparable>} {
 *       // Some data
 *       private int counter;
 *       private long timestamp;
 *       
 *       public void write(DataOutput out) throws IOException {
 *         out.writeInt(counter);
 *         out.writeLong(timestamp);
 *       }
 *       
 *       public void readFields(DataInput in) throws IOException {
 *         counter = in.readInt();
 *         timestamp = in.readLong();
 *       }
 *       
 *       public int compareTo(MyWritableComparable o) {
 *         int thisValue = this.value;
 *         int thatValue = o.value;
 *         return (thisValue &lt; thatValue ? -1 : (thisValue==thatValue ? 0 : 1));
 *       }
 *
 *       public int hashCode() {
 *         final int prime = 31;
 *         int result = 1;
 *         result = prime * result + counter;
 *         result = prime * result + (int) (timestamp ^ (timestamp &gt;&gt;&gt; 32));
 *         return result
 *       }
 *     }
 * </pre></blockquote>
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface WritableComparable<T> extends Writable, Comparable<T> {
}

1.1 Writable

  Writable declares two methods: void write(DataOutput out) and void readFields(DataInput in). readFields handles deserialization (reading the fields in), while write handles serialization (writing them out). Since Writable is a parent interface of WritableComparable, a custom serializable data type must implement both methods.

/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.hadoop.io;

import java.io.DataOutput;
import java.io.DataInput;
import java.io.IOException;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * A serializable object which implements a simple, efficient, serialization 
 * protocol, based on {@link DataInput} and {@link DataOutput}.
 *
 * <p>Any <code>key</code> or <code>value</code> type in the Hadoop Map-Reduce
 * framework implements this interface.</p>
 * 
 * <p>Implementations typically implement a static <code>read(DataInput)</code>
 * method which constructs a new instance, calls {@link #readFields(DataInput)} 
 * and returns the instance.</p>
 * 
 * <p>Example:</p>
 * <blockquote><pre>
 *     public class MyWritable implements Writable {
 *       // Some data
 *       private int counter;
 *       private long timestamp;
 *
 *       // Default constructor to allow (de)serialization
 *       MyWritable() { }
 *
 *       public void write(DataOutput out) throws IOException {
 *         out.writeInt(counter);
 *         out.writeLong(timestamp);
 *       }
 *
 *       public void readFields(DataInput in) throws IOException {
 *         counter = in.readInt();
 *         timestamp = in.readLong();
 *       }
 *
 *       public static MyWritable read(DataInput in) throws IOException {
 *         MyWritable w = new MyWritable();
 *         w.readFields(in);
 *         return w;
 *       }
 *     }
 * </pre></blockquote>
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface Writable {
  /** 
   * Serialize the fields of this object to <code>out</code>.
   * 
   * @param out <code>DataOuput</code> to serialize this object into.
   * @throws IOException any other problem for write.
   */
  void write(DataOutput out) throws IOException;

  /** 
   * Deserialize the fields of this object from <code>in</code>.  
   * 
   * <p>For efficiency, implementations should attempt to re-use storage in the 
   * existing object where possible.</p>
   * 
   * @param in <code>DataInput</code> to deseriablize this object from.
   * @throws IOException any other problem for readFields.
   */
  void readFields(DataInput in) throws IOException;
}

1.2 Comparable

  Comparable is the other parent interface of WritableComparable; its source code is shown below. It declares a single method, int compareTo(T var1), which compares two values of type T (normally inside a class that implements Comparable). A custom serializable data type must implement this method as well.

// Source code is decompiled from a .class file using FernFlower decompiler.
package java.lang;

public interface Comparable<T> {
   int compareTo(T var1);
}

1.3 IntWritable

  IntWritable is one of the serializable data types that ship with Hadoop, and its source code is a good reference when defining your own. Besides the write, readFields, and compareTo methods covered above, implementing toString (public String toString()) is also recommended.

/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.hadoop.io;


import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/** A WritableComparable for ints. */
@InterfaceAudience.Public
@InterfaceStability.Stable
public class IntWritable implements WritableComparable<IntWritable> {
  private int value;

  public IntWritable() {}

  public IntWritable(int value) { set(value); }

  /**
   * Set the value of this IntWritable.
   * @param value input value.
   */
  public void set(int value) { this.value = value; }

  /**
   * Return the value of this IntWritable.
   * @return value of this IntWritable.
   */
  public int get() { return value; }

  @Override
  public void readFields(DataInput in) throws IOException {
    value = in.readInt();
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(value);
  }

  /** Returns true iff <code>o</code> is a IntWritable with the same value. */
  @Override
  public boolean equals(Object o) {
    if (!(o instanceof IntWritable))
      return false;
    IntWritable other = (IntWritable)o;
    return this.value == other.value;
  }

  @Override
  public int hashCode() {
    return value;
  }

  /** Compares two IntWritables. */
  @Override
  public int compareTo(IntWritable o) {
    int thisValue = this.value;
    int thatValue = o.value;
    return (thisValue<thatValue ? -1 : (thisValue==thatValue ? 0 : 1));
  }

  @Override
  public String toString() {
    return Integer.toString(value);
  }

  /** A Comparator optimized for IntWritable. */ 
  public static class Comparator extends WritableComparator {
    public Comparator() {
      super(IntWritable.class);
    }
    
    @Override
    public int compare(byte[] b1, int s1, int l1,
                       byte[] b2, int s2, int l2) {
      int thisValue = readInt(b1, s1);
      int thatValue = readInt(b2, s2);
      return (thisValue<thatValue ? -1 : (thisValue==thatValue ? 0 : 1));
    }
  }

  static {                                        // register this comparator
    WritableComparator.define(IntWritable.class, new Comparator());
  }
}
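
  To see the contract in action, here is a minimal round-trip sketch (the class name IntWritableRoundTrip is my own; it assumes only hadoop-common on the classpath):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

import org.apache.hadoop.io.IntWritable;

public class IntWritableRoundTrip {
    public static void main(String[] args) throws Exception {
        IntWritable original = new IntWritable(42);

        // Serialize: write() pushes the int through a DataOutput.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // Deserialize: readFields() reads it back, reusing an existing instance.
        IntWritable copy = new IntWritable();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(copy);                     // 42 (via toString)
        System.out.println(original.equals(copy));    // true
        System.out.println(original.compareTo(copy)); // 0
    }
}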


2. The Custom Serializable Type RectangleWritable

  When defining a custom serializable data type, additionally implementing public int hashCode(), public boolean equals(Object o), and public String toString() lets the type serve as a Mapper output key or value and as a Reducer input or output key or value (toString is called when the result is written out), and it also lets the type work with MapReduce's built-in hash partitioning, sketched just below. When implementing hashCode, derive the hash from the object's fields; when implementing compareTo, make sure that two instances which are genuinely different never compare as equal, i.e. keep compareTo consistent with equals.
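  For reference, the default hash partitioning boils down to the following (a sketch of the logic in Hadoop's org.apache.hadoop.mapreduce.lib.partition.HashPartitioner; the mask with Integer.MAX_VALUE keeps the result non-negative):

public int getPartition(K key, V value, int numReduceTasks) {
    // Equal keys produce equal hashes and therefore land on the same reducer,
    // which is why hashCode() must be stable across JVM instances.
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}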
  Below is the code for RectangleWritable, a type that holds a rectangle's length and width. I consider a rectangle of length 6 and width 3 to be different from one of length 3 and width 6, and compareTo is implemented with that in mind.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.Objects;

import org.apache.hadoop.io.WritableComparable;

public class RectangleWritable implements WritableComparable<RectangleWritable>  {
    private int length, width;

    public int getLength() {
        return length;
    }

    public void setLength(int length) {
        this.length = length;
    }

    public int getWidth() {
        return width;
    }

    public void setWidth(int width) {
        this.width = width;
    }
    
    public RectangleWritable() {
        super();
    }

    public RectangleWritable(int length, int width) {
        super();
        setLength(length);
        setWidth(width);
    }

    @Override
    public String toString() {
        return String.format("%d\t%d", getLength(), getWidth());
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(getLength());
        out.writeInt(getWidth());
    }

    @Override
    public int hashCode() {
        return Objects.hash(getLength(), getWidth());
    }

    @Override
    public boolean equals(Object o) {
        if (this == o)
            return true;
        if (!(o instanceof RectangleWritable))
            return false;
        RectangleWritable other = (RectangleWritable) o;
        return other.getLength() == getLength() && other.getWidth() == getWidth();
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.length = in.readInt();
        this.width = in.readInt();
    }

    @Override
    public int compareTo(RectangleWritable o) {
        int res = Integer.compare(getLength(), o.getLength());
        return res == 0 ? Integer.compare(getWidth(), o.getWidth()) : res;
    }
}
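
  A quick check of these semantics (the demo class name is mine):

public class RectangleWritableDemo {
    public static void main(String[] args) {
        RectangleWritable a = new RectangleWritable(6, 3);
        RectangleWritable b = new RectangleWritable(3, 6);
        RectangleWritable c = new RectangleWritable(6, 3);

        System.out.println(a.equals(b));                  // false: a 6x3 and a 3x6 rectangle differ
        System.out.println(a.equals(c));                  // true
        System.out.println(a.compareTo(b));               // positive: lengths are compared first (6 > 3)
        System.out.println(a.hashCode() == c.hashCode()); // true: equal objects share a hash
    }
}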

3. Rectangle Area Computation

  The input text is as follows.

9 9
3 27
7 8
1 1
3 6
6 3

3.1 Map

(Figure: Map phase. Each input line, such as "9 9", is parsed into a RectangleWritable(length, width) key and emitted with a NullWritable value.)

3.2 Reduce

(Figure: Reduce phase. Identical rectangles are grouped under one key; for each key, the area length × width is written as an IntWritable value.)

4. Code and Results

4.1 Dependencies in pom.xml

	<dependencies>
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
			<version>4.11</version>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-common</artifactId>
			<version>3.3.6</version>
			<exclusions>
				<exclusion>
					<groupId>org.slf4j</groupId>
					<artifactId>slf4j-log4j12</artifactId>
				</exclusion>
			</exclusions>
		</dependency>
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-mapreduce-client-core</artifactId>
			<version>3.3.6</version>
			<type>pom</type>
		</dependency>
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-mapreduce-client-jobclient</artifactId>
			<version>3.3.6</version>
		</dependency>
	</dependencies>

4.2 The util Helper Class

import java.net.URI;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;


public class util {
    // Returns a FileSystem handle for the given URI (e.g. hdfs://localhost:9000).
    public static FileSystem getFileSystem(String uri, Configuration conf) throws Exception {
        URI add = new URI(uri);
        return FileSystem.get(add, conf);
    }

    // Recursively deletes the given path if it exists, so the job can be rerun.
    public static void removeALL(String uri, Configuration conf, String path) throws Exception {
        FileSystem fs = getFileSystem(uri, conf);
        if (fs.exists(new Path(path))) {
            boolean isDeleted = fs.delete(new Path(path), true);
            System.out.println("Delete Output Folder? " + isDeleted);
        }
    }

    // Prints the contents of every part-r-* file under the given output path.
    public static void showResult(String uri, Configuration conf, String path) throws Exception {
        FileSystem fs = getFileSystem(uri, conf);
        String regex = "part-r-";
        Pattern pattern = Pattern.compile(regex);

        if (fs.exists(new Path(path))) {
            FileStatus[] files = fs.listStatus(new Path(path));
            for (FileStatus file : files) {
                Matcher matcher = pattern.matcher(file.getPath().toString());
                if (matcher.find()) {
                    System.out.println(file.getPath() + ":");
                    FSDataInputStream openStream = fs.open(file.getPath());
                    IOUtils.copyBytes(openStream, System.out, 1024);
                    openStream.close();
                }
            }
        }
    }
}

4.3 Rectangle Area Computation

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class App {
	// Map: parse "length width" from each input line and emit the rectangle as the key.
	public static class MyMapper extends Mapper<LongWritable, Text, RectangleWritable, NullWritable> {
		@Override
		public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
			String[] splitStr = value.toString().split(" ");
			RectangleWritable keyOut = new RectangleWritable(Integer.parseInt(splitStr[0]), Integer.parseInt(splitStr[1]));
			context.write(keyOut, NullWritable.get());
		}
	}

	// Reduce: identical rectangles are grouped under one key; write each key once with its area.
	public static class MyReducer extends Reducer<RectangleWritable, NullWritable, RectangleWritable, IntWritable> {
		@Override
		public void reduce(RectangleWritable key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
			IntWritable area = new IntWritable(key.getLength() * key.getWidth());
			context.write(key, area);
		}
	}

	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		String[] myArgs = {
			"file:///home/developer/CodeArtsProjects/CalRectangleArea/rectangle.txt",
			"hdfs://localhost:9000/user/developer/CalRectangleArea/output"
		};
		// Remove any previous output directory so the job can be rerun.
		util.removeALL("hdfs://localhost:9000", conf, myArgs[myArgs.length - 1]);
		Job job = Job.getInstance(conf, "CalRectangleArea");
		job.setJarByClass(App.class);
		job.setMapperClass(MyMapper.class);
		job.setReducerClass(MyReducer.class);
		job.setMapOutputKeyClass(RectangleWritable.class);
		job.setMapOutputValueClass(NullWritable.class);
		job.setOutputKeyClass(RectangleWritable.class);
		job.setOutputValueClass(IntWritable.class);
		for (int i = 0; i < myArgs.length - 1; i++) {
			FileInputFormat.addInputPath(job, new Path(myArgs[i]));
		}
		FileOutputFormat.setOutputPath(job, new Path(myArgs[myArgs.length - 1]));
		int res = job.waitForCompletion(true) ? 0 : 1;
		if (res == 0) {
			System.out.println("The computed results are:");
			util.showResult("hdfs://localhost:9000", conf, myArgs[myArgs.length - 1]);
		}
		System.exit(res);
	}
}
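
  Assuming the classes above are packaged into a jar (the jar name below is hypothetical) and HDFS is listening on localhost:9000, the job can be launched with:

hadoop jar CalRectangleArea.jar App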

4.4 Results

(Figure: console output of the job.)
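
  For the input above, every rectangle is distinct, so each key reaches the reducer exactly once. With the default single reducer, keys arrive sorted by compareTo (length first, then width), so the output should read:

1	1	1
3	6	18
3	27	81
6	3	18
7	8	56
9	9	81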

References

Wu Zhangyong, Yang Qiang. 大数据Hadoop3.X分布式处理实战 (Big Data Hadoop 3.X Distributed Processing in Action).
