A First Taste of MapReduce


With a single-node Hadoop deployment in place, let's see how MapReduce analyzes and processes data.

Word Count

Word Count ("word frequency counting") is the classic MapReduce example program. Its job is to tally the words in a text file and report how many times each distinct word appears.

Hadoop ships with a number of classic MapReduce example programs, and Word Count is one of them.

Prepare a demo file, input.txt:

# cat input.txt 
I LOVE GG
I LIKE YY
I LOVE UU
I LIKE RR
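
Conceptually, the map phase turns each line into (word, 1) pairs, the framework groups the pairs by word during the shuffle, and the reduce phase sums each group. For this file the data flow looks roughly like this (illustrative sketch, not program output):

map:     (I,1) (LOVE,1) (GG,1) (I,1) (LIKE,1) (YY,1) ... 12 pairs in total
shuffle: GG->[1]  I->[1,1,1,1]  LIKE->[1,1]  LOVE->[1,1]  RR->[1]  UU->[1]  YY->[1]
reduce:  GG=1  I=4  LIKE=2  LOVE=2  RR=1  UU=1  YY=1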

Upload input.txt into HDFS:

# hdfs dfs -put input.txt /test
# hdfs dfs -ls /test
Found 4 items
drwxr-xr-x   - yunwei supergroup          0 2023-02-24 17:14 /test/a
-rw-r--r--   2 yunwei supergroup         51 2023-02-24 17:25 /test/b.txt
-rw-r--r--   2 yunwei supergroup         51 2023-02-24 17:21 /test/hello-hadoop.txt
-rw-r--r--   2 yunwei supergroup         40 2023-02-27 15:59 /test/input.txt
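
If you are following along on a fresh cluster and /test does not exist yet, create it first (not needed in this transcript, where /test already holds files):

# hdfs dfs -mkdir -p /test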

Take a look at the MapReduce jars shipped with Hadoop:

# ll $HADOOP_HOME/share/hadoop/mapreduce/
total 4876
-rw-rw-r-- 1 yunwei yunwei  526732 Oct  3  2016 hadoop-mapreduce-client-app-2.6.5.jar
-rw-rw-r-- 1 yunwei yunwei  686773 Oct  3  2016 hadoop-mapreduce-client-common-2.6.5.jar
-rw-rw-r-- 1 yunwei yunwei 1535776 Oct  3  2016 hadoop-mapreduce-client-core-2.6.5.jar
-rw-rw-r-- 1 yunwei yunwei  259326 Oct  3  2016 hadoop-mapreduce-client-hs-2.6.5.jar
-rw-rw-r-- 1 yunwei yunwei   27489 Oct  3  2016 hadoop-mapreduce-client-hs-plugins-2.6.5.jar
-rw-rw-r-- 1 yunwei yunwei   61309 Oct  3  2016 hadoop-mapreduce-client-jobclient-2.6.5.jar
-rw-rw-r-- 1 yunwei yunwei 1514166 Oct  3  2016 hadoop-mapreduce-client-jobclient-2.6.5-tests.jar
-rw-rw-r-- 1 yunwei yunwei   67762 Oct  3  2016 hadoop-mapreduce-client-shuffle-2.6.5.jar
-rw-rw-r-- 1 yunwei yunwei  292710 Oct  3  2016 hadoop-mapreduce-examples-2.6.5.jar
drwxrwxr-x 2 yunwei yunwei    4096 Oct  3  2016 lib
drwxrwxr-x 2 yunwei yunwei      30 Oct  3  2016 lib-examples
drwxrwxr-x 2 yunwei yunwei    4096 Oct  3  2016 sources

Peeking inside the examples jar (vim can list a jar's entries, since a jar is just a zip archive) shows that it does contain a WordCount class:

vi hadoop-mapreduce-examples-2.6.5.jar
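
If you prefer not to open an editor, listing the archive from the command line works too (jar tf is the standard JDK archive-listing tool; the grep pattern here is just an illustration):

# jar tf $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar | grep -i wordcount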

Run the jar with the hadoop command:

# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar  WordCount  input.txt output      
Unknown program 'WordCount' chosen.
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.

The run fails. The examples jar does contain a WordCount class, but the program names registered with the example driver are all lowercase, so `WordCount` is not recognized. After switching to `wordcount`, the job fails again with a different error: the input path does not exist. A relative path like `input.txt` is resolved against the user's HDFS home directory (/user/yunwei), while our file sits under /test, so we must either pass the absolute path or upload the file to the home directory.

# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar  wordcount  input.txt output               
23/02/27 15:58:58 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
23/02/27 15:58:58 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
23/02/27 15:58:58 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/tmp/hadoop-yunwei/mapred/staging/yunwei293247600/.staging/job_local293247600_0001
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://10.15.49.26:8020/user/yunwei/input.txt
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:302)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:319)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:197)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1297)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1294)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1294)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1315)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
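
A quick way to see the path resolution at work (illustrative commands, not part of the original transcript):

# hdfs dfs -ls input.txt         # relative: resolves to /user/yunwei/input.txt -> not found
# hdfs dfs -ls /test/input.txt   # absolute path -> found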

Make sure input.txt really is under /test:

# hdfs dfs -put input.txt /test
# hdfs dfs -ls /test
Found 4 items
drwxr-xr-x   - yunwei supergroup          0 2023-02-24 17:14 /test/a
-rw-r--r--   2 yunwei supergroup         51 2023-02-24 17:25 /test/b.txt
-rw-r--r--   2 yunwei supergroup         51 2023-02-24 17:21 /test/hello-hadoop.txt
-rw-r--r--   2 yunwei supergroup         40 2023-02-27 15:59 /test/input.txt

# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar  wordcount  /test/input.txt  output

What the command means:

hadoop jar runs a MapReduce job packaged in a jar file; the argument that follows is the path to the examples jar.

wordcount selects the Word Count program inside the examples jar. It takes two arguments: the input file and the name of the output directory (a directory, because the output consists of multiple files).

After the job completes, an output directory should appear, containing two files: _SUCCESS and part-r-00000.

If you give the output an absolute HDFS path such as /test/output, the results are written there instead of the default location under the user's HDFS home directory (/user/yunwei/output in this case).
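
For instance, a run that keeps everything under /test would look like this (illustrative only; the transcript below uses the default output location instead):

# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /test/input.txt /test/output
# hdfs dfs -ls /test/output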

_SUCCESS is an empty marker file that signals success; part-r-00000 holds the actual results, which we will display after the run below.

Run the job again, this time pointing at the input.txt under /test:

# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar  wordcount  /test/input.txt output      
23/02/27 16:00:13 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
23/02/27 16:00:13 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
23/02/27 16:00:13 INFO input.FileInputFormat: Total input paths to process : 1
23/02/27 16:00:13 INFO mapreduce.JobSubmitter: number of splits:1
23/02/27 16:00:13 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1519269801_0001
23/02/27 16:00:13 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
23/02/27 16:00:13 INFO mapreduce.Job: Running job: job_local1519269801_0001
23/02/27 16:00:13 INFO mapred.LocalJobRunner: OutputCommitter set in config null
23/02/27 16:00:13 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
23/02/27 16:00:13 INFO mapred.LocalJobRunner: Waiting for map tasks
23/02/27 16:00:13 INFO mapred.LocalJobRunner: Starting task: attempt_local1519269801_0001_m_000000_0
23/02/27 16:00:13 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
23/02/27 16:00:13 INFO mapred.MapTask: Processing split: hdfs://10.15.49.26:8020/test/input.txt:0+40
23/02/27 16:00:14 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
23/02/27 16:00:14 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
23/02/27 16:00:14 INFO mapred.MapTask: soft limit at 83886080
23/02/27 16:00:14 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
23/02/27 16:00:14 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
23/02/27 16:00:14 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
23/02/27 16:00:14 INFO mapred.LocalJobRunner: 
23/02/27 16:00:14 INFO mapred.MapTask: Starting flush of map output
23/02/27 16:00:14 INFO mapred.MapTask: Spilling map output
23/02/27 16:00:14 INFO mapred.MapTask: bufstart = 0; bufend = 88; bufvoid = 104857600
23/02/27 16:00:14 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214352(104857408); length = 45/6553600
23/02/27 16:00:14 INFO mapred.MapTask: Finished spill 0
23/02/27 16:00:14 INFO mapred.Task: Task:attempt_local1519269801_0001_m_000000_0 is done. And is in the process of committing
23/02/27 16:00:14 INFO mapred.LocalJobRunner: map
23/02/27 16:00:14 INFO mapred.Task: Task 'attempt_local1519269801_0001_m_000000_0' done.
23/02/27 16:00:14 INFO mapred.LocalJobRunner: Finishing task: attempt_local1519269801_0001_m_000000_0
23/02/27 16:00:14 INFO mapred.LocalJobRunner: map task executor complete.
23/02/27 16:00:14 INFO mapred.LocalJobRunner: Waiting for reduce tasks
23/02/27 16:00:14 INFO mapred.LocalJobRunner: Starting task: attempt_local1519269801_0001_r_000000_0
23/02/27 16:00:14 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
23/02/27 16:00:14 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@5cd45799
23/02/27 16:00:14 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334338464, maxSingleShuffleLimit=83584616, mergeThreshold=220663392, ioSortFactor=10, memToMemMergeOutputsThreshold=10
23/02/27 16:00:14 INFO reduce.EventFetcher: attempt_local1519269801_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
23/02/27 16:00:14 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1519269801_0001_m_000000_0 decomp: 68 len: 72 to MEMORY
23/02/27 16:00:14 INFO reduce.InMemoryMapOutput: Read 68 bytes from map-output for attempt_local1519269801_0001_m_000000_0
23/02/27 16:00:14 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 68, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->68
23/02/27 16:00:14 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
23/02/27 16:00:14 INFO mapred.LocalJobRunner: 1 / 1 copied.
23/02/27 16:00:14 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
23/02/27 16:00:14 INFO mapred.Merger: Merging 1 sorted segments
23/02/27 16:00:14 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 63 bytes
23/02/27 16:00:14 INFO reduce.MergeManagerImpl: Merged 1 segments, 68 bytes to disk to satisfy reduce memory limit
23/02/27 16:00:14 INFO reduce.MergeManagerImpl: Merging 1 files, 72 bytes from disk
23/02/27 16:00:14 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
23/02/27 16:00:14 INFO mapred.Merger: Merging 1 sorted segments
23/02/27 16:00:14 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 63 bytes
23/02/27 16:00:14 INFO mapred.LocalJobRunner: 1 / 1 copied.
23/02/27 16:00:14 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
23/02/27 16:00:14 INFO mapred.Task: Task:attempt_local1519269801_0001_r_000000_0 is done. And is in the process of committing
23/02/27 16:00:14 INFO mapred.LocalJobRunner: 1 / 1 copied.
23/02/27 16:00:14 INFO mapred.Task: Task attempt_local1519269801_0001_r_000000_0 is allowed to commit now
23/02/27 16:00:14 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1519269801_0001_r_000000_0' to hdfs://xx.xx.xx.xx:xx/user/yunwei/output/_temporary/0/task_local1519269801_0001_r_000000
23/02/27 16:00:14 INFO mapred.LocalJobRunner: reduce > reduce
23/02/27 16:00:14 INFO mapred.Task: Task 'attempt_local1519269801_0001_r_000000_0' done.
23/02/27 16:00:14 INFO mapred.LocalJobRunner: Finishing task: attempt_local1519269801_0001_r_000000_0
23/02/27 16:00:14 INFO mapred.LocalJobRunner: reduce task executor complete.
23/02/27 16:00:14 INFO mapreduce.Job: Job job_local1519269801_0001 running in uber mode : false
23/02/27 16:00:14 INFO mapreduce.Job:  map 100% reduce 100%
23/02/27 16:00:14 INFO mapreduce.Job: Job job_local1519269801_0001 completed successfully
23/02/27 16:00:14 INFO mapreduce.Job: Counters: 38
        File System Counters
                FILE: Number of bytes read=585924
                FILE: Number of bytes written=1105104
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=80
                HDFS: Number of bytes written=38
                HDFS: Number of read operations=13
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=4
        Map-Reduce Framework
                Map input records=4
                Map output records=12
                Map output bytes=88
                Map output materialized bytes=72
                Input split bytes=103
                Combine input records=12
                Combine output records=7
                Reduce input groups=7
                Reduce shuffle bytes=72
                Reduce input records=7
                Reduce output records=7
                Spilled Records=14
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=0
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
                Total committed heap usage (bytes)=716177408
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=40
        File Output Format Counters 
                Bytes Written=38

Check the results. The log above shows where the output landed (/user/yunwei/output on HDFS), and the counters confirm the data flow: 4 map input records (the 4 lines of input.txt), 12 map output records (one per word), and 7 records out of the combiner and reducer (the 7 distinct words). The output directory contains two files: the empty _SUCCESS marker and part-r-00000 with the results.

# hdfs dfs -lsr /user/yunwei
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x   - yunwei supergroup          0 2023-02-27 16:00 /user/yunwei/output
-rw-r--r--   2 yunwei supergroup          0 2023-02-27 16:00 /user/yunwei/output/_SUCCESS
-rw-r--r--   2 yunwei supergroup         38 2023-02-27 16:00 /user/yunwei/output/part-r-00000

# hdfs dfs -cat /user/yunwei/output/part-r-00000
GG      1
I       4
LIKE    2
LOVE    2
RR      1
UU      1
YY      1
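
One caveat before re-running the job: MapReduce refuses to start if the output directory already exists, so delete it first:

# hdfs dfs -rm -r /user/yunwei/output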
