Spark: Writing a WordCount Demo in IDEA

Configuration

Spark version: 3.2.0

Scala version: 2.12.12

JDK: 1.8

Maven: 3.6.3
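
To confirm at runtime that the expected Spark and Scala versions are actually on the classpath, a throwaway check can be run first (a minimal sketch, not part of the original post; the object name is illustrative):

import org.apache.spark.{SparkConf, SparkContext}

object VersionCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("versionCheck").setMaster("local[1]"))
    println(sc.version)                           // expect 3.2.0
    println(scala.util.Properties.versionString)  // expect version 2.12.x
    sc.stop()
  }
}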

POM file

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.zzjz.Spark</groupId>
    <artifactId>Spark</artifactId>
    <version>1.0</version>

    <properties>
        <spark.version>3.2.0</spark.version>
        <!-- Full Scala compiler version (matches the environment above) -->
        <scala.version>2.12.12</scala.version>
        <!-- Binary suffix used in Spark artifact IDs, e.g. spark-core_2.12 -->
        <scala.binary.version>2.12</scala.binary.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-mllib_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>

            <!-- org.scala-tools:maven-scala-plugin is long deprecated; scala-maven-plugin
                 is its maintained successor and supports Scala 2.12 -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>4.5.6</version>
                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.6.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.19</version>
                <configuration>
                    <skip>true</skip>
                </configuration>
            </plugin>

        </plugins>
    </build>

</project>

Sample data

9422850591,11603,39939,山西,邮件,人员
9422850591,116427,39911,山西,邮件,人员
9422850591,116437,39895,山西,邮件,人员
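
Each row splits into six comma-separated tokens, and the word count tallies every token across all rows. As a quick plain-Scala sanity check of the expected counts (no Spark involved, not part of the original post; the object name is illustrative):

object LocalCount {
  def main(args: Array[String]): Unit = {
    val lines = Seq(
      "9422850591,11603,39939,山西,邮件,人员",
      "9422850591,116427,39911,山西,邮件,人员",
      "9422850591,116437,39895,山西,邮件,人员"
    )
    // Same split-and-count logic as the Spark job, but on a local collection
    val counts = lines.flatMap(_.split(","))
      .groupBy(identity)
      .map { case (token, occurrences) => (token, occurrences.size) }
    counts.foreach(println)  // e.g. (9422850591,3), (山西,3), (11603,1), ...
  }
}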

Code

import org.apache.spark.{SparkConf, SparkContext}  // import the SparkConf and SparkContext classes

object wcPerson {
  def main(args: Array[String]): Unit = {
    // Create a SparkConf: application name "wcPerson", local mode with one core
    val conf = new SparkConf().setAppName("wcPerson").setMaster("local[1]")
    // Create the SparkContext that communicates with the Spark runtime
    val sc = new SparkContext(conf)
    // Load the file; each line becomes one string element of the resulting RDD
    val inputFile = sc.textFile("D:\\workspace\\spark\\src\\main\\Data\\person")
    // flatMap: split each line on "," and flatten all tokens into a single RDD
    val wc = inputFile.flatMap(line => line.split(","))
      // map: turn each token into a (key, value) pair, key = token, value = 1
      .map(word => (word, 1))
      // reduceByKey: add up the values of pairs sharing the same key
      .reduceByKey((a, b) => a + b)
    // Print the aggregated counts
    wc.foreach(println)
    // Stop the context explicitly rather than relying on the shutdown hook
    sc.stop()
  }
}
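
As a small variation (not part of the original demo; the object name is illustrative), the counts can be sorted by frequency before printing. collect() is used so the sorted order survives the trip to the driver, which is fine for a tiny demo file:

import org.apache.spark.{SparkConf, SparkContext}

object wcPersonSorted {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("wcPersonSorted").setMaster("local[1]")
    val sc = new SparkContext(conf)
    val counts = sc.textFile("D:\\workspace\\spark\\src\\main\\Data\\person")
      .flatMap(line => line.split(","))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)  // highest counts first
    // collect() gathers the sorted result on the driver; fine for small demo data
    counts.collect().foreach(println)
    sc.stop()
  }
}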

Run output

D:\Java\jdk1.8.0_131\bin\java.exe "-javaagent:D:\idea\IntelliJ IDEA 2021.1.3\lib\idea_rt.jar=52283:D:\idea\IntelliJ IDEA 2021.1.3\bin" -Dfile.encoding=UTF-8 -classpath "D:\idea\IntelliJ IDEA 2021.1.3\lib\idea_rt.jar" com.intellij.rt.execution.CommandLineWrapper C:\Users\Administrator\AppData\Local\Temp\idea_classpath1156784809 wcPerson
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/spark/spark-3.2.0-bin-hadoop2.7/jars/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Maven/Maven_repositories/org/slf4j/slf4j-log4j12/1.7.30/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
23/07/11 10:00:01 INFO SparkContext: Running Spark version 3.2.0
23/07/11 10:00:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/07/11 10:00:02 INFO ResourceUtils: ==============================================================
23/07/11 10:00:02 INFO ResourceUtils: No custom resources configured for spark.driver.
23/07/11 10:00:02 INFO ResourceUtils: ==============================================================
23/07/11 10:00:02 INFO SparkContext: Submitted application: wcPerson
23/07/11 10:00:02 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/07/11 10:00:02 INFO ResourceProfile: Limiting resource is cpu
23/07/11 10:00:02 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/07/11 10:00:02 INFO SecurityManager: Changing view acls to: Administrator
23/07/11 10:00:02 INFO SecurityManager: Changing modify acls to: Administrator
23/07/11 10:00:02 INFO SecurityManager: Changing view acls groups to: 
23/07/11 10:00:02 INFO SecurityManager: Changing modify acls groups to: 
23/07/11 10:00:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(Administrator); groups with view permissions: Set(); users  with modify permissions: Set(Administrator); groups with modify permissions: Set()
23/07/11 10:00:07 INFO Utils: Successfully started service 'sparkDriver' on port 52323.
23/07/11 10:00:07 INFO SparkEnv: Registering MapOutputTracker
23/07/11 10:00:07 INFO SparkEnv: Registering BlockManagerMaster
23/07/11 10:00:07 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/07/11 10:00:07 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/07/11 10:00:07 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/07/11 10:00:07 INFO DiskBlockManager: Created local directory at C:\Users\Administrator\AppData\Local\Temp\blockmgr-0052575b-7c9f-457e-9ed5-fb50af59f965
23/07/11 10:00:07 INFO MemoryStore: MemoryStore started with capacity 623.4 MiB
23/07/11 10:00:07 INFO SparkEnv: Registering OutputCommitCoordinator
23/07/11 10:00:07 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/07/11 10:00:08 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://zzjz:4040
23/07/11 10:00:08 INFO Executor: Starting executor ID driver on host zzjz
23/07/11 10:00:08 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52339.
23/07/11 10:00:08 INFO NettyBlockTransferService: Server created on zzjz:52339
23/07/11 10:00:08 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/07/11 10:00:08 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, zzjz, 52339, None)
23/07/11 10:00:08 INFO BlockManagerMasterEndpoint: Registering block manager zzjz:52339 with 623.4 MiB RAM, BlockManagerId(driver, zzjz, 52339, None)
23/07/11 10:00:08 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, zzjz, 52339, None)
23/07/11 10:00:08 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, zzjz, 52339, None)
23/07/11 10:00:10 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 244.0 KiB, free 623.2 MiB)
23/07/11 10:00:10 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.4 KiB, free 623.1 MiB)
23/07/11 10:00:10 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on zzjz:52339 (size: 23.4 KiB, free: 623.4 MiB)
23/07/11 10:00:10 INFO SparkContext: Created broadcast 0 from textFile at wcPerson.scala:7
23/07/11 10:00:10 INFO FileInputFormat: Total input paths to process : 1
23/07/11 10:00:10 INFO SparkContext: Starting job: foreach at wcPerson.scala:10
23/07/11 10:00:11 INFO DAGScheduler: Registering RDD 3 (map at wcPerson.scala:9) as input to shuffle 0
23/07/11 10:00:11 INFO DAGScheduler: Got job 0 (foreach at wcPerson.scala:10) with 1 output partitions
23/07/11 10:00:11 INFO DAGScheduler: Final stage: ResultStage 1 (foreach at wcPerson.scala:10)
23/07/11 10:00:11 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
23/07/11 10:00:11 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
23/07/11 10:00:11 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at wcPerson.scala:9), which has no missing parents
23/07/11 10:00:11 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 6.9 KiB, free 623.1 MiB)
23/07/11 10:00:11 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.0 KiB, free 623.1 MiB)
23/07/11 10:00:11 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on zzjz:52339 (size: 4.0 KiB, free: 623.4 MiB)
23/07/11 10:00:11 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1427
23/07/11 10:00:11 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at wcPerson.scala:9) (first 15 tasks are for partitions Vector(0))
23/07/11 10:00:11 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
23/07/11 10:00:11 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (zzjz, executor driver, partition 0, PROCESS_LOCAL, 4503 bytes) taskResourceAssignments Map()
23/07/11 10:00:11 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
23/07/11 10:00:12 INFO HadoopRDD: Input split: file:/D:/workspace/spark/src/main/Data/person:0+135
23/07/11 10:00:12 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1325 bytes result sent to driver
23/07/11 10:00:12 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1022 ms on zzjz (executor driver) (1/1)
23/07/11 10:00:12 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
23/07/11 10:00:12 INFO DAGScheduler: ShuffleMapStage 0 (map at wcPerson.scala:9) finished in 1.263 s
23/07/11 10:00:12 INFO DAGScheduler: looking for newly runnable stages
23/07/11 10:00:12 INFO DAGScheduler: running: Set()
23/07/11 10:00:12 INFO DAGScheduler: waiting: Set(ResultStage 1)
23/07/11 10:00:12 INFO DAGScheduler: failed: Set()
23/07/11 10:00:12 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[4] at reduceByKey at wcPerson.scala:9), which has no missing parents
23/07/11 10:00:12 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 5.3 KiB, free 623.1 MiB)
23/07/11 10:00:12 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 3.1 KiB, free 623.1 MiB)
23/07/11 10:00:12 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on zzjz:52339 (size: 3.1 KiB, free: 623.4 MiB)
23/07/11 10:00:12 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1427
23/07/11 10:00:12 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (ShuffledRDD[4] at reduceByKey at wcPerson.scala:9) (first 15 tasks are for partitions Vector(0))
23/07/11 10:00:12 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks resource profile 0
23/07/11 10:00:12 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1) (zzjz, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
23/07/11 10:00:12 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
23/07/11 10:00:12 INFO ShuffleBlockFetcherIterator: Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
23/07/11 10:00:12 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 20 ms
(116437,1)
(39911,1)
(116427,1)
(9422850591,3)
(39895,1)
(山西,3)
(11603,1)
(39939,1)
(人员,3)
(邮件,3)
23/07/11 10:00:12 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1224 bytes result sent to driver
23/07/11 10:00:12 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 123 ms on zzjz (executor driver) (1/1)
23/07/11 10:00:12 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
23/07/11 10:00:12 INFO DAGScheduler: ResultStage 1 (foreach at wcPerson.scala:10) finished in 0.153 s
23/07/11 10:00:12 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
23/07/11 10:00:12 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage finished
23/07/11 10:00:12 INFO DAGScheduler: Job 0 finished: foreach at wcPerson.scala:10, took 2.044775 s
23/07/11 10:00:12 INFO SparkContext: Invoking stop() from shutdown hook
23/07/11 10:00:12 INFO SparkUI: Stopped Spark web UI at http://zzjz:4040
23/07/11 10:00:12 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/07/11 10:00:12 INFO MemoryStore: MemoryStore cleared
23/07/11 10:00:12 INFO BlockManager: BlockManager stopped
23/07/11 10:00:12 INFO BlockManagerMaster: BlockManagerMaster stopped
23/07/11 10:00:12 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/07/11 10:00:12 INFO SparkContext: Successfully stopped SparkContext
23/07/11 10:00:12 INFO ShutdownHookManager: Shutdown hook called
23/07/11 10:00:12 INFO ShutdownHookManager: Deleting directory C:\Users\Administrator\AppData\Local\Temp\spark-9f3eb32d-30f7-44d6-8751-f668b2710d89

Process finished with exit code 0
