Developing Spark Locally on Windows (No Virtual Machine)


1. Install Hadoop locally on Windows

Download Hadoop 2.9.1 from the official Hadoop website.

1.1 Extract the archive to C:\XX\XX\hadoop-2.9.1

1.2 Download the native library and utility (hadoop.dll and winutils.exe) matching the Hadoop version


1.3 Put winutils.exe into C:\XX\XX\hadoop-2.9.1\bin

1.4 Put hadoop.dll into C:\XX\XX\hadoop-2.9.1\bin

1.5 Put hadoop.dll into C:\Windows\System32
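
Hadoop's NativeCodeLoader resolves this library with System.loadLibrary("hadoop"), and C:\Windows\System32 is on the default Windows library search path. Once Scala is installed (section 2), a minimal sketch such as the following can confirm the DLL is loadable; the object name DllCheck is just an illustration:

object DllCheck extends App {
  // System.loadLibrary searches java.library.path, which on Windows
  // includes C:\Windows\System32 by default
  try {
    System.loadLibrary("hadoop")
    println("hadoop.dll loaded")
  } catch {
    case e: UnsatisfiedLinkError => println(s"hadoop.dll not found: ${e.getMessage}")
  }
}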

1.6 Configure environment variables

Action                  Variable      Value
New                     HADOOP_HOME   C:\XX\XX\hadoop-2.9.1
Edit (append to Path)   Path          %HADOOP_HOME%\bin

1.7 Restart the computer

2. Install Scala locally on Windows

Download Scala 2.12.17 from the official Scala website.

  • On the downloads page, pick the Windows binary (zip) package of Scala 2.12.17.
  • Extract it to D:\Server\ and rename the folder from D:\Server\scala-2.12.17 to D:\Server\scala.

2.1 Configure environment variables

Action           Variable     Value
New              SCALA_HOME   D:\Server\scala
Append to Path   Path         %SCALA_HOME%\bin
Append           CLASSPATH    .;%SCALA_HOME%\bin;%SCALA_HOME%\lib\dt.jar;%SCALA_HOME%\lib\tools.jar;


2.2 A first try in CMD

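Open a new CMD window so the updated Path takes effect, then start the Scala REPL. The session below is a sketch; the welcome banner and result values depend on the local setup:

C:\> scala
(welcome banner for Scala 2.12.17)

scala> util.Properties.versionString
res0: String = version 2.12.17

scala> sys.env("HADOOP_HOME")
res1: String = C:\XX\XX\hadoop-2.9.1

scala> :quit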

3. Spark application development

3.1 Create a Maven project in IDEA

Maven installation tutorial

3.2 Create a scala source folder


  • Name the folder scala.
  • Mark the scala folder as a Sources Root.

3.3 Create a Scala file

3.3.1 Install the Scala plugin


3.3.2 Add framework support (right-click the module, choose Add Framework Support…, and select Scala)


3.3.3 Edit the pom file

Two details are worth calling out: the spark-core artifact suffix (_2.12) must match the Scala binary version implied by scala.version (2.12.17), and the assembly plugin below binds to the package phase, so mvn package also produces a runnable jar-with-dependencies with cn.cmcst.spark.WordCount as its entry point.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.cmcst</groupId>
    <artifactId>spark</artifactId>
    <version>1.0</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <!-- Note: the maven-compiler-plugin configuration below (source/target 8) overrides these two values -->
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
        <hadoop.version>2.9.1</hadoop.version>
        <scala.version>2.12.17</scala.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-compiler</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-reflect</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>2.4.8</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.12.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <archive>
                        <manifest>
                            <mainClass>cn.cmcst.spark.WordCount</mainClass>
                        </manifest>
                    </archive>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

3.3.4 Create the WordCount file

package cn.cmcst.spark

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    if (args.length < 2) {
      System.err.println("Usage: WordCount <input> <output>")
      System.exit(1)  // exit with a non-zero status on a usage error
    }

    val input = args(0)
    val output = args(1)

    // local[3]: run Spark inside this JVM with 3 worker threads; no cluster needed
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[3]")
    val sc = new SparkContext(conf)

    val lines = sc.textFile(input)
    // split each line on whitespace, pair every word with 1, then sum the counts per word
    val result = lines.flatMap(_.split("\\s+")).map((_, 1)).reduceByKey(_ + _)

    result.collect().foreach(println)
//    result.saveAsTextFile(output)

    sc.stop()
  }
}
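
A note on the commented-out saveAsTextFile: writing output on Windows goes through Hadoop's native IO layer, which is exactly why winutils.exe and hadoop.dll from section 1 are required. If HADOOP_HOME is not visible to the JVM that IDEA launches, a common workaround (a sketch, reusing the placeholder install path from section 1) is to set the hadoop.home.dir system property before creating the SparkContext:

// Sketch: point Hadoop at the section-1 install location when HADOOP_HOME
// is not picked up by the IDE-launched JVM
System.setProperty("hadoop.home.dir", "C:\\XX\\XX\\hadoop-2.9.1")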

3.3.5 Program arguments in IDEA


  • In the Run/Debug configuration, enter two program arguments: the input directory the program reads from, followed by the output directory (see the example after this list).
  • The contents of words.log:
flink flink flink
hadoop hadoop hadoop
spark spark spark
davinci davinci davinci
hive zookeeper sqoop mysql
hive zookeeper sqoop mysql
IDEA IDEA IDEA
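
Concretely, reusing the input path that appears in the log output below (the output directory is an assumption, shown only for illustration), the two program arguments could look like:

C:/WorkSpace/IDEA/BigData/spark/input C:/WorkSpace/IDEA/BigData/spark/output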

3.3.6 Output

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
23/02/14 13:59:34 INFO SparkContext: Running Spark version 2.4.8
23/02/14 13:59:35 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/02/14 13:59:35 INFO SparkContext: Submitted application: WordCount
23/02/14 13:59:35 INFO SecurityManager: Changing view acls to: CMCST,root
23/02/14 13:59:35 INFO SecurityManager: Changing modify acls to: CMCST,root
23/02/14 13:59:35 INFO SecurityManager: Changing view acls groups to: 
23/02/14 13:59:35 INFO SecurityManager: Changing modify acls groups to: 
23/02/14 13:59:35 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(CMCST, root); groups with view permissions: Set(); users  with modify permissions: Set(CMCST, root); groups with modify permissions: Set()
23/02/14 13:59:36 INFO Utils: Successfully started service 'sparkDriver' on port 9229.
23/02/14 13:59:36 INFO SparkEnv: Registering MapOutputTracker
23/02/14 13:59:36 INFO SparkEnv: Registering BlockManagerMaster
23/02/14 13:59:36 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/02/14 13:59:36 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/02/14 13:59:36 INFO DiskBlockManager: Created local directory at C:\Users\CMCST\AppData\Local\Temp\blockmgr-87f53eb9-52f3-4323-83b0-ebc53bfc2073
23/02/14 13:59:36 INFO MemoryStore: MemoryStore started with capacity 897.6 MB
23/02/14 13:59:36 INFO SparkEnv: Registering OutputCommitCoordinator
23/02/14 13:59:36 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/02/14 13:59:36 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://DESKTOP-0DK2AAM:4040
23/02/14 13:59:36 INFO Executor: Starting executor ID driver on host localhost
23/02/14 13:59:37 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 9252.
23/02/14 13:59:37 INFO NettyBlockTransferService: Server created on DESKTOP-0DK2AAM:9252
23/02/14 13:59:37 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/02/14 13:59:37 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, DESKTOP-0DK2AAM, 9252, None)
23/02/14 13:59:37 INFO BlockManagerMasterEndpoint: Registering block manager DESKTOP-0DK2AAM:9252 with 897.6 MB RAM, BlockManagerId(driver, DESKTOP-0DK2AAM, 9252, None)
23/02/14 13:59:37 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, DESKTOP-0DK2AAM, 9252, None)
23/02/14 13:59:37 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, DESKTOP-0DK2AAM, 9252, None)
23/02/14 13:59:37 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 214.6 KB, free 897.4 MB)
23/02/14 13:59:37 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.4 KB, free 897.4 MB)
23/02/14 13:59:37 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on DESKTOP-0DK2AAM:9252 (size: 20.4 KB, free: 897.6 MB)
23/02/14 13:59:37 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:18
23/02/14 13:59:37 INFO FileInputFormat: Total input paths to process : 1
23/02/14 13:59:38 INFO SparkContext: Starting job: collect at WordCount.scala:21
23/02/14 13:59:38 INFO DAGScheduler: Registering RDD 3 (map at WordCount.scala:19) as input to shuffle 0
23/02/14 13:59:38 INFO DAGScheduler: Got job 0 (collect at WordCount.scala:21) with 2 output partitions
23/02/14 13:59:38 INFO DAGScheduler: Final stage: ResultStage 1 (collect at WordCount.scala:21)
23/02/14 13:59:38 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
23/02/14 13:59:38 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
23/02/14 13:59:38 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at WordCount.scala:19), which has no missing parents
23/02/14 13:59:38 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 5.8 KB, free 897.4 MB)
23/02/14 13:59:38 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 3.4 KB, free 897.4 MB)
23/02/14 13:59:38 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on DESKTOP-0DK2AAM:9252 (size: 3.4 KB, free: 897.6 MB)
23/02/14 13:59:38 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1184
23/02/14 13:59:38 INFO DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at WordCount.scala:19) (first 15 tasks are for partitions Vector(0, 1))
23/02/14 13:59:38 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
23/02/14 13:59:38 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 7381 bytes)
23/02/14 13:59:38 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, executor driver, partition 1, PROCESS_LOCAL, 7381 bytes)
23/02/14 13:59:38 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
23/02/14 13:59:38 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
23/02/14 13:59:38 INFO HadoopRDD: Input split: file:/C:/WorkSpace/IDEA/BigData/spark/input/words.log:0+77
23/02/14 13:59:38 INFO HadoopRDD: Input split: file:/C:/WorkSpace/IDEA/BigData/spark/input/words.log:77+78
23/02/14 13:59:38 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1163 bytes result sent to driver
23/02/14 13:59:38 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 1163 bytes result sent to driver
23/02/14 13:59:38 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 523 ms on localhost (executor driver) (1/2)
23/02/14 13:59:38 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 508 ms on localhost (executor driver) (2/2)
23/02/14 13:59:39 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
23/02/14 13:59:39 INFO DAGScheduler: ShuffleMapStage 0 (map at WordCount.scala:19) finished in 0.627 s
23/02/14 13:59:39 INFO DAGScheduler: looking for newly runnable stages
23/02/14 13:59:39 INFO DAGScheduler: running: Set()
23/02/14 13:59:39 INFO DAGScheduler: waiting: Set(ResultStage 1)
23/02/14 13:59:39 INFO DAGScheduler: failed: Set()
23/02/14 13:59:39 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[4] at reduceByKey at WordCount.scala:19), which has no missing parents
23/02/14 13:59:39 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 4.1 KB, free 897.4 MB)
23/02/14 13:59:39 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.5 KB, free 897.4 MB)
23/02/14 13:59:39 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on DESKTOP-0DK2AAM:9252 (size: 2.5 KB, free: 897.6 MB)
23/02/14 13:59:39 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1184
23/02/14 13:59:39 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1 (ShuffledRDD[4] at reduceByKey at WordCount.scala:19) (first 15 tasks are for partitions Vector(0, 1))
23/02/14 13:59:39 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
23/02/14 13:59:39 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, executor driver, partition 0, ANY, 7141 bytes)
23/02/14 13:59:39 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, localhost, executor driver, partition 1, ANY, 7141 bytes)
23/02/14 13:59:39 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
23/02/14 13:59:39 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
23/02/14 13:59:39 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks including 2 local blocks and 0 remote blocks
23/02/14 13:59:39 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks including 2 local blocks and 0 remote blocks
23/02/14 13:59:39 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 9 ms
23/02/14 13:59:39 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 9 ms
23/02/14 13:59:39 INFO Executor: Finished task 1.0 in stage 1.0 (TID 3). 1285 bytes result sent to driver
23/02/14 13:59:39 INFO Executor: Finished task 0.0 in stage 1.0 (TID 2). 1355 bytes result sent to driver
23/02/14 13:59:39 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 195 ms on localhost (executor driver) (1/2)
23/02/14 13:59:39 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 199 ms on localhost (executor driver) (2/2)
23/02/14 13:59:39 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
23/02/14 13:59:39 INFO DAGScheduler: ResultStage 1 (collect at WordCount.scala:21) finished in 0.215 s
23/02/14 13:59:39 INFO DAGScheduler: Job 0 finished: collect at WordCount.scala:21, took 1.181680 s
23/02/14 13:59:39 INFO BlockManagerInfo: Removed broadcast_1_piece0 on DESKTOP-0DK2AAM:9252 in memory (size: 3.4 KB, free: 897.6 MB)
23/02/14 13:59:39 INFO SparkUI: Stopped Spark web UI at http://DESKTOP-0DK2AAM:4040
(hive,2)
(flink,3)
(davinci,3)
(zookeeper,2)
(mysql,2)
(sqoop,2)
(spark,3)
(hadoop,3)
(IDEA,3)
23/02/14 13:59:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/02/14 13:59:39 INFO MemoryStore: MemoryStore cleared
23/02/14 13:59:39 INFO BlockManager: BlockManager stopped
23/02/14 13:59:39 INFO BlockManagerMaster: BlockManagerMaster stopped
23/02/14 13:59:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/02/14 13:59:39 INFO SparkContext: Successfully stopped SparkContext
23/02/14 13:59:39 INFO ShutdownHookManager: Shutdown hook called
23/02/14 13:59:39 INFO ShutdownHookManager: Deleting directory C:\Users\CMCST\AppData\Local\Temp\spark-3a48779f-a780-43f5-9898-d8a95cbdbcec
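
The (word, count) pairs in the middle of the log come back in partition order, which is effectively arbitrary, because reduceByKey hash-partitions the keys. To make the listing deterministic, a small variation on the job, sketched below, sorts by descending count before collecting:

// Sketch: sort by count, highest first, before bringing results to the driver
result.sortBy(_._2, ascending = false).collect().foreach(println)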
