Spark Dynamic Resource Allocation: Parameters and Source Code Analysis


1.1.1 Dynamic Allocation

1.1.1.1 Parameter overview

  • Version 1.2

    spark.dynamicAllocation.enabled = false: whether to enable dynamic resource allocation, which scales the number of executors up and down based on the workload.
    spark.dynamicAllocation.executorIdleTimeout = 60s: an executor that stays idle longer than this is removed.
    spark.dynamicAllocation.maxExecutors = infinity: upper bound on the number of executors the application may use (unbounded by default).
    spark.dynamicAllocation.minExecutors = 0: lower bound on the number of executors to keep.
    spark.dynamicAllocation.schedulerBacklogTimeout = 1s: once tasks have been waiting longer than this, start requesting executors.
    spark.dynamicAllocation.sustainedSchedulerBacklogTimeout = schedulerBacklogTimeout: interval between subsequent executor requests while the backlog persists.

  • Version 1.3

    spark.dynamicAllocation.initialExecutors = spark.dynamicAllocation.minExecutors: the initial number of executors to request.

  • Version 1.4

    spark.dynamicAllocation.cachedExecutorIdleTimeout = infinity: an executor holding cached data is removed only if it stays idle longer than this.

  • Version 2.4

    spark.dynamicAllocation.executorAllocationRatio = 1: by default dynamic allocation requests enough executors for full parallelism, which can waste resources on small jobs; lowering the ratio scales the request down. The effective target is still bounded by minExecutors and maxExecutors.

  • Version 3.0

    spark.dynamicAllocation.shuffleTracking.enabled = false: track shuffle files on executors (i.e. keep shuffle data on them) so that dynamic allocation does not depend on an external shuffle service.
    spark.dynamicAllocation.shuffleTracking.timeout = infinity: when shuffle tracking is enabled, the timeout for executors holding shuffle data.
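
As a quick illustration of how these settings fit together, here is a minimal sketch of enabling dynamic allocation when building a SparkConf; the values are illustrative assumptions, not recommendations:

import org.apache.spark.SparkConf

// Illustrative values only; tune min/max executors for your own workload.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "2")
  .set("spark.dynamicAllocation.initialExecutors", "2")
  .set("spark.dynamicAllocation.maxExecutors", "20")
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
  .set("spark.dynamicAllocation.schedulerBacklogTimeout", "1s")
  // Dynamic allocation is normally paired with the external shuffle service;
  // on 3.0+ spark.dynamicAllocation.shuffleTracking.enabled is an alternative.
  .set("spark.shuffle.service.enabled", "true")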

1.1.1.2 Source code analysis

It first takes effect in org.apache.spark.SparkContext#_executorAllocationManager:

// Dynamic allocation only takes effect when the master is not local
val dynamicAllocationEnabled = Utils.isDynamicAllocationEnabled(_conf)
_executorAllocationManager =
  if (dynamicAllocationEnabled) {
    schedulerBackend match {
      case b: ExecutorAllocationClient =>
        // Dynamic allocation is delegated to the ExecutorAllocationManager
        Some(new ExecutorAllocationManager(
          schedulerBackend.asInstanceOf[ExecutorAllocationClient], listenerBus, _conf,
          cleaner = cleaner, resourceProfileManager = resourceProfileManager))
      case _ =>
        None
    }
  } else {
    None
  }
// Call ExecutorAllocationManager.start()
_executorAllocationManager.foreach(_.start())

org.apache.spark.util.Utils#isDynamicAllocationEnabled

  def isDynamicAllocationEnabled(conf: SparkConf): Boolean = {
    // DYN_ALLOCATION_ENABLED corresponds to spark.dynamicAllocation.enabled
    val dynamicAllocationEnabled = conf.get(DYN_ALLOCATION_ENABLED)
    dynamicAllocationEnabled &&
      (!isLocalMaster(conf) || conf.get(DYN_ALLOCATION_TESTING))
  }

  // Dynamic allocation can only be used with a non-local master
  def isLocalMaster(conf: SparkConf): Boolean = {
    val master = conf.get("spark.master", "")
    master == "local" || master.startsWith("local[")
  }
1.1.1.2.1 ExecutorAllocationManager

All of the dynamic allocation work is delegated to the ExecutorAllocationManager class, which scales executors with the cluster workload so the application can run with maximum parallelism.

1.1.1.2.1.1 The start method

It is called during SparkContext initialization.

Note that three parameters are involved here: spark.dynamicAllocation.minExecutors (default 0), spark.dynamicAllocation.initialExecutors (default: minExecutors) and spark.executor.instances (default 0); the maximum of the three is taken as the initial number of executors to allocate.
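
In other words, the initialization boils down to something like the following simplified sketch of the max-of-three computation (not the verbatim Spark code, which also validates the values):

import org.apache.spark.SparkConf

// Simplified sketch: the real implementation also checks that initialExecutors
// and spark.executor.instances are not below minExecutors.
def initialExecutorTarget(conf: SparkConf): Int = {
  val minExecutors = conf.getInt("spark.dynamicAllocation.minExecutors", 0)
  Seq(
    minExecutors,
    conf.getInt("spark.dynamicAllocation.initialExecutors", minExecutors),
    conf.getInt("spark.executor.instances", 0)
  ).max
}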

org.apache.spark.ExecutorAllocationManager#start

/**
   * Register for scheduler callbacks to decide when to add and remove executors, and start
   * the scheduling task.
   */
  def start(): Unit = {
    listenerBus.addToManagementQueue(listener)
    listenerBus.addToManagementQueue(executorMonitor)
    cleaner.foreach(_.attachListener(executorMonitor))

    val scheduleTask = new Runnable() {
      override def run(): Unit = {
        try {
          schedule()
        } catch {
          case ct: ControlThrowable =>
            throw ct
          case t: Throwable =>
            logWarning(s"Uncaught exception in thread ${Thread.currentThread().getName}", t)
        }
      }
    }
    // Scheduled task: request more executors or remove idle ones
    // intervalMillis defaults to 100 ms
    executor.scheduleWithFixedDelay(scheduleTask, 0, intervalMillis, TimeUnit.MILLISECONDS)
    // Request the initial number of executors; numExecutorsTarget starts out as
    // max(spark.dynamicAllocation.minExecutors, spark.dynamicAllocation.initialExecutors, spark.executor.instances)
    client.requestTotalExecutors(numExecutorsTarget, localityAwareTasks, hostToLocalTaskCount)
  }

org.apache.spark.ExecutorAllocationManager#schedule

  /**
   * This is called at a fixed interval to regulate the number of pending executor requests
   * and number of executors running.
   *
   * First, adjust our requested executors based on the add time and our current needs.
   * Then, if the remove time for an existing executor has expired, kill the executor.
   *
   * This is factored out into its own method for testing.
   */
  private def schedule(): Unit = synchronized {
    val executorIdsToBeRemoved = executorMonitor.timedOutExecutors()
    if (executorIdsToBeRemoved.nonEmpty) {
      initializing = false
    }

    // Sync the requested total with the number of executors actually needed right now
    updateAndSyncNumExecutorsTarget(clock.nanoTime())
    // Remove executors that have been idle past their timeout
    if (executorIdsToBeRemoved.nonEmpty) {
      removeExecutors(executorIdsToBeRemoved)
    }
  }

The overall call flow is shown below.

schedule is a periodic task that runs every 100 ms:

[Figure: call flow of the periodic schedule task]

1.1.1.2.1.2 The updateAndSyncNumExecutorsTarget method

Let's look at updateAndSyncNumExecutorsTarget and removeExecutors first, since both of them ultimately call requestTotalExecutors as well.

Note the new parameter used here: spark.dynamicAllocation.sustainedSchedulerBacklogTimeout, which defaults to spark.dynamicAllocation.schedulerBacklogTimeout (default 1s).

org.apache.spark.ExecutorAllocationManager#updateAndSyncNumExecutorsTarget

private def updateAndSyncNumExecutorsTarget(now: Long): Int = synchronized {
    // The maximum number of executors we currently need
    val maxNeeded = maxNumExecutorsNeeded

    if (initializing) {
      // Still initializing; leave the target unchanged
      0
    } else if (maxNeeded < numExecutorsTarget) {
      // The current target (numExecutorsTarget) exceeds what we actually need, so scale it down
      val oldNumExecutorsTarget = numExecutorsTarget
      // minNumExecutors corresponds to spark.dynamicAllocation.minExecutors (default 0)
      numExecutorsTarget = math.max(maxNeeded, minNumExecutors)
      numExecutorsToAdd = 1

      // The new target is lower than the old one
      if (numExecutorsTarget < oldNumExecutorsTarget) {
        // Asynchronously lower the total so idle executors can be released
        client.requestTotalExecutors(numExecutorsTarget, localityAwareTasks, hostToLocalTaskCount)
        logDebug(s"Lowering target number of executors to $numExecutorsTarget (previously " +
          s"$oldNumExecutorsTarget) because not all requested executors are actually needed")
      }
      // Return the change in the target; a negative value means executors will be removed
      numExecutorsTarget - oldNumExecutorsTarget
    } else if (addTime != NOT_SET && now >= addTime) {
      // More executors are needed than currently targeted and the backlog timer has expired
      // (spark.dynamicAllocation.sustainedSchedulerBacklogTimeout, which defaults to
      // spark.dynamicAllocation.schedulerBacklogTimeout, default 1s)
      val delta = addExecutors(maxNeeded)
      logDebug(s"Starting timer to add more executors (to " +
        s"expire in $sustainedSchedulerBacklogTimeoutS seconds)")
      addTime = now + TimeUnit.SECONDS.toNanos(sustainedSchedulerBacklogTimeoutS)
      delta
    } else {
      0
    }
  }

org.apache.spark.ExecutorAllocationManager#maxNumExecutorsNeeded

Note another new parameter here: spark.dynamicAllocation.executorAllocationRatio, default 1.0.

private def maxNumExecutorsNeeded(): Int = {
    // totalPendingTasks includes waiting tasks plus waiting speculative tasks
    val numRunningOrPendingTasks = listener.totalPendingTasks + listener.totalRunningTasks
    // executorAllocationRatio is spark.dynamicAllocation.executorAllocationRatio (default 1.0)
    // tasksPerExecutorForFullParallelism is computed below
    // The result is rounded up
    val maxNeeded = math.ceil(numRunningOrPendingTasks * executorAllocationRatio /
      tasksPerExecutorForFullParallelism).toInt
    if (tasksPerExecutorForFullParallelism > 1 && maxNeeded == 1 &&
      listener.pendingSpeculativeTasks > 0) {
      // If only one executor is needed but speculative tasks are still pending,
      // allocate one more executor
      maxNeeded + 1
    } else {
      maxNeeded
    }
  }

// EXECUTOR_CORES corresponds to spark.executor.cores, the number of cores per executor (default 1)
// CPUS_PER_TASK corresponds to spark.task.cpus, the number of cores used by each task (default 1)
  private val tasksPerExecutorForFullParallelism =
    conf.get(EXECUTOR_CORES) / conf.get(CPUS_PER_TASK)
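
Plugging hypothetical numbers into the formula above makes it concrete: with spark.executor.cores=4, spark.task.cpus=1 and the default ratio of 1.0, 30 running or pending tasks translate into a need for 8 executors.

// Hypothetical numbers, for illustration only.
val executorCores = 4        // spark.executor.cores
val cpusPerTask   = 1        // spark.task.cpus
val tasksPerExecutorForFullParallelism = executorCores / cpusPerTask  // 4

val numRunningOrPendingTasks = 30   // listener.totalPendingTasks + listener.totalRunningTasks
val executorAllocationRatio  = 1.0  // spark.dynamicAllocation.executorAllocationRatio

val maxNeeded = math.ceil(numRunningOrPendingTasks * executorAllocationRatio /
  tasksPerExecutorForFullParallelism).toInt
println(maxNeeded)  // 8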

// Request a given number of executors from the cluster manager.
// If the target has already reached the maximum, give up, reset the increment and return 0;
// otherwise ramp up, doubling the increment on each successful round.
private def addExecutors(maxNumExecutorsNeeded: Int): Int = {
    // maxNumExecutors corresponds to spark.dynamicAllocation.maxExecutors
    if (numExecutorsTarget >= maxNumExecutors) {
      logDebug(s"Not adding executors because our current target total " +
        s"is already $numExecutorsTarget (limit $maxNumExecutors)")
      numExecutorsToAdd = 1
      return 0
    }

    val oldNumExecutorsTarget = numExecutorsTarget
    // There's no point in wasting time ramping up to the number of executors we already have, so
    // make sure our target is at least as much as our current allocation:
    numExecutorsTarget = math.max(numExecutorsTarget, executorMonitor.executorCount)
    // Boost our target with the number to add for this round:
    numExecutorsTarget += numExecutorsToAdd
    // Ensure that our target doesn't exceed what we need at the present moment:
    numExecutorsTarget = math.min(numExecutorsTarget, maxNumExecutorsNeeded)
    // Ensure that our target fits within configured bounds:
    numExecutorsTarget = math.max(math.min(numExecutorsTarget, maxNumExecutors), minNumExecutors)

    // The number of additional executors to request this round
    val delta = numExecutorsTarget - oldNumExecutorsTarget

    // If our target has not changed, do not send a message
    // to the cluster manager and reset our exponential growth
    if (delta == 0) {
      numExecutorsToAdd = 1
      return 0
    }

    val addRequestAcknowledged = try {
      testing ||
        // The same API is used both to request and to release executors
        client.requestTotalExecutors(numExecutorsTarget, localityAwareTasks, hostToLocalTaskCount)
    } catch {
      case NonFatal(e) =>
        // Use INFO level so the error it doesn't show up by default in shells. Errors here are more
        // commonly caused by YARN AM restarts, which is a recoverable issue, and generate a lot of
        // noisy output.
        logInfo("Error reaching cluster manager.", e)
        false
    }
    if (addRequestAcknowledged) {
      val executorsString = "executor" + { if (delta > 1) "s" else "" }
      logInfo(s"Requesting $delta new $executorsString because tasks are backlogged" +
        s" (new desired total will be $numExecutorsTarget)")
      numExecutorsToAdd = if (delta == numExecutorsToAdd) {
        numExecutorsToAdd * 2
      } else {
        1
      }
      delta
    } else {
      logWarning(
        s"Unable to reach the cluster manager to request $numExecutorsTarget total executors!")
      numExecutorsTarget = oldNumExecutorsTarget
      0
    }
  }

Computing the current maximum number of executors needed:

pendingTasks + pendingSpeculativeTasks + totalRunningTasks

stageAttemptToNumTasks -> pendingTasks (subtracting the sizes of stageAttemptToTaskIndices, which tracks tasks already assigned, gives the tasks still waiting to run)

stageAttemptToNumRunningTask (tasks currently running, including speculative ones) -> totalRunningTasks

stageAttemptToNumSpeculativeTasks (speculative tasks, both waiting and running) -> pendingSpeculativeTasks (subtracting the sizes of stageAttemptToSpeculativeTaskIndices, which tracks speculative tasks already started, gives the speculative tasks still waiting)
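
To make the bookkeeping concrete, here is a simplified sketch of how pendingTasks falls out of those maps; the map contents are hypothetical, and the real listener keys them by (stageId, stageAttemptId):

// Hypothetical state; numbers are made up for illustration.
val stageAttemptToNumTasks    = Map((0, 0) -> 100, (1, 0) -> 40)   // total tasks per stage attempt
val stageAttemptToTaskIndices = Map((0, 0) -> (0 until 70).toSet)  // task indices already assigned

// Pending tasks = total tasks minus those that already have a task index.
val pendingTasks = stageAttemptToNumTasks.map { case (attempt, numTasks) =>
  numTasks - stageAttemptToTaskIndices.get(attempt).map(_.size).getOrElse(0)
}.sum
println(pendingTasks)  // 70: 30 left in stage 0, plus all 40 tasks of stage 1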

The logic of updateAndSyncNumExecutorsTarget is sketched below.

[Figure: updateAndSyncNumExecutorsTarget flow diagram]

TODO: why is numExecutorsToAdd doubled for the next round whenever the number of executors actually added equals numExecutorsToAdd?
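
One way to read addExecutors: the doubling is an exponential ramp-up. The increment keeps growing only while each round's full increment was actually used and the backlog persists, and it resets to 1 as soon as the request is capped by the real need or the configured maximum. A rough simulation under those assumptions:

// Rough simulation of the ramp-up in addExecutors, assuming every request is
// granted and the backlog persists; the demand of 50 executors is hypothetical.
var numExecutorsTarget = 0
var numExecutorsToAdd  = 1
val maxNeeded          = 50

for (round <- 1 to 6) {
  val oldTarget = numExecutorsTarget
  numExecutorsTarget = math.min(numExecutorsTarget + numExecutorsToAdd, maxNeeded)
  val delta = numExecutorsTarget - oldTarget
  // Double the increment only when it was used in full; otherwise reset it to 1.
  numExecutorsToAdd = if (delta == numExecutorsToAdd) numExecutorsToAdd * 2 else 1
  println(s"round $round: target=$numExecutorsTarget, next increment=$numExecutorsToAdd")
}
// The target grows 1, 3, 7, 15, 31, 50: exponentially until capped by the actual need.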

1.1.1.2.1.3 The requestTotalExecutors interface method


CoarseGrainedSchedulerBackend implements the requestTotalExecutors method of the ExecutorAllocationClient interface.

Stepping through with a debugger in yarn-client mode, the call path of requestTotalExecutors is as follows:

org.apache.spark.ExecutorAllocationClient#requestTotalExecutors

org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend#requestTotalExecutors

org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend#doRequestTotalExecutors

org.apache.spark.scheduler.cluster.YarnSchedulerBackend#doRequestTotalExecutors

org.apache.spark.scheduler.cluster.YarnSchedulerBackend#prepareRequestExecutors

private[cluster] def prepareRequestExecutors(requestedTotal: Int): RequestExecutors = {
  val nodeBlacklist: Set[String] = scheduler.nodeBlacklist()
  // For locality preferences, ignore preferences for nodes that are blacklisted
  val filteredHostToLocalTaskCount =
    hostToLocalTaskCount.filter { case (k, v) => !nodeBlacklist.contains(k) }
   // The driver side builds the RequestExecutors message (sent to the ApplicationMaster)
  RequestExecutors(requestedTotal, localityAwareTasks, filteredHostToLocalTaskCount,
    nodeBlacklist)
}

org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors

// Request executors by specifying the new total number of executors desired
// This includes executors already pending or running
case class RequestExecutors(
    requestedTotal: Int,
    localityAwareTasks: Int,
    hostToLocalTaskCount: Map[String, Int],
    nodeBlacklist: Set[String])
  extends CoarseGrainedClusterMessage

TODO: what is the difference between dynamic resource allocation and ordinary (static) allocation?

Dynamic allocation sizes the executor pool to the current cluster workload so tasks can run with maximum parallelism, whereas a badly sized static allocation wastes resources.

1.1.1.2.2 ApplicationMaster
1.1.1.2.2.1 The ApplicationMaster receives the message and updates targetNumExecutors

org.apache.spark.deploy.yarn.ApplicationMaster.AMEndpoint#receiveAndReply

    override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
      case r: RequestExecutors =>
        Option(allocator) match {
          case Some(a) =>
            // allocator is the YarnAllocator, used to request resources from the ResourceManager
            if (a.requestTotalExecutorsWithPreferredLocalities(r.requestedTotal,
              r.localityAwareTasks, r.hostToLocalTaskCount, r.nodeBlacklist)) {
              resetAllocatorInterval()
            }
            context.reply(true)

          case None =>
            logWarning("Container allocator is not ready to request executors yet.")
            context.reply(false)
        }

org.apache.spark.deploy.yarn.YarnAllocator#requestTotalExecutorsWithPreferredLocalities

def requestTotalExecutorsWithPreferredLocalities(
    requestedTotal: Int,
    localityAwareTasks: Int,
    hostToLocalTaskCount: Map[String, Int],
    nodeBlacklist: Set[String]): Boolean = synchronized {
  this.numLocalityAwareTasks = localityAwareTasks
  this.hostToLocalTaskCounts = hostToLocalTaskCount

  if (requestedTotal != targetNumExecutors) {
    logInfo(s"Driver requested a total number of $requestedTotal executor(s).")
    // Update the target number of executors. This is the key step: note that nothing is
    // requested from the ResourceManager here synchronously; see the next section for why.
    targetNumExecutors = requestedTotal
    allocatorBlacklistTracker.setSchedulerBlacklistedNodes(nodeBlacklist)
    true
  } else {
    false
  }
}
1.1.1.2.2.2 The reporter thread requests resources from the ResourceManager

When the ApplicationMaster is created, a YarnAllocator is created along with it to handle requesting resources from the ResourceManager.

The call chain is as follows:

org.apache.spark.deploy.yarn.ApplicationMaster#runUnmanaged

org.apache.spark.deploy.yarn.ApplicationMaster#createAllocator

The createAllocator sequence is shown below.

The key point is at the end: the ApplicationMaster creates and starts a background reporter thread that periodically requests resources from the ResourceManager.

[Figure: createAllocator sequence diagram]

launchReporterThread in turn calls:

org.apache.spark.deploy.yarn.ApplicationMaster#launchReporterThread

org.apache.spark.deploy.yarn.ApplicationMaster#allocationThreadImpl

org.apache.spark.deploy.yarn.YarnAllocator#allocateResources

org.apache.spark.deploy.yarn.YarnAllocator#updateResourceRequests

org.apache.spark.deploy.yarn.YarnAllocator#handleAllocatedContainers

org.apache.spark.deploy.yarn.YarnAllocator#runAllocatedContainers

[Figure: reporter thread allocation flow]
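
The key takeaway is that requestTotalExecutorsWithPreferredLocalities only updates targetNumExecutors; the actual YARN requests are issued by the reporter thread on its next pass. Conceptually the loop looks roughly like the sketch below (heavily simplified; the real loop also handles failure counting, interval back-off and shutdown):

import java.util.concurrent.TimeUnit

// Heavily simplified sketch of the ApplicationMaster reporter loop.
// `allocateResources` stands in for YarnAllocator#allocateResources, which diffs
// targetNumExecutors against running/pending containers, files new container
// requests with the ResourceManager, and launches executors on granted containers.
def reporterLoop(allocateResources: () => Unit, intervalMs: Long): Unit = {
  while (true) {
    allocateResources()
    TimeUnit.MILLISECONDS.sleep(intervalMs)
  }
}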

1.1.1.3 Example walkthrough

Environment: yarn-client mode with dynamic allocation enabled.

The overall flow is shown below.

[Figure: end-to-end allocation flow in yarn-client mode]

Selected log excerpts:

------ driver logs -----------
22/12/03 22:58:30 INFO ExecutorAllocationManager: Requesting 1 new executor because tasks are backlogged (new desired total will be 1)

------ ApplicationMaster logs -----------
22/12/03 09:58:31 INFO YarnAllocator: Driver requested a total number of 1 executor(s).
22/12/03 09:58:31 INFO YarnAllocator: Will request 1 executor container(s), each with 1 core(s) and 896 MB memory (including 384 MB of overhead)
22/12/03 09:58:31 INFO YarnAllocator: Submitted container request for host hadoop3,hadoop2,hadoop1.
22/12/03 09:58:32 INFO AMRMClientImpl: Received new token for : hadoop2:33222
22/12/03 09:58:32 INFO YarnAllocator: Launching container container_1670078106874_0004_01_000002 on host hadoop2 for executor with ID 1
22/12/03 09:58:32 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
22/12/03 09:58:32 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
22/12/03 09:58:32 INFO ContainerManagementProtocolProxy: Opening proxy : hadoop2:33222

------ ResourceManager logs -----------
2022-12-03 09:58:32,581 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1670078106874_0004_000001 container=null queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@24ca8dd clusterResource=<memory:24576, vCores:24> type=NODE_LOCAL requestedPartition=
2022-12-03 09:58:32,581 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.041666668 absoluteUsedCapacity=0.041666668 used=<memory:1024, vCores:1> cluster=<memory:24576, vCores:24>
2022-12-03 09:58:32,582 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1670078106874_0004_01_000002 Container Transitioned from NEW to ALLOCATED
2022-12-03 09:58:32,582 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1670078106874_0004	CONTAINERID=container_1670078106874_0004_01_000002	RESOURCE=<memory:1024, vCores:1>
2022-12-03 09:58:32,582 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.083333336 absoluteUsedCapacity=0.083333336 used=<memory:2048, vCores:2> cluster=<memory:24576, vCores:24>
2022-12-03 09:58:32,582 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Allocation proposal accepted
2022-12-03 09:58:32,810 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : hadoop2:33222 for container : container_1670078106874_0004_01_000002
2022-12-03 09:58:32,811 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1670078106874_0004_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
2022-12-03 09:58:33,583 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1670078106874_0004_01_000002 Container Transitioned from ACQUIRED to RUNNING

------ driver logs -----------
22/12/03 22:58:35 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.150.22:56294) with ID 1
22/12/03 22:58:35 INFO ExecutorMonitor: New executor 1 has registered (new total is 1)
22/12/03 22:58:36 INFO BlockManagerMasterEndpoint: Registering block manager hadoop2:43194 with 93.3 MiB RAM, BlockManagerId(1, hadoop2, 43194, None)
22/12/03 22:58:36 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, hadoop2, executor 1, partition 0, NODE_LOCAL, 7557 bytes)

1.1.1.4 References

https://blog.csdn.net/lovetechlovelife/article/details/112723766
