kube-scheduler Startup Flow and Source Code Walkthrough


Overview

Abstract: kube-scheduler is a core component of Kubernetes. Its central job is to find the most suitable node for each pending pod and bind the pod to that node; only after the bind does the kubelet on that node observe the pod and actually create it. kube-scheduler is a powerful and complex component. This article focuses on the scheduler's startup process, walking through the startup logic at the source level.

Main Content

Note: this analysis is based on the Kubernetes v1.12.0 source code.

The kube-scheduler scheduling process

When kube-scheduler schedules a pod, it runs the predicates (filtering) phase and the priorities (scoring) phase to pick the most suitable node.

  • Predicates: starting from all nodes in the cluster, the scheduling algorithms filter out the set of nodes that can run the pod.
  • Priorities: the filtered nodes are scored, and the highest-scoring node becomes the scheduling target (a short sketch of this two-phase selection follows this list).
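
The shape of this two-phase selection can be sketched in a few lines of Go. This is a toy illustration, not the real implementation: fits and score below are stand-ins for the registered predicate and priority algorithms, and the node type is invented for the example.

package main

import "fmt"

// node is a toy stand-in for a cluster node.
type node struct {
	name    string
	freeCPU int
}

// fits plays the role of a predicate: can this node run a pod needing needCPU?
func fits(n node, needCPU int) bool { return n.freeCPU >= needCPU }

// score plays the role of a priority: prefer nodes with more free CPU.
func score(n node) int { return n.freeCPU }

func main() {
	nodes := []node{{"node-a", 2}, {"node-b", 8}, {"node-c", 4}}
	needCPU := 3

	// Predicates phase: keep only the feasible nodes.
	var feasible []node
	for _, n := range nodes {
		if fits(n, needCPU) {
			feasible = append(feasible, n)
		}
	}

	// Priorities phase: pick the highest-scoring feasible node.
	best := feasible[0]
	for _, n := range feasible[1:] {
		if score(n) > score(best) {
			best = n
		}
	}
	fmt.Println("selected:", best.name) // prints: selected: node-b
}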

Before reading the source, let's first expand on the scheduling process and look at kube-scheduler's runtime logic and main components. The figure below shows the core flow of a scheduling cycle.

[Figure: core flow of a kube-scheduler scheduling cycle]

Components of kube-scheduler:

  • informer: fetches resource objects from kube-apiserver and watches change events, then feeds the data into the scheduling queue SchedulingQueue and the scheduler cache schedulerCache.
  • SchedulingQueue: the scheduling queue, a priority queue that holds the pods waiting to be scheduled.
  • schedulerCache: the scheduler cache, which holds the already-scheduled pods and the node information, and answers the scheduler's queries while it makes scheduling decisions.
  • scheduler: the scheduler itself, which uses the scheduling algorithms to place the pending pods.
  • Algorithm: the scheduling algorithms. kube-scheduler's algorithms fall into two classes, predicates (filtering) and priorities (scoring); their goal is to find a suitable node for a pending pod.

The kube-scheduler scheduling process:

  1. On startup, kube-scheduler initializes and starts the informers for the various resources, initializes the scheduler cache schedulerCache and the scheduling queue SchedulingQueue, and registers the scheduling algorithms: the predicates (filtering) and the priorities (scoring).
  2. Once started, the informers first list all pods from kube-apiserver, putting the already-scheduled pods into schedulerCache and Add()-ing the pending pods into SchedulingQueue; they also list all nodes and store the node information in schedulerCache. After the initial list, the informers watch pod change events from kube-apiserver and apply the changes to schedulerCache or SchedulingQueue.
  3. The scheduler then runs its scheduling loop, executing scheduleOne() once per cycle: pop one pending pod from SchedulingQueue and find a suitable node for it with the predicates and priorities algorithms. Pods are scheduled one at a time; the parallelism sits inside a cycle, where predicate evaluation fans out across nodes (16 workers by default).
  4. Once the best node is chosen, the scheduler issues a bind request for the pod to kube-apiserver, which effectively assigns the chosen node to the pod's spec.nodeName (a sketch of such a bind call follows this list).
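
For illustration, here is a minimal sketch of what a bind call looks like through client-go. It is a hedged sketch, not the scheduler's actual bind path: the clientset construction is elided, and the Bind signature shown matches client-go of the v1.12 era (newer releases add context and options arguments).

package example

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod asks kube-apiserver to bind the pod to the chosen node; the
// apiserver then sets the pod's spec.nodeName to that node.
func bindPod(client kubernetes.Interface, pod *v1.Pod, nodeName string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: pod.Namespace,
			Name:      pod.Name,
			UID:       pod.UID,
		},
		Target: v1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	return client.CoreV1().Pods(pod.Namespace).Bind(binding)
}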

Source code layout

kube-scheduler is a key component of a Kubernetes cluster. Its source lives under the k8s.io/kubernetes tree and is mainly organized as follows:

  • cmd/kube-scheduler: the main entry point of kube-scheduler. It parses command-line flags, initializes logging, sets up signal handling, and starts the scheduler.

  • pkg/scheduler: the bulk of the scheduler logic, including the scheduling algorithms, the scheduling queue, and the scheduler's core loop.

  • pkg/scheduler/algorithm: the scheduling-algorithm interfaces and types. The built-in policies cover resource fit, selectors, node affinity, and more.

  • pkg/scheduler/algorithm/priorities: implementations of the priority (scoring) functions, which rank the candidate nodes a pod could be scheduled to; node-affinity scoring, for example, lives here in node_affinity.go.

  • pkg/scheduler/algorithm/predicates: implementations of the predicate (filtering) functions, which decide whether a node satisfies a pod's scheduling requirements.

  • pkg/scheduler/api: the scheduler's API types, such as the scheduler policy configuration.

  • pkg/scheduler/core: the scheduler's core logic, including the generic scheduler that drives each scheduling cycle.

  • pkg/scheduler/framework: the scheduling framework that later Kubernetes releases build scheduling around; note that in v1.12 this package does not yet exist, and the predicates/priorities machinery above fills that role.

  • pkg/scheduler/util: utility helpers used by the scheduler.

  • pkg/scheduler/volumebinder: the volume-binding implementation, which determines how a pod's volume claims are bound during scheduling.

kube-scheduler startup flow

The startup flow of kube-scheduler is sketched below:

[Figure: kube-scheduler startup flow]

The main steps in kube-scheduler's startup are:

  • cobra command-line flag parsing

  • registration of the built-in scheduling algorithms

  • instantiation of the scheduler config schedulerConfig

  • instantiation of the scheduler object sched

  • starting the informers to sync resources

  • starting leader election

  • starting the scheduler via sched.Run()

cobra command-line flag parsing

kube-scheduler uses the third-party cobra library to parse command-line flags and bootstrap the program.

Source: k8s.io/kubernetes/cmd/kube-scheduler/scheduler.go

main() is the entry point:

func main() {
	rand.Seed(time.Now().UTC().UnixNano())
	// build the scheduler command
	command := app.NewSchedulerCommand()
	// some code omitted ...
	if err := command.Execute(); err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		os.Exit(1)
	}
}

// NewSchedulerCommand creates a *cobra.Command object with default parameters
func NewSchedulerCommand() *cobra.Command {
	opts, err := options.NewOptions()

	cmd := &cobra.Command{
		Use: "kube-scheduler",
		Run: func(cmd *cobra.Command, args []string) {
			verflag.PrintAndExitIfRequested()
			utilflag.PrintFlags(cmd.Flags())

			// some code omitted
			// validate the configuration
			if errs := opts.Validate(); len(errs) > 0 {
				fmt.Fprintf(os.Stderr, "%v\n", utilerrors.NewAggregate(errs))
				os.Exit(1)
			}

			if len(opts.WriteConfigTo) > 0 {
				if err := options.WriteConfigFile(opts.WriteConfigTo, &opts.ComponentConfig); err != nil {
					fmt.Fprintf(os.Stderr, "%v\n", err)
					os.Exit(1)
				}
				glog.Infof("Wrote configuration to: %s\n", opts.WriteConfigTo)
				return
			}

			c, err := opts.Config()
			if err != nil {
				fmt.Fprintf(os.Stderr, "%v\n", err)
				os.Exit(1)
			}
			// c.Complete() fills in the remaining defaults and returns the completed config,
			// which is then passed to Run()
			stopCh := make(chan struct{})
			if err := Run(c.Complete(), stopCh); err != nil {
				fmt.Fprintf(os.Stderr, "%v\n", err)
				os.Exit(1)
			}
		},
	}
	return cmd
}
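
For readers unfamiliar with cobra, here is a minimal standalone sketch of the pattern kube-scheduler follows: define a cobra.Command with a Run function, then Execute it from main. The command name and the flag are invented for the example.

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	var configFile string

	cmd := &cobra.Command{
		Use: "demo-scheduler",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("starting with config:", configFile)
		},
	}
	// Flags are declared on the command, just as kube-scheduler's options are.
	cmd.Flags().StringVar(&configFile, "config", "", "path to a configuration file")

	if err := cmd.Execute(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}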

Registering the built-in scheduling algorithms

The Run() function in server.go is the heart of kube-scheduler startup: it registers the built-in scheduling algorithms, initializes the scheduler config, starts the http/https servers, creates the scheduler object, starts the informers, and runs leader election.

Source: k8s.io/kubernetes/cmd/kube-scheduler/app/server.go

The annotated source follows; see the inline comments.


// Run runs the Scheduler.
func Run(c schedulerserverconfig.CompletedConfig, stopCh <-chan struct{}) error {
	// To help debugging, immediately log version
	glog.Infof("Version: %+v", version.Get())

	// Apply algorithms based on feature gates.
	// TODO: make configurable?
	// register the built-in (in-tree) scheduling algorithms
	algorithmprovider.ApplyFeatureGates()

	// Configz registration.
	if cz, err := configz.New("componentconfig"); err == nil {
		cz.Set(c.ComponentConfig)
	} else {
		return fmt.Errorf("unable to register configz: %s", err)
	}

	// Build a scheduler config from the provided algorithm source.
	// build a scheduler config, which bundles the chosen scheduling algorithms, the informers for the various resources, the clientset, and other key structures
	schedulerConfig, err := NewSchedulerConfig(c)
	if err != nil {
		return err
	}

	// Create the scheduler.
	// create a scheduler from the parameters in schedulerConfig
	sched := scheduler.NewFromConfig(schedulerConfig)

	// Prepare the event broadcaster.
	// prepare the event broadcaster
	if c.Broadcaster != nil && c.EventClient != nil {
		c.Broadcaster.StartRecordingToSink(&v1core.EventSinkImpl{Interface: c.EventClient.Events("")})
	}

	// Start up the healthz server.
	// start the healthz server; curl http://<kube-scheduler>:<port>/healthz checks whether it is healthy
	if c.InsecureServing != nil {
		separateMetrics := c.InsecureMetricsServing != nil
		handler := buildHandlerChain(newHealthzHandler(&c.ComponentConfig, separateMetrics), nil, nil)
		if err := c.InsecureServing.Serve(handler, 0, stopCh); err != nil {
			return fmt.Errorf("failed to start healthz server: %v", err)
		}
	}
	// start the metrics server,
	// which serves Prometheus scrapes; curl http://<kube-scheduler>:<port>/metrics returns the metrics
	if c.InsecureMetricsServing != nil {
		handler := buildHandlerChain(newMetricsHandler(&c.ComponentConfig), nil, nil)
		if err := c.InsecureMetricsServing.Serve(handler, 0, stopCh); err != nil {
			return fmt.Errorf("failed to start metrics server: %v", err)
		}
	}
	// start the HTTPS healthz server
	if c.SecureServing != nil {
		handler := buildHandlerChain(newHealthzHandler(&c.ComponentConfig, false), c.Authentication.Authenticator, c.Authorization.Authorizer)
		if err := c.SecureServing.Serve(handler, 0, stopCh); err != nil {
			// fail early for secure handlers, removing the old error loop from above
			return fmt.Errorf("failed to start healthz server: %v", err)
		}
	}

	// Start all informers.
	// start all informers
	go c.PodInformer.Informer().Run(stopCh)
	c.InformerFactory.Start(stopCh)

	// Wait for all caches to sync before scheduling.
	// wait for the informers to sync data into the caches
	c.InformerFactory.WaitForCacheSync(stopCh)
	controller.WaitForCacheSync("scheduler", stopCh, c.PodInformer.Informer().HasSynced)

	// Prepare a reusable run function.
	// define a reusable run function that starts the scheduler; its ctx controls when the scheduler stops
	run := func(ctx context.Context) {
		sched.Run()
		<-ctx.Done()
	}

	ctx, cancel := context.WithCancel(context.TODO()) // TODO once Run() accepts a context, it should be used here
	defer cancel()

	// start a goroutine that cancels ctx once stopCh is closed, so stopCh indirectly stops the scheduler
	go func() {
		select {
		case <-stopCh:
			cancel()
		case <-ctx.Done():
		}
	}()

	// If leader election is enabled, run via LeaderElector until done and exit.
	// if leader election is enabled, run through the LeaderElector
	if c.LeaderElection != nil {
		c.LeaderElection.Callbacks = leaderelection.LeaderCallbacks{
			OnStartedLeading: run,
			OnStoppedLeading: func() {
				utilruntime.HandleError(fmt.Errorf("lost master"))
			},
		}
		leaderElector, err := leaderelection.NewLeaderElector(*c.LeaderElection)
		if err != nil {
			return fmt.Errorf("couldn't create leader elector: %v", err)
		}
		// run the leader election loop
		leaderElector.Run(ctx)

		return fmt.Errorf("lost lease")
	}

	// Leader election is disabled, so run inline until done.
	// start the scheduler directly
	run(ctx)
	return fmt.Errorf("finished without leader elect")
}

Next, let's look in detail at how the built-in scheduling algorithms are registered.

By the time Run() in k8s.io/kubernetes/cmd/kube-scheduler/app/server.go executes, importing the k8s.io/kubernetes/pkg/scheduler/factory package has already defined and initialized two maps that hold the predicate algorithms and the priority algorithms respectively.

import (
  // 代码略
  "k8s.io/kubernetes/pkg/scheduler/factory"
  // 代码略
)

The k8s.io/kubernetes/pkg/scheduler/factory package defines the maps that hold the algorithms: fitPredicateMap and priorityFunctionMap.

fitPredicateMap: a map holding the predicate (filtering) algorithms

priorityFunctionMap: a map holding the priority (scoring) algorithms

var (
	schedulerFactoryMutex sync.Mutex

	// maps that hold registered algorithm types
	// map holding the registered predicate algorithms
	fitPredicateMap        = make(map[string]FitPredicateFactory)
	mandatoryFitPredicates = sets.NewString()
	
	// map holding the registered priority algorithms
	priorityFunctionMap    = make(map[string]PriorityConfigFactory)
  
	// map holding named algorithm providers (bundles of predicates and priorities)
	algorithmProviderMap   = make(map[string]AlgorithmProviderConfig)

	// Registered metadata producers
	priorityMetadataProducer  PriorityMetadataProducerFactory
	predicateMetadataProducer PredicateMetadataProducerFactory
)

Back in Run(), let's see how the algorithms are registered; registering simply means putting them into fitPredicateMap and priorityFunctionMap.

// Run runs the Scheduler.
func Run(c schedulerserverconfig.CompletedConfig, stopCh <-chan struct{}) error {

	// Apply algorithms based on feature gates.
	// TODO: make configurable?
	algorithmprovider.ApplyFeatureGates()
  // 代码略
}
package algorithmprovider

import (
	"k8s.io/kubernetes/pkg/scheduler/algorithmprovider/defaults"
)

// ApplyFeatureGates applies algorithm by feature gates.
func ApplyFeatureGates() {
	defaults.ApplyFeatureGates()
}

The init() function of k8s.io/kubernetes/pkg/scheduler/algorithmprovider/defaults performs the registration of the predicate and priority algorithms.


func init() {
	// Register functions that extract metadata used by predicates and priorities computations.
	factory.RegisterPredicateMetadataProducerFactory(
		func(args factory.PluginFactoryArgs) algorithm.PredicateMetadataProducer {
			return predicates.NewPredicateMetadataFactory(args.PodLister)
		})
	factory.RegisterPriorityMetadataProducerFactory(
		func(args factory.PluginFactoryArgs) algorithm.PriorityMetadataProducer {
			return priorities.NewPriorityMetadataFactory(args.ServiceLister, args.ControllerLister, args.ReplicaSetLister, args.StatefulSetLister)
		})
	// register the default predicates returned by defaultPredicates()
	// and the default priorities returned by defaultPriorities()
	registerAlgorithmProvider(defaultPredicates(), defaultPriorities())
	// register individual predicate algorithms
	factory.RegisterFitPredicate("PodFitsPorts", predicates.PodFitsHostPorts)
	factory.RegisterFitPredicate(predicates.PodFitsHostPortsPred, predicates.PodFitsHostPorts)
	factory.RegisterFitPredicate(predicates.PodFitsResourcesPred, predicates.PodFitsResources)
	factory.RegisterFitPredicate(predicates.HostNamePred, predicates.PodFitsHost)
	factory.RegisterFitPredicate(predicates.MatchNodeSelectorPred, predicates.PodMatchNodeSelector)

	// register individual priority algorithms
	factory.RegisterPriorityConfigFactory(
		"ServiceSpreadingPriority",
		factory.PriorityConfigFactory{
			MapReduceFunction: func(args factory.PluginFactoryArgs) (algorithm.PriorityMapFunction, algorithm.PriorityReduceFunction) {
				return priorities.NewSelectorSpreadPriority(args.ServiceLister, algorithm.EmptyControllerLister{}, algorithm.EmptyReplicaSetLister{}, algorithm.EmptyStatefulSetLister{})
			},
			Weight: 1,
		},
	)
	factory.RegisterPriorityFunction2("EqualPriority", core.EqualPriorityMap, nil, 1)
	factory.RegisterPriorityFunction2("MostRequestedPriority", priorities.MostRequestedPriorityMap, nil, 1)
	factory.RegisterPriorityFunction2(
		"RequestedToCapacityRatioPriority",
		priorities.RequestedToCapacityRatioResourceAllocationPriorityDefault().PriorityMap,
		nil,
		1)
}

As RegisterFitPredicateFactory() shows, registering an algorithm is just adding its name and factory function to the map.

// RegisterFitPredicateFactory registers a fit predicate factory with the
// algorithm registry. Returns the name with which the predicate was registered.
func RegisterFitPredicateFactory(name string, predicateFactory FitPredicateFactory) string {
	schedulerFactoryMutex.Lock()
	defer schedulerFactoryMutex.Unlock()
	validateAlgorithmNameOrDie(name)
	// add the algorithm name and its factory function to the map
	fitPredicateMap[name] = predicateFactory
	return name
}

predicateFactory is a function type whose return value is a scheduling algorithm of type algorithm.FitPredicate. FitPredicate is itself a function type: given a pod and a nodeInfo, it decides whether the node satisfies the pod's scheduling requirements.

// FitPredicateFactory produces a FitPredicate from the given args.
// A FitPredicateFactory returns a FitPredicate function
type FitPredicateFactory func(PluginFactoryArgs) algorithm.FitPredicate

// FitPredicate is a function that indicates if a pod fits into an existing node.
// The failure information is given by the error.
// FitPredicate is a function type that decides whether a node can host a pod:
// it returns true if the pod fits, otherwise false plus the failure reasons
type FitPredicate func(pod *v1.Pod, meta PredicateMetadata, nodeInfo *schedulercache.NodeInfo) (bool, []PredicateFailureReason, error)
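
To make the registration mechanics concrete, here is a hedged sketch of registering a trivial custom predicate against the v1.12 factory API. The predicate name, the disktype label, and the package are all invented for illustration; a real build would run this from an init() of a package the scheduler imports, the failure reason reuses an existing error from the predicates package, and the import paths follow the v1.12 tree.

package mypredicates

import (
	"fmt"

	"k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/algorithm"
	"k8s.io/kubernetes/pkg/scheduler/algorithm/predicates"
	schedulercache "k8s.io/kubernetes/pkg/scheduler/cache"
	"k8s.io/kubernetes/pkg/scheduler/factory"
)

// nodeHasSSD is a toy FitPredicate: the node fits only if it carries a
// (hypothetical) disktype=ssd label.
func nodeHasSSD(pod *v1.Pod, meta algorithm.PredicateMetadata, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
	node := nodeInfo.Node()
	if node == nil {
		return false, nil, fmt.Errorf("node not found")
	}
	if node.Labels["disktype"] == "ssd" {
		return true, nil, nil
	}
	return false, []algorithm.PredicateFailureReason{predicates.ErrNodeLabelPresenceViolated}, nil
}

func init() {
	// This puts the predicate into fitPredicateMap under the name "NodeHasSSD".
	factory.RegisterFitPredicate("NodeHasSSD", nodeHasSSD)
}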

Similarly, RegisterPriorityConfigFactory and RegisterPriorityFunction2 play the analogous role for priority algorithms.

The defaultPredicates() function registers the built-in predicate algorithms:


func defaultPredicates() sets.String {
	return sets.NewString(
		// Fit is determined by volume zone requirements.
		factory.RegisterFitPredicateFactory(
			predicates.NoVolumeZoneConflictPred,
			func(args factory.PluginFactoryArgs) algorithm.FitPredicate {
				return predicates.NewVolumeZonePredicate(args.PVInfo, args.PVCInfo, args.StorageClassInfo)
			},
		),
		// Fit is determined by whether or not there would be too many AWS EBS volumes attached to the node
		factory.RegisterFitPredicateFactory(
			predicates.MaxEBSVolumeCountPred,
			func(args factory.PluginFactoryArgs) algorithm.FitPredicate {
				return predicates.NewMaxPDVolumeCountPredicate(predicates.EBSVolumeFilterType, args.PVInfo, args.PVCInfo)
			},
		),
		// Fit is determined by whether or not there would be too many GCE PD volumes attached to the node
		factory.RegisterFitPredicateFactory(
			predicates.MaxGCEPDVolumeCountPred,
			func(args factory.PluginFactoryArgs) algorithm.FitPredicate {
				return predicates.NewMaxPDVolumeCountPredicate(predicates.GCEPDVolumeFilterType, args.PVInfo, args.PVCInfo)
			},
		),
		// Fit is determined by whether or not there would be too many Azure Disk volumes attached to the node
		factory.RegisterFitPredicateFactory(
			predicates.MaxAzureDiskVolumeCountPred,
			func(args factory.PluginFactoryArgs) algorithm.FitPredicate {
				return predicates.NewMaxPDVolumeCountPredicate(predicates.AzureDiskVolumeFilterType, args.PVInfo, args.PVCInfo)
			},
		),
		factory.RegisterFitPredicateFactory(
			predicates.MaxCSIVolumeCountPred,
			func(args factory.PluginFactoryArgs) algorithm.FitPredicate {
				return predicates.NewCSIMaxVolumeLimitPredicate(args.PVInfo, args.PVCInfo)
			},
		),
		// Fit is determined by inter-pod affinity.
		factory.RegisterFitPredicateFactory(
			predicates.MatchInterPodAffinityPred,
			func(args factory.PluginFactoryArgs) algorithm.FitPredicate {
				return predicates.NewPodAffinityPredicate(args.NodeInfo, args.PodLister)
			},
		),

		// Fit is determined by non-conflicting disk volumes.
		factory.RegisterFitPredicate(predicates.NoDiskConflictPred, predicates.NoDiskConflict),

		// GeneralPredicates are the predicates that are enforced by all Kubernetes components
		// (e.g. kubelet and all schedulers)
		factory.RegisterFitPredicate(predicates.GeneralPred, predicates.GeneralPredicates),

		// Fit is determined by node memory pressure condition.
		factory.RegisterFitPredicate(predicates.CheckNodeMemoryPressurePred, predicates.CheckNodeMemoryPressurePredicate),

		// Fit is determined by node disk pressure condition.
		factory.RegisterFitPredicate(predicates.CheckNodeDiskPressurePred, predicates.CheckNodeDiskPressurePredicate),

		// Fit is determined by node pid pressure condition.
		factory.RegisterFitPredicate(predicates.CheckNodePIDPressurePred, predicates.CheckNodePIDPressurePredicate),

		// Fit is determined by node conditions: not ready, network unavailable or out of disk.
		factory.RegisterMandatoryFitPredicate(predicates.CheckNodeConditionPred, predicates.CheckNodeConditionPredicate),

		// Fit is determined based on whether a pod can tolerate all of the node's taints
		factory.RegisterFitPredicate(predicates.PodToleratesNodeTaintsPred, predicates.PodToleratesNodeTaints),

		// Fit is determined by volume topology requirements.
		factory.RegisterFitPredicateFactory(
			predicates.CheckVolumeBindingPred,
			func(args factory.PluginFactoryArgs) algorithm.FitPredicate {
				return predicates.NewVolumeBindingPredicate(args.VolumeBinder)
			},
		),
	)
}

The defaultPriorities() function registers the built-in priority algorithms:


func defaultPriorities() sets.String {
	return sets.NewString(
		// spreads pods by minimizing the number of pods (belonging to the same service or replication controller) on the same node.
		factory.RegisterPriorityConfigFactory(
			"SelectorSpreadPriority",
			factory.PriorityConfigFactory{
				MapReduceFunction: func(args factory.PluginFactoryArgs) (algorithm.PriorityMapFunction, algorithm.PriorityReduceFunction) {
					return priorities.NewSelectorSpreadPriority(args.ServiceLister, args.ControllerLister, args.ReplicaSetLister, args.StatefulSetLister)
				},
				Weight: 1,
			},
		),
		// pods should be placed in the same topological domain (e.g. same node, same rack, same zone, same power domain, etc.)
		// as some other pods, or, conversely, should not be placed in the same topological domain as some other pods.
		factory.RegisterPriorityConfigFactory(
			"InterPodAffinityPriority",
			factory.PriorityConfigFactory{
				Function: func(args factory.PluginFactoryArgs) algorithm.PriorityFunction {
					return priorities.NewInterPodAffinityPriority(args.NodeInfo, args.NodeLister, args.PodLister, args.HardPodAffinitySymmetricWeight)
				},
				Weight: 1,
			},
		),

		// Prioritize nodes by least requested utilization.
		factory.RegisterPriorityFunction2("LeastRequestedPriority", priorities.LeastRequestedPriorityMap, nil, 1),

		// Prioritizes nodes to help achieve balanced resource usage
		factory.RegisterPriorityFunction2("BalancedResourceAllocation", priorities.BalancedResourceAllocationMap, nil, 1),

		// Set this weight large enough to override all other priority functions.
		// TODO: Figure out a better way to do this, maybe at same time as fixing #24720.
		factory.RegisterPriorityFunction2("NodePreferAvoidPodsPriority", priorities.CalculateNodePreferAvoidPodsPriorityMap, nil, 10000),

		// Prioritizes nodes that have labels matching NodeAffinity
		factory.RegisterPriorityFunction2("NodeAffinityPriority", priorities.CalculateNodeAffinityPriorityMap, priorities.CalculateNodeAffinityPriorityReduce, 1),

		// Prioritizes nodes that marked with taint which pod can tolerate.
		factory.RegisterPriorityFunction2("TaintTolerationPriority", priorities.ComputeTaintTolerationPriorityMap, priorities.ComputeTaintTolerationPriorityReduce, 1),

		// ImageLocalityPriority prioritizes nodes that have images requested by the pod present.
		factory.RegisterPriorityFunction2("ImageLocalityPriority", priorities.ImageLocalityPriorityMap, nil, 1),
	)
}
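
For symmetry, here is a hedged sketch of a toy priority map function and its registration, using the v1.12-era signatures seen above. The function name and scoring rule are invented for illustration, and the import paths follow the v1.12 tree.

package mypriorities

import (
	"fmt"

	"k8s.io/api/core/v1"
	schedulerapi "k8s.io/kubernetes/pkg/scheduler/api"
	schedulercache "k8s.io/kubernetes/pkg/scheduler/cache"
	"k8s.io/kubernetes/pkg/scheduler/factory"
)

// fewerImagesPriorityMap is a toy PriorityMapFunction: it scores nodes higher
// the fewer images they already hold (an arbitrary rule, purely illustrative).
func fewerImagesPriorityMap(pod *v1.Pod, meta interface{}, nodeInfo *schedulercache.NodeInfo) (schedulerapi.HostPriority, error) {
	node := nodeInfo.Node()
	if node == nil {
		return schedulerapi.HostPriority{}, fmt.Errorf("node not found")
	}
	score := schedulerapi.MaxPriority - len(node.Status.Images)
	if score < 0 {
		score = 0
	}
	return schedulerapi.HostPriority{Host: node.Name, Score: score}, nil
}

func init() {
	// Register under an invented name with weight 1; nil means no reduce step.
	factory.RegisterPriorityFunction2("FewerImagesPriority", fewerImagesPriorityMap, nil, 1)
}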

Instantiating the scheduler object

The scheduler object is the central object of the running kube-scheduler component; it holds the modules scheduling depends on, including the schedulerCache, the scheduling queue, the informers, and the event-handler callbacks.

A snippet from Run():

// Build a scheduler config from the provided algorithm source.
schedulerConfig, err := NewSchedulerConfig(c)

Source: k8s.io/kubernetes/cmd/kube-scheduler/app/server.go

// NewSchedulerConfig creates the scheduler configuration. This is exposed for use by tests.
func NewSchedulerConfig(s schedulerserverconfig.CompletedConfig) (*scheduler.Config, error) {
	var storageClassInformer storageinformers.StorageClassInformer
	if utilfeature.DefaultFeatureGate.Enabled(features.VolumeScheduling) {
		// the StorageClass informer is only wired up when the VolumeScheduling feature gate is enabled
		storageClassInformer = s.InformerFactory.Storage().V1().StorageClasses()
	}

	// Set up the configurator which can create schedulers from configs.
	configurator := factory.NewConfigFactory(&factory.ConfigFactoryArgs{
		// set the scheduler name
		SchedulerName:                  s.ComponentConfig.SchedulerName,
		Client:                         s.Client,
		// wire up the various informers
		NodeInformer:                   s.InformerFactory.Core().V1().Nodes(),
		PodInformer:                    s.PodInformer,
		PvInformer:                     s.InformerFactory.Core().V1().PersistentVolumes(),
		PvcInformer:                    s.InformerFactory.Core().V1().PersistentVolumeClaims(),
		ReplicationControllerInformer:  s.InformerFactory.Core().V1().ReplicationControllers(),
		ReplicaSetInformer:             s.InformerFactory.Apps().V1().ReplicaSets(),
		StatefulSetInformer:            s.InformerFactory.Apps().V1().StatefulSets(),
		ServiceInformer:                s.InformerFactory.Core().V1().Services(),
		PdbInformer:                    s.InformerFactory.Policy().V1beta1().PodDisruptionBudgets(),
		StorageClassInformer:           storageClassInformer,
    
		HardPodAffinitySymmetricWeight: s.ComponentConfig.HardPodAffinitySymmetricWeight,
		EnableEquivalenceClassCache:    utilfeature.DefaultFeatureGate.Enabled(features.EnableEquivalenceClassCache),
		// whether preemption is disabled
		DisablePreemption:              s.ComponentConfig.DisablePreemption,
		// the percentage of nodes that participate in scoring
		PercentageOfNodesToScore:       s.ComponentConfig.PercentageOfNodesToScore,
		BindTimeoutSeconds:             *s.ComponentConfig.BindTimeoutSeconds,
	})
	// some code omitted
}

configurator := factory.NewConfigFactory(&factory.ConfigFactoryArgs{ … })

Inside NewConfigFactory(), the various listers, the scheduler cache schedulerCache, the scheduling queue podQueue, and the event-handler callbacks (AddFunc, UpdateFunc, DeleteFunc) for pods, nodes, and the other resources are all instantiated.

// NewConfigFactory initializes the default implementation of a Configurator To encourage eventual privatization of the struct type, we only
// return the interface.
func NewConfigFactory(args *ConfigFactoryArgs) scheduler.Configurator {
	stopEverything := make(chan struct{})
	schedulerCache := schedulercache.New(30*time.Second, stopEverything)

	// storageClassInformer is only enabled through VolumeScheduling feature gate
	var storageClassLister storagelisters.StorageClassLister
	if args.StorageClassInformer != nil {
		storageClassLister = args.StorageClassInformer.Lister()
	}
	c := &configFactory{
		// the client
		client:                         args.Client,
		// podLister reads its data from the scheduler cache schedulerCache
		podLister:                      schedulerCache,
		// the queue of pods waiting to be scheduled
		podQueue:                       core.NewSchedulingQueue(stopEverything),
		// the various listers
		pVLister:                       args.PvInformer.Lister(),
		pVCLister:                      args.PvcInformer.Lister(),
		serviceLister:                  args.ServiceInformer.Lister(),
		controllerLister:               args.ReplicationControllerInformer.Lister(),
		replicaSetLister:               args.ReplicaSetInformer.Lister(),
		statefulSetLister:              args.StatefulSetInformer.Lister(),
		pdbLister:                      args.PdbInformer.Lister(),
		storageClassLister:             storageClassLister,
		// the scheduler cache schedulerCache
		schedulerCache:                 schedulerCache,
		StopEverything:                 stopEverything,
		// the scheduler name
		schedulerName:                  args.SchedulerName,
		hardPodAffinitySymmetricWeight: args.HardPodAffinitySymmetricWeight,
		enableEquivalenceClassCache:    args.EnableEquivalenceClassCache,
		disablePreemption:              args.DisablePreemption,
		percentageOfNodesToScore:       args.PercentageOfNodesToScore,
	}

	c.scheduledPodsHasSynced = args.PodInformer.Informer().HasSynced
	// scheduled pod cache
	// register event handlers that put already-scheduled pods into the scheduler cache schedulerCache
	args.PodInformer.Informer().AddEventHandler(
		cache.FilteringResourceEventHandler{
			FilterFunc: func(obj interface{}) bool {
				switch t := obj.(type) {
				case *v1.Pod:
					return assignedNonTerminatedPod(t)
				case cache.DeletedFinalStateUnknown:
					if pod, ok := t.Obj.(*v1.Pod); ok {
						return assignedNonTerminatedPod(pod)
					}
					runtime.HandleError(fmt.Errorf("unable to convert object %T to *v1.Pod in %T", obj, c))
					return false
				default:
					runtime.HandleError(fmt.Errorf("unable to handle object in %T: %T", c, obj))
					return false
				}
			},
			Handler: cache.ResourceEventHandlerFuncs{
				AddFunc:    c.addPodToCache,
				UpdateFunc: c.updatePodInCache,
				DeleteFunc: c.deletePodFromCache,
			},
		},
	)
	// unscheduled pod queue
	// register event handlers that put pending pods into the scheduling queue
	args.PodInformer.Informer().AddEventHandler(
		cache.FilteringResourceEventHandler{
			FilterFunc: func(obj interface{}) bool {
				switch t := obj.(type) {
				case *v1.Pod:
					return unassignedNonTerminatedPod(t) && responsibleForPod(t, args.SchedulerName)
				case cache.DeletedFinalStateUnknown:
					if pod, ok := t.Obj.(*v1.Pod); ok {
						return unassignedNonTerminatedPod(pod) && responsibleForPod(pod, args.SchedulerName)
					}
					runtime.HandleError(fmt.Errorf("unable to convert object %T to *v1.Pod in %T", obj, c))
					return false
				default:
					runtime.HandleError(fmt.Errorf("unable to handle object in %T: %T", c, obj))
					return false
				}
			},
			Handler: cache.ResourceEventHandlerFuncs{
				AddFunc:    c.addPodToSchedulingQueue,
				UpdateFunc: c.updatePodInSchedulingQueue,
				DeleteFunc: c.deletePodFromSchedulingQueue,
			},
		},
	)
	// ScheduledPodLister is something we provide to plug-in functions that
	// they may need to call.
	c.scheduledPodLister = assignedPodLister{args.PodInformer.Lister()}
	// register event handlers that apply node events to the scheduler cache
	args.NodeInformer.Informer().AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			AddFunc:    c.addNodeToCache,
			UpdateFunc: c.updateNodeInCache,
			DeleteFunc: c.deleteNodeFromCache,
		},
	)
	c.nodeLister = args.NodeInformer.Lister()

	args.PdbInformer.Informer().AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			AddFunc:    c.addPDBToCache,
			UpdateFunc: c.updatePDBInCache,
			DeleteFunc: c.deletePDBFromCache,
		},
	)
	c.pdbLister = args.PdbInformer.Lister()

	// On add and delete of PVs, it will affect equivalence cache items
	// related to persistent volume
	args.PvInformer.Informer().AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			// MaxPDVolumeCountPredicate: since it relies on the counts of PV.
			AddFunc:    c.onPvAdd,
			UpdateFunc: c.onPvUpdate,
			DeleteFunc: c.onPvDelete,
		},
	)
	c.pVLister = args.PvInformer.Lister()

	// This is for MaxPDVolumeCountPredicate: add/delete PVC will affect counts of PV when it is bound.
	args.PvcInformer.Informer().AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			AddFunc:    c.onPvcAdd,
			UpdateFunc: c.onPvcUpdate,
			DeleteFunc: c.onPvcDelete,
		},
	)
	c.pVCLister = args.PvcInformer.Lister()

	// This is for ServiceAffinity: affected by the selector of the service is updated.
	// Also, if new service is added, equivalence cache will also become invalid since
	// existing pods may be "captured" by this service and change this predicate result.
	args.ServiceInformer.Informer().AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			AddFunc:    c.onServiceAdd,
			UpdateFunc: c.onServiceUpdate,
			DeleteFunc: c.onServiceDelete,
		},
	)
	c.serviceLister = args.ServiceInformer.Lister()

	// Existing equivalence cache should not be affected by add/delete RC/Deployment etc,
	// it only make sense when pod is scheduled or deleted

	if utilfeature.DefaultFeatureGate.Enabled(features.VolumeScheduling) {
		// Setup volume binder
		c.volumeBinder = volumebinder.NewVolumeBinder(args.Client, args.PvcInformer, args.PvInformer, args.StorageClassInformer, time.Duration(args.BindTimeoutSeconds)*time.Second)

		args.StorageClassInformer.Informer().AddEventHandler(
			cache.ResourceEventHandlerFuncs{
				AddFunc:    c.onStorageClassAdd,
				DeleteFunc: c.onStorageClassDelete,
			},
		)
	}

	// Setup cache comparer
	comparer := &cacheComparer{
		podLister:  args.PodInformer.Lister(),
		nodeLister: args.NodeInformer.Lister(),
		pdbLister:  args.PdbInformer.Lister(),
		cache:      c.schedulerCache,
		podQueue:   c.podQueue,
	}

	ch := make(chan os.Signal, 1)
	signal.Notify(ch, compareSignal)

	go func() {
		for {
			select {
			case <-c.StopEverything:
				return
			case <-ch:
				comparer.Compare()
			}
		}
	}()

	return c
}

As the code above shows, when the scheduler wires up these modules, pending pods go into the scheduling queue podQueue, while already-scheduled pods and node information go into the scheduler cache schedulerCache. Each scheduling cycle then pops one pending pod from the queue, consults the pod and node state in schedulerCache, and hands both to the scheduling algorithms, which completes the scheduling loop.

Starting the http/https servers

kube-scheduler starts http/https servers for monitoring and state collection, exposing these endpoints:

  • /healthz: health checking
  • /metrics: metrics scraping for Prometheus (a toy sketch of such endpoints follows this list)
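
The shape of these endpoints is plain net/http. Here is a hedged toy version, not the scheduler's actual handler chain; the port 10251 was kube-scheduler's default insecure port in this era.

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// A trivial healthz handler: report "ok" while the process is serving.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ok")
	})
	log.Fatal(http.ListenAndServe(":10251", mux))
}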

Source: k8s.io/kubernetes/cmd/kube-scheduler/app/server.go

// Run runs the Scheduler.
func Run(c schedulerserverconfig.CompletedConfig, stopCh <-chan struct{}) error {

	// some code omitted
	
	// Start up the healthz server.
	if c.InsecureServing != nil {
		separateMetrics := c.InsecureMetricsServing != nil
		handler := buildHandlerChain(newHealthzHandler(&c.ComponentConfig, separateMetrics), nil, nil)
		if err := c.InsecureServing.Serve(handler, 0, stopCh); err != nil {
			return fmt.Errorf("failed to start healthz server: %v", err)
		}
	}
	if c.InsecureMetricsServing != nil {
		handler := buildHandlerChain(newMetricsHandler(&c.ComponentConfig), nil, nil)
		if err := c.InsecureMetricsServing.Serve(handler, 0, stopCh); err != nil {
			return fmt.Errorf("failed to start metrics server: %v", err)
		}
	}
	if c.SecureServing != nil {
		handler := buildHandlerChain(newHealthzHandler(&c.ComponentConfig, false), c.Authentication.Authenticator, c.Authorization.Authorizer)
		if err := c.SecureServing.Serve(handler, 0, stopCh); err != nil {
			// fail early for secure handlers, removing the old error loop from above
			return fmt.Errorf("failed to start healthz server: %v", err)
		}
	}
	// some code omitted
}

Starting the informers to sync resources

The informers for the various resources were instantiated earlier; now they are started so resources begin syncing.

Source: k8s.io/kubernetes/cmd/kube-scheduler/app/server.go

	// Start all informers.
	// start all informers
	go c.PodInformer.Informer().Run(stopCh)
	c.InformerFactory.Start(stopCh)

	// Wait for all caches to sync before scheduling.
	// wait for the informers to sync resources into the caches
	c.InformerFactory.WaitForCacheSync(stopCh)
	controller.WaitForCacheSync("scheduler", stopCh, c.PodInformer.Informer().HasSynced)

Once the informers are running, the event handlers configured earlier put already-scheduled pods into the scheduler cache schedulerCache and pending pods into the scheduling queue; node information likewise lands in schedulerCache.

Source: k8s.io/kubernetes/pkg/scheduler/factory/factory.go


	// scheduled pod cache
	args.PodInformer.Informer().AddEventHandler(
		cache.FilteringResourceEventHandler{
			FilterFunc: func(obj interface{}) bool {
				switch t := obj.(type) {
				case *v1.Pod:
					return assignedNonTerminatedPod(t)
				case cache.DeletedFinalStateUnknown:
					if pod, ok := t.Obj.(*v1.Pod); ok {
						// keep only pods that have already been scheduled
						return assignedNonTerminatedPod(pod)
					}
					runtime.HandleError(fmt.Errorf("unable to convert object %T to *v1.Pod in %T", obj, c))
					return false
				default:
					runtime.HandleError(fmt.Errorf("unable to handle object in %T: %T", c, obj))
					return false
				}
			},
			Handler: cache.ResourceEventHandlerFuncs{
				AddFunc:    c.addPodToCache,
				UpdateFunc: c.updatePodInCache,
				DeleteFunc: c.deletePodFromCache,
			},
		},
	)
	// unscheduled pod queue
	args.PodInformer.Informer().AddEventHandler(
		cache.FilteringResourceEventHandler{
			FilterFunc: func(obj interface{}) bool {
				switch t := obj.(type) {
				case *v1.Pod:
					return unassignedNonTerminatedPod(t) && responsibleForPod(t, args.SchedulerName)
				case cache.DeletedFinalStateUnknown:
					if pod, ok := t.Obj.(*v1.Pod); ok {
						// keep only pods still waiting to be scheduled
						return unassignedNonTerminatedPod(pod) && responsibleForPod(pod, args.SchedulerName)
					}
					runtime.HandleError(fmt.Errorf("unable to convert object %T to *v1.Pod in %T", obj, c))
					return false
				default:
					runtime.HandleError(fmt.Errorf("unable to handle object in %T: %T", c, obj))
					return false
				}
			},
			// pending pods are added to the scheduling queue SchedulingQueue
			Handler: cache.ResourceEventHandlerFuncs{
				AddFunc:    c.addPodToSchedulingQueue,
				UpdateFunc: c.updatePodInSchedulingQueue,
				DeleteFunc: c.deletePodFromSchedulingQueue,
			},
		},
	)
	// ScheduledPodLister is something we provide to plug-in functions that
	// they may need to call.
	c.scheduledPodLister = assignedPodLister{args.PodInformer.Lister()}

	// node events are applied to schedulerCache
	args.NodeInformer.Informer().AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			AddFunc:    c.addNodeToCache,
			UpdateFunc: c.updateNodeInCache,
			DeleteFunc: c.deleteNodeFromCache,
		},
	)
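
The two FilterFunc helpers used above boil down to checking spec.nodeName and the pod phase. Here is a close paraphrase of their v1.12 logic (a hedged reconstruction, not a verbatim copy): a pod counts as assigned once spec.nodeName is set, and terminated pods are ignored either way.

// assignedNonTerminatedPod selects pods that are bound to a node and not finished.
func assignedNonTerminatedPod(pod *v1.Pod) bool {
	if pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed {
		return false
	}
	return len(pod.Spec.NodeName) != 0
}

// unassignedNonTerminatedPod selects pods that still need a node and are not finished.
func unassignedNonTerminatedPod(pod *v1.Pod) bool {
	if pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed {
		return false
	}
	return len(pod.Spec.NodeName) == 0
}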

Leader election

If leader election is enabled in the configuration, the instances first run leader election, and the elected leader then starts sched.Run().

Source: k8s.io/kubernetes/cmd/kube-scheduler/app/server.go

	// If leader election is enabled, run via LeaderElector until done and exit.
	if c.LeaderElection != nil {
		c.LeaderElection.Callbacks = leaderelection.LeaderCallbacks{
		// once elected, the leader invokes run(), which starts the scheduler
			OnStartedLeading: run,
			OnStoppedLeading: func() {
				utilruntime.HandleError(fmt.Errorf("lost master"))
			},
		}
		leaderElector, err := leaderelection.NewLeaderElector(*c.LeaderElection)
		if err != nil {
			return fmt.Errorf("couldn't create leader elector: %v", err)
		}
		// start leader election
		leaderElector.Run(ctx)

		return fmt.Errorf("lost lease")
	}

[Figure: leader election flow diagram]
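
For context, here is a minimal standalone sketch of leader election with client-go. Everything here is illustrative: the lock name, namespace, and identity are made up, and the Lease-based lock shown comes from newer client-go releases (the v1.12-era scheduler elected leaders over endpoints/configmap locks), so treat it as a sketch of the pattern rather than the exact v1.12 code path.

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The lock object all candidate instances compete for.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "my-scheduler", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "instance-1"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting the scheduling loop")
				<-ctx.Done() // hold leadership until the context is cancelled
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership")
			},
		},
	})
}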

Starting the scheduler: sched.Run()

sched.Run() starts the scheduler. Before starting, it waits for the informers to sync resource data into the cache; it then starts a background goroutine that uses wait.Until to call sched.scheduleOne in a loop. sched.scheduleOne is kube-scheduler's main scheduling logic: it pops one pod from the scheduling queue and finds a suitable node for it with the scheduling algorithms.

Source: k8s.io/kubernetes/pkg/scheduler/scheduler.go

// Run begins watching and scheduling. It waits for cache to be synced, then starts a goroutine and returns immediately.
func (sched *Scheduler) Run() {
	// the caches must finish syncing before the scheduler starts
	if !sched.config.WaitForCacheSync() {
		return
	}
	// wait.Until calls sched.scheduleOne in a loop;
	// each call to sched.scheduleOne performs one pod scheduling cycle
	go wait.Until(sched.scheduleOne, 0, sched.config.StopEverything)
}
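
wait.Until comes from k8s.io/apimachinery/pkg/util/wait: it calls the given function repeatedly, sleeping the given period between calls, until the stop channel closes; the period of 0 used above means the next cycle starts as soon as the previous one returns. A minimal usage sketch:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stopCh := make(chan struct{})

	// Close the stop channel after one second to end the loop.
	go func() {
		time.Sleep(time.Second)
		close(stopCh)
	}()

	// Run the function every 200ms until stopCh is closed; with a period of 0
	// it would re-run immediately, as the scheduler does with scheduleOne.
	wait.Until(func() {
		fmt.Println("one scheduling cycle")
	}, 200*time.Millisecond, stopCh)
}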

Conclusion

This article covered the main flow kube-scheduler follows when performing scheduling and walked through the scheduler's startup in the source. This first-pass analysis gives us a rough overall picture of kube-scheduler. The component is fairly complex, and follow-up articles will dig deeper, including (but not limited to) the scheduling algorithms and scheduling cycle in detail, priority and preemption, affinity scheduling, and performance optimization.


