[Big Data] Part 5: YARN Basics


YARN

YARN is the technology responsible for resource coordination in a distributed system.

MapReduce 1.x

In MapReduce 1.x:

the Client issues a computation request; the JobTracker receives it and dispatches the work to the individual TaskTrackers for execution.

At this stage, resource management and job computation were both bundled into MapReduce itself. This architecture made MapReduce overly bloated and gave rise to a series of problems, such as the JobTracker becoming both a single point of failure and a scalability bottleneck.

YARN emerged as a timely answer to this problem. YARN works much like an operating system, and MapReduce is like an application program running on top of that operating system.

YARN also supports other distributed computing frameworks such as Spark, Flink, and Tez, which has helped YARN gain even wider adoption.

YARN Fundamentals

The basic idea behind YARN is to split resource management and job scheduling/monitoring into separate components: the ResourceManager handles resource management, while an ApplicationMaster handles job scheduling and monitoring.

The architecture consists of one ResourceManager and multiple NodeManagers; the Client submits jobs to the ResourceManager, which performs the resource scheduling.

The ResourceManager assigns tasks to individual NodeManagers. Each NodeManager hosts a number of Containers, and each Container carries out part of a task's computation. A NodeManager also hosts a task's ApplicationMaster; when a Container finishes its computation, it reports completion to the ApplicationMaster, which in turn promptly reports the application's status to the ResourceManager.

YARN Basic Configuration

Edit mapred-site.xml, appending at the end:

<configuration>
        <!-- Run MapReduce jobs on YARN for resource scheduling -->
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>yarn.app.mapreduce.am.env</name>
                <value>HADOOP_MAPRED_HOME=/export/server/hadoop-3.3.6</value>
        </property>
        <property>
                <name>mapreduce.map.env</name>
                <value>HADOOP_MAPRED_HOME=/export/server/hadoop-3.3.6</value>
        </property>
        <property>
                <name>mapreduce.reduce.env</name>
                <value>HADOOP_MAPRED_HOME=/export/server/hadoop-3.3.6</value>
        </property>
</configuration>

Edit yarn-site.xml:

<configuration>

<!-- Site specific YARN configuration properties -->
    <!-- Set the ResourceManager host -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node1</value>
    </property>
    <!-- Configure YARN's shuffle service -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

Add the following to hadoop-env.sh (here all daemons run as the root user):

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

Then distribute the modified files to the other nodes (run this from the Hadoop configuration directory, e.g. /export/server/hadoop-3.3.6/etc/hadoop, so that $PWD resolves to the right place):

scp mapred-site.xml yarn-site.xml hadoop-env.sh node2:$PWD
scp mapred-site.xml yarn-site.xml hadoop-env.sh node3:$PWD

Then YARN can be started:

start-yarn.sh

The YARN web UI is then available on port 8088 of the ResourceManager node (here, http://node1:8088).
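
Besides the web UI, a quick command-line sanity check is to confirm that every NodeManager has registered with the ResourceManager:

yarn node -list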

Run a word-count (wordcount) test:

hadoop jar /export/server/hadoop-3.3.6/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar wordcount /input /output1

If everything is configured correctly, the job runs on YARN and appears in the web UI.
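
To verify the result, list and print the job's output files on HDFS (assuming the job wrote to /output1 as above; part-r-00000 is the default name of the first reducer's output file):

hdfs dfs -ls /output1
hdfs dfs -cat /output1/part-r-00000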

Additionally, if we want to view job log information, we need further configuration in mapred-site.xml:

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node1:19888</value>
    </property>

And continue in yarn-site.xml:

    <!-- Add the following configuration -->
    <!-- Whether to enable log aggregation -->
    <!-- When enabled, each Container's logs are saved under yarn.nodemanager.remote-app-log-dir -->
    <!-- which defaults to /tmp/logs -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- How long aggregated history logs are retained on HDFS, in seconds -->
    <!-- The default is -1, meaning keep them forever -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
    <property>
        <name>yarn.log.server.url</name>
        <value>http://node1:19888/jobhistory/logs</value>
    </property>
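
Note that after distributing these changes and restarting YARN, the JobHistory Server still has to be started separately — a step that is easy to miss. Run it on node1, since that is the host configured above:

stop-yarn.sh && start-yarn.sh
mapred --daemon start historyserver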

YARN Job Submission Flow


  1. The MapReduce program runs the Job and creates a JobCommitter, which then carries out the job submission.

  2. The JobCommitter submits the job it wants to run to the ResourceManager and requests an application ID.

  3. If resources are sufficient, the ResourceManager assigns MapReduce an application ID. At this point, MapReduce uploads its program to HDFS, so that the nodes that need the program can later download it to run the computation.

  4. The JobCommitter formally submits the job to the ResourceManager.

  5. The ResourceManager finds a NodeManager with a relatively light load and designates it to handle the job.

  6. The designated NodeManager creates a Container and starts an AppMaster in it to monitor the job and schedule its resources. (This AppMaster fetches the relevant information from HDFS — the split information and the job program — and creates one MapTask for each split.)

  7. Once the required resources are clear, the AppMaster requests them from the ResourceManager, which allocates NodeManagers to it based on their current load.

  8. The allocated NodeManagers download the program from HDFS and carry out the actual computation, reporting to the AppMaster as they run — for example, reporting success upon completion.

YARN Commands

View the tasks currently running on YARN:

yarn top

Run a quick test job:

hadoop jar /export/server/hadoop-3.3.6/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar pi 10 10

Use yarn application to inspect application information:

yarn application -list -appStates ALL         # list applications in all states
yarn application -list -appStates FINISHED    # list finished applications (use RUNNING for running ones)

We can also kill an application directly using the Application ID obtained from the listing:

yarn application -kill xxxxxxxxxxx

A killed application is marked KILLED; we can list such applications with yarn application -list -appStates KILLED.
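
With log aggregation enabled as configured earlier, the full Container logs of a finished or killed application can also be fetched from the command line (the application ID below is a placeholder for one returned by yarn application -list):

yarn logs -applicationId application_xxxxxxxxxxxxx_xxxx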

YARN Schedulers

FIFO Scheduler

As the name says: first come, first served. This suffers from a starvation problem — even a job that needs only a millisecond of compute has to keep waiting behind everything queued ahead of it.

Capacity Scheduler

It carves the resources into separate pools: for example, 80% of the resources are processed FIFO-style, while the other 20% stand by for other work. This largely avoids starving small jobs, but it can waste resources, since the reserved 20% may never be used.

Fair Scheduler

The Fair Scheduler allocates every running job an equal share of the resources; when a job finishes, the resources it held are redistributed among the remaining jobs.

YARN Queues

By default, YARN uses a Capacity Scheduler with a single queue, which in practice amounts to FIFO. This is configurable: a common setup is to create two queues, one for the main workload and one for small jobs.

This requires modifying a configuration file:

Edit capacity-scheduler.xml under etc/hadoop:

<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <!-- Maximum number of applications YARN allows to be submitted -->
  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>10000</value>
    <description>
      Maximum number of applications that can be pending and running.
    </description>
  </property>

    <!-- Fraction of cluster resources that ApplicationMasters may occupy; 0.1 means 10% -->
  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.1</value>
    <description>
      Maximum percent of resources in the cluster which can be used to run 
      application masters i.e. controls number of concurrent running
      applications.
    </description>
  </property>

    <!-- Resource calculator used to compare resources in the scheduler -->
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
    <description>
      The ResourceCalculator implementation to be used to compare 
      Resources in the scheduler.
      The default i.e. DefaultResourceCalculator only uses Memory while
      DominantResourceCalculator uses dominant-resource to compare 
      multi-dimensional resources such as Memory, CPU etc.
    </description>
  </property>

    <!-- By default there is only a default queue; here we add a small queue for small jobs -->
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,small</value>
    <description>
      The queues at the this level (root is the root queue).
    </description>
  </property>

    <!-- Percentage of resources assigned to each queue -->
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>70</value>
    <description>Default queue target capacity.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.small.capacity</name>
    <value>30</value>
    <description>Default queue target capacity.</description>
  </property>
  <!-- Factor limiting how much of a queue's capacity a single user may occupy -->
  <property>
    <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
    <value>1</value>
    <description>
      Default queue user limit a percentage from 0.0 to 1.0.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.small.user-limit-factor</name>
    <value>1</value>
    <description>
      Default queue user limit a percentage from 0.0 to 1.0.
    </description>
  </property>
  <!-- Maximum percentage of overall resources each queue may occupy -->
  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>100</value>
    <description>
      The maximum capacity of the default queue. 
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.small.maximum-capacity</name>
    <value>100</value>
    <description>
      The maximum capacity of the default queue. 
    </description>
  </property>

    <!-- Queue state; RUNNING means the queue is enabled -->
  <property>
    <name>yarn.scheduler.capacity.root.default.state</name>
    <value>RUNNING</value>
    <description>
      The state of the default queue. State can be one of RUNNING or STOPPED.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.small.state</name>
    <value>RUNNING</value>
    <description>
      The state of the default queue. State can be one of RUNNING or STOPPED.
    </description>
  </property>
    
  <!-- ACL management: which users may submit jobs to the queue -->
  <property>
    <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
    <value>*</value>
    <description>
      The ACL of who can submit jobs to the default queue.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.small.acl_submit_applications</name>
    <value>*</value>
    <description>
      The ACL of who can submit jobs to the default queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
    <value>*</value>
    <description>
      The ACL of who can administer jobs on the default queue.
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.small.acl_administer_queue</name>
    <value>*</value>
    <description>
      The ACL of who can administer jobs on the default queue.
    </description>
  </property>
    

  <property>
    <name>yarn.scheduler.capacity.root.default.acl_application_max_priority</name>
    <value>*</value>
    <description>
      The ACL of who can submit applications with configured priority.
      For e.g, [user={name} group={name} max_priority={priority} default_priority={priority}]
    </description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.small.acl_application_max_priority</name>
    <value>*</value>
    <description>
      The ACL of who can submit applications with configured priority.
      For e.g, [user={name} group={name} max_priority={priority} default_priority={priority}]
    </description>
  </property>

   <property>
     <name>yarn.scheduler.capacity.root.default.maximum-application-lifetime
     </name>
     <value>-1</value>
     <description>
        Maximum lifetime of an application which is submitted to a queue
        in seconds. Any value less than or equal to zero will be considered as
        disabled.
        This will be a hard time limit for all applications in this
        queue. If positive value is configured then any application submitted
        to this queue will be killed after exceeds the configured lifetime.
        User can also specify lifetime per application basis in
        application submission context. But user lifetime will be
        overridden if it exceeds queue maximum lifetime. It is point-in-time
        configuration.
        Note : Configuring too low value will result in killing application
        sooner. This feature is applicable only for leaf queue.
     </description>
   </property>
   <property>
     <name>yarn.scheduler.capacity.root.small.maximum-application-lifetime
     </name>
     <value>-1</value>
     <description>
        Maximum lifetime of an application which is submitted to a queue
        in seconds. Any value less than or equal to zero will be considered as
        disabled.
        This will be a hard time limit for all applications in this
        queue. If positive value is configured then any application submitted
        to this queue will be killed after exceeds the configured lifetime.
        User can also specify lifetime per application basis in
        application submission context. But user lifetime will be
        overridden if it exceeds queue maximum lifetime. It is point-in-time
        configuration.
        Note : Configuring too low value will result in killing application
        sooner. This feature is applicable only for leaf queue.
     </description>
   </property>

   <property>
     <name>yarn.scheduler.capacity.root.default.default-application-lifetime
     </name>
     <value>-1</value>
     <description>
        Default lifetime of an application which is submitted to a queue
        in seconds. Any value less than or equal to zero will be considered as
        disabled.
        If the user has not submitted application with lifetime value then this
        value will be taken. It is point-in-time configuration.
        Note : Default lifetime can't exceed maximum lifetime. This feature is
        applicable only for leaf queue.
     </description>
   </property>
   <property>
     <name>yarn.scheduler.capacity.root.small.default-application-lifetime
     </name>
     <value>-1</value>
     <description>
        Default lifetime of an application which is submitted to a queue
        in seconds. Any value less than or equal to zero will be considered as
        disabled.
        If the user has not submitted application with lifetime value then this
        value will be taken. It is point-in-time configuration.
        Note : Default lifetime can't exceed maximum lifetime. This feature is
        applicable only for leaf queue.
     </description>
   </property>

  <property>
    <name>yarn.scheduler.capacity.node-locality-delay</name>
    <value>40</value>
    <description>
      Number of missed scheduling opportunities after which the CapacityScheduler 
      attempts to schedule rack-local containers.
      When setting this parameter, the size of the cluster should be taken into account.
      We use 40 as the default value, which is approximately the number of nodes in one rack.
      Note, if this value is -1, the locality constraint in the container request
      will be ignored, which disables the delay scheduling.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.rack-locality-additional-delay</name>
    <value>-1</value>
    <description>
      Number of additional missed scheduling opportunities over the node-locality-delay
      ones, after which the CapacityScheduler attempts to schedule off-switch containers,
      instead of rack-local ones.
      Example: with node-locality-delay=40 and rack-locality-delay=20, the scheduler will
      attempt rack-local assignments after 40 missed opportunities, and off-switch assignments
      after 40+20=60 missed opportunities.
      When setting this parameter, the size of the cluster should be taken into account.
      We use -1 as the default value, which disables this feature. In this case, the number
      of missed opportunities for assigning off-switch containers is calculated based on
      the number of containers and unique locations specified in the resource request,
      as well as the size of the cluster.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.queue-mappings</name>
    <value></value>
    <description>
      A list of mappings that will be used to assign jobs to queues
      The syntax for this list is [u|g]:[name]:[queue_name][,next mapping]*
      Typically this list will be used to map users to queues,
      for example, u:%user:%user maps all users to queues with the same name
      as the user.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
    <value>false</value>
    <description>
      If a queue mapping is present, will it override the value specified
      by the user? This can be used by administrators to place jobs in queues
      that are different than the one specified by the user.
      The default is false.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.per-node-heartbeat.maximum-offswitch-assignments</name>
    <value>1</value>
    <description>
      Controls the number of OFF_SWITCH assignments allowed
      during a node's heartbeat. Increasing this value can improve
      scheduling rate for OFF_SWITCH containers. Lower values reduce
      "clumping" of applications on particular nodes. The default is 1.
      Legal values are 1-MAX_INT. This config is refreshable.
    </description>
  </property>


  <property>
    <name>yarn.scheduler.capacity.application.fail-fast</name>
    <value>false</value>
    <description>
      Whether RM should fail during recovery if previous applications'
      queue is no longer valid.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.workflow-priority-mappings</name>
    <value></value>
    <description>
      A list of mappings that will be used to override application priority.
      The syntax for this list is
      [workflowId]:[full_queue_name]:[priority][,next mapping]*
      where an application submitted (or mapped to) queue "full_queue_name"
      and workflowId "workflowId" (as specified in application submission
      context) will be given priority "priority".
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.workflow-priority-mappings-override.enable</name>
    <value>false</value>
    <description>
      If a priority mapping is present, will it override the value specified
      by the user? This can be used by administrators to give applications a
      priority that is different than the one specified by the user.
      The default is false.
    </description>
  </property>

</configuration>
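
After saving and distributing the modified capacity-scheduler.xml, the queue configuration can be reloaded without restarting YARN:

yarn rmadmin -refreshQueues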

If we do not specify a queue when submitting a job, it is submitted to the default queue; we can also specify explicitly which queue a job should go to:

# Default queue:
hadoop jar /export/server/hadoop-3.3.6/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar pi 10 10

# Submit to a specific queue:
hadoop jar /export/server/hadoop-3.3.6/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar wordcount -Dmapreduce.job.queuename=small /input /output3

Additionally, if we want to change the default submission queue, we add the following to mapred-site.xml:

<property>
    <name>mapreduce.job.queuename</name>
    <value>small</value>
</property>
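
To confirm that the queue setup and the new default have taken effect, list the queues from the command line (or check the scheduler page at http://node1:8088/cluster/scheduler):

mapred queue -list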
