【TensorFlow Object Detection API + Microsoft NNI】Automatic hyperparameter tuning for image classification to further improve model accuracy!


1. Background & Goal

With an image classification model (e.g., MobileNetV2) developed and trained through the TensorFlow Object Detection API, purely manual hyperparameter tuning rarely pushes accuracy to its limit. Using Microsoft's NNI automatic tuning tool to search the hyperparameters can improve accuracy further.

2. Method

  1. For developing and training an image classification model with the TensorFlow Object Detection API, see this blog post: 【tensorflow-slim】Training your own image classification dataset with tensorflow-slim + freezing to a .pb file + prediction (a hands-on, step-by-step tutorial for scene classification).
  2. For how to use Microsoft's NNI tool, the official site is sufficient: Neural Network Intelligence.
  3. For the concrete implementation, put the code below into a single .py file named nni_train_eval_image_classifier.py, place it under the models-master/research/slim directory of the official TensorFlow models code repository, and then run it the way the official NNI documentation describes.
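Before the full listing, the NNI trial protocol the script follows can be sketched in isolation. This is a stand-in sketch (the nni calls are replaced by plain dicts so it runs without NNI installed); the names mirror the script below:

```python
# Sketch of the NNI trial flow used by the script below.
# In a real trial, tuner_params comes from nni.get_next_parameter(),
# and losses go back via nni.report_intermediate_result /
# nni.report_final_result.
defaults = {"learning_rate": 1e-4, "optimizer": "adam", "batch_size": 16}

# Stand-in for nni.get_next_parameter(): the tuner proposes a subset of keys.
tuner_params = {"learning_rate": 3e-4, "optimizer": "rmsprop"}

params = dict(defaults)
params.update(tuner_params)  # tuner values override the argparse defaults
```

Defaults the tuner does not mention (here batch_size) stay at their argparse values, which is exactly what the `params.update(tuner_params)` line at the bottom of the script does.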
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Generic training script that trains a model using a given dataset."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import nni
import argparse
import logging
import time

import tensorflow as tf
from tensorflow.contrib import slim as contrib_slim
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.client import timeline
from tensorflow.python.lib.io import file_io

from datasets import dataset_factory
from deployment import model_deploy
from nets import nets_factory
from preprocessing import preprocessing_factory

slim = contrib_slim
logger = logging.getLogger('mobilenetv2_AutoML')

def _configure_learning_rate(num_samples_per_epoch, global_step):
  """Configures the learning rate.

  Args:
    num_samples_per_epoch: The number of samples in each epoch of training.
    global_step: The global_step tensor.

  Returns:
    A `Tensor` representing the learning rate.

  Raises:
    ValueError: if `params['learning_rate_decay_type']` is not recognized.
  """
  # Note: when num_clones is > 1, this will actually have each clone go
  # over each epoch params['num_epochs_per_decay'] times. This is different
  # behavior from sync replicas and is expected to produce different results.
  steps_per_epoch = num_samples_per_epoch / params['batch_size']

  decay_steps = int(steps_per_epoch * params['num_epochs_per_decay'])

  if params['learning_rate_decay_type'] == 'exponential':
    learning_rate = tf.train.exponential_decay(
        params['learning_rate'],
        global_step,
        decay_steps,
        params['learning_rate_decay_factor'],
        staircase=True,
        name='exponential_decay_learning_rate')
  else:
    raise ValueError('learning_rate_decay_type [%s] was not recognized' %
                     params['learning_rate_decay_type'])
  return learning_rate


def _configure_optimizer(learning_rate):
  """Configures the optimizer used for training.

  Args:
    learning_rate: A scalar or `Tensor` learning rate.

  Returns:
    An instance of an optimizer.

  Raises:
    ValueError: if `params['optimizer']` is not recognized.
  """
  if params['optimizer'] == 'adadelta':
    optimizer = tf.train.AdadeltaOptimizer(
        learning_rate,
        rho=0.95,
        epsilon=1.0)
  elif params['optimizer'] == 'adagrad':
    optimizer = tf.train.AdagradOptimizer(
        learning_rate,
        initial_accumulator_value=0.1)
  elif params['optimizer'] == 'adam':
    optimizer = tf.train.AdamOptimizer(
        learning_rate,
        beta1=0.9,
        beta2=0.999,
        epsilon=1.0)
  elif params['optimizer'] == 'rmsprop':
    optimizer = tf.train.RMSPropOptimizer(
        learning_rate,
        decay=0.9,
        momentum=0.9,
        epsilon=1.0)
  elif params['optimizer'] == 'sgd':
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
  else:
    raise ValueError('Optimizer [%s] was not recognized' % params['optimizer'])
  return optimizer


def _get_init_fn():
  """Returns a function run by the chief worker to warm-start the training.

  Note that the init_fn is only run when initializing the model during the very
  first global step.

  Returns:
    An init function run by the supervisor.
  """
  if params['checkpoint_path'] is None:
    return None

  # Warn the user if a checkpoint exists in the train_dir. Then we'll be
  # ignoring the checkpoint anyway.
  if tf.train.latest_checkpoint(os.path.join(params['train_dir'], nni.get_trial_id())):
    tf.logging.info(
        'Ignoring --checkpoint_path because a checkpoint already exists in %s'
        % os.path.join(params['train_dir'], nni.get_trial_id()))
    return None

  exclusions = []


  # TODO(sguada) variables.filter_variables()
  variables_to_restore = []
  for var in slim.get_model_variables():
    for exclusion in exclusions:
      if var.op.name.startswith(exclusion):
        break
    else:
      variables_to_restore.append(var)

  if tf.gfile.IsDirectory(params['checkpoint_path']):
    checkpoint_path = tf.train.latest_checkpoint(params['checkpoint_path'])
  else:
    checkpoint_path = params['checkpoint_path']

  tf.logging.info('Fine-tuning from %s' % checkpoint_path)

  return slim.assign_from_checkpoint_fn(
      checkpoint_path,
      variables_to_restore,
      ignore_missing_vars=False)


def _get_variables_to_train():
  """Returns a list of variables to train.

  Returns:
    A list of variables to train by the optimizer.
  """
  if params['trainable_scopes'] is None:
    return tf.trainable_variables()
  else:
    scopes = [scope.strip() for scope in params['trainable_scopes'].split(',')]

  variables_to_train = []
  for scope in scopes:
    variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope)
    variables_to_train.extend(variables)
  return variables_to_train


def train_step_for_print(sess, train_op, global_step, train_step_kwargs):
  """Function that takes a gradient step and specifies whether to stop.
  Args:
    sess: The current session.
    train_op: An `Operation` that evaluates the gradients and returns the total
      loss.
    global_step: A `Tensor` representing the global training step.
    train_step_kwargs: A dictionary of keyword arguments.
  Returns:
    The total loss and a boolean indicating whether or not to stop training.
  Raises:
    ValueError: if 'should_trace' is in `train_step_kwargs` but `logdir` is not.
  """
  start_time = time.time()

  trace_run_options = None
  run_metadata = None
  if 'should_trace' in train_step_kwargs:
    if 'logdir' not in train_step_kwargs:
      raise ValueError('logdir must be present in train_step_kwargs when '
                       'should_trace is present')
    if sess.run(train_step_kwargs['should_trace']):
      trace_run_options = config_pb2.RunOptions(
          trace_level=config_pb2.RunOptions.FULL_TRACE)
      run_metadata = config_pb2.RunMetadata()

  total_loss, np_global_step = sess.run([train_op, global_step],
                                        options=trace_run_options,
                                        run_metadata=run_metadata)
  time_elapsed = time.time() - start_time

  if run_metadata is not None:
    tl = timeline.Timeline(run_metadata.step_stats)
    trace = tl.generate_chrome_trace_format()
    trace_filename = os.path.join(train_step_kwargs['logdir'],
                                  'tf_trace-%d.json' % np_global_step)
    logging.info('Writing trace to %s', trace_filename)
    file_io.write_string_to_file(trace_filename, trace)
    if 'summary_writer' in train_step_kwargs:
      train_step_kwargs['summary_writer'].add_run_metadata(
          run_metadata, 'run_metadata-%d' % np_global_step)

  if 'should_log' in train_step_kwargs:
    if sess.run(train_step_kwargs['should_log']):
      logging.info('global step %d: loss = %.4f (%.3f sec/step)',
                   np_global_step, total_loss, time_elapsed)

  if 'should_stop' in train_step_kwargs:
    should_stop = sess.run(train_step_kwargs['should_stop'])
  else:
    should_stop = False
  nni.report_intermediate_result(total_loss)
  return total_loss, should_stop


def main(params):
  if not params['dataset_dir']:
    raise ValueError('You must supply the dataset directory with --dataset_dir')

  tf.logging.set_verbosity(tf.logging.INFO)
  with tf.Graph().as_default():
    #######################
    # Config model_deploy #
    #######################
    deploy_config = model_deploy.DeploymentConfig(
        num_clones=2,
        clone_on_cpu=False,
        replica_id=0,
        num_replicas=1,
        num_ps_tasks=0)

    # Create global_step
    with tf.device(deploy_config.variables_device()):
      global_step = slim.create_global_step()

    ######################
    # Select the dataset #
    ######################
    dataset = dataset_factory.get_dataset(
        params['dataset_name'], params['dataset_split_name'], params['dataset_dir'])

    ######################
    # Select the network #
    ######################
    network_fn = nets_factory.get_network_fn(
        params['model_name'],
        num_classes=dataset.num_classes,
        weight_decay=0.00004,
        is_training=True)

    #####################################
    # Select the preprocessing function #
    #####################################
    preprocessing_name = params['model_name']
    image_preprocessing_fn = preprocessing_factory.get_preprocessing(
        preprocessing_name,
        is_training=True,
        use_grayscale=False)

    ##############################################################
    # Create a dataset provider that loads data from the dataset #
    ##############################################################
    with tf.device(deploy_config.inputs_device()):
      provider = slim.dataset_data_provider.DatasetDataProvider(
          dataset,
          num_readers=params['num_readers'],
          common_queue_capacity=20 * params['batch_size'],
          common_queue_min=10 * params['batch_size'])
      [image, label] = provider.get(['image', 'label'])

      train_image_size = params['train_image_size'] or network_fn.default_image_size

      image = image_preprocessing_fn(image, train_image_size, train_image_size)

      images, labels = tf.train.batch(
          [image, label],
          batch_size=params['batch_size'],
          num_threads=params['num_preprocessing_threads'],
          capacity=5 * params['batch_size'])
      labels = slim.one_hot_encoding(labels, dataset.num_classes)
      batch_queue = slim.prefetch_queue.prefetch_queue(
          [images, labels], capacity=2 * deploy_config.num_clones)

    ####################
    # Define the model #
    ####################
    def clone_fn(batch_queue):
      """Allows data parallelism by creating multiple clones of network_fn."""
      images, labels = batch_queue.dequeue()
      logits, end_points = network_fn(images)

      #############################
      # Specify the loss function #
      #############################
      if 'AuxLogits' in end_points:
        slim.losses.softmax_cross_entropy(
            end_points['AuxLogits'], labels,
            label_smoothing=0.0, weights=0.4,
            scope='aux_loss')
      slim.losses.softmax_cross_entropy(
          logits, labels, label_smoothing=0.0, weights=1.0)
      return end_points

    # Gather initial summaries.
    summaries = set(tf.get_collection(tf.GraphKeys.SUMMARIES))

    clones = model_deploy.create_clones(deploy_config, clone_fn, [batch_queue])
    first_clone_scope = deploy_config.clone_scope(0)
    # Gather update_ops from the first clone. These contain, for example,
    # the updates for the batch_norm variables created by network_fn.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, first_clone_scope)

    # Add summaries for end_points.
    end_points = clones[0].outputs
    for end_point in end_points:
      x = end_points[end_point]
      summaries.add(tf.summary.histogram('activations/' + end_point, x))
      summaries.add(tf.summary.scalar('sparsity/' + end_point,
                                      tf.nn.zero_fraction(x)))

    # Add summaries for losses.
    for loss in tf.get_collection(tf.GraphKeys.LOSSES, first_clone_scope):
      summaries.add(tf.summary.scalar('losses/%s' % loss.op.name, loss))

    # Add summaries for variables.
    for variable in slim.get_model_variables():
      summaries.add(tf.summary.histogram(variable.op.name, variable))


    #########################################
    # Configure the optimization procedure. #
    #########################################
    with tf.device(deploy_config.optimizer_device()):
      learning_rate = _configure_learning_rate(dataset.num_samples, global_step)
      optimizer = _configure_optimizer(learning_rate)
      summaries.add(tf.summary.scalar('learning_rate', learning_rate))


    # Variables to train.
    variables_to_train = _get_variables_to_train()

    # Compute the total loss and the gradients across clones.
    total_loss, clones_gradients = model_deploy.optimize_clones(
        clones,
        optimizer,
        var_list=variables_to_train)
    # Add total_loss to summary.
    summaries.add(tf.summary.scalar('total_loss', total_loss))

    # Create gradient updates.
    grad_updates = optimizer.apply_gradients(clones_gradients,
                                             global_step=global_step)
    update_ops.append(grad_updates)

    update_op = tf.group(*update_ops)
    with tf.control_dependencies([update_op]):
      train_tensor = tf.identity(total_loss, name='train_op')

    # Add the summaries from the first clone. These contain the summaries
    # created by model_fn and either optimize_clones() or _gather_clone_loss().
    summaries |= set(tf.get_collection(tf.GraphKeys.SUMMARIES,
                                       first_clone_scope))

    # Merge all summaries together.
    summary_op = tf.summary.merge(list(summaries), name='summary_op')

    ###########################
    # Kicks off the training. #
    ###########################
    final_loss = slim.learning.train(
        train_tensor,
        train_step_fn=train_step_for_print,
        logdir=os.path.join(params['train_dir'], nni.get_trial_id()),
        master=params['master'],
        is_chief=True,
        init_fn=_get_init_fn(),
        summary_op=summary_op,
        number_of_steps=params['max_number_of_steps'],
        log_every_n_steps=params['log_every_n_steps'],
        save_summaries_secs=params['save_summaries_secs'],
        save_interval_secs=params['save_interval_secs'],
        sync_optimizer=None)

    nni.report_final_result(final_loss)




def get_params():
    ''' Get parameters from command line '''
    parser = argparse.ArgumentParser()
    parser.add_argument("--master", type=str, default='', help='The address of the TensorFlow master to use.')
    parser.add_argument("--train_dir", type=str, default='/***/workspace/mobilenetssd/models-master/research/slim/sentiment_cnn/model/training2')
    parser.add_argument("--num_preprocessing_threads", type=int, default=4)
    parser.add_argument("--log_every_n_steps", type=int, default=10)
    parser.add_argument("--save_summaries_secs", type=int, default=600)
    parser.add_argument("--save_interval_secs", type=int, default=600)
    parser.add_argument("--learning_rate", type=float, default=1e-4)
    parser.add_argument("--batch_num", type=int, default=2000)

    parser.add_argument("--dataset_name", type=str, default="sendata3_train")
    parser.add_argument("--dataset_split_name", type=str, default='train')
    parser.add_argument("--dataset_dir", type=str, default='/***/workspace/mobilenetssd/models-master/research/slim/sentiment_cnn/dataset/data_3')
    parser.add_argument("--model_name", type=str, default='mobilenet_v2')

    parser.add_argument("--batch_size", type=int, default=16)
    parser.add_argument("--train_image_size", type=int, default=80)
    parser.add_argument("--max_number_of_steps", type=int, default=60000)
    parser.add_argument("--trainable_scopes", type=str, default=None)

    parser.add_argument("--optimizer", type=str, default='adam')
    parser.add_argument("--learning_rate_decay_type", type=str, default='exponential')
    parser.add_argument("--learning_rate_decay_factor", type=float, default=0.8)

    parser.add_argument("--checkpoint_path", type=str, default=None)
    parser.add_argument("--num_readers", type=int, default=4)

    parser.add_argument("--num_epochs_per_decay", type=float, default=3.0)

    args, _ = parser.parse_known_args()
    return args

if __name__ == '__main__':
    try:
        # get parameters from the tuner
        tuner_params = nni.get_next_parameter()
        logger.debug(tuner_params)
        params = vars(get_params())
        params.update(tuner_params)
        main(params)
    except Exception as exception:
        logger.exception(exception)
        raise
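The staircase schedule in _configure_learning_rate above reduces to lr = learning_rate * decay_factor ** floor(step / decay_steps). A framework-independent sketch (pure Python, no TensorFlow needed; the dataset size of 8000 is a hypothetical example, the other values are the script's defaults):

```python
# Staircase exponential decay, as computed by tf.train.exponential_decay
# with staircase=True in _configure_learning_rate above.
def staircase_lr(base_lr, step, decay_steps, decay_factor):
    return base_lr * decay_factor ** (step // decay_steps)

# Hypothetical dataset of 8000 samples, batch_size=16, 3 epochs per decay:
# decay_steps = int(8000 / 16 * 3.0) = 1500.
decay_steps = int(8000 / 16 * 3.0)
lr_at_0 = staircase_lr(1e-4, 0, decay_steps, 0.8)        # base rate, no decay yet
lr_at_3000 = staircase_lr(1e-4, 3000, decay_steps, 0.8)  # decayed twice: 1e-4 * 0.8**2
```

This is why tuning learning_rate, learning_rate_decay_factor, and num_epochs_per_decay together matters: they jointly determine the whole schedule, not just the starting point.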

【Note】Adjust the defaults in get_params() above to match your own machine, model, and data; in particular --train_dir, --dataset_dir, --dataset_name, --model_name, --train_image_size, and the training-schedule flags (--batch_size, --max_number_of_steps, and the learning-rate settings).
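On the NNI side, the search space must declare the keys that nni.get_next_parameter() will return, and those keys must match the argparse flag names that params uses. A hypothetical search_space.json covering a few of the flags above (the value ranges are illustrative only, not from the original post):

```python
import json

# Hypothetical NNI search space; keys mirror the argparse flags read via
# params[...] in the script. Ranges are illustrative, not tuned values.
search_space = {
    "learning_rate": {"_type": "loguniform", "_value": [1e-5, 1e-2]},
    "optimizer": {"_type": "choice", "_value": ["adam", "rmsprop", "sgd"]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64]},
    "learning_rate_decay_factor": {"_type": "uniform", "_value": [0.5, 0.95]},
}

with open("search_space.json", "w") as f:
    json.dump(search_space, f, indent=2)
```

The NNI experiment config then points its search-space path at this file and sets the trial command to run nni_train_eval_image_classifier.py; see the official NNI documentation for the exact config schema of your NNI version.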

3. Results

Go squeeze some extra accuracy out of your image classification model!
For example, on a private dataset, MobileNetV2 previously topped out at 90% accuracy; with NNI tuning it reaches 92.2857%.

