Halcon深度学习分类模型

1. Since Halcon 20, deep learning models can be trained on the CPU, which is good news if you cannot afford a GPU. The obvious downsides are that training is very slow and inference does not perform as well as on a GPU, but it is perfectly adequate for learning.
2. The classification model is the simplest of Halcon's deep learning models and can be used in projects such as object classification and defect detection.
3. Image preprocessing and training code
* Classification network
dev_update_off ()
dev_close_window ()
WindowWidth := 800
WindowHeight := 600
dev_open_window_fit_size (0, 0, WindowWidth, WindowHeight, -1, -1, WindowHandle)
set_display_font (WindowHandle, 16, 'mono', 'true', 'false')
* Paths to the raw training images, one folder per class
RawImageBaseFolder := 'D:/训练图/' + ['U','SR','MR','BR','C','D','NG']
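* Note (added explanation): HDevelop broadcasts the '+' over the tuple, so the line above
* expands to one folder path per class ('D:/训练图/U', ..., 'D:/训练图/NG'). Together with
* LabelSource := 'last_folder' further below, read_dl_dataset_classification labels each
* image with the name of its parent folder, e.g. a hypothetical file
* 'D:/训练图/NG/img_001.png' would be assigned the class 'NG'.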
* Directory where the preprocessed data is stored
ExampleDataDir := 'D:/classify_pill_defects_data'

* Dataset directory basename for any outputs written by preprocess_dl_dataset.
DataDirectoryBaseName := ExampleDataDir + '/dldataset_pill'
*
* ** Set parameters ***
*
* LabelSource for reading in the dataset.
LabelSource := 'last_folder'
* Percentages for splitting the dataset.
TrainingPercent := 70
ValidationPercent := 15
* Image dimensions the images are rescaled to during preprocessing.
ImageWidth := 300
ImageHeight := 300
ImageNumChannels := 3
* Further parameters for image preprocessing.
NormalizationType := 'none'
DomainHandling := 'full_domain'
* In order to get a reproducible split we set a random seed.
* This means that re-running the script results in the same split of DLDataset.
SeedRand := 42
*
* ** Read the labeled data and split it into train, validation and test ***
*
* Set the random seed.
set_system ('seed_rand', SeedRand)
* Read the dataset with the procedure read_dl_dataset_classification.
* Alternatively, you can read a DLDataset dictionary
* as created by e.g. the MVTec Deep Learning Tool using read_dict().
read_dl_dataset_classification (RawImageBaseFolder, LabelSource, DLDataset)
* Generate the split.
split_dl_dataset (DLDataset, TrainingPercent, ValidationPercent, [])
*
* ** Preprocess the dataset ***
*
* Create the output directory if it does not exist yet.
file_exists (ExampleDataDir, FileExists)
if (not FileExists)
    make_dir (ExampleDataDir)
endif
* Create preprocess parameters.
create_dl_preprocess_param ('classification', ImageWidth, ImageHeight, ImageNumChannels, -127, 128, NormalizationType, DomainHandling, [], [], [], [], DLPreprocessParam)
* Dataset directory for any outputs written by preprocess_dl_dataset.
DataDirectory := DataDirectoryBaseName + '_' + ImageWidth + 'x' + ImageHeight
* Preprocess the dataset. This might take a few seconds.
create_dict (GenParam)
set_dict_tuple (GenParam, 'overwrite_files', true)
preprocess_dl_dataset (DLDataset, DataDirectory, DLPreprocessParam, GenParam, DLDatasetFileName)
* Store the preprocess parameters separately in order to use them e.g. during inference.
PreprocessParamFileBaseName := DataDirectory + '/dl_preprocess_param.hdict'
write_dict (DLPreprocessParam, PreprocessParamFileBaseName, [], [])
*
* ** Preview the preprocessed dataset ***
*
* Before moving on to training, it is recommended to check the preprocessed dataset.
* Display the DLSamples for 10 randomly selected train images.
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'train', 'match', SampleIndices)
tuple_shuffle (SampleIndices, ShuffledIndices)
read_dl_samples (DLDataset, ShuffledIndices[0:9], DLSampleBatchDisplay)

create_dict (WindowHandleDict)
for Index := 0 to |DLSampleBatchDisplay| - 1 by 1
    * Loop over samples in DLSampleBatchDisplay.
    dev_display_dl_data (DLSampleBatchDisplay[Index], [], DLDataset, 'classification_ground_truth', [], WindowHandleDict)
    Text := 'Press Run (F5) to continue'
    dev_disp_text (Text, 'window', 'bottom', 'right', 'black', [], [])
    stop ()
endfor
*

* Close windows that have been used for visualization.
dev_close_window_dict (WindowHandleDict)

* Check whether the machine has a GPU; if not, train on the CPU.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif

* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]
get_dl_device_param (DLDevice, 'type', DLDeviceType)
if (DLDeviceType == 'cpu')
    * The number of used threads may have an impact
    * on the training duration.
    NumThreadsTraining := 4
    set_system ('thread_num', NumThreadsTraining)
endif

* ** Set input and output paths ***
*
* File path of the initialized model.
ModelFileName := 'pretrained_dl_classifier_compact.hdl'
*
* File path of the preprocessed DLDataset.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDataDir + '/dldataset_pill_300x300'
DLDatasetFileName := DataDirectory + '/dl_dataset.hdict'
DLPreprocessParamFileName := DataDirectory + '/dl_preprocess_param.hdict'
*
* Output path of the best evaluated model.
BestModelBaseName := ExampleDataDir + '/best_dl_model_classification'
*
* Output path for the final trained model.
FinalModelBaseName := ExampleDataDir + '/final_dl_model_classification'
*
* ** Set basic parameters ***
*
* The following parameters need to be adapted frequently.
* Model parameters.
* Batch size. In case this example is run on a GPU,
* you can set BatchSize to 'maximum' and it will be
* determined automatically.
BatchSize := 64
* Initial learning rate.
InitialLearningRate := 0.001
* Momentum should be high if batch size is small.
Momentum := 0.9
* Parameters used by train_dl_model.
* Number of epochs to train the model.
NumEpochs := 16
* Evaluation interval (in epochs) to calculate evaluation measures on the validation split.
EvaluationIntervalEpochs := 1
* Change the learning rate in the following epochs, e.g. [4, 8, 12].
* Set it to [] if the learning rate should not be changed.
ChangeLearningRateEpochs := [4,8,12]
* Change the learning rate to the following values, e.g. InitialLearningRate * [0.1, 0.01, 0.001].
* The tuple has to be of the same length as ChangeLearningRateEpochs.
ChangeLearningRateValues := InitialLearningRate * [0.1,0.01,0.001]
*
* ** Set advanced parameters ***
*
* The following parameters might need to be changed in rare cases.
* Model parameter.
* Set the weight prior.
WeightPrior := 0.0005
* Parameters used by train_dl_model.
* Control whether training progress is displayed (true/false).
EnableDisplay := true
* Set a random seed for training.
RandomSeed := 42
set_system ('seed_rand', RandomSeed)
* In order to obtain nearly deterministic training results on the same GPU
* (system, driver, cuda-version) you could specify 'cudnn_deterministic' as
* 'true'. Note that this could slow down training a bit.
* set_system ('cudnn_deterministic', 'true')
* Set generic parameters of create_dl_train_param.
* Please see the documentation of create_dl_train_param for an overview on all available parameters.
GenParamName := []
GenParamValue := []
* Augmentation parameters.
* If samples should be augmented during training, create the dict required by augment_dl_samples.
* Here, we set the augmentation percentage and method.
create_dict (AugmentationParam)
* Percentage of samples to be augmented.
set_dict_tuple (AugmentationParam, 'augmentation_percentage', 50)
* Mirror images along row and column.
set_dict_tuple (AugmentationParam, 'mirror', 'rc')
GenParamName := [GenParamName,'augment']
GenParamValue := [GenParamValue,AugmentationParam]
* Change strategies.
* It is possible to change model parameters during training.
* Here, we change the learning rate if specified above.
if (|ChangeLearningRateEpochs| > 0)
    create_dict (ChangeStrategy)
    * Specify the model parameter to be changed, here the learning rate.
    set_dict_tuple (ChangeStrategy, 'model_param', 'learning_rate')
    * Start the parameter value at 'initial_value'.
    set_dict_tuple (ChangeStrategy, 'initial_value', InitialLearningRate)
    * Reduce the learning rate in the following epochs.
    set_dict_tuple (ChangeStrategy, 'epochs', ChangeLearningRateEpochs)
    * Reduce the learning rate to the following values.
    set_dict_tuple (ChangeStrategy, 'values', ChangeLearningRateValues)
    * Collect all change strategies as input.
    GenParamName := [GenParamName,'change']
    GenParamValue := [GenParamValue,ChangeStrategy]
endif
* Serialization strategies.
* There are several options for saving intermediate models to disk (see create_dl_train_param).
* Here, we save the best and the final model to the paths set above.
create_dict (SerializationStrategy)
set_dict_tuple (SerializationStrategy, 'type', 'best')
set_dict_tuple (SerializationStrategy, 'basename', BestModelBaseName)
GenParamName := [GenParamName,'serialize']
GenParamValue := [GenParamValue,SerializationStrategy]
create_dict (SerializationStrategy)
set_dict_tuple (SerializationStrategy, 'type', 'final')
set_dict_tuple (SerializationStrategy, 'basename', FinalModelBaseName)
GenParamName := [GenParamName,'serialize']
GenParamValue := [GenParamValue,SerializationStrategy]
* Display parameters.
* In this example, 20% of the training split are selected to display the
* evaluation measure for the reduced training split during the training. A lower percentage
* helps to speed up the evaluation/training. If the evaluation measure for the training split
* shall not be displayed, set this value to 0 (default).
SelectedPercentageTrainSamples := 20
* Set the x-axis argument of the training plots.
XAxisLabel := 'epochs'
create_dict (DisplayParam)
set_dict_tuple (DisplayParam, 'selected_percentage_train_samples', SelectedPercentageTrainSamples)
set_dict_tuple (DisplayParam, 'x_axis_label', XAxisLabel)
GenParamName := [GenParamName,'display']
GenParamValue := [GenParamValue,DisplayParam]
*
* ** Read initial model and dataset ***
*
* Check if all necessary files exist.
check_data_availability (ExampleDataDir, DLDatasetFileName, DLPreprocessParamFileName)
* Read in the model that was initialized during preprocessing.
read_dl_model (ModelFileName, DLModelHandle)
* Read in the preprocessed DLDataset file.
read_dict (DLDatasetFileName, [], [], DLDataset)
*
* ** Set model parameters ***
*
* Set model hyper-parameters as specified in the settings above.
set_dl_model_param (DLModelHandle, 'learning_rate', InitialLearningRate)
set_dl_model_param (DLModelHandle, 'momentum', Momentum)
* Set the class names for the model.
get_dict_tuple (DLDataset, 'class_names', ClassNames)
set_dl_model_param (DLModelHandle, 'class_names', ClassNames)
* Get image dimensions from preprocess parameters and set them for the model.
read_dict (DLPreprocessParamFileName, [], [], DLPreprocessParam)
get_dict_tuple (DLPreprocessParam, 'image_width', ImageWidth)
get_dict_tuple (DLPreprocessParam, 'image_height', ImageHeight)
get_dict_tuple (DLPreprocessParam, 'image_num_channels', ImageNumChannels)
set_dl_model_param (DLModelHandle, 'image_dimensions', [ImageWidth,ImageHeight,ImageNumChannels])
if (BatchSize == 'maximum' and DLDeviceType == 'gpu')
    set_dl_model_param_max_gpu_batch_size (DLModelHandle, 100)
else
    set_dl_model_param (DLModelHandle, 'batch_size', BatchSize)
endif
* When the batch size is determined, set the device.
set_dl_model_param (DLModelHandle, 'device', DLDevice)
if (|WeightPrior| > 0)
    set_dl_model_param (DLModelHandle, 'weight_prior', WeightPrior)
endif
* Set class weights to counteract unbalanced training data. In this example
* we choose the default values, since the classes are evenly distributed in the dataset.
tuple_gen_const (|ClassNames|, 1.0, ClassWeights)
set_dl_model_param (DLModelHandle, 'class_weights', ClassWeights)
*
* ** Train the model ***
*
* Create training parameters.
create_dl_train_param (DLModelHandle, NumEpochs, EvaluationIntervalEpochs, EnableDisplay, RandomSeed, GenParamName, GenParamValue, TrainParam)
* Start the training by calling the training operator
* train_dl_model_batch () within the following procedure.
train_dl_model (DLDataset, DLModelHandle, TrainParam, 0, TrainResults, TrainInfos, EvaluationInfos)
* Stop after the training has finished, before closing the windows.
dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
stop ()
* Close training windows.
dev_close_window ()
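The split above keeps the remaining 15 % of the samples as a test split that the training never touches. As a purely optional sketch (not part of the original script, assuming the standard HALCON DL library procedure evaluate_dl_model and the 'top1_error' measure as used in the HALCON classification examples), the best model could be checked against that split like this:

* Optional sketch: evaluate the best model on the unused test split.
read_dl_model (BestModelBaseName + '.hdl', DLModelHandle)
set_dl_model_param (DLModelHandle, 'device', DLDevice)
create_dict (GenParamEval)
set_dict_tuple (GenParamEval, 'show_progress', 'true')
set_dict_tuple (GenParamEval, 'measures', 'top1_error')
evaluate_dl_model (DLDataset, DLModelHandle, 'split', 'test', GenParamEval, EvaluationResult, EvalParams)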
4. Inference code
dev_update_off ()
dev_close_window ()
WindowWidth := 800
WindowHeight := 600
dev_open_window_fit_size (0, 0, WindowWidth, WindowHeight, -1, -1, WindowHandle)
set_display_font (WindowHandle, 16, 'mono', 'true', 'false')
* ** INFERENCE **

* Check whether the machine has a GPU; if not, run inference on the CPU.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif

* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]

* Root data directory
ExampleDataDir := 'D:/classify_pill_defects_data'

* Dataset directory basename for any outputs written by preprocess_dl_dataset.
DataDirectoryBaseName := ExampleDataDir + '/dldataset_pill'
*
* File name of the dict containing the parameters used for preprocessing.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDataDir + '/dldataset_pill_300x300'
PreprocessParamFileName := DataDirectory + '/dl_preprocess_param.hdict'

* File name of the retrained classification model.
RetrainedModelFileName := ExampleDataDir + '/best_dl_model_classification.hdl'
*
* Batch size used during inference.
BatchSizeInference := 1


* ** Inference ***
*
* Check if all necessary files exist.
check_data_availability (ExampleDataDir, PreprocessParamFileName, RetrainedModelFileName, false)
* Read in the retrained model.
read_dl_model (RetrainedModelFileName, DLModelHandle)
* Set the batch size.
set_dl_model_param (DLModelHandle, 'batch_size', BatchSizeInference)
* Initialize the model for inference.
set_dl_model_param (DLModelHandle, 'device', DLDevice)
* Get the class names and IDs from the model.
get_dl_model_param (DLModelHandle, 'class_names', ClassNames)
get_dl_model_param (DLModelHandle, 'class_ids', ClassIDs)
* Get the parameters used for preprocessing.
read_dict (PreprocessParamFileName, [], [], DLPreprocessParam)
* Create window dictionary for displaying results.
create_dict (WindowHandleDict)
* Create dictionary with dataset parameters necessary for displaying.
create_dict (DLDataInfo)
set_dict_tuple (DLDataInfo, 'class_names', ClassNames)
set_dict_tuple (DLDataInfo, 'class_ids', ClassIDs)
* Set generic parameters for visualization.
create_dict (GenParam)
set_dict_tuple (GenParam, 'scale_windows', 1.1)

list_files ('E:/NG', ['files','follow_links'], ImageFiles)
tuple_regexp_select (ImageFiles, ['\\.(tif|tiff|gif|bmp|jpg|jpeg|jp2|png|pcx|pgm|ppm|pbm|xwd|ima|hobj)$','ignore_case'], ImageFiles)
for Index := 0 to |ImageFiles| - 1 by 1
    read_image (ImageBatch, ImageFiles[Index])
    gen_dl_samples_from_images (ImageBatch, DLSampleBatch)
    preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)
    apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)
    DLSample := DLSampleBatch[0]
    DLResult := DLResultBatch[0]
    * Retrieve the classification result from the result dictionary:
    * predicted class IDs, class names and confidences.
    get_dict_tuple (DLResult, 'classification_class_ids', ClassificationClassID)
    get_dict_tuple (DLResult, 'classification_class_names', ClassificationClassName)
    get_dict_tuple (DLResult, 'classification_confidences', ClassificationClassConfidence)
    dev_display (ImageBatch)
    Text := 'Predicted class: ' + ClassificationClassName[0] + '  Confidence: ' + ClassificationClassConfidence[0]
    dev_disp_text (Text, 'window', 'top', 'left', 'red', 'box', 'false')
    stop ()
endfor
dev_close_window_dict (WindowHandleDict)
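In a defect-detection setting you usually act on the top-1 prediction. The following hypothetical post-processing step could be placed inside the loop above, right after the confidences are read; the 'NG' class name is taken from the class list used for training, while the 0.85 threshold and the displayed text are assumptions to adapt to your own acceptance criteria:

* Hypothetical decision step: only accept confident, non-defect predictions.
ConfidenceThreshold := 0.85
if (ClassificationClassName[0] == 'NG' or ClassificationClassConfidence[0] < ConfidenceThreshold)
    * Flag the image for manual inspection or rework.
    dev_disp_text ('Check manually', 'window', 'bottom', 'left', 'red', 'box', 'true')
endif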
