Python3 "Machine Learning in Action" Study Notes (10): A Detailed Walkthrough of ANN Code (Digit Recognition and Face Recognition Examples)


Contents

  • 1. Building the basic code structure
    • 1.1 Data-preprocessing utilities
    • 1.2 Initializing the parameters
    • 1.3 Sigmoid helpers
    • 1.4 Matrix reshaping helpers
    • 1.5 Initializing theta
    • 1.6 Forward propagation
    • 1.7 Backpropagation
    • 1.8 Gradient descent
    • 1.9 The training module
  • 2. MNIST digit recognition
  • 3. Face recognition
  • 4. Summary

1. Building the basic code structure

1.1 Data-preprocessing utilities


"""Dataset Features Related Utils"""

from .normalize import normalize
from .generate_polynomials import generate_polynomials
from .generate_sinusoids import generate_sinusoids
from .prepare_for_training import prepare_for_training

"""Add polynomial features to the features set"""

import numpy as np
from .normalize import normalize


def generate_polynomials(dataset, polynomial_degree, normalize_data=False):
    """Extends data set with polynomial features of certain degree.

    Returns a new feature array with more features, consisting of
    x1, x2, x1^2, x2^2, x1*x2, x1*x2^2, etc.

    :param dataset: dataset that we want to generate polynomials for.
    :param polynomial_degree: the max power of the new features.
    :param normalize_data: flag that indicates whether the polynomials need to be normalized.
    """

    # Split features into two halves.
    features_split = np.array_split(dataset, 2, axis=1)
    dataset_1 = features_split[0]
    dataset_2 = features_split[1]

    # Extract the dimensions of each half.
    (num_examples_1, num_features_1) = dataset_1.shape
    (num_examples_2, num_features_2) = dataset_2.shape

    # Check if two sets have equal amount of rows.
    if num_examples_1 != num_examples_2:
        raise ValueError('Can not generate polynomials for two sets with different number of rows')

    # Check that at least one set has features.
    if num_features_1 == 0 and num_features_2 == 0:
        raise ValueError('Can not generate polynomials for two sets with no columns')

    # Replace empty set with non-empty one.
    if num_features_1 == 0:
        dataset_1 = dataset_2
    elif num_features_2 == 0:
        dataset_2 = dataset_1

    # Make sure that sets have the same number of features in order to be able to multiply them.
    num_features = num_features_1 if num_features_1 < num_features_2 else num_features_2
    dataset_1 = dataset_1[:, :num_features]
    dataset_2 = dataset_2[:, :num_features]

    # Create polynomials matrix.
    polynomials = np.empty((num_examples_1, 0))

    # Generate polynomial features of specified degree.
    for i in range(1, polynomial_degree + 1):
        for j in range(i + 1):
            polynomial_feature = (dataset_1 ** (i - j)) * (dataset_2 ** j)
            polynomials = np.concatenate((polynomials, polynomial_feature), axis=1)

    # Normalize polynomials if needed.
    if normalize_data:
        polynomials = normalize(polynomials)[0]

    # Return generated polynomial features.
    return polynomials
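A quick illustration of the output layout (a standalone sketch; it assumes generate_polynomials is importable from wherever these utils live). For a 2-column dataset and degree 2, the function splits the columns into two halves and emits the term blocks x1, x2, x1^2, x1*x2, x2^2:

import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
P = generate_polynomials(X, polynomial_degree=2)
print(P.shape)  # (2, 5): one column per term x1, x2, x1^2, x1*x2, x2^2
print(P[0])     # [1. 2. 1. 2. 4.]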

"""Add sinusoid features to the features set"""

import numpy as np


def generate_sinusoids(dataset, sinusoid_degree):
    """Extends data set with sinusoid features.

    Returns a new feature array with more features, consisting of
    sin(x), sin(2x), ..., sin(sinusoid_degree * x).

    :param dataset: data set.
    :param sinusoid_degree: the maximum multiplier k for which sin(k * x) features are generated.
    """

    # Create sinusoids matrix.
    num_examples = dataset.shape[0]
    sinusoids = np.empty((num_examples, 0))

    # Generate sinusoid features of specified degree.
    for degree in range(1, sinusoid_degree + 1):
        sinusoid_features = np.sin(degree * dataset)
        sinusoids = np.concatenate((sinusoids, sinusoid_features), axis=1)

    # Return generated sinusoidal features.
    return sinusoids
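Similarly, a minimal check of the sinusoid expansion (a standalone sketch, assuming generate_sinusoids is in scope):

import numpy as np

X = np.array([[0.0],
              [np.pi / 2]])
S = generate_sinusoids(X, sinusoid_degree=2)
print(S.shape)         # (2, 2): columns sin(x) and sin(2x)
print(np.round(S, 3))  # [[0. 0.], [1. 0.]]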

"""Normalize features"""

import numpy as np


def normalize(features):
    """Normalize features.

    Normalizes input features X. Returns a normalized version of X where the mean value of
    each feature is 0 and deviation is close to 1.

    :param features: set of features.
    :return: normalized set of features.
    """

    # Copy original array to prevent it from changes.
    features_normalized = np.copy(features).astype(float)

    # Get average values for each feature (column) in X.
    features_mean = np.mean(features, 0)

    # Calculate the standard deviation for each feature.
    features_deviation = np.std(features, 0)

    # Subtract mean values from each feature (column) of every example (row)
    # to make all features be spread around zero.
    if features.shape[0] > 1:
        features_normalized -= features_mean

    # Normalize each feature values so that all features are close to [-1:1] boundaries.
    # Also prevent division by zero error.
    features_deviation[features_deviation == 0] = 1
    features_normalized /= features_deviation

    return features_normalized, features_mean, features_deviation
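A quick check that normalize behaves as documented, centering each column at 0 with standard deviation 1 (a standalone sketch, assuming normalize is in scope):

import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
X_norm, mu, sigma = normalize(X)
print(np.round(X_norm.mean(axis=0), 6))  # [0. 0.]
print(np.round(X_norm.std(axis=0), 6))   # [1. 1.]
print(mu)     # [ 2. 20.], the column means of the raw data
print(sigma)  # the column standard deviations (~0.816 and ~8.165)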

"""Prepares the dataset for training"""

import numpy as np
from .normalize import normalize
from .generate_sinusoids import generate_sinusoids
from .generate_polynomials import generate_polynomials


def prepare_for_training(data, polynomial_degree=0, sinusoid_degree=0, normalize_data=True):
    """Prepares data set for training on prediction"""

    # Calculate the number of examples.
    num_examples = data.shape[0]

    # Prevent original data from being modified.
    data_processed = np.copy(data)

    # Normalize data set.
    features_mean = 0
    features_deviation = 0
    data_normalized = data_processed
    if normalize_data:
        (
            data_normalized,
            features_mean,
            features_deviation
        ) = normalize(data_processed)

        # Replace processed data with normalized processed data.
        # The polynomial and sinusoid generators below should receive normalized data.
        data_processed = data_normalized

    # Add sinusoidal features to the dataset.
    if sinusoid_degree > 0:
        sinusoids = generate_sinusoids(data_normalized, sinusoid_degree)
        data_processed = np.concatenate((data_processed, sinusoids), axis=1)

    # Add polynomial features to data set.
    if polynomial_degree > 0:
        polynomials = generate_polynomials(data_normalized, polynomial_degree, normalize_data)
        data_processed = np.concatenate((data_processed, polynomials), axis=1)

    # Add a column of ones to X.
    data_processed = np.hstack((np.ones((num_examples, 1)), data_processed))

    return data_processed, features_mean, features_deviation
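Putting the pipeline together (a standalone sketch; the package path in the import is an assumption, adjust it to your project layout):

import numpy as np
from utils.features import prepare_for_training  # hypothetical package path

raw = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [5.0, 6.0]])
processed, mean, std = prepare_for_training(raw, normalize_data=True)
print(processed.shape)  # (3, 3): a bias column of ones + 2 normalized features
print(processed[:, 0])  # [1. 1. 1.]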

1.2 Initializing the parameters

    def __init__(self, data, labels, layers, normalize_data=False):
        data_processed = prepare_for_training(data, normalize_data=normalize_data)[0]
        self.data = data_processed
        self.labels = labels
        self.layers = layers    # e.g. [784, 25, 10]: 28*28*1=784 inputs, 25 hidden units (tunable), 10 output classes
        self.normalize_data = normalize_data
        self.thetas = MultilayerPerceptron.thetas_init(layers)

1.3 Sigmoid helpers

    @staticmethod
    def sigmoid(z):
        """Sigmoid function."""
        return 1.0 / (1.0 + np.exp(-np.asarray(z)))

    @staticmethod
    def sigmoid_gradient(z):
        """Gradient of the sigmoid function: g'(z) = g(z) * (1 - g(z))."""
        g = MultilayerPerceptron.sigmoid(z)
        return g * (1 - g)
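A quick numerical sanity check of sigmoid_gradient against a central difference (a standalone sketch, assuming the class above is in scope):

import numpy as np

z = np.array([-2.0, 0.0, 2.0])
eps = 1e-6
analytic = MultilayerPerceptron.sigmoid_gradient(z)
numeric = (MultilayerPerceptron.sigmoid(z + eps)
           - MultilayerPerceptron.sigmoid(z - eps)) / (2 * eps)
print(np.max(np.abs(analytic - numeric)))  # should be tiny, around 1e-10 or smaller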

1.4 Matrix reshaping helpers

    '''
    Flatten all theta matrices into a single 1*n vector
    '''
    @staticmethod
    def thetas_unroll(thetas):
        num_thetas = len(thetas)
        unrolled_theta = np.array([])
        for num_thetas_index in range(num_thetas):
            unrolled_theta = np.hstack((unrolled_theta, thetas[num_thetas_index].flatten()))
        return unrolled_theta

    '''
    Reshape the 1*n vector back into per-layer matrices
    '''
    @staticmethod
    def thetas_roll(unrolled_thetas, layers):
        num_layers = len(layers)
        thetas = {}
        unrolled_shift = 0
        for index in range(num_layers - 1):
            in_count = int(layers[index])
            out_count = int(layers[index+1])

            theta_width = in_count + 1
            theta_height = out_count
            theta_volume = theta_width * theta_height
            start_index = unrolled_shift
            end_index = unrolled_shift + theta_volume

            layer_theta_unrolled = unrolled_thetas[start_index: end_index]
            thetas[index] = layer_theta_unrolled.reshape((theta_height, theta_width))
            unrolled_shift += theta_volume
        return thetas
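For layers = [784, 25, 10], thetas_roll expects the flat vector to hold the 25*785 entries of the first matrix followed by the 10*26 entries of the second (a standalone sketch):

import numpy as np

layers = [784, 25, 10]
flat = np.arange(25 * 785 + 10 * 26, dtype=float)  # 19885 parameters in total
thetas = MultilayerPerceptron.thetas_roll(flat, layers)
print(thetas[0].shape, thetas[1].shape)  # (25, 785) (10, 26)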

1.5 Initializing theta

    '''
    Initialize theta
    '''
    @staticmethod
    def thetas_init(layers):
        num_layers = len(layers)
        thetas = {}
        for layer_index in range(num_layers - 1):
            '''
            For layers = [784, 25, 10] this loop runs twice, producing two
            parameter matrices: 25*785 and 10*26
            '''
            in_count = int(layers[layer_index])
            out_count = int(layers[layer_index + 1])
            # The +1 accounts for the bias term; there is one bias weight per output unit.
            randomTheta = np.random.rand(out_count, in_count + 1) * 0.05  # random initialization; keep the values small
            thetas[layer_index] = randomTheta
        return thetas
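A round-trip check of the three helpers above: initialize, unroll into one vector, roll back, and confirm nothing was lost (a standalone sketch):

import numpy as np

layers = [784, 25, 10]
thetas = MultilayerPerceptron.thetas_init(layers)   # {0: (25, 785), 1: (10, 26)}
vector = MultilayerPerceptron.thetas_unroll(thetas)
print(vector.shape)                                 # (19885,)
restored = MultilayerPerceptron.thetas_roll(vector, layers)
assert all(np.array_equal(thetas[i], restored[i]) for i in thetas)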

1.6 Forward propagation

    '''
    Compute the cost function
    '''
    @staticmethod
    def cost_function(data, labels, thetas, layers):
        num_layers = len(layers)
        num_examples = data.shape[0]
        num_labels = layers[-1]

        # Forward propagation
        predictions = MultilayerPerceptron.feedforward_propagation(data, thetas, layers)
        # Build the targets: a one-hot row per example
        bitwise_labels = np.zeros((num_examples, num_labels))
        for example_index in range(num_examples):
            bitwise_labels[example_index][labels[example_index][0]] = 1
        # Cross-entropy: log(p) where the target bit is 1, log(1 - p) where it is 0.
        bit_set_cost = np.sum(np.log(predictions[bitwise_labels == 1]))
        bit_not_set_cost = np.sum(np.log(1 - predictions[bitwise_labels == 0]))
        cost = (-1 / num_examples) * (bit_set_cost + bit_not_set_cost)
        return cost

    '''
    Forward propagation
    '''
    @staticmethod
    def feedforward_propagation(data, thetas, layers):
        num_layers = len(layers)
        num_examples = data.shape[0]
        in_layer_activation = data

        for index in range(num_layers - 1):
            theta = thetas[index]
            out_layer_activation = MultilayerPerceptron.sigmoid(np.dot(in_layer_activation, theta.T))   # e.g. (1700, 785) @ (785, 25)
            # The raw result is num_examples * 25; prepend a bias column to get num_examples * 26.
            out_layer_activation = np.hstack((np.ones((num_examples, 1)), out_layer_activation))
            in_layer_activation = out_layer_activation

        # Drop the bias column from the final layer's output.
        return in_layer_activation[:, 1:]
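A shape trace through the forward pass (a standalone sketch; the data matrix already carries the bias column added by prepare_for_training, hence 785 columns):

import numpy as np

layers = [784, 25, 10]
thetas = MultilayerPerceptron.thetas_init(layers)
data = np.hstack((np.ones((32, 1)), np.random.rand(32, 784)))  # 32 examples, bias column first
predictions = MultilayerPerceptron.feedforward_propagation(data, thetas, layers)
print(predictions.shape)  # (32, 10): one sigmoid score per class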

1.7 Backpropagation

    '''
    Backpropagation
    '''
    @staticmethod
    def back_propagation(data, labels, thetas, layers):
        num_layers = len(layers)
        num_examples = data.shape[0]
        num_features = data.shape[1]
        num_label_types = layers[-1]

        deltas = {}

        # Initialize the gradient accumulators, one per weight matrix.
        for index in range(num_layers - 1):
            in_count = layers[index]
            out_count = layers[index + 1]
            # These mirror the theta shapes: for a three-layer network,
            # the first is 25 * 785 and the second is 10 * 26.
            deltas[index] = np.zeros((out_count, in_count + 1))

        for example_index in range(num_examples):
            layer_inputs = {}
            layer_activations = {}
            layer_activation = data[example_index, :].reshape((num_features, 1))    # 785*1, this example's input (bias included)
            layer_activations[0] = layer_activation
            # Propagate layer by layer
            for index in range(num_layers - 1):
                layer_theta = thetas[index]  # 25*785, then 10*26
                # Unlike the batched forward pass, each example is handled as a single column vector (785*1).
                # Keep the pre-activation z: the sigmoid gradient below must be evaluated at z, not at sigmoid(z).
                layer_input = np.dot(layer_theta, layer_activation)
                layer_activation = np.vstack((np.array([[1]]), MultilayerPerceptron.sigmoid(layer_input)))
                layer_inputs[index + 1] = layer_input   # pre-activation of the next layer
                layer_activations[index + 1] = layer_activation # next layer's activation with the bias entry prepended
            # The network's output: drop the bias entry.
            output_layer_activation = layer_activation[1:, :]

            delta = {}
            # One-hot encode the label
            bitwise_label = np.zeros((num_label_types, 1))
            bitwise_label[labels[example_index][0]] = 1

            # Output-layer error: prediction minus the one-hot target
            delta[num_layers - 1] = output_layer_activation - bitwise_label # 10*1
            # Walk backwards through layers L-1, L-2, ..., 2 applying the chain rule
            for index in range(num_layers - 2, 0, -1):
                layer_theta = thetas[index]
                next_delta = delta[index + 1]
                layer_input = layer_inputs[index]
                layer_input = np.vstack((np.array([[1]]), layer_input))
                # delta^l = (theta^l)^T delta^(l+1) .* g'(z^l)
                delta[index] = np.dot(layer_theta.T, next_delta) * MultilayerPerceptron.sigmoid_gradient(layer_input)
                # Drop the bias component
                delta[index] = delta[index][1:, :]
            for index in range(num_layers - 1):
                layer_delta = np.dot(delta[index+1], layer_activations[index].T)
                # Accumulate the outer products: 25*785 on the first pass, 10*26 on the second
                deltas[index] = deltas[index] + layer_delta

        for index in range(num_layers - 1):
            deltas[index] /= num_examples

        return deltas
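With the pieces above in place, a finite-difference gradient check on a tiny network confirms that back_propagation really computes the gradient of cost_function (a standalone sketch):

import numpy as np

layers = [4, 3, 2]
data = np.hstack((np.ones((5, 1)), np.random.rand(5, 4)))  # 5 examples, bias column first
labels = np.random.randint(0, 2, (5, 1))
thetas = MultilayerPerceptron.thetas_init(layers)

analytic = MultilayerPerceptron.thetas_unroll(
    MultilayerPerceptron.back_propagation(data, labels, thetas, layers))

flat = MultilayerPerceptron.thetas_unroll(thetas)
eps = 1e-5
numeric = np.zeros_like(flat)
for i in range(flat.size):
    plus, minus = flat.copy(), flat.copy()
    plus[i] += eps
    minus[i] -= eps
    cost_plus = MultilayerPerceptron.cost_function(
        data, labels, MultilayerPerceptron.thetas_roll(plus, layers), layers)
    cost_minus = MultilayerPerceptron.cost_function(
        data, labels, MultilayerPerceptron.thetas_roll(minus, layers), layers)
    numeric[i] = (cost_plus - cost_minus) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be very small (e.g. below 1e-6)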

1.8 Gradient descent

    '''
    One gradient evaluation: roll the flat vector into matrices, run
    backpropagation, and unroll the resulting gradients
    '''
    @staticmethod
    def gradient_step(data, labels, optimized_theta, layers):
        theta = MultilayerPerceptron.thetas_roll(optimized_theta, layers)
        thetas_rolled_gradients = MultilayerPerceptron.back_propagation(data, labels, theta, layers)
        thetas_unrolled_gradients = MultilayerPerceptron.thetas_unroll(thetas_rolled_gradients)
        return thetas_unrolled_gradients



    '''
    Gradient descent
    '''
    @staticmethod
    def gradient_descent(data, labels, unrolled_theta, layers, max_iter, alpha):
        optimized_theta = unrolled_theta    # the parameter vector being optimized
        cost_history = []
        for index in range(max_iter):
            # Evaluate the cost with the current (freshly updated) parameters
            cost = MultilayerPerceptron.cost_function(data, labels, MultilayerPerceptron.thetas_roll(optimized_theta, layers), layers)
            cost_history.append(cost)
            # Compute the gradient for this step
            theta_gradient = MultilayerPerceptron.gradient_step(data, labels, optimized_theta, layers)
            # Parameter update
            optimized_theta -= alpha * theta_gradient
        return optimized_theta, cost_history

1.9 The training module

    '''
    Training module
    '''
    def train(self, max_iter=1000, alpha=0.1):
        unrolled_theta = MultilayerPerceptron.thetas_unroll(self.thetas)

        optimized_theta, cost_history = MultilayerPerceptron.gradient_descent(self.data, self.labels, unrolled_theta, self.layers, max_iter, alpha)

        self.thetas = MultilayerPerceptron.thetas_roll(optimized_theta, self.layers)

        return self.thetas, cost_history
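Note that the scripts in the next two sections call a predict method that this walkthrough never shows. Here is a minimal sketch consistent with the class as written: it preprocesses the input exactly as __init__ does, runs the forward pass, and takes the arg-max class per row.

    '''
    Prediction module (a sketch; not shown in the original walkthrough)
    '''
    def predict(self, data):
        num_examples = data.shape[0]
        # Apply the same preprocessing as in __init__ (adds the bias column).
        data_processed = prepare_for_training(data, normalize_data=self.normalize_data)[0]
        # One sigmoid score per class for every example.
        predictions = MultilayerPerceptron.feedforward_propagation(data_processed, self.thetas, self.layers)
        # Pick the highest-scoring class, shaped (num_examples, 1) to match the labels.
        return np.argmax(predictions, axis=1).reshape((num_examples, 1))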

2. MNIST digit recognition

MNIST is the well-known handwritten-digit dataset and is easy to find online; here we use the CSV version for recognition.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import math

from ANN.MultilayerPerceptron import MultilayerPerceptron

data = pd.read_csv('data/mnist_csv/mnist_train.csv')
data2 = pd.read_csv('data/mnist_csv/mnist_test.csv')
# numbers_to_display = 25 # show 25 digits at once
# num_cell = math.ceil(math.sqrt(numbers_to_display))
# plt.figure(figsize=(10, 10))
# for index in range(numbers_to_display):
#     digit = data[index: index+1].values
#     # print(digit.shape)
#     digit_label = digit[0][0]
#     digit_pixels = digit[0][1:]
#     img_size = int(math.sqrt(digit_pixels.shape[0]))
#     frame = digit_pixels.reshape((img_size, img_size))  # reshape the flat pixels into a square image
#     plt.subplot(num_cell, num_cell, index + 1)
#     plt.imshow(frame, cmap='Greys')
#     plt.title(digit_label)
# plt.subplots_adjust(wspace=0.5, hspace=0.5) # adjust the margins between subplots
# plt.show()

train_data = data.sample(frac=0.1)
test_data = data2.sample(frac=0.1)
train_data = train_data.values
test_data = test_data.values

x_train = train_data[:, 1:]
y_train = train_data[:, [0]]
x_test = test_data[:, 1:]
y_test = test_data[:, [0]]

layers = [784, 25, 10]

normalize_data = True
max_iter = 300
alpha = 0.1

multilayer_perceptron = MultilayerPerceptron(x_train, y_train, layers, normalize_data)
thetas, costs = multilayer_perceptron.train(max_iter, alpha)
plt.plot(range(len(costs)), costs)
plt.xlabel('Gradient descent step')
plt.ylabel('cost')
plt.show()

y_train_predictions = multilayer_perceptron.predict(x_train)
y_test_predictions = multilayer_perceptron.predict(x_test)

train_p = np.sum(y_train_predictions == y_train)/y_train.shape[0] * 100
test_p = np.sum(y_test_predictions == y_test)/y_test.shape[0] * 100
print("训练准确率:", train_p)
print("测试准确率:", test_p)


Training accuracy: 73.8
Test accuracy: 74.2

3. Face recognition

The dataset is available from the website given in the book. We first preprocess the data: each image is converted into a suitable pixel matrix, and the labels are likewise converted into a matrix that is easy to work with.

import numpy as np
import os
from PIL import Image
import matplotlib.pyplot as plt
from ANN.MultilayerPerceptron import MultilayerPerceptron


def imgval(example):  # convert a PIL image into a flat list of scaled pixel values
    values = []
    for i in range(example.width):       # iterate over the image's columns
        for j in range(example.height):  # iterate over the image's rows
            values.append(example.getpixel((i, j)) / 100)  # scale the grayscale values down
    return values


'''
Read every image under a directory and label it from its filename
'''
def readimg(path):
    returndict = {}
    # os.walk traverses the tree depth-first; home is the current directory, files are the files inside it
    for home, dirs, files in os.walk(path): # walk every subdirectory under path
        for filename in files:  # read each image file
            val = []
            im = Image.open(os.path.join(home, filename)) # open this image
            val.append(im)
            namelist = filename.split("_")
            if namelist[1] == "left":   # assign the target label from the gaze direction in the filename
                val.append([0])
            elif namelist[1] == "right":
                val.append([1])
            elif namelist[1] == "up":
                val.append([2])
            elif namelist[1] == "straight":
                val.append([3])
            # Store the (image, label) pair
            returndict[filename] = val
    return returndict  # dictionary: filename -> [image, label]

'''
Convert every image into a pixel row and collect the labels into a list
'''
def picTwoXY(Imgs):
    x_train = []
    y_train = []
    for img in Imgs:
        x_train.append(imgval(img[0]))
        y_train.append(img[-1])
    return x_train, y_train

trainimgsrc = 'data/faces'  # training-set directory
testimgsrc = 'data/test'    # test-set directory
trainImgs = readimg(trainimgsrc)
testImgs = readimg(testimgsrc)
x_train, y_train = picTwoXY(trainImgs.values())
x_test, y_test = picTwoXY(testImgs.values())
x_train = np.array(x_train)
y_train = np.array(y_train)
x_test = np.array(x_test)
y_test = np.array(y_test)
print(type(x_train))
print(type(y_train))
print(x_train.shape)

layers = [960, 25, 4]

normalize_data = True
max_iter = 300
alpha = 0.1

multilayer_perceptron = MultilayerPerceptron(x_train, y_train, layers, normalize_data)
thetas, costs = multilayer_perceptron.train(max_iter, alpha)

plt.plot(range(len(costs)), costs)
plt.xlabel('Gradient descent step')
plt.ylabel('cost')
plt.show()

y_train_predictions = multilayer_perceptron.predict(x_train)
y_test_predictions = multilayer_perceptron.predict(x_test)

train_p = np.sum(y_train_predictions == y_train)/y_train.shape[0] * 100
test_p = np.sum(y_test_predictions == y_test)/y_test.shape[0] * 100
print("训练准确率:", train_p)
print("测试准确率:", test_p)

4. Summary

This chapter covered ANNs by implementing forward and backward propagation by hand. The accuracy is mediocre, hovering between 70% and 80%; a hand-rolled implementation seems to top out around this level and cannot match the accuracy of running the same model in a framework like PyTorch.

Keep at it; here's hoping 2022 passes quickly.
