2 Convolutional Neural Networks (CNN)


Contents

    • LeNet-5
    • AlexNet
    • GoogLeNet
    • ResNet

All code in this chapter runs successfully on Kaggle.

LeNet-5

import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
from matplotlib_inline import backend_inline

backend_inline.set_matplotlib_formats('svg')
%matplotlib inline
# Prepare the data
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize(0.1307, 0.3081)])

train_dataset = datasets.MNIST(root='/kaggle/working/',
                               train=True,
                               transform=transform,
                               download=True)
test_dataset = datasets.MNIST(root='/kaggle/working/',
                              train=False,
                              transform=transform,
                              download=True)

# Batch loaders
train_loader = DataLoader(dataset=train_dataset, batch_size=256, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=256, shuffle=False)
# Build the network
class CNN(nn.Module):

    def __init__(self):
        super(CNN, self).__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, padding=2),
            nn.Tanh(), nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(16, 120, kernel_size=5), nn.Tanh(), nn.Flatten(),
            nn.Linear(120, 84), nn.Tanh(), nn.Linear(84, 10))

    def forward(self, x):
        return self.net(x)
# Inspect the network structure
X = torch.rand(size=(1, 1, 28, 28))
for layer in CNN().net:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape: \t', X.shape)
Conv2d output shape: 	 torch.Size([1, 6, 28, 28])
Tanh output shape: 	 torch.Size([1, 6, 28, 28])
AvgPool2d output shape: 	 torch.Size([1, 6, 14, 14])
Conv2d output shape: 	 torch.Size([1, 16, 10, 10])
Tanh output shape: 	 torch.Size([1, 16, 10, 10])
AvgPool2d output shape: 	 torch.Size([1, 16, 5, 5])
Conv2d output shape: 	 torch.Size([1, 120, 1, 1])
Tanh output shape: 	 torch.Size([1, 120, 1, 1])
Flatten output shape: 	 torch.Size([1, 120])
Linear output shape: 	 torch.Size([1, 84])
Tanh output shape: 	 torch.Size([1, 84])
Linear output shape: 	 torch.Size([1, 10])
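Each printed shape follows from the standard output-size formula for convolutions and pooling, out = floor((in + 2*padding - kernel) / stride) + 1. The helper below is a hypothetical sketch (not part of the original code) that traces the 28×28 MNIST input through LeNet-5's feature extractor:

```python
# Hypothetical helper: spatial output size of a conv/pool layer.
def out_size(n, kernel, stride=1, pad=0):
    return (n + 2 * pad - kernel) // stride + 1

s = out_size(28, 5, pad=2)    # Conv2d(1, 6, 5, padding=2) -> 28
s = out_size(s, 2, stride=2)  # AvgPool2d(2, 2)            -> 14
s = out_size(s, 5)            # Conv2d(6, 16, 5)           -> 10
s = out_size(s, 2, stride=2)  # AvgPool2d(2, 2)            -> 5
s = out_size(s, 5)            # Conv2d(16, 120, 5)         -> 1
print(s)  # 1, matching the [1, 120, 1, 1] shape printed above
```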
# Check whether GPU acceleration is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Define the model
model = CNN()

# Use DataParallel to parallelize the model across multiple GPUs
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    model = nn.DataParallel(model)

# Move the model to the device
model.to(device)

# Define the loss function and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.9)

# Training parameters
epochs = 5
losses = []

# Training loop
for epoch in range(epochs):
    for (x, y) in train_loader:
        x, y = x.to(device), y.to(device)
        Pred = model(x)
        loss = loss_fn(Pred, y)
        losses.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Plot the loss curve
plt.plot(range(len(losses)), losses)
plt.show()
Let's use 2 GPUs!

(Figure: per-batch training loss curve)
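The curve plots one loss value per batch, so it is quite noisy. If you want a smoother view, a simple trailing moving average (a hypothetical addition, not in the original code) can be applied to `losses` before plotting:

```python
def running_mean(values, window=50):
    # Trailing-window moving average; early entries use a shorter window.
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        out.append(sum(values[lo:i + 1]) / (i + 1 - lo))
    return out

print(running_mean([3.0, 1.0, 2.0], window=2))  # [3.0, 2.0, 1.5]
```

Plot `running_mean(losses)` instead of `losses` to see the trend more clearly.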

# Evaluate on the test set
correct = 0
total = 0

with torch.no_grad():
    for (x, y) in test_loader:
        x, y = x.to(device), y.to(device)  # use the device chosen above rather than hard-coding 'cuda:0'
        Pred = model(x)
        _, predicted = torch.max(Pred.data, dim=1)
        correct += torch.sum((predicted == y))
        total += y.size(0)

print(f'Test-set accuracy: {100*correct/total}%')  # better than the plain deep network in the previous section
Test-set accuracy: 98.5199966430664%
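LeNet-5 achieves this accuracy with very few parameters. The count can be worked out by hand: a conv layer has out_channels × (in_channels × k² + 1) parameters and a linear layer has in × out + out. The sketch below (hypothetical, not part of the original code) totals the layers defined above:

```python
def conv_params(c_in, c_out, k):
    return c_out * (c_in * k * k + 1)   # weights plus one bias per output channel

def linear_params(n_in, n_out):
    return n_in * n_out + n_out

total = (conv_params(1, 6, 5)       # 156
         + conv_params(6, 16, 5)    # 2416
         + conv_params(16, 120, 5)  # 48120
         + linear_params(120, 84)   # 10164
         + linear_params(84, 10))   # 850
print(total)  # 61706 trainable parameters in all
```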

AlexNet

Only the network architecture changes here. Training takes a while, so a GPU is recommended.

import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
from matplotlib_inline import backend_inline

backend_inline.set_matplotlib_formats('svg')
%matplotlib inline

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize(224),
    transforms.Normalize(0.1307, 0.3081)
])
# Download the training and test sets
train_Data = datasets.FashionMNIST(root='/kaggle/working/',
                                   train=True,
                                   download=True,
                                   transform=transform)
test_Data = datasets.FashionMNIST(root='/kaggle/working/',
                                  train=False,
                                  download=True,
                                  transform=transform)

# Batch loaders
train_loader = DataLoader(train_Data, shuffle=True, batch_size=128)
test_loader = DataLoader(test_Data, shuffle=False, batch_size=128)
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to /kaggle/working/FashionMNIST/raw/train-images-idx3-ubyte.gz
Extracting /kaggle/working/FashionMNIST/raw/train-images-idx3-ubyte.gz to /kaggle/working/FashionMNIST/raw
(similar download and extraction logs for the other three FashionMNIST files omitted)
# Build the network
class CNN(nn.Module):

    def __init__(self):
        super(CNN, self).__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2), nn.Flatten(),
            nn.Linear(6400, 4096), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(4096, 10))

    def forward(self, x):
        y = self.net(x)
        return y
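The first `nn.Linear` takes 6400 inputs because of how the 224×224 image shrinks through the stack. Using the conv/pool output-size formula out = floor((in + 2p - k)/s) + 1, the hypothetical sketch below (not part of the original code) reproduces that number:

```python
def out_size(n, kernel, stride=1, pad=0):
    return (n + 2 * pad - kernel) // stride + 1

s = out_size(224, 11, stride=4, pad=1)  # Conv2d(1, 96, 11, 4, p=1) -> 54
s = out_size(s, 3, stride=2)            # MaxPool2d(3, 2)           -> 26
s = out_size(s, 5, pad=2)               # Conv2d(96, 256, 5, p=2)   -> 26
s = out_size(s, 3, stride=2)            # MaxPool2d(3, 2)           -> 12
# the three 3x3, padding-1 convs keep the size at 12
s = out_size(s, 3, stride=2)            # final MaxPool2d(3, 2)     -> 5
print(256 * s * s)  # 6400, the input width of the first Linear layer
```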


# Check whether GPU acceleration is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Define the model
model = CNN()
# Use DataParallel to parallelize the model across multiple GPUs
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    model = nn.DataParallel(model)
# Move the model to the device
model.to(device)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)


# Training function
def train(epoch):
    running_loss = 0.0
    for batch_idx, (x, y) in enumerate(train_loader, 0):
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        Pred = model(x)
        loss = loss_fn(Pred, y)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:  # print the average loss every 300 batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0


# Test function
def test():
    model.eval()  # disable Dropout during evaluation
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data,
                                     dim=1)  # returns the max element of each row and its index
            total += labels.size(0)
            correct += torch.sum((predicted == labels)).item()  # count correctly predicted samples
    model.train()  # restore training mode before the next epoch
    accuracy = 100 * correct / total
    accuracy_list.append(accuracy)
    print('Accuracy on test set: %d %%' % accuracy)


if __name__ == '__main__':
    accuracy_list = []
    # evaluate on the test set after every epoch
    for epoch in range(10):
        train(epoch)
        test()
    plt.plot(accuracy_list)
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.grid()
    plt.show()
Let's use 2 GPUs!
[1,   300] loss: 1.432
Accuracy on test set: 78 %
[2,   300] loss: 0.465
Accuracy on test set: 85 %
[3,   300] loss: 0.352
Accuracy on test set: 86 %
[4,   300] loss: 0.306
Accuracy on test set: 87 %
[5,   300] loss: 0.275
Accuracy on test set: 89 %
[6,   300] loss: 0.248
Accuracy on test set: 88 %
[7,   300] loss: 0.227
Accuracy on test set: 89 %
[8,   300] loss: 0.212
Accuracy on test set: 90 %
[9,   300] loss: 0.201
Accuracy on test set: 90 %
[10,   300] loss: 0.186
Accuracy on test set: 90 %

(Figure: test-set accuracy per epoch)

GoogLeNet

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision import datasets
import matplotlib.pyplot as plt
%matplotlib inline

# Render figures as SVG for sharper display
from matplotlib_inline import backend_inline
backend_inline.set_matplotlib_formats('svg')
# Prepare the dataset
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(0.1307, 0.3081)])
train_Data = datasets.FashionMNIST(root='/kaggle/working/',
                                   train=True,
                                   download=False,  # reuses the files downloaded in the AlexNet section
                                   transform=transform)
test_Data = datasets.FashionMNIST(root='/kaggle/working/',
                                  train=False,
                                  download=False,
                                  transform=transform)

# Batch loaders
train_loader = DataLoader(train_Data, shuffle=True, batch_size=128)
test_loader = DataLoader(test_Data, shuffle=False, batch_size=128)

# Inception block
class Inception(nn.Module):
    def __init__(self, in_channels):
        super(Inception, self).__init__()
        self.branch1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch2 = nn.Sequential(nn.Conv2d(in_channels, 16, kernel_size=1),
                                     nn.Conv2d(16, 24, kernel_size=3, padding=1),
                                     nn.Conv2d(24, 24, kernel_size=3, padding=1))
        self.branch3 = nn.Sequential(nn.Conv2d(in_channels, 16, kernel_size=1),
                                     nn.Conv2d(16, 24, kernel_size=5, padding=2))
        self.branch4 = nn.Conv2d(in_channels, 24, kernel_size=1)
        
    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        branch4 = self.branch4(x)
        outputs = [branch1, branch2, branch3, branch4]
        return torch.cat(outputs, 1)
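`torch.cat(outputs, 1)` concatenates the four branches along the channel dimension (dim 1), so every Inception block here emits 16 + 24 + 24 + 24 = 88 channels regardless of `in_channels`. That is why the network below uses `nn.Conv2d(88, 20, ...)` and, after the final 4×4 feature map, `nn.Linear(1408, 10)`. A quick check, with the per-branch values copied from the definitions above:

```python
# Per-branch output channels, read off the conv definitions above.
branch_out = [16, 24, 24, 24]   # branch1, branch2, branch3, branch4
total = sum(branch_out)
print(total)          # 88 channels out of every Inception block
print(total * 4 * 4)  # 1408 inputs to the final Linear, from an 88 x 4 x 4 map
```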
    
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 10, kernel_size=5), nn.ReLU(),
                                 nn.MaxPool2d(kernel_size=2, stride=2),
                                 Inception(in_channels=10),
                                 nn.Conv2d(88, 20, kernel_size=5), nn.ReLU(),
                                 nn.MaxPool2d(kernel_size=2, stride=2),
                                 Inception(in_channels=20),
                                 nn.Flatten(), nn.Linear(1408, 10))
    def forward(self, x):
        y = self.net(x)
        return y
    
# Inspect the network structure
X = torch.rand(size=(1, 1, 28, 28))
for layer in CNN().net:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape: \t', X.shape)
Conv2d output shape: 	 torch.Size([1, 10, 24, 24])
ReLU output shape: 	 torch.Size([1, 10, 24, 24])
MaxPool2d output shape: 	 torch.Size([1, 10, 12, 12])
Inception output shape: 	 torch.Size([1, 88, 12, 12])
Conv2d output shape: 	 torch.Size([1, 20, 8, 8])
ReLU output shape: 	 torch.Size([1, 20, 8, 8])
MaxPool2d output shape: 	 torch.Size([1, 20, 4, 4])
Inception output shape: 	 torch.Size([1, 88, 4, 4])
Flatten output shape: 	 torch.Size([1, 1408])
Linear output shape: 	 torch.Size([1, 10])
# Check whether GPU acceleration is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Define the model
model = CNN()
# Use DataParallel to parallelize the model across multiple GPUs
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    model = nn.DataParallel(model)
# Move the model to the device
model.to(device)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


# Training function
def train(epoch):
    running_loss = 0.0
    for batch_idx, (x, y) in enumerate(train_loader, 0):
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        Pred = model(x)
        loss = loss_fn(Pred, y)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:  # print the average loss every 300 batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0


# Test function
def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data,
                                     dim=1)  # returns the max element of each row and its index
            total += labels.size(0)
            correct += torch.sum((predicted == labels)).item()  # count correctly predicted samples
    accuracy = 100 * correct / total
    accuracy_list.append(accuracy)
    print('Accuracy on test set: %.3f %%' % accuracy)


if __name__ == '__main__':
    accuracy_list = []
    # evaluate on the test set after every epoch
    for epoch in range(10):
        train(epoch)
        test()
    plt.plot(accuracy_list)
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.grid()
    plt.show()
Let's use 2 GPUs!
[1,   300] loss: 0.937
Accuracy on test set: 79.810 %
[2,   300] loss: 0.520
Accuracy on test set: 82.550 %
[3,   300] loss: 0.437
Accuracy on test set: 85.050 %
[4,   300] loss: 0.399
Accuracy on test set: 85.750 %
[5,   300] loss: 0.371
Accuracy on test set: 86.450 %
[6,   300] loss: 0.350
Accuracy on test set: 87.140 %
[7,   300] loss: 0.337
Accuracy on test set: 87.560 %
[8,   300] loss: 0.323
Accuracy on test set: 87.580 %
[9,   300] loss: 0.312
Accuracy on test set: 87.650 %
[10,   300] loss: 0.305
Accuracy on test set: 88.240 %

(Figure: test-set accuracy per epoch)

ResNet

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision import datasets
import matplotlib.pyplot as plt
%matplotlib inline

# Render figures as SVG for sharper display
from matplotlib_inline import backend_inline
backend_inline.set_matplotlib_formats('svg')
# Prepare the dataset
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(0.1307, 0.3081)])
train_Data = datasets.FashionMNIST(root='/kaggle/working/',
                                   train=True,
                                   download=False,  # reuses the files downloaded in the AlexNet section
                                   transform=transform)
test_Data = datasets.FashionMNIST(root='/kaggle/working/',
                                  train=False,
                                  download=False,
                                  transform=transform)

# Batch loaders
train_loader = DataLoader(train_Data, shuffle=True, batch_size=128)
test_loader = DataLoader(test_Data, shuffle=False, batch_size=128)

# Residual block
class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
    def forward(self, x):
        y = self.net(x)
        return nn.functional.relu(x + y)  # the key step: add the identity shortcut before the final activation
    
class CNN(nn.Module):

    def __init__(self):
        super(CNN, self).__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2), ResidualBlock(16),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2), ResidualBlock(32),
            nn.Flatten(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        y = self.net(x)
        return y
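The final `nn.Linear(512, 10)` width again follows from the output-size formula out = floor((in + 2p - k)/s) + 1, since the residual blocks preserve the spatial size. A hypothetical trace, not part of the original code:

```python
def out_size(n, kernel, stride=1, pad=0):
    return (n + 2 * pad - kernel) // stride + 1

s = out_size(28, 5)           # Conv2d(1, 16, 5)  -> 24
s = out_size(s, 2, stride=2)  # MaxPool2d(2)      -> 12 (ResidualBlock(16) keeps 12)
s = out_size(s, 5)            # Conv2d(16, 32, 5) -> 8
s = out_size(s, 2, stride=2)  # MaxPool2d(2)      -> 4  (ResidualBlock(32) keeps 4)
print(32 * s * s)  # 512, the input width of the final Linear layer
```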
    
# Inspect the network structure
X = torch.rand(size=(1, 1, 28, 28))
for layer in CNN().net:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape: \t', X.shape)
Conv2d output shape: 	 torch.Size([1, 16, 24, 24])
ReLU output shape: 	 torch.Size([1, 16, 24, 24])
MaxPool2d output shape: 	 torch.Size([1, 16, 12, 12])
ResidualBlock output shape: 	 torch.Size([1, 16, 12, 12])
Conv2d output shape: 	 torch.Size([1, 32, 8, 8])
ReLU output shape: 	 torch.Size([1, 32, 8, 8])
MaxPool2d output shape: 	 torch.Size([1, 32, 4, 4])
ResidualBlock output shape: 	 torch.Size([1, 32, 4, 4])
Flatten output shape: 	 torch.Size([1, 512])
Linear output shape: 	 torch.Size([1, 10])
# Check whether GPU acceleration is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Define the model
model = CNN()
# Use DataParallel to parallelize the model across multiple GPUs
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    model = nn.DataParallel(model)
# Move the model to the device
model.to(device)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.5)


# Training function
def train(epoch):
    running_loss = 0.0
    for batch_idx, (x, y) in enumerate(train_loader, 0):
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        Pred = model(x)
        loss = loss_fn(Pred, y)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:  # print the average loss every 300 batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0


# Test function
def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data,
                                     dim=1)  # returns the max element of each row and its index
            total += labels.size(0)
            correct += torch.sum((predicted == labels)).item()  # count correctly predicted samples
    accuracy = 100 * correct / total
    accuracy_list.append(accuracy)
    print('Accuracy on test set: %.3f %%' % accuracy)


if __name__ == '__main__':
    accuracy_list = []
    # evaluate on the test set after every epoch
    for epoch in range(10):
        train(epoch)
        test()
    plt.plot(accuracy_list)
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.grid()
    plt.show()
Let's use 2 GPUs!
[1,   300] loss: 0.595
Accuracy on test set: 84.480 %
[2,   300] loss: 0.338
Accuracy on test set: 87.170 %
[3,   300] loss: 0.288
Accuracy on test set: 88.620 %
[4,   300] loss: 0.264
Accuracy on test set: 88.880 %
[5,   300] loss: 0.245
Accuracy on test set: 89.740 %
[6,   300] loss: 0.233
Accuracy on test set: 90.020 %
[7,   300] loss: 0.217
Accuracy on test set: 89.720 %
[8,   300] loss: 0.206
Accuracy on test set: 89.990 %
[9,   300] loss: 0.194
Accuracy on test set: 90.500 %
[10,   300] loss: 0.188
Accuracy on test set: 89.310 %

(Figure: test-set accuracy per epoch)

Related articles:
"PyTorch Deep Learning Practice", Lecture 10: Convolutional Neural Networks (Basics)
"PyTorch Deep Learning Practice", Lecture 11: Convolutional Neural Networks (Advanced)
7.6 Residual Networks (ResNet) — Dive into Deep Learning
Deep Residual Learning for Image Recognition

