PyTorch in Practice: CNN-Based Captcha Recognition


1 Requirements

GitHub - xhh890921/cnn-captcha-pytorch: the hands-on AI project "Captcha Recognition" from the "小黑黑讲AI" series


2 Interface

  1. Meaning
    • In the optim.Adam interface, the lr argument is the learning rate. The learning rate is a key hyperparameter of the optimization algorithm: it determines the step size by which the model parameters are updated along the gradient direction in each iteration. In short, it controls how fast the model learns.
  2. How it works
    • Taking gradient descent as an example, the parameter update in each iteration is generally θ ← θ − η·∇θL(θ), where θ denotes the model parameters, η is the learning rate, and ∇θL(θ) is the gradient of the loss function with respect to the parameters. In the Adam optimizer the update is more involved (it maintains first- and second-moment estimates of the gradient), but the learning rate lr still plays the same role.
    • Adam adjusts the direction and magnitude of each parameter update based on the first-moment estimate (similar to a mean) and the second-moment estimate (similar to a variance) of the gradient, and the learning rate lr then scales the resulting step. With a larger lr the update steps are larger and the model moves through parameter space quickly; with a smaller lr the steps are smaller and it moves slowly.
  3. Effect on training
    • Learning rate too large
      • If the learning rate is set too large, the model may fail to converge, and the gradients may even explode. For example, when training a neural network the parameters can overshoot on every iteration, so the loss keeps increasing instead of decreasing. In a simple linear regression model, an overly large learning rate can make the parameters "jump over" the optimum, and because each step is so large it becomes hard to get back near the optimum.
    • Learning rate too small
      • If the learning rate is set too small, training becomes very slow and many more iterations are needed to converge well, which increases training time and compute cost. For a complex deep model (such as a CNN for image recognition), an overly small learning rate can take several times, or even tens of times, longer to reach the same training quality as a well-chosen one.
  4. How to choose a suitable learning rate
    • Rule of thumb: start with commonly used values such as 0.001 or 0.0001 and watch the model's behavior early in training, e.g. how quickly and how stably the loss decreases.
    • Learning rate scheduling: adjust the learning rate dynamically as training progresses. For example, use a larger learning rate early on so the model quickly learns the coarse patterns in the data, then gradually decrease it so the parameters can be fine-tuned toward the optimum. Common schedules include step decay (lowering the learning rate at specific stages of training) and cosine annealing (decaying it along a cosine curve); a minimal sketch follows this list.
    • Hyperparameter search: use search algorithms such as grid search, random search, or the more advanced Bayesian optimization to find a suitable learning rate, trying values within a range and choosing the one that performs best on a validation set.
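
A minimal sketch of the scheduling idea. The Adam call mirrors the train.py script below; the toy model, the StepLR schedule and its step_size/gamma values are illustrative assumptions, not part of the original project.

import torch
import torch.nn as nn
from torch import optim

# A toy model and a fixed batch of data, only so there is something to optimize (illustrative).
model = nn.Linear(10, 2)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))

# lr controls the step size of each update; 0.0001 matches the config.json used later in this article.
optimizer = optim.Adam(model.parameters(), lr=0.0001)
criterion = nn.CrossEntropyLoss()

# Step decay: every 50 epochs, multiply the learning rate by 0.5.
# step_size and gamma are assumed values chosen for illustration.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(200):
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()  # decay the learning rate according to the schedule
    if (epoch + 1) % 50 == 0:
        print(epoch + 1, scheduler.get_last_lr(), loss.item())

Swapping StepLR for optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200) gives the cosine-annealing variant mentioned above.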

3 Example

config.json

{
  "train_data_path": "./data/train-digit/",
  "test_data_path": "./data/test-digit/",
  "train_num": 2000,
  "test_num": 1000,
  "characters": "0123456789",
  "digit_num": 1,
  "img_width": 200,
  "img_height": 100,
  
  "resize_width": 128,
  "resize_height": 128,
  "batch_size": 128,
  "epoch_num": 200,
  "learning_rate": 0.0001,

  "model_save_path": "./model/",
  "model_name": "captcha.1digit.2k",
  "test_model_path": "./model/captcha.1digit.2k"
}

generate.py

# Import the captcha module ImageCaptcha and the random module
from captcha.image import ImageCaptcha
import random

# generate_data generates captcha images.
# num    : number of captcha images to generate
# count  : number of characters in each captcha image
# chars  : the character set the captcha draws from
# path   : directory where the images are saved
# width, height : width and height of each image
def generate_data(num, count, chars, path, width, height):
    # Loop over i to generate num captcha images
    for i in range(num):
        # Print the index of the captcha being generated
        print("generate %d"%(i))
        # Create a captcha generator with ImageCaptcha
        generator = ImageCaptcha(width=width, height=height)
        random_str = "" # holds the characters drawn on this captcha
        # Append count characters to random_str
        for j in range(count):
            # Each character is chosen from chars with random.choice
            choose = random.choice(chars)
            random_str += choose
        # Call generate_image to produce the captcha image img
        img = generator.generate_image(random_str)
        # Add noise dots to the captcha
        generator.create_noise_dots(img, '#000000', 4, 40)
        # Add a noise curve to the captcha
        generator.create_noise_curve(img, '#000000')
        # File name convention: captcha string random_str, underscore, sample index
        file_name = path + random_str + '_' + str(i) + '.jpg'
        img.save(file_name) # save the file

import json
import os

if __name__ == '__main__':
    # Open the config.json configuration file
    with open("config.json", "r") as f:
        # Parse the JSON with json.load; the result is stored in config
        config = json.load(f)

    # Read each parameter from the config,
    # using config["parameter_name"] style indexing
    train_data_path = config["train_data_path"]  # training data path
    test_data_path = config["test_data_path"]  # test data path

    train_num = config["train_num"]  # number of training samples
    test_num = config["test_num"] # number of test samples

    characters = config["characters"]  # character set used by the captchas
    digit_num = config["digit_num"]  # number of characters per image
    img_width = config["img_width"]  # image width
    img_height = config["img_height"]  # image height


    # Check whether the data directories exist;
    # if not, create them
    if not os.path.exists(train_data_path):
        os.makedirs(train_data_path)
    if not os.path.exists(test_data_path):
        os.makedirs(test_data_path)

    # Call generate_data to create the training data
    generate_data(train_num, digit_num, characters,
                  train_data_path, img_width, img_height)
    # Call generate_data to create the test data
    generate_data(test_num, digit_num, characters,
                  test_data_path, img_width, img_height)

dataset.py

from torch.utils.data import Dataset
from PIL import Image
import torch
import os

# CaptchaDataset inherits from Dataset and is used to read the captcha data
class CaptchaDataset(Dataset):
    # __init__ initializes the dataset.
    # data_dir is the path of the data directory, transform is the data transform object,
    # and characters is the character set used by the captchas, passed in from outside.
    def __init__(self, data_dir, transform, characters):
        self.file_list = list() # stores the path of every sample
        # Use os.listdir to get all files in data_dir
        files = os.listdir(data_dir)
        for file in files: # iterate over files
            # Join the directory path and the file name into a file path
            path = os.path.join(data_dir, file)
            # Append path to the file_list list
            self.file_list.append(path)
        # Save the transform object in the class
        self.transform = transform

        # Build a character-to-index dictionary
        self.char2int = {}
        # The dictionary is built from the character set passed in from outside
        for i, char in enumerate(characters):
            self.char2int[char] = i

    def __len__(self):
        # Return the number of samples in the dataset directly.
        # Overriding this method lets len(dataset) return the dataset size.
        return len(self.file_list)

    # Given an index, return the corresponding data and label,
    # so dataset[i] yields the i-th sample.
    def __getitem__(self, index):
        file_path = self.file_list[index] # path of the sample
        # Open the file and use convert('L') to turn the image into grayscale.
        # Color is not needed to recognize the characters, and grayscale input
        # makes the model more robust.
        image = Image.open(file_path).convert('L')
        # Apply the transform, converting the image into a tensor
        image = self.transform(image)
        # Read the character label of this image from the file name
        label_char = os.path.basename(file_path).split('_')[0]

        # After obtaining the character label label_char,
        label = list()
        for char in label_char: # iterate over the string label_char
            # convert each character to its index and append it to label
            label.append(self.char2int[char])
        # Convert label to a tensor and use it as the training label
        label = torch.tensor(label, dtype=torch.long)
        return image, label # return image and label


from torch.utils.data import DataLoader
from torchvision import transforms
import json

if __name__ == '__main__':
    with open("config.json", "r") as f:
        config = json.load(f)

    height = config["resize_height"]  # image height
    width = config["resize_width"]  # image width
    # Define the data transform object transform.
    # transforms.Compose builds the preprocessing pipeline,
    # here with two steps: Resize and ToTensor.
    transform = transforms.Compose([
        transforms.Resize((height, width)),  # scale the image to the given size
        transforms.ToTensor()])  # convert the image to a tensor

    data_path = config["train_data_path"]  # training data directory
    characters = config["characters"]  # character set used by the captchas
    batch_size = config["batch_size"]
    epoch_num = config["epoch_num"]

    # Create the CaptchaDataset object dataset
    dataset = CaptchaDataset(data_path, transform, characters)
    # Create the data loader data_load:
    # dataset is the dataset,
    # batch_size sets the mini-batch size (128 in config.json),
    # shuffle=True reshuffles the data at every epoch.
    data_load = DataLoader(dataset,
                           batch_size = batch_size,
                           shuffle = True)

    # A loop that simulates the data reading of mini-batch gradient descent.
    # The outer loop is the number of passes over the whole training set (epochs).
    # Each epoch iterates over all of the training data.
    for epoch in range(epoch_num):
        print("epoch = %d"%(epoch))
        # The inner loop iterates over the data in mini-batches via the dataloader.
        # batch_idx is the index of the current batch;
        # data and label are the training data and labels of this batch.
        for batch_idx, (data, label) in enumerate(data_load):
            print("batch_idx = %d label = %s"%(batch_idx, label))

model.py

import torch.nn as nn

# CNNModel inherits from torch.nn.Module
class CNNModel(nn.Module):
    # Define the convolutional neural network.
    # The __init__ parameter list takes the training image height and width,
    # the number of characters per image digit_num,
    # and the number of classes class_num.
    def __init__(self, height, width, digit_num, class_num):
        super(CNNModel, self).__init__()
        self.digit_num = digit_num # save digit_num in the class

        # The first convolution block conv1:
        # one convolution layer,
        # one ReLU activation and one 2x2 max pooling
        self.conv1 = nn.Sequential(
            # The convolution layer is defined with Conv2d:
            # 1 input channel, 32 output channels,
            # a 3x3 kernel,
            # and padding='same',
            # which keeps the input and output feature maps the same size.
            nn.Conv2d(1, 32, kernel_size=3, padding='same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(0.25))

        # The second convolution block, with the same structure as conv1
        self.conv2 = nn.Sequential(
            # 32 input channels and 64 output channels
            nn.Conv2d(32, 64, kernel_size=3, padding='same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(0.25))

        # The third convolution block, with the same structure as conv1
        self.conv3 = nn.Sequential(
            # 64 input channels and 64 output channels
            nn.Conv2d(64, 64, kernel_size=3, padding='same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(0.25))

        # After the three convolution blocks, compute the number of inputs to the
        # fully connected layer, input_num. It equals the image height and width,
        # each divided by 8, multiplied by the number of output feature maps, 64.
        # The division by 8 is because the three 2x2 max poolings shrink
        # the height and width to 1/8 of the original.
        input_num = (height//8) * (width//8) * 64
        self.fc1 = nn.Sequential(
            nn.Linear(input_num, 1024),
            nn.ReLU(),
            nn.Dropout(0.25))

        # The output layer has class_num neurons
        self.fc2 = nn.Sequential(
            nn.Linear(1024, class_num),
        )
        # Training later uses the cross-entropy loss CrossEntropyLoss,
        # which applies softmax internally, so softmax is not defined explicitly here.

    # Forward pass.
    # The input is a four-dimensional tensor x whose dimensions are
    # sample count, input channels, image height and image width.
    def forward(self, x): # [n, 1, 128, 128]
        # Pass the input tensor x through each layer in order;
        # each layer changes the shape of x.
        out = self.conv1(x) # [n, 32, 64, 64]
        out = self.conv2(out) # [n, 64, 32, 32]
        out = self.conv3(out) # [n, 64, 16, 16]
        # Use view to flatten the tensor from n*64*16*16 to n*16384
        out = out.view(out.size(0), -1) # [n, 16384]
        out = self.fc1(out) # [n, 1024]
        # After 3 convolution blocks and 2 fully connected layers,
        # the result is an n*class_num tensor
        out = self.fc2(out) # [n, class_num]

        # Use the digit_num passed in at initialization to reshape the final
        # output to n * digit_num * (number of character classes)
        out = out.view(out.size(0), self.digit_num, -1)
        return out

import json
if __name__ == '__main__':
    with open("config.json", "r") as f:
        config = json.load(f)

    height = config["resize_height"]  # image height
    width = config["resize_width"]  # image width
    characters = config["characters"]  # character set used by the captchas
    digit_num = config["digit_num"]
    class_num = len(characters) * digit_num

    # Create a CNNModel instance
    model = CNNModel(height, width, digit_num, class_num)
    print(model) # print it to inspect the model structure
    print("")

train.py

# Import the CaptchaDataset class from dataset.py
from dataset import CaptchaDataset
# Import the CNNModel class from model.py
from model import CNNModel

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torch import optim
import json
import os

if __name__ == '__main__':
    # Open the configuration file
    with open("config.json", "r") as f:
        config = json.load(f)

    # Read the resize_height and resize_width parameters,
    # the final height and width the images are resized to; they are used to build the transform.
    height = config["resize_height"]  # image height
    width = config["resize_width"]  # image width
    # Define the data transform object transform.
    # transforms.Compose builds the preprocessing pipeline,
    # here with RandomRotation, Resize and ToTensor.
    transform = transforms.Compose([
        transforms.RandomRotation(10), # random rotation as data augmentation
        transforms.Resize((height, width)),  # scale the image to the given size
        transforms.ToTensor()])  # convert the image to a tensor

    train_data_path = config["train_data_path"]  # training data path
    characters = config["characters"]  # captcha character set
    batch_size = config["batch_size"] # mini-batch size
    epoch_num = config["epoch_num"] # number of epochs
    digit_num = config["digit_num"] # number of characters per image
    learning_rate = config["learning_rate"] # learning rate
    # The number of classes class_num equals the character-set size times the number of characters
    class_num = len(characters) * digit_num

    model_save_path = config["model_save_path"] # directory where the model is saved
    model_name = config["model_name"] # model name
    model_save_name = model_save_path + "/" + model_name
    # Create the model directory
    if not os.path.exists(model_save_path):
        os.makedirs(model_save_path)

    print("resize_height = %d"%(height))
    print("resize_width = %d" %(width))
    print("train_data_path = %s"%(train_data_path))
    print("characters = %s" % (characters))
    print("batch_size = %d" % (batch_size))
    print("epoch_num = %d" % (epoch_num))
    print("digit_num = %d" % (digit_num))
    print("class_num = %d" % (class_num))
    print("learning_rate = %lf" % (learning_rate))
    print("model_save_name = %s" % (model_save_name))
    print("")

    # Create the CaptchaDataset object train_data
    train_data = CaptchaDataset(train_data_path, transform, characters)
    # Use DataLoader to define the data loader train_load:
    # train_data is the training set,
    # batch_size sets the mini-batch size (128 in config.json),
    # shuffle=True reshuffles the data at every epoch.
    train_load = DataLoader(train_data,
                            batch_size = batch_size,
                            shuffle = True)
    # With 2000 training samples and a mini-batch size of 128,
    # the data is split into 16 mini-batches: the first 15 contain 128 samples each
    # and the last one contains 80. 15*128+80=2000

    # Define the device object: use the GPU if CUDA is available, otherwise the CPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Create a CNNModel object and move it to the device
    model = CNNModel(height, width, digit_num, class_num).to(device)
    model.train()

    # The learning rate must be specified. The default is 0.001; here it is changed to 0.0001,
    # because with more complex data a smaller learning rate makes training more stable.
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    criterion = nn.CrossEntropyLoss()  # create a cross-entropy loss function

    print("Begin training:")
    # The number of epochs is raised from 50 to 200
    for epoch in range(epoch_num):  # the outer loop is the number of passes over the training set
        # The inner loop iterates over the data in mini-batches via train_load.
        # batch_idx is the index of the current batch;
        # (data, label) are the training data and labels of this batch.
        for batch_idx, (data, label) in enumerate(train_load):
            # Move the data and labels to the device
            data, label = data.to(device), label.to(device)

            # Run the current model on the training data; the result is stored in output
            output = model(data)

            # Compute the loss by accumulating the loss of every captcha character:
            # the losses of all digit_num positions are summed
            loss = torch.tensor(0.0).to(device)
            for i in range(digit_num): # loop over the digit_num captcha characters
                # The model output for the i-th character is output[:, i, :]
                # and its label is label[:, i].
                # The cross-entropy criterion computes the loss of one character,
                # which is added to loss.
                loss += criterion(output[:, i, :], label[:, i])

            loss.backward()  # compute the gradients of the loss w.r.t. the model parameters
            optimizer.step()  # update the model parameters
            optimizer.zero_grad()  # clear the gradients for the next iteration

            # Compute the accuracy acc of this training batch
            predicted = torch.argmax(output, dim=2)
            correct = (predicted == label).all(dim=1).sum().item()
            acc = correct / data.size(0)

            # Within each epoch, print the current loss every 10 batches
            if batch_idx % 10 == 0:
                print(f"Epoch {epoch + 1}/{epoch_num} "
                      f"| Batch {batch_idx}/{len(train_load)} "
                      f"| Loss: {loss.item():.4f} "
                      f"| accuracy {correct}/{data.size(0)}={acc:.3f}")

        # Save a checkpoint every 10 epochs, for debugging
        if (epoch + 1) % 10 == 0:
            checkpoint = model_save_path + "/check.epoch" + str(epoch+1)
            torch.save(model.state_dict(), checkpoint)
            print("checkpoint saved : %s" % (checkpoint))

    # At the end of the program, save the trained model to the configured path
    torch.save(model.state_dict(), model_save_name)
    print("model saved : %s" % (model_save_name))

test.py

from dataset import CaptchaDataset
from model import CNNModel

import torch
from torch.utils.data import DataLoader
import torchvision.transforms as transforms

import json

if __name__ == '__main__':
    with open("config.json", "r") as f:
        config = json.load(f)

    height = config["resize_height"]  # image height
    width = config["resize_width"]  # image width

    # Define the data transform object transform:
    # scale the image to the given size and convert it to a tensor
    transform = transforms.Compose([
        transforms.Resize((height, width)),
        transforms.ToTensor()])

    test_data_path = config["test_data_path"]  # test data directory
    characters = config["characters"]  # character set used by the captchas
    digit_num = config["digit_num"]
    class_num = len(characters) * digit_num
    test_model_path = config["test_model_path"]

    print("resize_height = %d" % (height))
    print("resize_width = %d" % (width))
    print("test_data_path = %s" % (test_data_path))
    print("characters = %s" % (characters))
    print("digit_num = %d" % (digit_num))
    print("class_num = %d" % (class_num))
    print("test_model_path = %s" % (test_model_path))
    print("")

    # Build the test dataset with CaptchaDataset
    test_data = CaptchaDataset(test_data_path, transform, characters)

    # Read test_data with a DataLoader.
    # No extra arguments are needed, so the samples are read one at a time (batch_size defaults to 1)
    test_loader = DataLoader(test_data)

    # Define the device object: use the GPU if CUDA is available, otherwise the CPU
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    # Create a CNNModel object and move it to the device
    model = CNNModel(height, width, digit_num, class_num).to(device)
    model.eval()

    # Call load_state_dict to load the trained model file given by test_model_path
    model.load_state_dict(torch.load(test_model_path))

    right = 0  # right counts the correctly predicted samples
    all = 0  # all counts the total number of samples
    # Iterate over the data in test_loader:
    # x is the feature tensor of a sample, y is its label
    for (x, y) in test_loader:
        x, y = x.to(device), y.to(device)  # move the data to the device
        pred = model(x)  # run the model on x; the result is stored in pred
        # pred.argmax(dim=2).squeeze(0) is the predicted character sequence,
        # y.squeeze(0) is the labeled character sequence
        if torch.equal(pred.argmax(dim=2).squeeze(0),
                       y.squeeze(0)):
            right += 1  # if they are identical, increment right
        all += 1  # increment all on every iteration

    # After the loop, compute the model accuracy
    acc = right * 1.0 / all
    print("test accuracy = %d / %d = %.3lf" % (right, all, acc))

D:\Python310\python.exe D:/project/PycharmProjects/CNN/train.py
resize_height = 128
resize_width = 128
train_data_path = ./data/train-digit/
characters = 0123456789
batch_size = 128
epoch_num = 200
digit_num = 1
class_num = 10
learning_rate = 0.000100
model_save_name = ./model//captcha.1digit.2k

Begin training:
Epoch 1/200 | Batch 0/16 | Loss: 2.3091 | accuracy 15/128=0.117
Epoch 1/200 | Batch 10/16 | Loss: 2.3238 | accuracy 10/128=0.078
Epoch 2/200 | Batch 0/16 | Loss: 2.3016 | accuracy 14/128=0.109
Epoch 2/200 | Batch 10/16 | Loss: 2.3000 | accuracy 15/128=0.117
Epoch 3/200 | Batch 0/16 | Loss: 2.3062 | accuracy 13/128=0.102
Epoch 3/200 | Batch 10/16 | Loss: 2.3053 | accuracy 12/128=0.094
Epoch 4/200 | Batch 0/16 | Loss: 2.3071 | accuracy 15/128=0.117
Epoch 4/200 | Batch 10/16 | Loss: 2.3018 | accuracy 18/128=0.141
Epoch 5/200 | Batch 0/16 | Loss: 2.2999 | accuracy 14/128=0.109
Epoch 5/200 | Batch 10/16 | Loss: 2.3003 | accuracy 17/128=0.133
Epoch 6/200 | Batch 0/16 | Loss: 2.3056 | accuracy 10/128=0.078
Epoch 6/200 | Batch 10/16 | Loss: 2.3008 | accuracy 17/128=0.133
Epoch 7/200 | Batch 0/16 | Loss: 2.3007 | accuracy 10/128=0.078
Epoch 7/200 | Batch 10/16 | Loss: 2.3061 | accuracy 10/128=0.078
Epoch 8/200 | Batch 0/16 | Loss: 2.3027 | accuracy 16/128=0.125
Epoch 8/200 | Batch 10/16 | Loss: 2.3041 | accuracy 11/128=0.086
Epoch 9/200 | Batch 0/16 | Loss: 2.3063 | accuracy 14/128=0.109
Epoch 9/200 | Batch 10/16 | Loss: 2.3000 | accuracy 12/128=0.094
Epoch 10/200 | Batch 0/16 | Loss: 2.2981 | accuracy 17/128=0.133
Epoch 10/200 | Batch 10/16 | Loss: 2.3018 | accuracy 17/128=0.133
checkpoint saved : ./model//check.epoch10
Epoch 11/200 | Batch 0/16 | Loss: 2.3048 | accuracy 13/128=0.102
Epoch 11/200 | Batch 10/16 | Loss: 2.3009 | accuracy 18/128=0.141
Epoch 12/200 | Batch 0/16 | Loss: 2.3007 | accuracy 5/128=0.039
Epoch 12/200 | Batch 10/16 | Loss: 2.3052 | accuracy 13/128=0.102
Epoch 13/200 | Batch 0/16 | Loss: 2.3016 | accuracy 15/128=0.117
Epoch 13/200 | Batch 10/16 | Loss: 2.2970 | accuracy 16/128=0.125
Epoch 14/200 | Batch 0/16 | Loss: 2.2986 | accuracy 19/128=0.148
Epoch 14/200 | Batch 10/16 | Loss: 2.3021 | accuracy 14/128=0.109
Epoch 15/200 | Batch 0/16 | Loss: 2.2987 | accuracy 17/128=0.133
Epoch 15/200 | Batch 10/16 | Loss: 2.3041 | accuracy 14/128=0.109
Epoch 16/200 | Batch 0/16 | Loss: 2.2994 | accuracy 16/128=0.125
Epoch 16/200 | Batch 10/16 | Loss: 2.3019 | accuracy 16/128=0.125
Epoch 17/200 | Batch 0/16 | Loss: 2.2933 | accuracy 14/128=0.109
Epoch 17/200 | Batch 10/16 | Loss: 2.2991 | accuracy 12/128=0.094
Epoch 18/200 | Batch 0/16 | Loss: 2.3012 | accuracy 16/128=0.125
Epoch 18/200 | Batch 10/16 | Loss: 2.3045 | accuracy 13/128=0.102
Epoch 19/200 | Batch 0/16 | Loss: 2.2907 | accuracy 25/128=0.195
Epoch 19/200 | Batch 10/16 | Loss: 2.3016 | accuracy 10/128=0.078
Epoch 20/200 | Batch 0/16 | Loss: 2.3050 | accuracy 13/128=0.102
Epoch 20/200 | Batch 10/16 | Loss: 2.2988 | accuracy 14/128=0.109
checkpoint saved : ./model//check.epoch20
Epoch 21/200 | Batch 0/16 | Loss: 2.2999 | accuracy 17/128=0.133
Epoch 21/200 | Batch 10/16 | Loss: 2.2937 | accuracy 15/128=0.117
Epoch 22/200 | Batch 0/16 | Loss: 2.3047 | accuracy 16/128=0.125
Epoch 22/200 | Batch 10/16 | Loss: 2.2853 | accuracy 18/128=0.141
Epoch 23/200 | Batch 0/16 | Loss: 2.2850 | accuracy 19/128=0.148
Epoch 23/200 | Batch 10/16 | Loss: 2.2959 | accuracy 13/128=0.102
Epoch 24/200 | Batch 0/16 | Loss: 2.2884 | accuracy 18/128=0.141
Epoch 24/200 | Batch 10/16 | Loss: 2.2940 | accuracy 18/128=0.141
Epoch 25/200 | Batch 0/16 | Loss: 2.2775 | accuracy 18/128=0.141
Epoch 25/200 | Batch 10/16 | Loss: 2.2858 | accuracy 15/128=0.117
Epoch 26/200 | Batch 0/16 | Loss: 2.2522 | accuracy 27/128=0.211
Epoch 26/200 | Batch 10/16 | Loss: 2.3032 | accuracy 16/128=0.125
Epoch 27/200 | Batch 0/16 | Loss: 2.2583 | accuracy 24/128=0.188
Epoch 27/200 | Batch 10/16 | Loss: 2.2422 | accuracy 28/128=0.219
Epoch 28/200 | Batch 0/16 | Loss: 2.2255 | accuracy 29/128=0.227
Epoch 28/200 | Batch 10/16 | Loss: 2.2325 | accuracy 16/128=0.125
Epoch 29/200 | Batch 0/16 | Loss: 2.1752 | accuracy 28/128=0.219
Epoch 29/200 | Batch 10/16 | Loss: 2.2192 | accuracy 23/128=0.180
Epoch 30/200 | Batch 0/16 | Loss: 2.2291 | accuracy 18/128=0.141
Epoch 30/200 | Batch 10/16 | Loss: 2.1861 | accuracy 25/128=0.195
checkpoint saved : ./model//check.epoch30
Epoch 31/200 | Batch 0/16 | Loss: 2.1700 | accuracy 35/128=0.273
Epoch 31/200 | Batch 10/16 | Loss: 2.0598 | accuracy 33/128=0.258
Epoch 32/200 | Batch 0/16 | Loss: 2.1042 | accuracy 29/128=0.227
Epoch 32/200 | Batch 10/16 | Loss: 2.0796 | accuracy 27/128=0.211
Epoch 33/200 | Batch 0/16 | Loss: 2.1144 | accuracy 23/128=0.180
Epoch 33/200 | Batch 10/16 | Loss: 2.1632 | accuracy 26/128=0.203
Epoch 34/200 | Batch 0/16 | Loss: 2.0593 | accuracy 38/128=0.297
Epoch 34/200 | Batch 10/16 | Loss: 2.0564 | accuracy 37/128=0.289
Epoch 35/200 | Batch 0/16 | Loss: 1.9282 | accuracy 42/128=0.328
Epoch 35/200 | Batch 10/16 | Loss: 2.0059 | accuracy 36/128=0.281
Epoch 36/200 | Batch 0/16 | Loss: 2.0065 | accuracy 35/128=0.273
Epoch 36/200 | Batch 10/16 | Loss: 1.9090 | accuracy 42/128=0.328
Epoch 37/200 | Batch 0/16 | Loss: 1.9358 | accuracy 39/128=0.305
Epoch 37/200 | Batch 10/16 | Loss: 1.9197 | accuracy 45/128=0.352
Epoch 38/200 | Batch 0/16 | Loss: 1.9248 | accuracy 42/128=0.328
Epoch 38/200 | Batch 10/16 | Loss: 1.9072 | accuracy 40/128=0.312
Epoch 39/200 | Batch 0/16 | Loss: 1.9429 | accuracy 41/128=0.320
Epoch 39/200 | Batch 10/16 | Loss: 1.9401 | accuracy 39/128=0.305
Epoch 40/200 | Batch 0/16 | Loss: 1.8600 | accuracy 44/128=0.344
Epoch 40/200 | Batch 10/16 | Loss: 1.8164 | accuracy 46/128=0.359
checkpoint saved : ./model//check.epoch40
Epoch 41/200 | Batch 0/16 | Loss: 1.8458 | accuracy 48/128=0.375
Epoch 41/200 | Batch 10/16 | Loss: 1.7130 | accuracy 54/128=0.422
Epoch 42/200 | Batch 0/16 | Loss: 1.6807 | accuracy 53/128=0.414
Epoch 42/200 | Batch 10/16 | Loss: 1.8174 | accuracy 41/128=0.320
Epoch 43/200 | Batch 0/16 | Loss: 1.8646 | accuracy 40/128=0.312
Epoch 43/200 | Batch 10/16 | Loss: 1.6046 | accuracy 54/128=0.422
Epoch 44/200 | Batch 0/16 | Loss: 1.7627 | accuracy 43/128=0.336
Epoch 44/200 | Batch 10/16 | Loss: 1.7279 | accuracy 48/128=0.375
Epoch 45/200 | Batch 0/16 | Loss: 1.6728 | accuracy 50/128=0.391
Epoch 45/200 | Batch 10/16 | Loss: 1.6171 | accuracy 53/128=0.414
Epoch 46/200 | Batch 0/16 | Loss: 1.6969 | accuracy 51/128=0.398
Epoch 46/200 | Batch 10/16 | Loss: 1.6196 | accuracy 48/128=0.375
Epoch 47/200 | Batch 0/16 | Loss: 1.6617 | accuracy 56/128=0.438
Epoch 47/200 | Batch 10/16 | Loss: 1.5410 | accuracy 67/128=0.523
Epoch 48/200 | Batch 0/16 | Loss: 1.6146 | accuracy 55/128=0.430
Epoch 48/200 | Batch 10/16 | Loss: 1.7213 | accuracy 44/128=0.344
Epoch 49/200 | Batch 0/16 | Loss: 1.5919 | accuracy 61/128=0.477
Epoch 49/200 | Batch 10/16 | Loss: 1.5982 | accuracy 51/128=0.398
Epoch 50/200 | Batch 0/16 | Loss: 1.6092 | accuracy 59/128=0.461
Epoch 50/200 | Batch 10/16 | Loss: 1.4322 | accuracy 65/128=0.508
checkpoint saved : ./model//check.epoch50
Epoch 51/200 | Batch 0/16 | Loss: 1.5115 | accuracy 65/128=0.508
Epoch 51/200 | Batch 10/16 | Loss: 1.5191 | accuracy 58/128=0.453
Epoch 52/200 | Batch 0/16 | Loss: 1.5553 | accuracy 64/128=0.500
Epoch 52/200 | Batch 10/16 | Loss: 1.5587 | accuracy 60/128=0.469
Epoch 53/200 | Batch 0/16 | Loss: 1.5137 | accuracy 61/128=0.477
Epoch 53/200 | Batch 10/16 | Loss: 1.3685 | accuracy 67/128=0.523
Epoch 54/200 | Batch 0/16 | Loss: 1.6554 | accuracy 50/128=0.391
Epoch 54/200 | Batch 10/16 | Loss: 1.4803 | accuracy 59/128=0.461
Epoch 55/200 | Batch 0/16 | Loss: 1.3825 | accuracy 66/128=0.516
Epoch 55/200 | Batch 10/16 | Loss: 1.4612 | accuracy 62/128=0.484
Epoch 56/200 | Batch 0/16 | Loss: 1.3605 | accuracy 73/128=0.570
Epoch 56/200 | Batch 10/16 | Loss: 1.4856 | accuracy 66/128=0.516
Epoch 57/200 | Batch 0/16 | Loss: 1.5354 | accuracy 51/128=0.398
Epoch 57/200 | Batch 10/16 | Loss: 1.4573 | accuracy 59/128=0.461
Epoch 58/200 | Batch 0/16 | Loss: 1.3566 | accuracy 61/128=0.477
Epoch 58/200 | Batch 10/16 | Loss: 1.3901 | accuracy 63/128=0.492
Epoch 59/200 | Batch 0/16 | Loss: 1.3130 | accuracy 70/128=0.547
Epoch 59/200 | Batch 10/16 | Loss: 1.1667 | accuracy 76/128=0.594
Epoch 60/200 | Batch 0/16 | Loss: 1.3881 | accuracy 70/128=0.547
Epoch 60/200 | Batch 10/16 | Loss: 1.2703 | accuracy 68/128=0.531
checkpoint saved : ./model//check.epoch60
Epoch 61/200 | Batch 0/16 | Loss: 1.4010 | accuracy 62/128=0.484
Epoch 61/200 | Batch 10/16 | Loss: 1.3181 | accuracy 72/128=0.562
Epoch 62/200 | Batch 0/16 | Loss: 1.2716 | accuracy 69/128=0.539
Epoch 62/200 | Batch 10/16 | Loss: 1.3523 | accuracy 62/128=0.484
Epoch 63/200 | Batch 0/16 | Loss: 1.2137 | accuracy 78/128=0.609
Epoch 63/200 | Batch 10/16 | Loss: 1.2490 | accuracy 75/128=0.586
Epoch 64/200 | Batch 0/16 | Loss: 1.2601 | accuracy 77/128=0.602
Epoch 64/200 | Batch 10/16 | Loss: 1.2207 | accuracy 72/128=0.562
Epoch 65/200 | Batch 0/16 | Loss: 1.1812 | accuracy 73/128=0.570
Epoch 65/200 | Batch 10/16 | Loss: 1.2019 | accuracy 74/128=0.578
Epoch 66/200 | Batch 0/16 | Loss: 1.0996 | accuracy 77/128=0.602
Epoch 66/200 | Batch 10/16 | Loss: 1.1076 | accuracy 72/128=0.562
Epoch 67/200 | Batch 0/16 | Loss: 1.2806 | accuracy 71/128=0.555
Epoch 67/200 | Batch 10/16 | Loss: 1.2237 | accuracy 74/128=0.578
Epoch 68/200 | Batch 0/16 | Loss: 1.1196 | accuracy 81/128=0.633
Epoch 68/200 | Batch 10/16 | Loss: 1.1982 | accuracy 78/128=0.609
Epoch 69/200 | Batch 0/16 | Loss: 1.0038 | accuracy 93/128=0.727
Epoch 69/200 | Batch 10/16 | Loss: 1.2466 | accuracy 72/128=0.562
Epoch 70/200 | Batch 0/16 | Loss: 1.0274 | accuracy 79/128=0.617
Epoch 70/200 | Batch 10/16 | Loss: 1.0536 | accuracy 82/128=0.641
checkpoint saved : ./model//check.epoch70
Epoch 71/200 | Batch 0/16 | Loss: 1.1594 | accuracy 79/128=0.617
Epoch 71/200 | Batch 10/16 | Loss: 1.0447 | accuracy 80/128=0.625
Epoch 72/200 | Batch 0/16 | Loss: 1.2550 | accuracy 68/128=0.531
Epoch 72/200 | Batch 10/16 | Loss: 1.1217 | accuracy 79/128=0.617
Epoch 73/200 | Batch 0/16 | Loss: 1.0504 | accuracy 78/128=0.609
Epoch 73/200 | Batch 10/16 | Loss: 1.2043 | accuracy 77/128=0.602
Epoch 74/200 | Batch 0/16 | Loss: 1.0929 | accuracy 74/128=0.578
Epoch 74/200 | Batch 10/16 | Loss: 1.0416 | accuracy 82/128=0.641
Epoch 75/200 | Batch 0/16 | Loss: 0.9702 | accuracy 89/128=0.695
Epoch 75/200 | Batch 10/16 | Loss: 0.9303 | accuracy 95/128=0.742
Epoch 76/200 | Batch 0/16 | Loss: 0.8531 | accuracy 93/128=0.727
Epoch 76/200 | Batch 10/16 | Loss: 1.0092 | accuracy 87/128=0.680
Epoch 77/200 | Batch 0/16 | Loss: 1.0739 | accuracy 78/128=0.609
Epoch 77/200 | Batch 10/16 | Loss: 1.0276 | accuracy 81/128=0.633
Epoch 78/200 | Batch 0/16 | Loss: 0.9078 | accuracy 91/128=0.711
Epoch 78/200 | Batch 10/16 | Loss: 0.9602 | accuracy 80/128=0.625
Epoch 79/200 | Batch 0/16 | Loss: 0.9347 | accuracy 85/128=0.664
Epoch 79/200 | Batch 10/16 | Loss: 0.9257 | accuracy 87/128=0.680
Epoch 80/200 | Batch 0/16 | Loss: 1.0276 | accuracy 84/128=0.656
Epoch 80/200 | Batch 10/16 | Loss: 0.8795 | accuracy 88/128=0.688
checkpoint saved : ./model//check.epoch80
Epoch 81/200 | Batch 0/16 | Loss: 0.7719 | accuracy 96/128=0.750
Epoch 81/200 | Batch 10/16 | Loss: 0.9031 | accuracy 90/128=0.703
Epoch 82/200 | Batch 0/16 | Loss: 0.8802 | accuracy 91/128=0.711
Epoch 82/200 | Batch 10/16 | Loss: 0.8708 | accuracy 88/128=0.688
Epoch 83/200 | Batch 0/16 | Loss: 0.8398 | accuracy 91/128=0.711
Epoch 83/200 | Batch 10/16 | Loss: 0.7149 | accuracy 99/128=0.773
Epoch 84/200 | Batch 0/16 | Loss: 0.7306 | accuracy 101/128=0.789
Epoch 84/200 | Batch 10/16 | Loss: 0.8610 | accuracy 92/128=0.719
Epoch 85/200 | Batch 0/16 | Loss: 0.8118 | accuracy 92/128=0.719
Epoch 85/200 | Batch 10/16 | Loss: 0.8698 | accuracy 94/128=0.734
Epoch 86/200 | Batch 0/16 | Loss: 0.7987 | accuracy 93/128=0.727
Epoch 86/200 | Batch 10/16 | Loss: 0.7173 | accuracy 101/128=0.789
Epoch 87/200 | Batch 0/16 | Loss: 0.7868 | accuracy 93/128=0.727
Epoch 87/200 | Batch 10/16 | Loss: 0.9372 | accuracy 80/128=0.625
Epoch 88/200 | Batch 0/16 | Loss: 0.8355 | accuracy 91/128=0.711
Epoch 88/200 | Batch 10/16 | Loss: 0.7740 | accuracy 93/128=0.727
Epoch 89/200 | Batch 0/16 | Loss: 0.8853 | accuracy 86/128=0.672
Epoch 89/200 | Batch 10/16 | Loss: 0.7612 | accuracy 91/128=0.711
Epoch 90/200 | Batch 0/16 | Loss: 0.6926 | accuracy 99/128=0.773
Epoch 90/200 | Batch 10/16 | Loss: 0.6736 | accuracy 97/128=0.758
checkpoint saved : ./model//check.epoch90
Epoch 91/200 | Batch 0/16 | Loss: 0.7096 | accuracy 95/128=0.742
Epoch 91/200 | Batch 10/16 | Loss: 0.7188 | accuracy 103/128=0.805
Epoch 92/200 | Batch 0/16 | Loss: 0.7054 | accuracy 96/128=0.750
Epoch 92/200 | Batch 10/16 | Loss: 0.6021 | accuracy 110/128=0.859
Epoch 93/200 | Batch 0/16 | Loss: 0.7780 | accuracy 96/128=0.750
Epoch 93/200 | Batch 10/16 | Loss: 0.7090 | accuracy 103/128=0.805
Epoch 94/200 | Batch 0/16 | Loss: 0.6440 | accuracy 102/128=0.797
Epoch 94/200 | Batch 10/16 | Loss: 0.8302 | accuracy 88/128=0.688
Epoch 95/200 | Batch 0/16 | Loss: 0.7757 | accuracy 96/128=0.750
Epoch 95/200 | Batch 10/16 | Loss: 0.6106 | accuracy 104/128=0.812
Epoch 96/200 | Batch 0/16 | Loss: 0.6474 | accuracy 96/128=0.750
Epoch 96/200 | Batch 10/16 | Loss: 0.6675 | accuracy 102/128=0.797
Epoch 97/200 | Batch 0/16 | Loss: 0.5350 | accuracy 106/128=0.828
Epoch 97/200 | Batch 10/16 | Loss: 0.8105 | accuracy 93/128=0.727
Epoch 98/200 | Batch 0/16 | Loss: 0.7731 | accuracy 87/128=0.680
Epoch 98/200 | Batch 10/16 | Loss: 0.6888 | accuracy 96/128=0.750
Epoch 99/200 | Batch 0/16 | Loss: 0.6044 | accuracy 106/128=0.828
Epoch 99/200 | Batch 10/16 | Loss: 0.5313 | accuracy 101/128=0.789
Epoch 100/200 | Batch 0/16 | Loss: 0.7274 | accuracy 96/128=0.750
Epoch 100/200 | Batch 10/16 | Loss: 0.6472 | accuracy 100/128=0.781
checkpoint saved : ./model//check.epoch100
Epoch 101/200 | Batch 0/16 | Loss: 0.6915 | accuracy 98/128=0.766
Epoch 101/200 | Batch 10/16 | Loss: 0.5370 | accuracy 109/128=0.852
Epoch 102/200 | Batch 0/16 | Loss: 0.5760 | accuracy 104/128=0.812
Epoch 102/200 | Batch 10/16 | Loss: 0.7622 | accuracy 93/128=0.727
Epoch 103/200 | Batch 0/16 | Loss: 0.5385 | accuracy 102/128=0.797
Epoch 103/200 | Batch 10/16 | Loss: 0.6802 | accuracy 103/128=0.805
Epoch 104/200 | Batch 0/16 | Loss: 0.5285 | accuracy 110/128=0.859
Epoch 104/200 | Batch 10/16 | Loss: 0.5555 | accuracy 110/128=0.859
Epoch 105/200 | Batch 0/16 | Loss: 0.6075 | accuracy 102/128=0.797
Epoch 105/200 | Batch 10/16 | Loss: 0.5659 | accuracy 101/128=0.789
Epoch 106/200 | Batch 0/16 | Loss: 0.4936 | accuracy 108/128=0.844
Epoch 106/200 | Batch 10/16 | Loss: 0.6707 | accuracy 102/128=0.797
Epoch 107/200 | Batch 0/16 | Loss: 0.5391 | accuracy 105/128=0.820
Epoch 107/200 | Batch 10/16 | Loss: 0.4698 | accuracy 105/128=0.820
Epoch 108/200 | Batch 0/16 | Loss: 0.4267 | accuracy 108/128=0.844
Epoch 108/200 | Batch 10/16 | Loss: 0.5509 | accuracy 102/128=0.797
Epoch 109/200 | Batch 0/16 | Loss: 0.4462 | accuracy 107/128=0.836
Epoch 109/200 | Batch 10/16 | Loss: 0.5380 | accuracy 105/128=0.820
Epoch 110/200 | Batch 0/16 | Loss: 0.4637 | accuracy 110/128=0.859
Epoch 110/200 | Batch 10/16 | Loss: 0.4375 | accuracy 109/128=0.852
checkpoint saved : ./model//check.epoch110
Epoch 111/200 | Batch 0/16 | Loss: 0.5567 | accuracy 105/128=0.820
Epoch 111/200 | Batch 10/16 | Loss: 0.4808 | accuracy 108/128=0.844
Epoch 112/200 | Batch 0/16 | Loss: 0.4961 | accuracy 109/128=0.852
Epoch 112/200 | Batch 10/16 | Loss: 0.5008 | accuracy 104/128=0.812
Epoch 113/200 | Batch 0/16 | Loss: 0.4603 | accuracy 112/128=0.875
Epoch 113/200 | Batch 10/16 | Loss: 0.4817 | accuracy 108/128=0.844
Epoch 114/200 | Batch 0/16 | Loss: 0.3971 | accuracy 111/128=0.867
Epoch 114/200 | Batch 10/16 | Loss: 0.4703 | accuracy 105/128=0.820
Epoch 115/200 | Batch 0/16 | Loss: 0.5089 | accuracy 102/128=0.797
Epoch 115/200 | Batch 10/16 | Loss: 0.4242 | accuracy 112/128=0.875
Epoch 116/200 | Batch 0/16 | Loss: 0.5037 | accuracy 103/128=0.805
Epoch 116/200 | Batch 10/16 | Loss: 0.4972 | accuracy 102/128=0.797
Epoch 117/200 | Batch 0/16 | Loss: 0.4382 | accuracy 109/128=0.852
Epoch 117/200 | Batch 10/16 | Loss: 0.3487 | accuracy 116/128=0.906
Epoch 118/200 | Batch 0/16 | Loss: 0.3746 | accuracy 112/128=0.875
Epoch 118/200 | Batch 10/16 | Loss: 0.3572 | accuracy 114/128=0.891
Epoch 119/200 | Batch 0/16 | Loss: 0.3941 | accuracy 110/128=0.859
Epoch 119/200 | Batch 10/16 | Loss: 0.4587 | accuracy 110/128=0.859
Epoch 120/200 | Batch 0/16 | Loss: 0.3700 | accuracy 114/128=0.891
Epoch 120/200 | Batch 10/16 | Loss: 0.3846 | accuracy 112/128=0.875
checkpoint saved : ./model//check.epoch120
Epoch 121/200 | Batch 0/16 | Loss: 0.4735 | accuracy 110/128=0.859
Epoch 121/200 | Batch 10/16 | Loss: 0.5561 | accuracy 104/128=0.812
Epoch 122/200 | Batch 0/16 | Loss: 0.3554 | accuracy 115/128=0.898
Epoch 122/200 | Batch 10/16 | Loss: 0.4541 | accuracy 113/128=0.883
Epoch 123/200 | Batch 0/16 | Loss: 0.4274 | accuracy 110/128=0.859
Epoch 123/200 | Batch 10/16 | Loss: 0.3901 | accuracy 112/128=0.875
Epoch 124/200 | Batch 0/16 | Loss: 0.3440 | accuracy 118/128=0.922
Epoch 124/200 | Batch 10/16 | Loss: 0.3341 | accuracy 113/128=0.883
Epoch 125/200 | Batch 0/16 | Loss: 0.3978 | accuracy 111/128=0.867
Epoch 125/200 | Batch 10/16 | Loss: 0.4012 | accuracy 113/128=0.883
Epoch 126/200 | Batch 0/16 | Loss: 0.3910 | accuracy 114/128=0.891
Epoch 126/200 | Batch 10/16 | Loss: 0.4164 | accuracy 113/128=0.883
Epoch 127/200 | Batch 0/16 | Loss: 0.3342 | accuracy 114/128=0.891
Epoch 127/200 | Batch 10/16 | Loss: 0.3473 | accuracy 120/128=0.938
Epoch 128/200 | Batch 0/16 | Loss: 0.3794 | accuracy 111/128=0.867
Epoch 128/200 | Batch 10/16 | Loss: 0.4186 | accuracy 110/128=0.859
Epoch 129/200 | Batch 0/16 | Loss: 0.3165 | accuracy 117/128=0.914
Epoch 129/200 | Batch 10/16 | Loss: 0.3586 | accuracy 112/128=0.875
Epoch 130/200 | Batch 0/16 | Loss: 0.3648 | accuracy 113/128=0.883
Epoch 130/200 | Batch 10/16 | Loss: 0.4095 | accuracy 115/128=0.898
checkpoint saved : ./model//check.epoch130
Epoch 131/200 | Batch 0/16 | Loss: 0.3751 | accuracy 114/128=0.891
Epoch 131/200 | Batch 10/16 | Loss: 0.2695 | accuracy 122/128=0.953
Epoch 132/200 | Batch 0/16 | Loss: 0.3491 | accuracy 115/128=0.898
Epoch 132/200 | Batch 10/16 | Loss: 0.2876 | accuracy 118/128=0.922
Epoch 133/200 | Batch 0/16 | Loss: 0.3161 | accuracy 116/128=0.906
Epoch 133/200 | Batch 10/16 | Loss: 0.3067 | accuracy 115/128=0.898
Epoch 134/200 | Batch 0/16 | Loss: 0.3532 | accuracy 117/128=0.914
Epoch 134/200 | Batch 10/16 | Loss: 0.3171 | accuracy 116/128=0.906
Epoch 135/200 | Batch 0/16 | Loss: 0.3430 | accuracy 113/128=0.883
Epoch 135/200 | Batch 10/16 | Loss: 0.3494 | accuracy 116/128=0.906
Epoch 136/200 | Batch 0/16 | Loss: 0.3088 | accuracy 116/128=0.906
Epoch 136/200 | Batch 10/16 | Loss: 0.3662 | accuracy 115/128=0.898
Epoch 137/200 | Batch 0/16 | Loss: 0.3178 | accuracy 117/128=0.914
Epoch 137/200 | Batch 10/16 | Loss: 0.4010 | accuracy 112/128=0.875
Epoch 138/200 | Batch 0/16 | Loss: 0.3349 | accuracy 114/128=0.891
Epoch 138/200 | Batch 10/16 | Loss: 0.3311 | accuracy 114/128=0.891
Epoch 139/200 | Batch 0/16 | Loss: 0.3263 | accuracy 115/128=0.898
Epoch 139/200 | Batch 10/16 | Loss: 0.3045 | accuracy 117/128=0.914
Epoch 140/200 | Batch 0/16 | Loss: 0.2755 | accuracy 117/128=0.914
Epoch 140/200 | Batch 10/16 | Loss: 0.2942 | accuracy 116/128=0.906
checkpoint saved : ./model//check.epoch140
Epoch 141/200 | Batch 0/16 | Loss: 0.2904 | accuracy 115/128=0.898
Epoch 141/200 | Batch 10/16 | Loss: 0.2317 | accuracy 121/128=0.945
Epoch 142/200 | Batch 0/16 | Loss: 0.4009 | accuracy 112/128=0.875
Epoch 142/200 | Batch 10/16 | Loss: 0.2950 | accuracy 117/128=0.914
Epoch 143/200 | Batch 0/16 | Loss: 0.2833 | accuracy 114/128=0.891
Epoch 143/200 | Batch 10/16 | Loss: 0.2006 | accuracy 121/128=0.945
Epoch 144/200 | Batch 0/16 | Loss: 0.3718 | accuracy 117/128=0.914
Epoch 144/200 | Batch 10/16 | Loss: 0.4305 | accuracy 106/128=0.828
Epoch 145/200 | Batch 0/16 | Loss: 0.2323 | accuracy 118/128=0.922
Epoch 145/200 | Batch 10/16 | Loss: 0.2974 | accuracy 120/128=0.938
Epoch 146/200 | Batch 0/16 | Loss: 0.2393 | accuracy 120/128=0.938
Epoch 146/200 | Batch 10/16 | Loss: 0.2414 | accuracy 120/128=0.938
Epoch 147/200 | Batch 0/16 | Loss: 0.2520 | accuracy 117/128=0.914
Epoch 147/200 | Batch 10/16 | Loss: 0.1956 | accuracy 123/128=0.961
Epoch 148/200 | Batch 0/16 | Loss: 0.3122 | accuracy 112/128=0.875
Epoch 148/200 | Batch 10/16 | Loss: 0.2806 | accuracy 119/128=0.930
Epoch 149/200 | Batch 0/16 | Loss: 0.2155 | accuracy 120/128=0.938
Epoch 149/200 | Batch 10/16 | Loss: 0.2039 | accuracy 119/128=0.930
Epoch 150/200 | Batch 0/16 | Loss: 0.2909 | accuracy 115/128=0.898
Epoch 150/200 | Batch 10/16 | Loss: 0.2923 | accuracy 119/128=0.930
checkpoint saved : ./model//check.epoch150
Epoch 151/200 | Batch 0/16 | Loss: 0.2236 | accuracy 119/128=0.930
Epoch 151/200 | Batch 10/16 | Loss: 0.2395 | accuracy 116/128=0.906
Epoch 152/200 | Batch 0/16 | Loss: 0.2158 | accuracy 122/128=0.953
Epoch 152/200 | Batch 10/16 | Loss: 0.3395 | accuracy 115/128=0.898
Epoch 153/200 | Batch 0/16 | Loss: 0.1672 | accuracy 122/128=0.953
Epoch 153/200 | Batch 10/16 | Loss: 0.2050 | accuracy 122/128=0.953
Epoch 154/200 | Batch 0/16 | Loss: 0.1663 | accuracy 123/128=0.961
Epoch 154/200 | Batch 10/16 | Loss: 0.3110 | accuracy 115/128=0.898
Epoch 155/200 | Batch 0/16 | Loss: 0.2082 | accuracy 121/128=0.945
Epoch 155/200 | Batch 10/16 | Loss: 0.1615 | accuracy 126/128=0.984
Epoch 156/200 | Batch 0/16 | Loss: 0.1987 | accuracy 120/128=0.938
Epoch 156/200 | Batch 10/16 | Loss: 0.2378 | accuracy 120/128=0.938
Epoch 157/200 | Batch 0/16 | Loss: 0.2627 | accuracy 119/128=0.930
Epoch 157/200 | Batch 10/16 | Loss: 0.2107 | accuracy 119/128=0.930
Epoch 158/200 | Batch 0/16 | Loss: 0.2405 | accuracy 117/128=0.914
Epoch 158/200 | Batch 10/16 | Loss: 0.1911 | accuracy 121/128=0.945
Epoch 159/200 | Batch 0/16 | Loss: 0.2335 | accuracy 116/128=0.906
Epoch 159/200 | Batch 10/16 | Loss: 0.1842 | accuracy 124/128=0.969
Epoch 160/200 | Batch 0/16 | Loss: 0.1570 | accuracy 122/128=0.953
Epoch 160/200 | Batch 10/16 | Loss: 0.2303 | accuracy 118/128=0.922
checkpoint saved : ./model//check.epoch160
Epoch 161/200 | Batch 0/16 | Loss: 0.1888 | accuracy 122/128=0.953
Epoch 161/200 | Batch 10/16 | Loss: 0.1389 | accuracy 123/128=0.961
Epoch 162/200 | Batch 0/16 | Loss: 0.2047 | accuracy 121/128=0.945
Epoch 162/200 | Batch 10/16 | Loss: 0.1748 | accuracy 120/128=0.938
Epoch 163/200 | Batch 0/16 | Loss: 0.1451 | accuracy 124/128=0.969
Epoch 163/200 | Batch 10/16 | Loss: 0.1395 | accuracy 124/128=0.969
Epoch 164/200 | Batch 0/16 | Loss: 0.1824 | accuracy 120/128=0.938
Epoch 164/200 | Batch 10/16 | Loss: 0.1795 | accuracy 120/128=0.938
Epoch 165/200 | Batch 0/16 | Loss: 0.1478 | accuracy 123/128=0.961
Epoch 165/200 | Batch 10/16 | Loss: 0.1997 | accuracy 123/128=0.961
Epoch 166/200 | Batch 0/16 | Loss: 0.1808 | accuracy 120/128=0.938
Epoch 166/200 | Batch 10/16 | Loss: 0.1875 | accuracy 119/128=0.930
Epoch 167/200 | Batch 0/16 | Loss: 0.1764 | accuracy 118/128=0.922
Epoch 167/200 | Batch 10/16 | Loss: 0.1592 | accuracy 124/128=0.969
Epoch 168/200 | Batch 0/16 | Loss: 0.2030 | accuracy 118/128=0.922
Epoch 168/200 | Batch 10/16 | Loss: 0.1260 | accuracy 123/128=0.961
Epoch 169/200 | Batch 0/16 | Loss: 0.1836 | accuracy 119/128=0.930
Epoch 169/200 | Batch 10/16 | Loss: 0.2194 | accuracy 120/128=0.938
Epoch 170/200 | Batch 0/16 | Loss: 0.2251 | accuracy 120/128=0.938
Epoch 170/200 | Batch 10/16 | Loss: 0.1552 | accuracy 123/128=0.961
checkpoint saved : ./model//check.epoch170
Epoch 171/200 | Batch 0/16 | Loss: 0.0859 | accuracy 127/128=0.992
Epoch 171/200 | Batch 10/16 | Loss: 0.1966 | accuracy 121/128=0.945
Epoch 172/200 | Batch 0/16 | Loss: 0.1674 | accuracy 120/128=0.938
Epoch 172/200 | Batch 10/16 | Loss: 0.1515 | accuracy 124/128=0.969
Epoch 173/200 | Batch 0/16 | Loss: 0.1992 | accuracy 115/128=0.898
Epoch 173/200 | Batch 10/16 | Loss: 0.1338 | accuracy 123/128=0.961
Epoch 174/200 | Batch 0/16 | Loss: 0.1419 | accuracy 124/128=0.969
Epoch 174/200 | Batch 10/16 | Loss: 0.1699 | accuracy 121/128=0.945
Epoch 175/200 | Batch 0/16 | Loss: 0.2120 | accuracy 120/128=0.938
Epoch 175/200 | Batch 10/16 | Loss: 0.2010 | accuracy 119/128=0.930
Epoch 176/200 | Batch 0/16 | Loss: 0.2256 | accuracy 120/128=0.938
Epoch 176/200 | Batch 10/16 | Loss: 0.1252 | accuracy 122/128=0.953
Epoch 177/200 | Batch 0/16 | Loss: 0.1566 | accuracy 123/128=0.961
Epoch 177/200 | Batch 10/16 | Loss: 0.1291 | accuracy 122/128=0.953
Epoch 178/200 | Batch 0/16 | Loss: 0.1606 | accuracy 120/128=0.938
Epoch 178/200 | Batch 10/16 | Loss: 0.1472 | accuracy 125/128=0.977
Epoch 179/200 | Batch 0/16 | Loss: 0.1642 | accuracy 121/128=0.945
Epoch 179/200 | Batch 10/16 | Loss: 0.1051 | accuracy 125/128=0.977
Epoch 180/200 | Batch 0/16 | Loss: 0.2038 | accuracy 121/128=0.945
Epoch 180/200 | Batch 10/16 | Loss: 0.1333 | accuracy 122/128=0.953
checkpoint saved : ./model//check.epoch180
Epoch 181/200 | Batch 0/16 | Loss: 0.2143 | accuracy 120/128=0.938
Epoch 181/200 | Batch 10/16 | Loss: 0.1642 | accuracy 121/128=0.945
Epoch 182/200 | Batch 0/16 | Loss: 0.1173 | accuracy 123/128=0.961
Epoch 182/200 | Batch 10/16 | Loss: 0.1296 | accuracy 125/128=0.977
Epoch 183/200 | Batch 0/16 | Loss: 0.1144 | accuracy 126/128=0.984
Epoch 183/200 | Batch 10/16 | Loss: 0.1317 | accuracy 124/128=0.969
Epoch 184/200 | Batch 0/16 | Loss: 0.1667 | accuracy 124/128=0.969
Epoch 184/200 | Batch 10/16 | Loss: 0.0716 | accuracy 126/128=0.984
Epoch 185/200 | Batch 0/16 | Loss: 0.1296 | accuracy 122/128=0.953
Epoch 185/200 | Batch 10/16 | Loss: 0.1412 | accuracy 124/128=0.969
Epoch 186/200 | Batch 0/16 | Loss: 0.1750 | accuracy 121/128=0.945
Epoch 186/200 | Batch 10/16 | Loss: 0.1369 | accuracy 121/128=0.945
Epoch 187/200 | Batch 0/16 | Loss: 0.2256 | accuracy 121/128=0.945
Epoch 187/200 | Batch 10/16 | Loss: 0.1291 | accuracy 122/128=0.953
Epoch 188/200 | Batch 0/16 | Loss: 0.1657 | accuracy 120/128=0.938
Epoch 188/200 | Batch 10/16 | Loss: 0.0768 | accuracy 126/128=0.984
Epoch 189/200 | Batch 0/16 | Loss: 0.1616 | accuracy 122/128=0.953
Epoch 189/200 | Batch 10/16 | Loss: 0.1312 | accuracy 121/128=0.945
Epoch 190/200 | Batch 0/16 | Loss: 0.1196 | accuracy 126/128=0.984
Epoch 190/200 | Batch 10/16 | Loss: 0.0910 | accuracy 128/128=1.000
checkpoint saved : ./model//check.epoch190
Epoch 191/200 | Batch 0/16 | Loss: 0.1195 | accuracy 123/128=0.961
Epoch 191/200 | Batch 10/16 | Loss: 0.1772 | accuracy 121/128=0.945
Epoch 192/200 | Batch 0/16 | Loss: 0.1274 | accuracy 124/128=0.969
Epoch 192/200 | Batch 10/16 | Loss: 0.1134 | accuracy 123/128=0.961
Epoch 193/200 | Batch 0/16 | Loss: 0.1581 | accuracy 123/128=0.961
Epoch 193/200 | Batch 10/16 | Loss: 0.0965 | accuracy 126/128=0.984
Epoch 194/200 | Batch 0/16 | Loss: 0.1425 | accuracy 123/128=0.961
Epoch 194/200 | Batch 10/16 | Loss: 0.1087 | accuracy 124/128=0.969
Epoch 195/200 | Batch 0/16 | Loss: 0.1437 | accuracy 122/128=0.953
Epoch 195/200 | Batch 10/16 | Loss: 0.1568 | accuracy 123/128=0.961
Epoch 196/200 | Batch 0/16 | Loss: 0.0746 | accuracy 127/128=0.992
Epoch 196/200 | Batch 10/16 | Loss: 0.1321 | accuracy 124/128=0.969
Epoch 197/200 | Batch 0/16 | Loss: 0.1514 | accuracy 121/128=0.945
Epoch 197/200 | Batch 10/16 | Loss: 0.1016 | accuracy 126/128=0.984
Epoch 198/200 | Batch 0/16 | Loss: 0.1348 | accuracy 123/128=0.961
Epoch 198/200 | Batch 10/16 | Loss: 0.1297 | accuracy 123/128=0.961
Epoch 199/200 | Batch 0/16 | Loss: 0.1765 | accuracy 121/128=0.945
Epoch 199/200 | Batch 10/16 | Loss: 0.1166 | accuracy 122/128=0.953
Epoch 200/200 | Batch 0/16 | Loss: 0.0859 | accuracy 126/128=0.984
Epoch 200/200 | Batch 10/16 | Loss: 0.1667 | accuracy 121/128=0.945
checkpoint saved : ./model//check.epoch200
model saved : ./model//captcha.1digit.2k

Process finished with exit code 0

D:\Python310\python.exe D:/project/PycharmProjects/CNN/test.py
resize_height = 128
resize_width = 128
test_data_path = ./data/test-digit/
characters = 0123456789
digit_num = 1
class_num = 10
test_model_path = ./model/captcha.1digit.2k

test accuracy = 859 / 1000 = 0.859
