Siamese + ResNet for Similarity Computation


  • Basic Introduction
  • Results
    • Lung + ResNet-34
    • Lung + ResNet-50
    • Face + custom network
  • Complete Code

Basic Introduction

This post uses a Siamese network to compute the similarity between lung images; the same approach can also be used for face recognition and similar scenarios.
The feature-extraction backbone is a ResNet, e.g. ResNet-34 or ResNet-50.
The data is organized as shown in the figure below (a quick layout check is sketched after the figure placeholder):

  • lung: contains the training set (training) and the test set (testing); under training there is one sub-folder per image class.
  • model_data: directory where the pretrained ResNet weights are stored.
  • result: stores the test results and the training logs.
  • Train_Siamese_with_Resnet.py: the training script. The main settings to adjust are:
    - MY_DATA: which dataset to train on; set it to the name of a folder under data, e.g. MY_DATA = "lung".
    - Config class: mainly the batch size and the number of epochs.

[Figure: project directory layout]
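
As a quick sanity check of the assumed layout, the short sketch below lists the class folders found under the training and testing splits; the dataset name "lung" is a placeholder for whichever folder you put under data.

import os

MY_DATA = "lung"  # placeholder: any dataset folder under ./data
for split in ("training", "testing"):
    split_dir = os.path.join("data", MY_DATA, split)
    classes = sorted(d for d in os.listdir(split_dir)
                     if os.path.isdir(os.path.join(split_dir, d)))
    print("{}: {} class folders -> {}".format(split, len(classes), classes))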

Results

Lung + ResNet-34

[Figure: training loss curve]

[Figures: sample results 1-3]

Lung + ResNet-50

[Figure: training loss curve]

[Figures: sample results 1-3]

Face + custom network

[Figure: training loss curve]
[Figures: sample results 1-3]

Complete Code

#!/usr/bin/python
# -*- coding: UTF-8 -*-
"""
@author:uncle德鲁
@file:siamesenet.py
@time:2023/07/29
"""
import os
import torchvision
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torch.utils.data import DataLoader, Dataset
import matplotlib.pyplot as plt
import torchvision.utils
import numpy as np
import random
from PIL import Image
import torch
from torch.autograd import Variable
import PIL.ImageOps
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
from torch.hub import load_state_dict_from_url
import sys
import datetime
from torchsummary import summary
torch.autograd.set_detect_anomaly(True)


class Logger(object):
    def __init__(self, filename, stream=sys.stdout):
        self.terminal = stream
        self.log = open(filename, 'a')

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        pass


MY_DATA = "lung_mask"

# current timestamp, used to name the training log file
now = datetime.datetime.now()
formatted_time = now.strftime("%Y-%m-%d_%H-%M")
sys.stdout = Logger("./result/train_loss_{}.log".format(formatted_time), sys.stdout)


def imshow(img, img_name, text=None, title=None):
    npimg = img.numpy()
    plt.axis("off")
    if text:
        plt.text(75, 8, text, style='italic', fontweight='bold',
                 bbox={'facecolor': 'white', 'alpha': 0.8, 'pad': 10})
    if title:
        plt.title(title)

    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.savefig(img_name)
    plt.clf()


def show_plot(iteration, loss, img_name):
    plt.plot(iteration, loss)
    plt.savefig(img_name)
    plt.clf()


class Config:
    my_data = MY_DATA
    training_dir = "./data/{}/training/".format(my_data)
    testing_dir = "./data/{}/testing/".format(my_data)
    train_batch_size = 4
    train_number_epochs = 10


class SiameseNetworkDataset(Dataset):
    def __init__(self, imageFolderDataset, transform=None, should_invert=True):
        self.imageFolderDataset = imageFolderDataset
        self.transform = transform
        self.should_invert = should_invert

    def __getitem__(self, index):
        img0_tuple = random.choice(self.imageFolderDataset.imgs)
        # we need to make sure approx 50% of images are in the same class
        should_get_same_class = random.randint(0, 1)
        if should_get_same_class:
            while True:
                # keep looping till the same class image is found
                img1_tuple = random.choice(self.imageFolderDataset.imgs)
                if img0_tuple[1] == img1_tuple[1]:
                    break
        else:
            while True:
                # keep looping till a different class image is found
                img1_tuple = random.choice(self.imageFolderDataset.imgs)
                if img0_tuple[1] != img1_tuple[1]:
                    break

        img0 = Image.open(img0_tuple[0])
        img1 = Image.open(img1_tuple[0])
        img0 = img0.convert("L")
        img1 = img1.convert("L")

        if self.should_invert:
            img0 = PIL.ImageOps.invert(img0)
            img1 = PIL.ImageOps.invert(img1)

        if self.transform is not None:
            img0 = self.transform(img0)
            img1 = self.transform(img1)

        return img0, img1, torch.from_numpy(
            np.array([int(img1_tuple[1] != img0_tuple[1])], dtype=np.float32))

    def __len__(self):
        return len(self.imageFolderDataset.imgs)


class BasicBlock(nn.Module):
    """
    # 定义 BasicBlock 模块
    # ResNet18/34的残差结构, 用的是2个3x3大小的卷积
    """
    expansion = 1   # 残差结构中, 判断主分支的卷积核个数是否发生变化,不变则为1

    def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):   # downsample 对应虚线残差结构
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=(3, 3), stride=(stride, stride), padding=1, bias=False
                               )
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=(3, 3), stride=(1, 1), padding=1, bias=False
                               )
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:  # projection block: the identity must be downsampled
            identity = self.downsample(x)   # shortcut branch

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        out += identity
        out = self.relu(out)

        return out


class Bottleneck(nn.Module):
    """
    # 定义 Bottleneck 模块
    # ResNet50/101/152的残差结构,用的是1x1+3x3+1x1的卷积
    #   注意:原论文中,在虚线残差结构的主分支上,第一个1x1卷积层的步距是2,第二个3x3卷积层步距是1。
    #  但在pytorch官方实现过程中是第一个1x1卷积层的步距是1,第二个3x3卷积层步距是2,
    #   这么做的好处是能够在top1上提升大概0.5%的准确率。
    #   可参考Resnet v1.5 https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch
    """
    expansion = 4   # 残差结构中第三层卷积核个数是第1/2层卷积核个数的4倍

    def __init__(self, in_channel, out_channel, stride=1,
                 downsample=None, groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()

        width = int(out_channel * (width_per_group / 64.)) * groups

        self.conv1 = nn.Conv2d(
            in_channels=in_channel,
            out_channels=width,
            kernel_size=(1, 1),
            stride=(1, 1),
            bias=False)
        self.bn1 = nn.BatchNorm2d(width)

        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=(3, 3), stride=(stride, stride), bias=False, padding=1
                               )
        self.bn2 = nn.BatchNorm2d(width)

        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel * self.expansion,
                               kernel_size=(1, 1), stride=(1, 1), bias=False)
        self.bn3 = nn.BatchNorm2d(out_channel * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)   # shortcut branch

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += identity
        out = self.relu(out)

        return out


class ResNet(nn.Module):
    """
    # 残差网络结构
    """
    # block = BasicBlock or Bottleneck
    # blocks_num 为残差结构中 conv2_x~conv5_x 中残差块个数, 一个列表

    def __init__(self, block, blocks_num, num_classes=1000, include_top=True, groups=1, width_per_group=64):
        super(ResNet, self).__init__()
        self.include_top = include_top
        self.in_channel = 64
        self.groups = groups
        self.width_per_group = width_per_group

        self.conv1 = nn.Conv2d(1,   # 1 input channel: the dataset converts images to grayscale
                               self.in_channel,
                               kernel_size=(7, 7),
                               stride=(2, 2),
                               padding=3,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    # channel: number of filters in the first conv layer of the residual block
    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        # for ResNet-50/101/152, block.expansion = 4
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(nn.Conv2d(self.in_channel,
                                                 channel *
                                                 block.expansion,
                                                 kernel_size=(1, 1),
                                                 stride=(stride, stride),
                                                 bias=False),
                                       nn.BatchNorm2d(channel * block.expansion))

        layers = []
        layers.append(block(self.in_channel,
                            channel,
                            downsample=downsample,
                            stride=stride,
                            groups=self.groups,
                            width_per_group=self.width_per_group,
                            ))
        self.in_channel = channel * block.expansion

        for _ in range(1, block_num):
            layers.append(block(self.in_channel,
                                channel,
                                groups=self.groups,
                                width_per_group=self.width_per_group,
                                ))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)

        return x


def resnet34(num_classes=1000, include_top=True):
    """
    # resnet34 结构
    # https://download.pytorch.org/models/resnet34-333f7ec4.pth
    """
    return ResNet(BasicBlock, [3, 4, 6, 3],
                  num_classes=num_classes, include_top=include_top)


def resnet50(num_classes=1000, include_top=True):
    """
    # resnet50 结构
    # https://download.pytorch.org/models/resnet50-19c8e357.pth
    """
    return ResNet(Bottleneck, [3, 4, 6, 3],
                  num_classes=num_classes, include_top=include_top)


def resnet101(num_classes=1000, include_top=True):
    """
    # resnet101 结构
    # https://download.pytorch.org/models/resnet101-5d3b4d8f.pth
    """
    return ResNet(Bottleneck, [3, 4, 23, 3],
                  num_classes=num_classes, include_top=include_top)


def resnext50_32x4d(num_classes=1000, include_top=True):
    """
    # resnext50_32x4d 结构
    # https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth
    """
    groups = 32
    width_per_group = 4
    return ResNet(Bottleneck, [3, 4, 6, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)


def resnext101_32x8d(num_classes=1000, include_top=True):
    """
    # resnext101_32x8d 结构
    # https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth
    """
    groups = 32
    width_per_group = 8
    return ResNet(Bottleneck, [3, 4, 23, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)


class SiameseNetwork(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()

        # self.resnet = resnet50(num_classes=num_classes, include_top=True)
        self.resnet = resnet34(num_classes=num_classes, include_top=True)

    def initialize_weights(self):
        for module in self.modules():
            if isinstance(module, nn.Conv2d):
                # Initialize the weights of convolutional layers
                nn.init.xavier_uniform_(module.weight)
                if module.bias is not None:
                    nn.init.zeros_(module.bias)
            elif isinstance(module, nn.BatchNorm2d):
                # Initialize the weights and biases of batch normalization layers
                nn.init.ones_(module.weight)
                nn.init.zeros_(module.bias)
            elif isinstance(module, nn.Linear):
                # Initialize the weights and biases of linear layers
                nn.init.xavier_uniform_(module.weight)
                if module.bias is not None:
                    nn.init.zeros_(module.bias)

    def forward(self, x):
        raise NotImplementedError


class SiameseNetworkQuadret(SiameseNetwork):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def forward(self, x):
        x1, x2, x3, x4 = x
        # the ResNet backbone returns a single embedding tensor per input
        x1 = self.resnet(x1)
        x2 = self.resnet(x2)
        x3 = self.resnet(x3)
        x4 = self.resnet(x4)
        return x1, x2, x3, x4


class SiameseNetworkTriplet(SiameseNetwork):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def forward(self, x):
        x1, x2, x3 = x
        x1 = self.resnet(x1)
        x2 = self.resnet(x2)
        x3 = self.resnet(x3)

        return x1, x2, x3


class SiameseNetworkDouble(SiameseNetwork):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def forward(self, x1, x2):
        x1 = self.resnet(x1)
        x2 = self.resnet(x2)
        return x1, x2


# Loss Function


class ContrastiveLoss(torch.nn.Module):
    """
    Contrastive loss function.
    Based on: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
    """

    def __init__(self, margin=2.0):
        super(ContrastiveLoss, self).__init__()
        self.margin = margin

    def forward(self, output1, output2, label):
        euclidean_distance = F.pairwise_distance(
            output1, output2, keepdim=True)
        loss_contrastive = torch.mean((1 - label) * torch.pow(euclidean_distance, 2) +
                                      label * torch.pow(torch.clamp(self.margin - euclidean_distance, min=0.0), 2))
        return loss_contrastive


def run():
    base_dir = "./result/{}/".format(MY_DATA)
    if not os.path.exists(base_dir):
        os.makedirs(base_dir)
    folder_dataset = dset.ImageFolder(root=Config.training_dir)
    siamese_dataset = SiameseNetworkDataset(imageFolderDataset=folder_dataset,
                                            transform=transforms.Compose([transforms.Resize((100, 100)),
                                                                          transforms.ToTensor()]),
                                            should_invert=False)

    # train
    train_dataloader = DataLoader(siamese_dataset,
                                  shuffle=True,
                                  num_workers=4,
                                  batch_size=Config.train_batch_size)
    net = SiameseNetworkDouble().cuda()
    print(net)
    print("-" * 200)
    criterion = ContrastiveLoss()
    optimizer = optim.Adam(net.parameters(), lr=0.0005)

    counter = []
    loss_history = []
    iteration_number = 0
    for epoch in range(0, Config.train_number_epochs):
        for i, data in enumerate(train_dataloader, 0):
            img0, img1, label = data
            img0, img1, label = img0.cuda(), img1.cuda(), label.cuda()
            optimizer.zero_grad()
            output1, output2 = net(img0, img1)
            loss_contrastive = criterion(output1, output2, label)
            loss_contrastive.backward()
            optimizer.step()
            if i % 20 == 0:
                print("Epoch {}/{}: Current batch loss = {:4f}\n".format(epoch,
                                                                         Config.train_number_epochs,
                                                                         loss_contrastive.item()))
                iteration_number += 20
                counter.append(iteration_number)
                loss_history.append(loss_contrastive.item())

    show_plot(counter, loss_history, img_name="{}/train_loss.jpg".format(base_dir))

    # test
    folder_dataset_test = dset.ImageFolder(root=Config.testing_dir)
    siamese_dataset = SiameseNetworkDataset(imageFolderDataset=folder_dataset_test,
                                            transform=transforms.Compose([transforms.Resize((100, 100)),
                                                                          transforms.ToTensor()]),
                                            should_invert=False)

    test_dataloader = DataLoader(
        siamese_dataset,
        num_workers=4,
        batch_size=1,
        shuffle=True)
    dataiter = iter(test_dataloader)
    x0, _, _ = next(dataiter)

    for i in range(10):
        _, x1, label2 = next(dataiter)
        concatenated = torch.cat((x0, x1), 0)

        output1, output2 = net(Variable(x0).cuda(), Variable(x1).cuda())
        euclidean_distance = F.pairwise_distance(output1, output2)
        imshow(img=torchvision.utils.make_grid(concatenated),
               img_name="{}/img_{}.png".format(base_dir, i + 1),
               text='Dissimilarity: {:.2f}'.format(euclidean_distance.item()))


if __name__ == '__main__':
    # net = resnet34(num_classes=10, include_top=True).cuda()
    # x = torch.rand(1, 1, 224, 224)   # single-channel input, matching conv1
    # x = x.cuda()
    # print(net(x).shape)
    run()
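
For reference, here is a minimal inference sketch (not part of the original script): it assumes a trained SiameseNetworkDouble and two hypothetical image paths, and reuses the same 100x100 grayscale preprocessing applied in run(). A smaller distance means the two images are more similar.

def compare_images(net, path_a, path_b):
    # Hypothetical helper: returns the dissimilarity (Euclidean distance) of two images.
    preprocess = transforms.Compose([transforms.Resize((100, 100)),
                                     transforms.ToTensor()])
    img_a = preprocess(Image.open(path_a).convert("L")).unsqueeze(0).cuda()
    img_b = preprocess(Image.open(path_b).convert("L")).unsqueeze(0).cuda()
    with torch.no_grad():
        out_a, out_b = net(img_a, img_b)
        return F.pairwise_distance(out_a, out_b).item()

# Example usage (hypothetical paths):
# net = SiameseNetworkDouble().cuda().eval()
# print(compare_images(net, "./data/lung/testing/classA/img1.png",
#                           "./data/lung/testing/classA/img2.png"))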
