[Diffusers Library] Part 4: Training a Diffusion Model (Unconditional)


Contents

  • Downloading the data
  • Model configuration
  • Loading the data
  • Creating a UNet2DModel
  • Creating a scheduler
  • Training the model
  • Full code

  This tutorial walks through training a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset, ending up with an unconditional image-generation model, meaning it cannot do text-to-image. In my view, this setup is a good fit for training on data from a vertical domain.

Downloading the data

  The training dataset is here: https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset. It can be downloaded with code.
   The complete code is at the end of this post. Debugging it took some time because of network issues (the official tutorial pushes the model to the Hugging Face Hub by default; I didn't upload it), so if you want to run it, copy the full code at the end. My GPU is an RTX 3050 with 8 GB of VRAM.

from datasets import load_dataset
dataset = load_dataset("huggan/smithsonian_butterflies_subset")

  Once the code has finished running, the dataset's default download path is:

/Users/<username>/.cache/huggingface/datasets

  Open that directory and you will see the downloaded dataset folder.
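  If you would rather cache datasets somewhere else (for example on a larger drive), load_dataset accepts a cache_dir argument; a minimal sketch, where the path is just a placeholder:

from datasets import load_dataset

# cache_dir overrides the default ~/.cache/huggingface/datasets location.
dataset = load_dataset(
    "huggan/smithsonian_butterflies_subset",
    cache_dir="/data/hf_datasets",  # hypothetical path, choose your own
)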

Model configuration

  For convenience, start with a configuration class that holds the training hyperparameters:

from dataclasses import dataclass


@dataclass
class TrainingConfig:
    image_size = 128  # the generated image resolution
    train_batch_size = 16
    eval_batch_size = 16  # how many images to sample during evaluation
    num_epochs = 50
    gradient_accumulation_steps = 1
    learning_rate = 1e-4
    lr_warmup_steps = 500
    save_image_epochs = 10
    save_model_epochs = 30
    mixed_precision = "fp16"  # `no` for float32, `fp16` for automatic mixed precision
    output_dir = "ddpm-butterflies-128"  # the model name locally and on the HF Hub

    push_to_hub = True  # whether to upload the saved model to the HF Hub
    hub_private_repo = False
    overwrite_output_dir = True  # overwrite the old model when re-running the notebook
    seed = 0


config = TrainingConfig()
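  As noted above, this was trained on an RTX 3050 with 8 GB of VRAM. If you run into out-of-memory errors on a similar card, the overrides below are a reasonable starting point (the specific values are my suggestion, not from the original tutorial):

# Hypothetical overrides for smaller GPUs: lower the resolution and batch size,
# and use gradient accumulation to keep the effective batch size at 16.
config.image_size = 64
config.train_batch_size = 8
config.gradient_accumulation_steps = 2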

Loading the data

from datasets import load_dataset

config.dataset_name = "huggan/smithsonian_butterflies_subset"
dataset = load_dataset(config.dataset_name, split="train")

  You can also pull in other images from the Smithsonian Butterflies dataset (by placing them in an ImageFolder directory), but you then need to set the corresponding imagefolder value in the config. Of course, you can also use your own data, as sketched below.
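  A minimal sketch of loading a local folder of your own images with the built-in imagefolder loader (the ./my_butterflies path is a hypothetical example):

from datasets import load_dataset

# "imagefolder" scans a local directory of images and builds a dataset from it.
config.dataset_name = "imagefolder"
dataset = load_dataset(config.dataset_name, data_dir="./my_butterflies", split="train")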

import matplotlib.pyplot as plt

fig, axs = plt.subplots(1, 4, figsize=(16, 4))
for i, image in enumerate(dataset[:4]["image"]):
    axs[i].imshow(image)
    axs[i].set_axis_off()
fig.show()

[Figure: four sample butterfly images from the dataset]
  However, these images all have different sizes, so you need to preprocess them first:

  1. Resize: scale each image to the size specified in the config;
  2. Augment: via random cropping, flipping, and similar methods;
  3. Normalize: rescale the pixel values into [-1, 1].

from torchvision import transforms

preprocess = transforms.Compose(
    [
        transforms.Resize((config.image_size, config.image_size)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)

  Apply the preprocessing on the fly, converting each image to RGB first:

def transform(examples):
    images = [preprocess(image.convert("RGB")) for image in examples["image"]]
    return {"images": images}


dataset.set_transform(transform)

  You can visualize the images again to confirm they have been resized.
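  Since the transform now returns tensors normalized to [-1, 1], undo the normalization before plotting; a minimal sketch (my addition, not in the original):

import matplotlib.pyplot as plt

fig, axs = plt.subplots(1, 4, figsize=(16, 4))
for i, example in enumerate(dataset[:4]["images"]):
    # Undo Normalize([0.5], [0.5]) and move channels last for imshow.
    axs[i].imshow(((example.permute(1, 2, 0) + 1.0) / 2.0).numpy())
    axs[i].set_axis_off()
fig.show()

  Once that looks right, you can wrap the dataset in a DataLoader for training!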

import torch
train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True)

Creating a UNet2DModel

from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=config.image_size,  # the target image resolution
    in_channels=3,  # the number of input channels, 3 for RGB images
    out_channels=3,  # the number of output channels
    layers_per_block=2,  # how many ResNet layers to use per UNet block
    block_out_channels=(128, 128, 256, 256, 512, 512),  # the number of output channels for each UNet block
    down_block_types=(
        "DownBlock2D",  # a regular ResNet downsampling block
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "AttnDownBlock2D",  # a ResNet downsampling block with spatial self-attention
        "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D",  # a regular ResNet upsampling block
        "AttnUpBlock2D",  # a ResNet upsampling block with spatial self-attention
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
    ),
)
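  As an optional sanity check, printing the parameter count gives a feel for the size of this UNet:

# Count the model's parameters (a rough measure of its size).
num_params = sum(p.numel() for p in model.parameters())
print(f"UNet parameters: {num_params / 1e6:.1f}M")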

  It is also worth quickly checking that the sample image's shape matches the model's output shape:

sample_image = dataset[0]["images"].unsqueeze(0)

print("Input shape:", sample_image.shape)
print("Output shape:", model(sample_image, timestep=0).sample.shape)

  You still need a scheduler to add noise to the image.

Creating a scheduler

  The scheduler behaves differently depending on whether you are using the model for training or for inference.
  During inference, the scheduler generates images from noise.
  During training, the scheduler adds noise to images.
  Here is the effect of the DDPMScheduler adding noise to an image:

import torch
from PIL import Image
from diffusers import DDPMScheduler

noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
noise = torch.randn(sample_image.shape)
timesteps = torch.LongTensor([50])
noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps)

Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0])

[Figure: the sample butterfly image with noise added at timestep 50]
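  Under the hood, add_noise implements the closed-form DDPM forward process x_t = sqrt(ᾱ_t) · x_0 + sqrt(1 − ᾱ_t) · ε. A minimal sketch that reproduces it from the scheduler's alphas_cumprod table:

# Recompute the noisy image by hand from the forward-process formula.
alpha_bar = noise_scheduler.alphas_cumprod[timesteps]
manual_noisy = alpha_bar.sqrt() * sample_image + (1.0 - alpha_bar).sqrt() * noise
print(torch.allclose(manual_noisy, noisy_image))  # expected: True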
  The model's training objective is to predict the noise that was laid over the image. The loss for this training step can be computed as follows:

import torch.nn.functional as F

noise_pred = model(noisy_image, timesteps).sample
loss = F.mse_loss(noise_pred, noise)

Training the model

  At this point you have most of what you need to start training the model; all that is left is to put the pieces together.
First, add an optimizer and a learning-rate scheduler:

from diffusers.optimization import get_cosine_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate)
lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer=optimizer,
    num_warmup_steps=config.lr_warmup_steps,
    num_training_steps=(len(train_dataloader) * config.num_epochs),
)

  You also need a way to evaluate the model. You can use the DDPMPipeline to generate a batch of sample images and save them as a grid:

from diffusers import DDPMPipeline
import math
import os


def make_grid(images, rows, cols):
    w, h = images[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, image in enumerate(images):
        grid.paste(image, box=(i % cols * w, i // cols * h))
    return grid


def evaluate(config, epoch, pipeline):
    # Sample some images from random noise (this is the backward diffusion process).
    # The default pipeline output type is `List[PIL.Image]`
    images = pipeline(
        batch_size=config.eval_batch_size,
        generator=torch.manual_seed(config.seed),
    ).images

    # Make a grid out of the images
    image_grid = make_grid(images, rows=4, cols=4)

    # Save the images
    test_dir = os.path.join(config.output_dir, "samples")
    os.makedirs(test_dir, exist_ok=True)
    image_grid.save(f"{test_dir}/{epoch:04d}.png")

  Now let's put together the full training loop:

from accelerate import Accelerator
from huggingface_hub import HfFolder, Repository, whoami
from tqdm.auto import tqdm
from pathlib import Path
import os


def get_full_repo_name(model_id: str, organization: str = None, token: str = None):
    if token is None:
        token = HfFolder.get_token()
    if organization is None:
        username = whoami(token)["name"]
        return f"{username}/{model_id}"
    else:
        return f"{organization}/{model_id}"


def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler):
    # Initialize accelerator and tensorboard logging
    accelerator = Accelerator(
        mixed_precision=config.mixed_precision,
        gradient_accumulation_steps=config.gradient_accumulation_steps,
        log_with="tensorboard",
        project_dir=os.path.join(config.output_dir, "logs"),  # called logging_dir in older accelerate versions
    )
    if accelerator.is_main_process:
        if config.push_to_hub:
            repo_name = get_full_repo_name(Path(config.output_dir).name)
            repo = Repository(config.output_dir, clone_from=repo_name)
        elif config.output_dir is not None:
            os.makedirs(config.output_dir, exist_ok=True)
        accelerator.init_trackers("train_example")

    # Prepare everything
    # There is no specific order to remember, you just need to unpack the
    # objects in the same order you gave them to the prepare method.
    model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
        model, optimizer, train_dataloader, lr_scheduler
    )

    global_step = 0

    # Now you train the model
    for epoch in range(config.num_epochs):
        progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process)
        progress_bar.set_description(f"Epoch {epoch}")

        for step, batch in enumerate(train_dataloader):
            clean_images = batch["images"]
            # Sample noise to add to the images
            noise = torch.randn(clean_images.shape).to(clean_images.device)
            bs = clean_images.shape[0]

            # Sample a random timestep for each image
            timesteps = torch.randint(
                0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device
            ).long()

            # Add noise to the clean images according to the noise magnitude at each timestep
            # (this is the forward diffusion process)
            noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

            with accelerator.accumulate(model):
                # Predict the noise residual
                noise_pred = model(noisy_images, timesteps, return_dict=False)[0]
                loss = F.mse_loss(noise_pred, noise)
                accelerator.backward(loss)

                accelerator.clip_grad_norm_(model.parameters(), 1.0)
                optimizer.step()
                lr_scheduler.step()
                optimizer.zero_grad()

            progress_bar.update(1)
            logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step}
            progress_bar.set_postfix(**logs)
            accelerator.log(logs, step=global_step)
            global_step += 1

        # After each epoch you optionally sample some demo images with evaluate() and save the model
        if accelerator.is_main_process:
            pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler)

            if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1:
                evaluate(config, epoch, pipeline)

            if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1:
                if config.push_to_hub:
                    repo.push_to_hub(commit_message=f"Epoch {epoch}", blocking=True)
                else:
                    pipeline.save_pretrained(config.output_dir)

  Now you can finally launch training with Accelerate's notebook_launcher function. Pass it the training loop, all the training arguments, and the number of processes to use (you can change this to the number of GPUs available):

from accelerate import notebook_launcher
args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler)
notebook_launcher(train_loop, args, num_processes=1)
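  If you are running this as a plain Python script rather than in a notebook, notebook_launcher still works, but two alternatives (sketched here as comments) are calling the loop directly or using the accelerate CLI:

# Option 1: call the training loop directly in a single process.
# train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler)

# Option 2: save everything as train.py and launch it from a shell:
#   accelerate launch train.py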

  After training completes, you can take a look at the final images the model generated:

import glob
sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png"))
Image.open(sample_images[-1])
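  Beyond the saved sample grids, you can also reload the trained pipeline and generate fresh images; a minimal sketch, assuming the model was saved locally with save_pretrained rather than pushed to the Hub:

from diffusers import DDPMPipeline

# Load the weights written by pipeline.save_pretrained(config.output_dir).
pipeline = DDPMPipeline.from_pretrained(config.output_dir)
images = pipeline(batch_size=4).images  # a list of PIL images
images[0].save("generated_butterfly.png")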

Full code

# -*- coding:utf-8 _*-
# Author : Robin Chen
# Time : 2024/3/27 20:06
# File : train_diffusion.py
# Purpose: train an unconditional diffusion model

from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel
from accelerate import Accelerator
from huggingface_hub import HfFolder, Repository, whoami
from tqdm.auto import tqdm
from pathlib import Path
import os
from accelerate import notebook_launcher
from diffusers.optimization import get_cosine_schedule_with_warmup
from PIL import Image
import torch.nn.functional as F
import torch
from torchvision import transforms
from dataclasses import dataclass
from datasets import load_dataset

# from huggingface_hub import notebook_login

os.environ['HTTP_PROXY'] = 'http://127.0.0.1:7890'
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:7890'
# notebook_login()



@dataclass
class TrainingConfig:
    image_size = 128  # the generated image resolution
    train_batch_size = 16
    eval_batch_size = 16  # how many images to sample during evaluation
    num_epochs = 50
    gradient_accumulation_steps = 1
    learning_rate = 1e-4
    lr_warmup_steps = 500
    save_image_epochs = 10
    save_model_epochs = 30
    mixed_precision = "fp16"  # `no` for float32, `fp16` for automatic mixed precision
    output_dir = "ddpm-butterflies-128"  # the model name locally and on the HF Hub

    push_to_hub = False  # whether to upload the saved model to the HF Hub (set to True to push)
    hub_private_repo = False
    overwrite_output_dir = True  # overwrite the old model when re-running the notebook
    seed = 0



def transform(examples):
    images = [preprocess(image.convert("RGB")) for image in examples["image"]]
    return {"images": images}


def make_grid(images, rows, cols):
    w, h = images[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, image in enumerate(images):
        grid.paste(image, box=(i % cols * w, i // cols * h))
    return grid


def evaluate(config, epoch, pipeline):
    # Sample some images from random noise (this is the backward diffusion process).
    # The default pipeline output type is `List[PIL.Image]`
    images = pipeline(
        batch_size=config.eval_batch_size,
        generator=torch.manual_seed(config.seed),
    ).images

    # Make a grid out of the images
    image_grid = make_grid(images, rows=4, cols=4)

    # Save the images
    test_dir = os.path.join(config.output_dir, "samples")
    os.makedirs(test_dir, exist_ok=True)
    image_grid.save(f"{test_dir}/{epoch:04d}.png")


def get_full_repo_name(model_id: str, organization: str = None, token: str = None):
    if token is None:
        token = HfFolder.get_token()
    if organization is None:
        username = whoami(token)["name"]
        return f"{username}/{model_id}"
    else:
        return f"{organization}/{model_id}"


config = TrainingConfig()


config.dataset_name = "huggan/smithsonian_butterflies_subset"

dataset = load_dataset(config.dataset_name, split="train")


preprocess = transforms.Compose(
    [
        transforms.Resize((config.image_size, config.image_size)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)


dataset.set_transform(transform)

train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True)


model = UNet2DModel(
    sample_size=config.image_size,  # the target image resolution
    in_channels=3,  # the number of input channels, 3 for RGB images
    out_channels=3,  # the number of output channels
    layers_per_block=2,  # how many ResNet layers to use per UNet block
    block_out_channels=(128, 128, 256, 256, 512, 512),  # the number of output channels for each UNet block
    down_block_types=(
        "DownBlock2D",  # a regular ResNet downsampling block
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "AttnDownBlock2D",  # a ResNet downsampling block with spatial self-attention
        "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D",  # a regular ResNet upsampling block
        "AttnUpBlock2D",  # a ResNet upsampling block with spatial self-attention
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
    ),
)

sample_image = dataset[0]["images"].unsqueeze(0)

print("Input shape:", sample_image.shape)
print("Output shape:", model(sample_image, timestep=0).sample.shape)

noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
noise = torch.randn(sample_image.shape)
timesteps = torch.LongTensor([50])
noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps)

Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0])



noise_pred = model(noisy_image, timesteps).sample
loss = F.mse_loss(noise_pred, noise)


optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate)
lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer=optimizer,
    num_warmup_steps=config.lr_warmup_steps,
    num_training_steps=(len(train_dataloader) * config.num_epochs),
)




def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler):
    # Initialize accelerator and tensorboard logging
    accelerator = Accelerator(
        mixed_precision=config.mixed_precision,
        gradient_accumulation_steps=config.gradient_accumulation_steps,
        log_with="tensorboard",
        project_dir=os.path.join(config.output_dir, "logs"),  # called logging_dir in older accelerate versions
    )
    if accelerator.is_main_process:
        if config.push_to_hub:
            repo_name = get_full_repo_name(Path(config.output_dir).name)
            repo = Repository(config.output_dir, clone_from=repo_name)
        elif config.output_dir is not None:
            os.makedirs(config.output_dir, exist_ok=True)
        accelerator.init_trackers("train_example")

    # Prepare everything
    # There is no specific order to remember, you just need to unpack the
    # objects in the same order you gave them to the prepare method.
    model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
        model, optimizer, train_dataloader, lr_scheduler
    )

    global_step = 0

    # Now you train the model
    for epoch in range(config.num_epochs):
        progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process)
        progress_bar.set_description(f"Epoch {epoch}")

        for step, batch in enumerate(train_dataloader):
            clean_images = batch["images"]
            # Sample noise to add to the images
            noise = torch.randn(clean_images.shape).to(clean_images.device)
            bs = clean_images.shape[0]

            # Sample a random timestep for each image
            timesteps = torch.randint(
                0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device
            ).long()

            # Add noise to the clean images according to the noise magnitude at each timestep
            # (this is the forward diffusion process)
            noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

            with accelerator.accumulate(model):
                # Predict the noise residual
                noise_pred = model(noisy_images, timesteps, return_dict=False)[0]
                loss = F.mse_loss(noise_pred, noise)
                accelerator.backward(loss)

                accelerator.clip_grad_norm_(model.parameters(), 1.0)
                optimizer.step()
                lr_scheduler.step()
                optimizer.zero_grad()

            progress_bar.update(1)
            logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step}
            progress_bar.set_postfix(**logs)
            accelerator.log(logs, step=global_step)
            global_step += 1

        # After each epoch you optionally sample some demo images with evaluate() and save the model
        if accelerator.is_main_process:
            pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler)

            if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1:
                evaluate(config, epoch, pipeline)

            if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1:
                if config.push_to_hub:
                    repo.push_to_hub(commit_message=f"Epoch {epoch}", blocking=True)
                else:
                    pipeline.save_pretrained(config.output_dir)


if __name__ == "__main__":
    args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler)
    notebook_launcher(train_loop, args, num_processes=1)

[Figure: a grid of butterfly images generated by the trained model]
