[Deep Learning] AIGC, ControlNet: Paper, Principles, Training, Deployment, Hands-on Tutorial (Part 3)


Table of Contents

  • Source and model downloads
  • Python environment
  • Trying out ControlNet
  • Training
    • Data preparation
    • Choose a Stable Diffusion model
    • Start training

Part 1: https://qq742971636.blog.csdn.net/article/details/131531168

Source and model downloads

ControlNet 1.1 is still under construction, so this article uses the source at https://github.com/lllyasviel/ControlNet/tree/main.

You also need to download the model files: https://huggingface.co/lllyasviel/ControlNet

They are published on Hugging Face. To download a Hugging Face model repository, use:

$ git lfs install
$ git clone https://huggingface.co/lllyasviel/ControlNet

Full log:

$ git lfs install
Git LFS initialized.

kevin@DESKTOP-J33EKGT MINGW64 /f
$ git clone https://huggingface.co/lllyasviel/ControlNet
Cloning into 'ControlNet'...
remote: Enumerating objects: 52, done.
remote: Counting objects: 100% (52/52), done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 52 (delta 16), reused 52 (delta 16), pack-reused 0
Unpacking objects: 100% (52/52), 7.06 KiB | 141.00 KiB/s, done.

Filtering content: 100% (16/16), 11.80 GiB | 6.47 MiB/s, done.
Encountered 8 file(s) that may not have been copied correctly on Windows:
        models/control_sd15_seg.pth
        models/control_sd15_hed.pth
        models/control_sd15_normal.pth
        models/control_sd15_canny.pth
        models/control_sd15_scribble.pth
        models/control_sd15_mlsd.pth
        models/control_sd15_depth.pth
        models/control_sd15_openpose.pth

See: `git lfs help smudge` for more details.

Git on Windows cannot handle files larger than 4 GB; this is a known bug. So either download these eight files manually from the model page, or clone with Git on Linux.
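
If cloning through Git LFS is not practical (for example because of the Windows 4 GB issue above), here is a minimal sketch using the huggingface_hub API instead; it assumes a reasonably recent huggingface_hub (pip install huggingface_hub) and that the file names in the repo are unchanged:

# Sketch: fetch the eight ControlNet 1.0 checkpoints without git-lfs.
from huggingface_hub import hf_hub_download

control_types = ["seg", "hed", "normal", "canny", "scribble", "mlsd", "depth", "openpose"]

for name in control_types:
    path = hf_hub_download(
        repo_id="lllyasviel/ControlNet",
        filename=f"models/control_sd15_{name}.pth",
        local_dir="./ControlNet",  # keeps the models/ sub-folder layout of the repo
    )
    print("downloaded:", path)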

The final project layout looks like this:
[screenshot: final project directory layout]

Python environment

Some packages would only install completely via the Aliyun mirror. I first installed diffusers here:

conda create -n py38_diffusers python=3.8 -y
conda activate py38_diffusers
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge -y
cd diffusers-main/
pip install -e .
cd examples/
cd controlnet/
pip install -r requirements.txt
accelerate config default

cd /ssd/xiedong/workplace/ControlNet

pip install tb-nightly -i https://mirrors.aliyun.com/pypi/simple  # only installs completely via the Aliyun mirror
pip install -r req.txt  # the Tsinghua mirror is a bit faster here

req.txt is as follows:

gradio==3.16.2
albumentations==1.3.0
opencv-python
opencv-contrib-python==4.3.0.36
imageio==2.9.0
imageio-ffmpeg==0.4.2
pytorch-lightning==1.5.0
omegaconf==2.1.1
test-tube>=0.7.5
streamlit==1.12.1
einops==0.3.0
transformers==4.19.2
webdataset==0.2.5
kornia==0.6
open_clip_torch==2.0.2
invisible-watermark>=0.1.5
streamlit-drawable-canvas==0.8.0
torchmetrics==0.6.0
timm==0.6.12
addict==2.4.0
yapf==0.32.0
prettytable==3.6.0
safetensors==0.2.7
basicsr==1.4.2

Optionally, you can also install:

pip install xformers
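
Before going further, a quick sanity check of the environment (just a sketch) to confirm that the CUDA build of PyTorch is the one that got installed:

# Quick check: PyTorch version, CUDA availability, and the visible GPU.
import torch

print("torch:", torch.__version__)            # expect 1.12.1
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))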

Trying out ControlNet

Run:

python gradio_scribble2image_interactive.py

Because of network issues, some of the scripts may run into problems; it worked fine on my machine:

[screenshot: console output after launching the demo]

Visit http://127.0.0.1:7860/ and you get:

[screenshot: the Gradio scribble-to-image UI]
Generation in progress:
[screenshot: generation running in the console]
GPU memory usage: 8847 MiB
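
By default the demo only listens on the local machine at port 7860. If you want to reach it from another machine, here is a sketch of the kind of change to make to the launch call at the end of gradio_scribble2image_interactive.py (the exact line in your copy of the script may already pass server_name):

# Sketch: expose the Gradio demo on the LAN instead of localhost only.
# `block` is the gr.Blocks object that the script builds above this line.
block.launch(server_name="0.0.0.0", server_port=7860)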

Training

Data preparation

My data is prepared for training a scribble model:

fake_image2scribble.py

from share import *
import config
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0'
import cv2
import einops
import gradio as gr
import numpy as np
import torch
import random

from pytorch_lightning import seed_everything
from annotator.util import resize_image, HWC3
from annotator.hed import HEDdetector, nms
from cldm.model import create_model, load_state_dict
from cldm.ddim_hacked import DDIMSampler

apply_hed = HEDdetector()


def image2hed(input_image):
    # HED soft edges -> NMS -> Gaussian blur -> binarization: produces a "fake scribble" map.
    input_image = HWC3(input_image)
    detected_map = apply_hed(resize_image(input_image, 512))
    detected_map = HWC3(detected_map)
    img = resize_image(input_image, 512)
    H, W, C = img.shape

    detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
    detected_map = nms(detected_map, 127, 3.0)
    detected_map = cv2.GaussianBlur(detected_map, (0, 0), 3.0)
    # Binarize the softened edge map, then invert it so strokes are dark on a white background.
    detected_map[detected_map > 4] = 255
    detected_map[detected_map < 255] = 0
    hed_result = 255 - detected_map
    return hed_result


if __name__ == "__main__":
    target = r'/ssd/xiedong/datasets/back_img_nohaveback'
    save_img = r'/ssd/xiedong/datasets/back_img_nohaveback_scribble'
    if not os.path.exists(save_img):
        os.makedirs(save_img)
    for i in os.listdir(target):
        img = cv2.imread(os.path.join(target, i))
        if img is None:
            continue  # skip files that OpenCV cannot read
        hed_result = image2hed(img)
        cv2.imwrite(os.path.join(save_img, i), hed_result)

    print("done")

The schematic from the official tutorial is shown below. Here I use the scribble maps as the source images, and I prepared the prompts myself.

[figure: schematic from the official training tutorial]

Official training tutorial:
https://github.com/lllyasviel/ControlNet/blob/main/docs/train.md
Official fill50k example dataset:
https://huggingface.co/datasets/fusing/fill50k

Either way, the end result is a prompt.json like this:

[screenshot: sample prompt.json entries]
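
For reference, here is a small sketch of how such a prompt.json can be assembled. The directory names reuse the paths from fake_image2scribble.py above, and the single prompt string is a hypothetical placeholder; the loader below expects one JSON array of {source, target, prompt} records:

# Sketch: build prompt.json from the scribble (source) and original (target) folders.
import json
import os

source_dir = "/ssd/xiedong/datasets/back_img_nohaveback_scribble"  # scribble maps
target_dir = "/ssd/xiedong/datasets/back_img_nohaveback"           # original images
prompt_text = "a photo on a clean background"                      # placeholder prompt

records = []
for name in sorted(os.listdir(target_dir)):
    source_path = os.path.join(source_dir, name)
    target_path = os.path.join(target_dir, name)
    if not os.path.exists(source_path):
        continue  # skip images whose scribble map was not generated
    records.append({"source": source_path, "target": target_path, "prompt": prompt_text})

with open("./prompt.json", "w") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)

print("wrote", len(records), "records")
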
Then write a dataset loader:

import json
import cv2
import numpy as np

from torch.utils.data import Dataset


class MyDataset(Dataset):
    def __init__(self):
        self.data = []
        # prompt.json is a single JSON array of {source, target, prompt} records.
        with open('./prompt.json', 'r') as f:
            self.data = json.load(f)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]

        source_filename = item['source']
        target_filename = item['target']
        prompt = item['prompt']

        source = cv2.imread(source_filename)
        target = cv2.imread(target_filename)

        # Do not forget that OpenCV read images in BGR order.
        source = cv2.cvtColor(source, cv2.COLOR_BGR2RGB)
        target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB)

        # Normalize source images to [0, 1].
        source = source.astype(np.float32) / 255.0

        # Normalize target images to [-1, 1].
        target = (target.astype(np.float32) / 127.5) - 1.0

        return dict(jpg=target, txt=prompt, hint=source)


if __name__ == '__main__':
    # print the first sample
    dataset = MyDataset()
    print(dataset[0])
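
One caveat of my own, not from the official tutorial: the fill50k images are all 512×512, while the pairs produced by fake_image2scribble.py keep whatever size resize_image returned, so shapes can vary between samples. If your images are not already a uniform size, here is a sketch of a helper that enforces one (512×512 is an assumption; pick the resolution you train at):

# Sketch: read a (scribble, photo) pair and force both to one square resolution.
import cv2
import numpy as np


def load_pair(source_path, target_path, size=512):
    source = cv2.cvtColor(cv2.imread(source_path), cv2.COLOR_BGR2RGB)
    target = cv2.cvtColor(cv2.imread(target_path), cv2.COLOR_BGR2RGB)
    source = cv2.resize(source, (size, size), interpolation=cv2.INTER_LINEAR)
    target = cv2.resize(target, (size, size), interpolation=cv2.INTER_LINEAR)
    source = source.astype(np.float32) / 255.0          # hint in [0, 1]
    target = target.astype(np.float32) / 127.5 - 1.0    # target in [-1, 1]
    return source, target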

Choose a Stable Diffusion model

Then you need to decide which Stable Diffusion Model you want to control. In this example, we will just use standard SD1.5. You can download it from the official page of Stability. You want the file “v1-5-pruned.ckpt”.

(Or “v2-1_512-ema-pruned.ckpt” if you are using SD2.)

Then you need to attach a ControlNet to the SD model. The architecture is:

[figure: how the ControlNet attaches to the Stable Diffusion model]
Note that all weights inside the ControlNet are also copied from SD, so no layer is trained from scratch, and you are still fine-tuning the entire model.

A simple script is provided to make this easy. If your SD file is named "./models/v1-5-pruned.ckpt" and you want the script to save the processed model (SD + ControlNet) to "./models/control_sd15_ini.ckpt", just run:

[Because of network conditions in mainland China, this may need to be run several times; I think you already know the quickest fix.]
[The file being fetched is ./.cache/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff/pytorch_model.bin]
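
One workaround, just a sketch: warm up the Hugging Face cache once (this assumes the transformers package from req.txt and a working connection or proxy), so that tool_add_control.py and the training script find the CLIP text encoder locally:

# Sketch: pre-download the CLIP text encoder used by FrozenCLIPEmbedder,
# so later runs read it from the local cache instead of the network.
from transformers import CLIPTokenizer, CLIPTextModel

CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
print("CLIP text encoder cached under ~/.cache/huggingface/")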

python tool_add_control.py ./models/v1-5-pruned.ckpt ./models/control_sd15_ini.ckpt

Or if you are using SD2:

python tool_add_control_sd21.py ./models/v2-1_512-ema-pruned.ckpt ./models/control_sd21_ini.ckpt

This is the correct output from my machine:

(py38_diffusers) gpu16: /ssd/xiedong/workplace/ControlNet $ python tool_add_control.py ./models/v1-5-pruned.ckpt ./models/control_sd15_ini.ckpt
logging improved.
No module 'xformers'. Proceeding without it.
ControlLDM: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loaded model config from [./models/cldm_v15.yaml]
These weights are newly added: logvar
These weights are newly added: control_model.zero_convs.0.0.weight
These weights are newly added: control_model.zero_convs.0.0.bias
These weights are newly added: control_model.zero_convs.1.0.weight
These weights are newly added: control_model.zero_convs.1.0.bias
These weights are newly added: control_model.zero_convs.2.0.weight
These weights are newly added: control_model.zero_convs.2.0.bias
These weights are newly added: control_model.zero_convs.3.0.weight
These weights are newly added: control_model.zero_convs.3.0.bias
These weights are newly added: control_model.zero_convs.4.0.weight
These weights are newly added: control_model.zero_convs.4.0.bias
These weights are newly added: control_model.zero_convs.5.0.weight
These weights are newly added: control_model.zero_convs.5.0.bias
These weights are newly added: control_model.zero_convs.6.0.weight
These weights are newly added: control_model.zero_convs.6.0.bias
These weights are newly added: control_model.zero_convs.7.0.weight
These weights are newly added: control_model.zero_convs.7.0.bias
These weights are newly added: control_model.zero_convs.8.0.weight
These weights are newly added: control_model.zero_convs.8.0.bias
These weights are newly added: control_model.zero_convs.9.0.weight
These weights are newly added: control_model.zero_convs.9.0.bias
These weights are newly added: control_model.zero_convs.10.0.weight
These weights are newly added: control_model.zero_convs.10.0.bias
These weights are newly added: control_model.zero_convs.11.0.weight
These weights are newly added: control_model.zero_convs.11.0.bias
These weights are newly added: control_model.input_hint_block.0.weight
These weights are newly added: control_model.input_hint_block.0.bias
These weights are newly added: control_model.input_hint_block.2.weight
These weights are newly added: control_model.input_hint_block.2.bias
These weights are newly added: control_model.input_hint_block.4.weight
These weights are newly added: control_model.input_hint_block.4.bias
These weights are newly added: control_model.input_hint_block.6.weight
These weights are newly added: control_model.input_hint_block.6.bias
These weights are newly added: control_model.input_hint_block.8.weight
These weights are newly added: control_model.input_hint_block.8.bias
These weights are newly added: control_model.input_hint_block.10.weight
These weights are newly added: control_model.input_hint_block.10.bias
These weights are newly added: control_model.input_hint_block.12.weight
These weights are newly added: control_model.input_hint_block.12.bias
These weights are newly added: control_model.input_hint_block.14.weight
These weights are newly added: control_model.input_hint_block.14.bias
These weights are newly added: control_model.middle_block_out.0.weight
These weights are newly added: control_model.middle_block_out.0.bias
Done.

Start training

The code is simple; nearly all the hyperparameters live in ./models/cldm_v15.yaml.

import pytorch_lightning as pl
from torch.utils.data import DataLoader
from scribble_datasets_en import MyDataset
from cldm.logger import ImageLogger
from cldm.model import create_model, load_state_dict


# Configs
resume_path = './models/control_sd15_ini.ckpt'
batch_size = 4
logger_freq = 300
learning_rate = 1e-5
sd_locked = True
only_mid_control = False


# First use cpu to load models. Pytorch Lightning will automatically move it to GPUs.
model = create_model('./models/cldm_v15.yaml').cpu()
model.load_state_dict(load_state_dict(resume_path, location='cpu'))
model.learning_rate = learning_rate
model.sd_locked = sd_locked
model.only_mid_control = only_mid_control


# Misc
dataset = MyDataset()
dataloader = DataLoader(dataset, num_workers=1, batch_size=batch_size, shuffle=True)
logger = ImageLogger(batch_frequency=logger_freq)
trainer = pl.Trainer(gpus=1, precision=32, callbacks=[logger])


# Train!
trainer.fit(model, dataloader)

In addition, the two flags follow the official recommendation:
sd_locked = True
only_mid_control = False
With sd_locked = True the original SD decoder weights stay frozen and only the ControlNet branch is optimized; with only_mid_control = False the control signal feeds into all decoder blocks rather than only the middle block. The official docs illustrate both options:
[figure: effect of sd_locked]
[figure: effect of only_mid_control]
Training starts:

ControlNet$ python train_scribble_en.py
No module 'xformers'. Proceeding without it.
ControlLDM: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.embeddings.class_embedding', 'vision_model.embeddings.patch_embedding.weight', 'vision_model.embeddings.position_embedding.weight', 'vision_model.pre_layrnorm.weight', 'vision_model.pre_layrnorm.bias', 'vision_model.post_layernorm.weight', 'vision_model.post_layernorm.bias', 'visual_projection.weight', 'text_projection.weight', 'logit_scale', ... (hundreds of vision_model.encoder.layers.*.self_attn / mlp / layer_norm entries omitted) ...]
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Loaded model config from [./models/cldm_v15.yaml]
Loaded state_dict from [./models/control_sd15_ini.ckpt]
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:118: UserWarning: You defined a `validation_step` but have no `val_dataloader`. Skipping val loop.
  rank_zero_warn("You defined a `validation_step` but have no `val_dataloader`. Skipping val loop.")
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:280: LightningDeprecationWarning:`LightningModule.on_train_batch_start` hook signature has changed in v1.5. The `dataloader_idx` argument will be removed in v1.7.
  rank_zero_deprecation(
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:287: LightningDeprecationWarning:`Callback.on_train_batch_end` hook signature has changed in v1.5. The `dataloader_idx` argument will be removed in v1.7.
  rank_zero_deprecation(
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [1]

  | Name              | Type               | Params
---------------------------------------------------------
0 | model             | DiffusionWrapper   | 859 M
1 | first_stage_model | AutoencoderKL      | 83.7 M
2 | cond_stage_model  | FrozenCLIPEmbedder | 123 M
3 | control_model     | ControlNet         | 361 M
---------------------------------------------------------
1.2 B     Trainable params
206 M     Non-trainable params
1.4 B     Total params
5,710.058 Total estimated model params size (MB)
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:110: UserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 56 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
Epoch 0:   0%|          | 0/1056 [00:00<?, ?it/s]
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/utilities/data.py:56: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 4. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
  warning_cache.warn(
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|██████████| 50/50 [00:28<00:00,  1.78it/s]
Epoch 0:  21%|██        | 224/1056 [04:53<18:11,  1.31s/it, loss=0.0611, v_num=2, train/loss_simple_step=0.0623, train/loss_vlb_step=0.000236, train/loss_step=0.0623]
Epoch 0:  28%|██▊       | 300/1056 [06:23<16:07,  1.28s/it, loss=0.0611, v_num=2, train/loss_simple_step=0.020, train/loss_vlb_step=7.54e-5, train/loss_step=0.020]
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|██████████| 50/50 [00:28<00:00,  1.78it/s]
Epoch 0:  57%|█████▋    | 600/1056 [12:40<09:38,  1.27s/it, loss=0.0745, v_num=2, train/loss_simple_step=0.0428, train/loss_vlb_step=0.000152]
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|██████████| 50/50 [00:28<00:00,  1.78it/s]
Epoch 0: 100%|██████████| 1055/1056 [22:11<00:01,  1.26s/it, loss=0.0567, v_num=2, train/loss_simple_step=0.0556, train/loss_vlb_step=0.000276]
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/utilities/data.py:56: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 1. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
  warning_cache.warn(
Epoch 1:   0%|          | 0/1056 [00:00<?, ?it/s, loss=0.0593, v_num=2, train/loss_simple_step=0.126, train/loss_vlb_step=0.000617, train/loss_step=0.126]
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|██████████| 50/50 [00:27<00:00,  1.80it/s]
Epoch 1:  28%|██▊       | 300/1056 [06:06<15:23,  1.22s/it, loss=0.0465, v_num=2, train/loss_simple_step=0.0711, train/loss_vlb_step=0.000249]
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|██████████| 50/50 [00:28<00:00,  1.78it/s]
Epoch 1:  57%|█████▋    | 600/1056 [12:12<09:16,  1.22s/it, loss=0.0632, v_num=2, train/loss_simple_step=0.0698, train/loss_vlb_step=0.000314]
Epoch 1:  61%|██████    | 649/1056 [13:37<08:32,  1.26s/it, loss=0.0602, v_num=2, train/loss_simple_step=0.0739, train/loss_vlb_step=0.00045]

The training files are saved under the lightning_logs directory.
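
To try a trained checkpoint in the Gradio demo afterwards, here is a sketch of the change (the version/epoch numbers in the path are placeholders; it assumes ControlNet's load_state_dict helper unwraps Lightning's 'state_dict' key, which it appears to do in cldm/model.py):

# Sketch: in gradio_scribble2image.py, load the Lightning checkpoint instead of
# ./models/control_sd15_scribble.pth.
from cldm.model import create_model, load_state_dict

model = create_model('./models/cldm_v15.yaml').cpu()
ckpt_path = './lightning_logs/version_2/checkpoints/epoch=1-step=2111.ckpt'  # placeholder path
model.load_state_dict(load_state_dict(ckpt_path, location='cuda'))
model = model.cuda()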
