YOLOv5 instance segmentation: converting JSON annotations and training on your own dataset


Contents

I. Analysis of the coco128-seg dataset

1. The config file coco128-seg.yaml

2. The coco128-seg dataset

II. The JSON file produced by annotating with anylabeling

III. Converting the JSON files to the coco128-seg format

IV. Instance segmentation training

1. Modify the data config file coco128-seg.yaml

2. Training


I. Analysis of the coco128-seg dataset

The blog post linked in the references below has a netdisk link for downloading the dataset.

1. The config file coco128-seg.yaml

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# COCO128-seg dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics
# Example usage: python train.py --data coco128.yaml
# parent
# ├── yolov5
# └── datasets
#     └── coco128-seg  ← downloads here (7 MB)


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco128-seg  # dataset root dir
train: images/train2017  # train images (relative to 'path') 128 images
val: images/train2017  # val images (relative to 'path') 128 images
test:  # test images (optional)

# Classes
names:
  0: person
  1: bicycle
  2: car
....
  78: hair drier
  79: toothbrush

# Download script/URL (optional)
download: https://ultralytics.com/assets/coco128-seg.zip

This comment line is the important part:

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]

I choose option 2, i.e. providing a txt file that lists the image paths.
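For example, the list file that the conversion script below writes (yolov5_train.txt) simply contains one absolute image path per line; illustrative content (the second file name is made up):

/home/jason/work/01-img/datasets-seg/VOCdevkit/images/train/sheep2.png
/home/jason/work/01-img/datasets-seg/VOCdevkit/images/train/sheep5.png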

2. The coco128-seg dataset

The dataset structure:
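The layout is roughly as follows (my sketch; each image under images/train2017 has a same-named txt label under labels/train2017):

coco128-seg
├── images
│   └── train2017
│       ├── 000000000009.jpg
│       ├── 000000000030.jpg
│       └── ...
└── labels
    └── train2017
        ├── 000000000009.txt
        ├── 000000000030.txt
        └── ...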

Pick one sample to inspect, for example 000000000030.jpg and its corresponding label txt.

In the label txt, the first number of each row is the class id: 75 corresponds to vase in the yaml file above. Every following pair, such as (0.616156, 0.364042), is one normalized point; to recover the real coordinate you multiply by the image width and height respectively, giving (0.616156*w, 0.364042*h), which is one annotated vertex. Connecting the subsequent points in order outlines the annotated region.
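As a quick sanity check, such a row can be decoded back into pixel coordinates. A minimal sketch (the label line is shortened and the 640x480 image size is made up for illustration, not the real values of 000000000030.jpg):

# minimal sketch: decode one coco128-seg label row back into pixel coordinates
# the label line and the 640x480 image size below are illustrative values only
line = "75 0.616156 0.364042 0.620469 0.357500"   # class id followed by normalized x,y pairs
w, h = 640, 480

values = line.split()
class_id = int(values[0])                 # 75 -> vase in coco128-seg.yaml
coords = list(map(float, values[1:]))
points = [(coords[i] * w, coords[i + 1] * h) for i in range(0, len(coords), 2)]
print(class_id, points)                   # polygon vertices in pixels; connect them in order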

II. The JSON file produced by annotating with anylabeling

 

{
  "version": "0.2.23",
  "flags": {},
  "shapes": [
    {
      "label": "sheep",
      "text": "",
      "points": [
        [
          52.83876221498372,
          229.9674267100977
        ],
        [
          44.04397394136808,
          229.6416938110749
        ],
        [
          58.05048859934854,
          236.48208469055373
        ]
      ],
      "group_id": null,
      "shape_type": "polygon",
      "flags": {}
    },
    {
      "label": "sheep",
      "text": "",
      "points": [
        [
          100.0700325732899,
          225.40716612377847
        ],
        [
          98.7671009771987,
          220.52117263843647
        ],
        [
          88.99511400651465,
          215.96091205211724
        ]
      ],
      "group_id": null,
      "shape_type": "polygon",
      "flags": {}
    },
    {
      "label": "sheep",
      "text": "",
      "points": [
        [
          136.55211726384363,
          211.72638436482083
        ],
        [
          136.55211726384363,
          201.62866449511398
        ],
        [
          136.87785016286645,
          191.53094462540716
        ],
        [
          136.87785016286645,
          182.0846905537459
        ]
      ],
      "group_id": null,
      "shape_type": "polygon",
      "flags": {}
    },
    {
      "label": "sheep",
      "text": "",
      "points": [
        [
          263.58794788273616,
          173.94136807817588
        ],
        [
          272.057003257329,
          164.82084690553745
        ],
        [
          282.4804560260586,
          158.6319218241042
        ],
        [
          288.66938110749186,
          157.32899022801303
        ]
   
      ],
      "group_id": null,
      "shape_type": "polygon",
      "flags": {}
    }
  ],
  "imagePath": "sheep2.png",
  "imageData": null,
  "imageHeight": 300,
  "imageWidth": 449
}

III. Converting the JSON files to the coco128-seg format

The result:
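After running the conversion script below, the output directory (/home/jason/work/01-img/datasets-seg in my case) ends up with roughly this structure:

datasets-seg
├── yolov5_train.txt          # list of training image paths
├── yolov5_val.txt            # list of validation image paths
└── VOCdevkit
    ├── VOC2007
    │   ├── Annotations       # copies of the original json files
    │   ├── JPEGImages        # copies of the original images
    │   └── YOLOLabels        # copies of all converted txt labels
    ├── images
    │   ├── train
    │   └── val
    └── labels
        ├── train
        └── val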

For an explanation of the directory layout, see my earlier blog post:

VOC标签格式转yolo格式并划分训练集和测试集_爱钓鱼的歪猴的博客-CSDN博客

The code:

Note that here the JSON files and the images are in the same directory.

import json
import os
import shutil
import random

labels = ['person', 'bicycle', 'car', 'motorbike', 'aeroplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'sofa', 'pottedplant', 'bed', 'diningtable', 'toilet', 'tvmonitor', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']


def generate_diretorys(wd):


    data_base_dir = os.path.join(wd, "VOCdevkit/")
    if os.path.exists(data_base_dir):
        print("{} exists, will delete it".format(data_base_dir))
        shutil.rmtree(data_base_dir)
    else:
        print("{} does not exist, no need to delete it".format(data_base_dir))
    os.mkdir(data_base_dir)

    work_sapce_dir = os.path.join(data_base_dir, "VOC2007/")
    os.mkdir(work_sapce_dir)

    annotation_dir = os.path.join(work_sapce_dir, "Annotations/")
    os.mkdir(annotation_dir)

    image_dir = os.path.join(work_sapce_dir, "JPEGImages/")
    os.mkdir(image_dir)

    yolo_labels_dir = os.path.join(work_sapce_dir, "YOLOLabels/")
    os.mkdir(yolo_labels_dir)

    yolov5_images_dir = os.path.join(data_base_dir, "images/")
    os.mkdir(yolov5_images_dir)

    yolov5_labels_dir = os.path.join(data_base_dir, "labels/")
    os.mkdir(yolov5_labels_dir)

    yolov5_images_train_dir = os.path.join(yolov5_images_dir, "train/")
    os.mkdir(yolov5_images_train_dir)

    yolov5_images_test_dir = os.path.join(yolov5_images_dir, "val/")
    os.mkdir(yolov5_images_test_dir)

    yolov5_labels_train_dir = os.path.join(yolov5_labels_dir, "train/")
    os.mkdir(yolov5_labels_train_dir)

    yolov5_labels_test_dir = os.path.join(yolov5_labels_dir, "val/")
    os.mkdir(yolov5_labels_test_dir)

    print("Finished creating the VOC-style directory tree\n")




def convert(jsonfile, yoloPath):
    """

    Args:
        jsonfile: annotation file produced by anylabeling, to be converted to the coco128-seg label format
        yoloPath: directory where the generated yolo txt label file is written

    Returns:

    """

    assert os.path.exists(yoloPath), "'{}' does not exist".format(yoloPath)
    assert os.path.isfile(jsonfile), "'{}' does not exist".format(jsonfile)

    # read the json annotation file
    with open(jsonfile, 'r') as load_file:
        load_dict = json.load(load_file)

    # prepare the yolo label output file
    fileName = os.path.basename(jsonfile)
    file_id, extension = os.path.splitext(fileName)
    output_fileName = file_id + ".txt"
    outputPath = os.path.join(yoloPath, output_fileName)
    outputFile = open(outputPath, 'w')
    # print("\r json file name: %s" % fileName)

    #json2yolo
    height = load_dict["imageHeight"]
    width = load_dict["imageWidth"]
    for item in load_dict["shapes"]:
        # print(item)
        label_int = labels.index(item["label"])
        row_str = ""
        row_str += str(label_int)
        if not item["points"]:
            continue
        for point in item["points"]:
            x = round(float(point[0])/width, 6)   # normalized x, keep 6 decimal places
            y = round(float(point[1])/height, 6)  # normalized y, keep 6 decimal places
            row_str += " " + str(x) + " " + str(y)

        row_str+="\n"
        outputFile.write(row_str)
    outputFile.close()





def json2yolo(inputPath, output_path, val_ratio):
    """

    Args:
        inputPath: directory containing the json annotation files and the images
        output_path: output directory; a VOC-style directory tree will be generated under it
        val_ratio: fraction of the samples used for the validation set

    Returns:

    """
    #
    assert os.path.exists(inputPath), "'{}' does not exist".format(inputPath)
    assert os.path.exists(output_path), "'{}' does not exist".format(output_path)


    generate_diretorys(wd=output_path)

    wd = output_path
    train_file = open(os.path.join(wd, "yolov5_train.txt"), 'w')
    val_file = open(os.path.join(wd, "yolov5_val.txt"), 'w')




    # collect images that have a matching json annotation file
    raw_files = os.listdir(inputPath)
    json_files = []  # full paths of the json files
    img_files = []   # full paths of the image files
    for file in raw_files:
        fileName, extension = os.path.splitext(file)
        if extension.lower() in ['.jpg', '.jpeg', '.ico', '.tiff', '.png', '.bmp']:

            json_file = fileName + ".json"
            json_path = os.path.join(inputPath, json_file)

            if os.path.exists(json_path):
                img_path = os.path.join(inputPath,file)

                json_files.append(json_path)
                img_files.append(img_path)


                another_img_path = os.path.join(wd,"VOCdevkit","VOC2007","JPEGImages")
                another_jason_path = os.path.join(wd,"VOCdevkit","VOC2007","Annotations")
                shutil.copy(json_path, another_jason_path)
                shutil.copy(img_path, another_img_path)

    num = len(json_files)
    random.seed(0)
    eval_json_files = random.sample(json_files, k=int(num*val_ratio))

    i = 0
    for json_path in json_files:

        another_yolo_path = os.path.join(wd,"VOCdevkit","VOC2007","YOLOLabels")

        json_file_names_info = os.path.basename(json_path)
        json_file_name_id, ext = os.path.splitext(json_file_names_info)
        yolo_file_name = json_file_name_id + ".txt"

        index = json_files.index(json_path)
        img_file_path = img_files[index]
        img_file_name = os.path.basename(img_file_path)


        # print("\r json_file_name_id:{}   yolo_file_name :{}".format(json_file_name_id,yolo_file_name))

        if json_path in eval_json_files:
            index = json_files.index(json_path)
            img_path = img_files[index]


            val_img_path = os.path.join(wd,"VOCdevkit","images","val")
            val_json_path = os.path.join(wd,"VOCdevkit","labels","val")
            yolo_file = os.path.join(val_json_path, yolo_file_name)

            convert(jsonfile=json_path, yoloPath=val_json_path)
            shutil.copy(img_path, val_img_path)
            shutil.copy(yolo_file, another_yolo_path)


            val_file.write(os.path.join(val_img_path, img_file_name) + '\n')


        else:
            index = json_files.index(json_path)
            img_path = img_files[index]
            train_img_path = os.path.join(wd,"VOCdevkit","images","train")
            train_json_path = os.path.join(wd,"VOCdevkit","labels","train")
            yolo_file = os.path.join(train_json_path, yolo_file_name)


            convert(jsonfile=json_path, yoloPath=train_json_path)
            shutil.copy(img_path, train_img_path)
            shutil.copy(yolo_file, another_yolo_path)


            train_file.write(os.path.join(train_img_path, img_file_name) + '\n')

        i += 1
        print("\r processing [{}/{}]".format(i, num), end="")

    train_file.close()
    val_file.close()

if __name__ == "__main__":
    outputdir = "/home/jason/work/01-img/datasets-seg"
    inputdir = "/home/jason/work/01-img/imgs"
    json2yolo(inputPath=inputdir, output_path=outputdir, val_ratio=0.5)

This code is essentially the JSON-to-YOLO conversion script from my earlier object-detection blog post, with only the convert function modified.
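As an example, for the sheep2.png annotation shown in part II ('sheep' is index 18 in the labels list, image size 449x300), the first polygon should produce a line in sheep2.txt roughly like:

18 0.117681 0.766558 0.098093 0.765472 0.129288 0.788274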

IV. Instance segmentation training

1. Modify the data config file coco128-seg.yaml

First, modify the dataset paths:

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: /home/jason/work/01-img/datasets-seg  # dataset root dir
train: yolov5_train.txt  # train image list file (relative to 'path')
val: yolov5_val.txt  # val image list file (relative to 'path')
test:  # test images (optional)

Alternatively, you can point train/val directly at the image directories instead of the list files, i.e. change it to this:

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: /home/jason/work/01-img/datasets-seg  # dataset root dir
train: VOCdevkit/images/train  # train images (relative to 'path')
val: VOCdevkit/images/val  # val images (relative to 'path')
test:  # test images (optional)

The class names in coco128-seg.yaml also need to be adjusted to your own classes. I don't have to change them here, because the label list I used during format conversion is exactly the 80 COCO classes.
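If your classes do differ, a small helper like this sketch (my own addition, not part of the original script) can print the names block for the yaml from the same labels list used during conversion:

# sketch: print a "names:" block for coco128-seg.yaml from the labels list
labels = ['person', 'bicycle', 'car']   # replace with the same list used in the conversion script

print("names:")
for i, name in enumerate(labels):
    print("  {}: {}".format(i, name))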

2. Training

Run the following from the project root:

 python segment/train.py --data my-coco128-seg.yaml --weights yolov5n-seg.pt  --img 640 --batch-size 16  --epochs 1 --device cpu

Output:

(yolo) jason@honor:~/PycharmProjects/pytorch_learn/yolo/yolov5-7.0$ python segment/train.py --data my-coco128-seg.yaml --weights yolov5n-seg.pt  --img 640 --batch-size 16  --epochs 1
segment/train: weights=yolov5n-seg.pt, cfg=, data=my-coco128-seg.yaml, hyp=data/hyps/hyp.scratch-low.yaml, epochs=1, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs/train-seg, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, mask_ratio=4, no_overlap=False
github: skipping check (not a git repository), for updates see https://github.com/ultralytics/yolov5
requirements: YOLOv5 requirement "tensorboard>=2.4.1" not found, attempting AutoUpdate...
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)")': /simple/google-auth/
^CERROR: Operation cancelled by user

YOLOv5 🚀 2022-11-22 Python-3.8.13 torch-2.0.0+cu117 CPU

hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
TensorBoard: Start with 'tensorboard --logdir runs/train-seg', view at http://localhost:6006/

                 from  n    params  module                                  arguments                     
  0                -1  1      1760  models.common.Conv                      [3, 16, 6, 2, 2]              
  1                -1  1      4672  models.common.Conv                      [16, 32, 3, 2]                
  2                -1  1      4800  models.common.C3                        [32, 32, 1]                   
  3                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]                
  4                -1  2     29184  models.common.C3                        [64, 64, 2]                   
  5                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]               
  6                -1  3    156928  models.common.C3                        [128, 128, 3]                 
  7                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]              
  8                -1  1    296448  models.common.C3                        [256, 256, 1]                 
  9                -1  1    164608  models.common.SPPF                      [256, 256, 5]                 
 10                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]              
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          
 12           [-1, 6]  1         0  models.common.Concat                    [1]                           
 13                -1  1     90880  models.common.C3                        [256, 128, 1, False]          
 14                -1  1      8320  models.common.Conv                      [128, 64, 1, 1]               
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          
 16           [-1, 4]  1         0  models.common.Concat                    [1]                           
 17                -1  1     22912  models.common.C3                        [128, 64, 1, False]           
 18                -1  1     36992  models.common.Conv                      [64, 64, 3, 2]                
 19          [-1, 14]  1         0  models.common.Concat                    [1]                           
 20                -1  1     74496  models.common.C3                        [128, 128, 1, False]          
 21                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]              
 22          [-1, 10]  1         0  models.common.Concat                    [1]                           
 23                -1  1    296448  models.common.C3                        [256, 256, 1, False]          
 24      [17, 20, 23]  1    234397  models.yolo.Segment                     [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], 32, 64, [64, 128, 256]]
Model summary: 225 layers, 1991549 parameters, 1991549 gradients, 7.2 GFLOPs

Transferred 367/367 items from yolov5n-seg.pt
optimizer: SGD(lr=0.01) with parameter groups 60 weight(decay=0.0), 63 weight(decay=0.0005), 63 bias
train: Scanning /home/jason/work/01-img/datasets-seg/yolov5_train... 2 images, 0 backgrounds, 0 corrupt: 100%|██████████| 2/2 00:00
train: New cache created: /home/jason/work/01-img/datasets-seg/yolov5_train.cache
val: Scanning /home/jason/work/01-img/datasets-seg/yolov5_val... 2 images, 0 backgrounds, 0 corrupt: 100%|██████████| 2/2 00:00
val: New cache created: /home/jason/work/01-img/datasets-seg/yolov5_val.cache

AutoAnchor: 2.50 anchors/target, 1.000 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅
Plotting labels to runs/train-seg/exp/labels.jpg... 
Image sizes 640 train, 640 val
Using 2 dataloader workers
Logging results to runs/train-seg/exp
Starting training for 1 epochs...

      Epoch    GPU_mem   box_loss   seg_loss   obj_loss   cls_loss  Instances       Size
        0/0         0G    0.01272    0.02035    0.02096    0.09775          6        640: 100%|██████████| 1/1 00:01
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95)     Mask(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 00:00
                   all          2          4          1      0.957      0.995      0.768          1      0.957      0.995      0.778

1 epochs completed in 0.000 hours.
Optimizer stripped from runs/train-seg/exp/weights/last.pt, 4.3MB
Optimizer stripped from runs/train-seg/exp/weights/best.pt, 4.3MB

Validating runs/train-seg/exp/weights/best.pt...
Fusing layers... 
Model summary: 165 layers, 1986637 parameters, 0 gradients, 7.1 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95)     Mask(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 00:00
                   all          2          4          1      0.958      0.995      0.768          1      0.958      0.995      0.778
                   dog          2          4          1      0.958      0.995      0.768          1      0.958      0.995      0.778
Results saved to runs/train-seg/exp

That completes this record of YOLOv5 instance segmentation training.

If this helped, please consider giving it a like!

The first part of this article is excerpted from the first reference blog below, purely as notes; no offense intended.

References:

coco128-seg数据集分析_coco128数据集_犟小孩的博客-CSDN博客

coco128数据集网盘分享 yolov5使用coco128完成第一次训练_coco128下载_GreenHand_Zhao的博客-CSDN博客
