Implementing SSD with fastai2

  • https://github.com/search?q=fastai+ssd turns up several codebases worth studying carefully.
    • GitHub - Samjoel3101/SSD-Object-Detection: I am working on a SSD Object Detector using fastai and pytorch. Finally found an SSD implemented with fastai2.
    • https://github.com/sidravic/SSD_ObjectDetection_2/tree/master/train is another fastai2 SSD implementation.
  • An important reference for mAP, built on the fastai2 stack: GitHub - rbrtwlz/fastai_object_detection: Extension of the fastai library to include object detection.
    • The plan is to combine that mAP computation with this SSD.
  • The fastai2 SSD here follows: dhblog - Object Detection from scratch - Single Shot Detector
    • Its way of loading the data helped a lot.

  1. Both fastai1 and fastai2 store bboxes in x1,y1,x2,y2 format; drawing with matplotlib uses x,y,w,h.
  2. fastai2 scales bbox coordinates to [-1,1]; displaying on a 224-px image needs a rescale:
    for i,ax in enumerate(axes.flat): # y ~ [-1,1]; ([-1,1] + 1)/2 ~ [0,1]
        show_ground_truth(ax, x[i], ((y[0][i] + 1)/2 * 224).cpu(), y[1][i].cpu())
    def draw_rect(ax, b, color='white'):
        patch = ax.add_patch(patches.Rectangle(b[:2], *b[-2:], fill=False, edgecolor=color, lw=2))
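
The rescaling in point 2 can be sketched as a small standalone helper (`to_plt_rect` is an illustrative name, not a fastai API):

```python
# Sketch: convert a fastai-style box in [-1, 1] corner format (x1, y1, x2, y2)
# into the (x, y, w, h) tuple that matplotlib's Rectangle expects,
# for an image of `size` pixels.
def to_plt_rect(box, size=224):
    x1, y1, x2, y2 = [(v + 1) / 2 * size for v in box]  # [-1,1] -> [0,size]
    return (x1, y1, x2 - x1, y2 - y1)                   # corners -> x,y,w,h

print(to_plt_rect([-1.0, -1.0, 0.0, 0.0]))  # -> (0.0, 0.0, 112.0, 112.0)
```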

" 使用fastai v2 重写ssd by fastai course-v2 2018 part2 pascal_multi.ipynb "

# data pascal_voc2007


import warnings
warnings.filterwarnings('ignore')

import sys
sys.path.insert(0, '/home/zhr/fastai2/fastai_object_detection/fastai_object_detection') # debug against the source tree, not the installed package

from pathlib import Path
from fastai.vision.all import *
# from zhr_util import get_annotations
from zhr_util import ssd_loss, SSD_Head, SSD_MultiHead, FocalLoss

path = Path('/home/helen/dataset/pascal_2007')

trn_im_names, trn_truths = get_annotations(path/'train.json')
val_im_names, val_truths = get_annotations(path/'valid.json')
# tst_im_names, tst_truths = get_annotations(path/'test.json') 
tot_im_names, tot_truths = [trn_im_names + val_im_names, trn_truths + val_truths]

img_y_dict = dict(zip(tot_im_names, tot_truths))
truth_data_func = lambda o: img_y_dict[o.name]

sz=224       # Image size
bs=64        # Batch size

item_tfms = [Resize(sz, method='squish'),]
batch_tfms = [Rotate(), Flip(), Dihedral()]

getters = [lambda o: path/'train'/o, lambda o: img_y_dict[o][0], lambda o: img_y_dict[o][1]]

pascal = DataBlock(blocks=(ImageBlock, BBoxBlock, BBoxLblBlock),
                   splitter=RandomSplitter(),
                   getters=getters,
                   item_tfms=item_tfms,
                   batch_tfms=batch_tfms,
                   n_inp=1)
dls = pascal.dataloaders(tot_im_names,bs=bs)
# dls.vocab

k = 9
head_reg4 = SSD_MultiHead(k, -3., dls)
body = create_body(resnet34(True)) # pretrained resnet34 backbone, classifier head removed
model = nn.Sequential(body, head_reg4)

ssd_learner = Learner(dls, model, loss_func=ssd_loss)
ssd_learner.fit_one_cycle(3, 1e-3)

import json
import collections
from fastai.vision.all import *

def get_annotations(fname, prefix=None):
    "Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
    annot_dict = json.load(open(fname))
    id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
    classes = {}
    for o in annot_dict['categories']:
        classes[o['id']] = o['name']
    for o in annot_dict['annotations']:
        bb = o['bbox']
        id2bboxes[o['image_id']].append([bb[0],bb[1], bb[2]+bb[0], bb[3]+bb[1]])
        id2cats[o['image_id']].append(classes[o['category_id']])
    for o in annot_dict['images']:
        if o['id'] in id2bboxes:
            id2images[o['id']] = (prefix or '') + o['file_name']
    ids = list(id2images.keys())
    return [id2images[k] for k in ids], [[id2bboxes[k], id2cats[k]] for k in ids]
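
The key transform inside `get_annotations` is the box conversion: COCO stores `[x, y, w, h]`, while fastai wants corner format. A minimal self-contained check (`coco_to_corners` is an illustrative helper mirroring the line that builds `id2bboxes`):

```python
# COCO stores boxes as [x, y, w, h]; fastai expects corners [x1, y1, x2, y2].
def coco_to_corners(bb):
    return [bb[0], bb[1], bb[0] + bb[2], bb[1] + bb[3]]

print(coco_to_corners([10, 20, 30, 40]))  # -> [10, 20, 40, 60]
```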

" 多类别的标签:fastai v2版本的使用方法 "
# if 0:
    # df = pd.read_csv(path/'train.csv')

    # def get_x(r): return path/'train'/r['fname']
    # def get_y(r): return r['labels'].split(' ')

    # # dblock = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
    # #                    get_x = get_x, get_y = get_y)
    # # dsets = dblock.datasets(df)

    # def splitter(df):
    #     train = df.index[~df['is_valid']].tolist()
    #     valid = df.index[df['is_valid']].tolist()
    #     return train,valid

    # dblock = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
    #                 splitter=splitter,
    #                 get_x=get_x, 
    #                 get_y=get_y,
    #                 item_tfms = RandomResizedCrop(224, min_scale=0.35))
    # dls = dblock.dataloaders(df)
        
    # dls.show_batch(max_n=9, figsize=(8, 6))
__all__ = ['get_ssd_model','ssd_resnet34', 'ssd_loss']

# Cell
import torch
from torch import nn
from torch.nn import Module
from torchvision.ops.boxes import batched_nms
from torch.hub import load_state_dict_from_url
from functools import partial
from fastai.vision.all import delegates

from fastai.vision import *
from fastai.callback import *

from fastai.vision import models
from fastai.vision.learner import create_body
from fastai.callback.hook import num_features_model
from fastai.layers import *

import torch.nn.functional as F

# Reshape a conv activation map to match the ground truth's (bs, anchors, c) shape
def flatten_conv(x,k):
    # Flatten the g x g grid into one row per anchor per cell
    bs,nf,gx,gy = x.size()
    x = x.permute(0,2,3,1).contiguous()
    return x.view(bs,-1,nf//k)
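
As a shape check, here is a numpy analogue of `flatten_conv` (illustrative only; the real function operates on torch tensors):

```python
import numpy as np

# numpy analogue of flatten_conv, to show the shape bookkeeping:
# (bs, k*c, g, g) -> (bs, g*g*k, c), one row per anchor per grid cell.
def flatten_conv_np(x, k):
    bs, nf, gx, gy = x.shape
    x = np.ascontiguousarray(x.transpose(0, 2, 3, 1))  # channels last
    return x.reshape(bs, -1, nf // k)

x = np.zeros((2, 9 * 4, 4, 4))        # bs=2, k=9 anchors x 4 box coords, 4x4 grid
print(flatten_conv_np(x, 9).shape)    # (2, 144, 4): 16 cells * 9 anchors = 144 rows
```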

# Output layer for the SSD head: per-anchor box and class activations
class OutConv(nn.Module):
    # Contains oconv1 for classification and oconv2 for box regression
    def __init__(self, k, nin, bias, dls):
        super().__init__()
        self.k = k
        self.oconv1 = nn.Conv2d(nin, (len(dls.vocab))*k, 3, padding=1)
        self.oconv2 = nn.Conv2d(nin, 4*k, 3, padding=1)
        self.oconv1.bias.data.zero_().add_(bias)
        
    def forward(self, x):
        return [flatten_conv(self.oconv2(x), self.k), # boxes first,
                flatten_conv(self.oconv1(x), self.k)] # then class scores
    
# Standard convolution block; stride=2 halves the feature map size
class StdConv(nn.Module):
    # Standard Convolutional layers 
    def __init__(self, nin, nout, stride=2, drop=0.1):
        super().__init__()
        self.conv = nn.Conv2d(nin, nout, 3, stride=stride, padding=1)
        self.bn = nn.BatchNorm2d(nout)
        self.drop = nn.Dropout(drop)
        
    def forward(self, x): return self.drop(self.bn(F.relu(self.conv(x))))

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class SSD_Head(nn.Module):
    def __init__(self, k, bias, dls):
        super().__init__()
        self.drop = nn.Dropout(0.25)
        self.sconv0 = StdConv(512,256, stride=1)
        self.sconv2 = StdConv(256,256)
        self.out = OutConv(k, 256, bias, dls)
        
    def forward(self, x):
        x = self.drop(F.relu(x))
        x = self.sconv0(x)
        x = self.sconv2(x)
        return self.out(x)



def one_hot_embedding(labels, num_classes):
    return torch.eye(num_classes, device=labels.device)[labels]

# Building the one-hot target directly on the GPU is more efficient
class BCE_Loss(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.num_classes = num_classes

    def forward(self, pred, targ):
        t = one_hot_embedding(targ.squeeze(), self.num_classes)
        t = t[:,1:] # Start from 1 to exclude the Background
        x = pred[:,1:]
        w = self.get_weight(x,t)
        return F.binary_cross_entropy_with_logits(x, t, w.detach(), reduction='sum')/self.num_classes
    
    def get_weight(self,x,t): return None

class FocalLoss(BCE_Loss):
    def get_weight(self,x,t):
        alpha,gamma = 0.25,1
        p = x.sigmoid()
        pt = p*t + (1-p)*(1-t)
        w = alpha*t + (1-alpha)*(1-t)
        return w * (1-pt).pow(gamma)
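
The effect of the focal weight is easiest to see numerically. A numpy sketch of `get_weight` above, with the same alpha=0.25, gamma=1 (`focal_weight` is an illustrative name):

```python
import numpy as np

# numpy sketch of FocalLoss.get_weight: w = alpha_t * (1 - p_t)^gamma.
# Confident correct predictions get down-weighted; hard mistakes keep weight.
def focal_weight(logits, targets, alpha=0.25, gamma=1.0):
    p = 1 / (1 + np.exp(-logits))                 # sigmoid
    pt = p * targets + (1 - p) * (1 - targets)    # prob of the true class
    w = alpha * targets + (1 - alpha) * (1 - targets)
    return w * (1 - pt) ** gamma

# First logit: confident and correct -> tiny weight.
# Second logit: confident and wrong -> weight stays near alpha.
print(focal_weight(np.array([5.0, -5.0]), np.array([1.0, 1.0])))
```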

#convert center/height/width to fastai top left and bottom right coordinates
def cthw2corners(boxes):
    top = (boxes[:,0] - boxes[:,2]/2).view(-1,1)
    left = (boxes[:,1] - boxes[:,3]/2).view(-1,1)
    bot = (boxes[:,0] + boxes[:,2]/2).view(-1,1)
    right = (boxes[:,1] + boxes[:,3]/2).view(-1,1)
    return torch.cat([top,left,bot,right],dim=1)
def hw2corners(ctr, hw): 
    # Function to convert BB format: (centers and dims) -> corners
    return torch.cat([ctr-hw/2, ctr+hw/2], dim=1)
# Filter out all zero-valued bounding boxes
def un_pad(boxes,labels):
    bb_keep = ((boxes[:,2] - boxes[:,0])>0).nonzero()[:,0]
    return boxes[bb_keep],labels[bb_keep]

# Calculate the area of a bounding box
def box_area(boxes):
    return (boxes[:,2] - boxes[:,0]) * (boxes[:,3] - boxes[:,1])

# Calculate the intersection of two given bounding boxes
def intersect(box_a,box_b):
    # box_a and box_b must be non-empty; otherwise the broadcasting below is undefined
    top_left = torch.max(box_a[:,None,:2],box_b[None,:,:2])
    bot_right = torch.min(box_a[:,None,2:],box_b[None,:,2:])
    inter = torch.clamp((bot_right - top_left),min=0)
    return inter[:,:,0] * inter[:,:,1]

# Calculate Jaccard (IOU)
def iou(bbox,anchor):
    #bbox is gt_bb, anchor is anchor box, all in fastai style
    if len(bbox.shape) == 1: bbox = bbox[None,...]
    inter = intersect(bbox,anchor)
    union = box_area(bbox).unsqueeze(dim=1) + box_area(anchor).unsqueeze(dim=0) - inter # broadcast to (N, M): N gt boxes vs M anchors
    return inter / union
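
The `intersect`/`iou` pair above can be checked with a numpy sketch (`iou_np` is illustrative; the real code runs on torch tensors):

```python
import numpy as np

# numpy sketch of intersect/iou: pairwise IoU between N ground-truth boxes
# and M anchors, both in corner format (x1, y1, x2, y2).
def iou_np(box_a, box_b):
    tl = np.maximum(box_a[:, None, :2], box_b[None, :, :2])  # intersection top-left
    br = np.minimum(box_a[:, None, 2:], box_b[None, :, 2:])  # intersection bottom-right
    inter = np.clip(br - tl, 0, None).prod(axis=2)           # area, shape (N, M)
    area_a = (box_a[:, 2] - box_a[:, 0]) * (box_a[:, 3] - box_a[:, 1])
    area_b = (box_b[:, 2] - box_b[:, 0]) * (box_b[:, 3] - box_b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

a = np.array([[0., 0., 2., 2.]])
b = np.array([[1., 1., 3., 3.], [0., 0., 2., 2.]])
print(iou_np(a, b))  # overlap 1/7 with the shifted box, 1.0 with itself
```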
# Transform activations to bounding box format
def act_to_bbox(activation,anchor):
    activation = torch.tanh(activation) #force scale to be -1,1
    anchor = anchor.to(device)
    act_center = anchor[:,:2]+ (activation[:,:2]/2 * grid_sizes.float().to(activation.device))
    act_hw = anchor[:,2:] * (activation[:,2:]/2 + 1)
    # return cthw2corners(torch.cat([act_center,act_hw],dim=1))
    return hw2corners(act_center, act_hw) # faster than converting via cthw2corners

# Match anchors to ground-truth boxes
def map_to_gt(overlaps):
    prior_overlap,prior_idx = overlaps.max(dim=1)
    sec_overlap,sec_idx = overlaps.max(dim=0)
    sec_overlap[prior_idx] = 4.99
    for i,o in enumerate(prior_idx): 
        sec_idx[o] = i
    return sec_overlap,sec_idx
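
The matching logic is easier to follow on a tiny example. A numpy sketch of `map_to_gt` (`map_to_gt_np` is illustrative; rows are ground-truth boxes, columns are anchors):

```python
import numpy as np

# numpy sketch of map_to_gt: each anchor gets its best-overlapping gt box,
# and each gt box's single best anchor is force-matched with overlap 4.99.
def map_to_gt_np(overlaps):
    prior_idx = overlaps.argmax(axis=1)        # best anchor per gt box
    sec_overlap = overlaps.max(axis=0).copy()  # best overlap per anchor
    sec_idx = overlaps.argmax(axis=0)          # best gt box per anchor
    sec_overlap[prior_idx] = 4.99              # force-match each gt's best anchor
    for i, o in enumerate(prior_idx):
        sec_idx[o] = i
    return sec_overlap, sec_idx

ov = np.array([[0.1, 0.7, 0.2],   # gt box 0 vs 3 anchors
               [0.3, 0.2, 0.6]])  # gt box 1 vs 3 anchors
print(map_to_gt_np(ov))  # -> (array([0.3 , 4.99, 4.99]), array([1, 0, 1]))
```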

class SSD_MultiHead(nn.Module):
    def __init__(self, k, bias, dls, drop=0.4):
        super().__init__()
        self.drop = nn.Dropout(drop)
        self.sconv0 = StdConv(512,256, stride=1, drop=drop)
        self.sconv1 = StdConv(256,256, drop=drop)
        self.sconv2 = StdConv(256,256, drop=drop)
        self.sconv3 = StdConv(256,256, drop=drop)
        self.out0 = OutConv(k, 256, bias, dls)
        self.out1 = OutConv(k, 256, bias, dls)
        self.out2 = OutConv(k, 256, bias, dls)
        self.out3 = OutConv(k, 256, bias, dls)

    def forward(self, x):
        x = self.drop(F.relu(x))
        x = self.sconv0(x)
        x = self.sconv1(x)
        o1c,o1l = self.out1(x)
        x = self.sconv2(x)
        o2c,o2l = self.out2(x)
        x = self.sconv3(x)
        o3c,o3l = self.out3(x)
        return [torch.cat([o1c,o2c,o3c], dim=1), # box
                torch.cat([o1l,o2l,o3l], dim=1)] # clas






anc_grids = [4, 2, 1]
anc_zooms = [0.75, 1., 1.3]
anc_ratios = [(1., 1.), (1., 0.5), (0.5, 1.)]

anchor_scales = [(anz*i,anz*j) for anz in anc_zooms 
                                    for (i,j) in anc_ratios]
# *** Number of Anchor Scales
k = len(anchor_scales)
# ***************************

import numpy as np
anc_offsets = [2/(o*2) for o in anc_grids] # the fastai coordinate range [-1,1] spans 2
anc_x = np.concatenate([np.repeat(np.linspace(ao-1, 1-ao, ag), ag)
                        for ao,ag in zip(anc_offsets,anc_grids)])
anc_y = np.concatenate([np.tile(np.linspace(ao-1, 1-ao, ag), ag)
                        for ao,ag in zip(anc_offsets,anc_grids)])
anc_ctrs = np.repeat(np.stack([anc_x,anc_y], axis=1), k, axis=0)
anc_sizes = np.concatenate([np.array([[2*o/ag,2*p/ag] 
            for i in range(ag*ag) for o,p in anchor_scales])
                for ag in anc_grids]) # (2/grid) * scale; 2 is the span of [-1,1]
grid_sizes = torch.tensor(np.concatenate([np.array([ 1/ag 
            for i in range(ag*ag) for o,p in anchor_scales])
                for ag in anc_grids])).unsqueeze(1) *2 # again, the [-1,1] span is 2
anchors = torch.tensor(np.concatenate([anc_ctrs, anc_sizes], axis=1)).float()
anchor_cnr = cthw2corners(anchors)  
anchors = anchors.to(device)
anchor_cnr = anchor_cnr.to(device)
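
A quick sanity check on the anchor bookkeeping above: 9 scales (3 zooms x 3 ratios) on 4x4, 2x2 and 1x1 grids must give (16 + 4 + 1) * 9 = 189 anchors, which is also the number of rows each head output must produce:

```python
# Anchor-count arithmetic for the grids/zooms/ratios defined above.
anc_grids = [4, 2, 1]
anc_zooms = [0.75, 1., 1.3]
anc_ratios = [(1., 1.), (1., 0.5), (0.5, 1.)]

k = len(anc_zooms) * len(anc_ratios)           # anchors per grid cell
n_anchors = sum(g * g for g in anc_grids) * k  # total anchor boxes
print(k, n_anchors)  # -> 9 189
```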
# A standalone SSD model
class SSDModel(Module):
    def __init__(self, arch=models.resnet34, k=9, drop=0.4, no_cls=21):
        super().__init__()
        self.k = k
        
        self.body = create_body(arch(True))
        self.backbone = self.body
        self.drop = nn.Dropout(0.2)
        # conv2_std_layer / conv2_ssd_layer are expected from a companion
        # utility module; they mirror StdConv / OutConv above.
        self.std_conv_0 = conv2_std_layer(num_features_model(self.body), 256, drop=drop, stride=1)
        # Dimension-reducing  layers
        self.std_conv_1 = conv2_std_layer(256, 256, drop=drop, stride=2) # 4 by 4 layer
        self.std_conv_2 = conv2_std_layer(256, 256, drop=drop, stride=2) # 2 by 2 layer
        self.std_conv_3 = conv2_std_layer(256, 256, drop=drop, stride=2) # 1 by 1 layer
        # Standard layers
        self.ssd_conv_1 = conv2_ssd_layer(256, k=self.k, no_cls=no_cls)
        self.ssd_conv_2 = conv2_ssd_layer(256, k=self.k, no_cls=no_cls)
        self.ssd_conv_3 = conv2_ssd_layer(256, k=self.k, no_cls=no_cls)

        # self.criterion = FocalLossMy()
        self.device = device
        self.anchors = anchors

    def forward(self, *x):
        imgs, targets = x if len(x)==2 else(x[0], None)
        xb = self.drop(F.relu(self.body(imgs)))
        xb = self.std_conv_0(xb)
        xb = self.std_conv_1(xb)
        bb1, cls1 = self.ssd_conv_1(xb) # 4 x 4
        xb = self.std_conv_2(xb)
        bb2, cls2 = self.ssd_conv_2(xb) # 2 x 2
        xb = self.std_conv_3(xb)     
        bb3, cls3  = self.ssd_conv_3(xb) # 1 x 1
        
        # bboxes = torch.cat([bb1, bb2, bb3], dim=1)
        # clases = torch.cat([cls1, cls2, cls3], dim=1)
        preds = [torch.cat([bb1, bb2, bb3], dim=1), 
                torch.cat([cls1, cls2, cls3], dim=1)]
        return preds
        # if targets is not None: # training: compute the loss terms
        #     cls_loss, reg_loss = self.criterion(preds, targets, self.anchors)
        #     return {"cls_loss": cls_loss, "reg_loss": reg_loss}
        # else: # inference: decode the predictions
        #     predsOut = self.postprocess(imgs, self.anchors, preds)
        #     return predsOut
    
    def postprocess(self, x, anchors, preds):
        return None
loss_f = FocalLoss(21)

def ssd_1_loss(b_c,b_bb,bbox,clas,print_it=False):
    bbox,clas = un_pad(bbox,clas)
    a_ic = act_to_bbox(b_bb, anchors) # decode the activations first; the earlier version skipped this and was buggy
    overlaps = iou(bbox.data, anchor_cnr.data)
    gt_overlap,gt_idx = map_to_gt(overlaps) # assign each anchor its ground-truth box
    gt_clas = clas[gt_idx]
    pos = gt_overlap > 0.4
    pos_idx = torch.nonzero(pos)[:,0]
    gt_clas[~pos] = 0
    gt_bbox = bbox[gt_idx]
    # Cast to TensorBase to avoid fastai's tensor-subclass dispatch errors:
    # loc_loss = ((a_ic[pos_idx] - gt_bbox[pos_idx]).abs()).mean()
    loc_loss = ((TensorBase(a_ic[TensorBase(pos_idx)]) - TensorBase(gt_bbox[TensorBase(pos_idx)])).abs()).mean()
    clas_loss  = loss_f(b_c, gt_clas)
    return loc_loss, clas_loss


def ssd_loss(pred,*targ,print_it=False):
    lcs,lls = 0.,0.
    for b_bb,b_c,bbox,clas in zip(*pred,*targ):
        loc_loss,clas_loss = ssd_1_loss(b_c,b_bb,bbox,clas,print_it)
        lls += loc_loss
        lcs += clas_loss
    if print_it: print(f'loc: {lls.data}, clas: {lcs.data}')
    # legacy print using .data[0] indexing, kept commented out:
#     if print_it: print(f'loc: {lls.data[0]}, clas: {lcs.data[0]}')
    return lls+lcs
 
  
    
@delegates(SSDModel)
def get_ssd_model(arch_str, num_classes, pretrained=True, pretrained_backbone=True,
                   trainable_layers=5, **kwargs):
    model = SSDModel(arch=arch_str, no_cls=num_classes)
    return model


ssd_resnet34 = partial(get_ssd_model, arch_str=models.resnet34)




