Background of the Advanced Computing Competition:
The "compute-in-memory" (CIM) architecture breaks through the limits of the traditional von Neumann architecture by eliminating the physical distance between storage and compute units. It has attracted wide attention since 2016 and is regarded as a key technology for the development of domestic computing power. In a compute-in-memory architecture, weight layout is critical to improving the utilization of compute-in-memory cells and computational efficiency, and is therefore a focus of compile-time optimization. The efficiency of the optimization algorithm, the array-area utilization, and the computational efficiency are the key metrics for deploying neural networks on compute-in-memory chips. In short, compute-in-memory technology improves chip performance and resource utilization by optimizing the weight layout.
Key points of compute-in-memory technology:
Compute-in-memory is considered a new type of computing architecture. Unlike the classic von Neumann architecture, the memory itself can perform computation: storage units and compute units are merged into one, which removes the data-movement step from the computation process, eliminates the power consumption and latency caused by data movement, and thereby further improves computational energy efficiency.
Take the analog compute-in-memory scheme of WITMEM (知存科技) as an example. The parameters of the multiplicand matrix are first stored into the compute-in-memory cells in advance, and the multiplier vector is then fed in. In the horizontal direction, each multiplier is multiplied with the stored multiplicand inside the cell; in the vertical direction, the multiplication results of the cells in each column are accumulated, finally producing the output vector.
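To make this dataflow concrete, here is a minimal NumPy sketch of the in-memory matrix-vector multiply. It models only the ideal digital behavior; the array size is an illustrative assumption, not the actual analog characteristics of a real compute-in-memory macro:

```python
import numpy as np

# Illustrative array size; real compute-in-memory macros have fixed dimensions.
ROWS, COLS = 4, 3

# Step 1: the multiplicand matrix W is programmed into the array in advance.
W = np.arange(ROWS * COLS).reshape(ROWS, COLS)  # one weight per storage cell

# Step 2: the multiplier vector x is driven onto the rows (horizontal direction).
x = np.array([1, 0, 2, 1])

# Each cell multiplies its stored weight by the incoming value (horizontal),
# and each column accumulates its cells' products (vertical), giving y = x @ W.
y = np.zeros(COLS, dtype=W.dtype)
for r in range(ROWS):
    for c in range(COLS):
        y[c] += x[r] * W[r, c]  # per-cell multiply, per-column accumulate

assert np.array_equal(y, x @ W)  # identical to a standard matrix-vector product
print(y)
```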
We use matrix computation as the example because matrices are central to machine learning systems: they provide a simple and efficient way to represent data. For instance, input data (such as the pixels of an image) and the operations between the layers of a model can both be expressed as matrices. As a result, matrix multiplication accounts for a large share of the total computation in deep learning models. In fact, in many popular Transformer models such as BERT, CLIP, and ChatGPT, matrix multiplication takes roughly 45-60% of the total runtime. Matrix multiplication also plays an important role in computing convolutions, which are the foundation of most computer vision models and a core part of many high-performance computing applications.
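As a side note on the convolution claim above, the standard way to turn a convolution into a matrix multiply is the im2col transformation. The sketch below is our own minimal illustration of it (not part of the toolchain); it computes the "valid" cross-correlation that deep learning frameworks call convolution:

```python
import numpy as np

def im2col_conv2d(image, kernel):
    """Compute a 'valid' 2-D convolution (cross-correlation) as one matrix multiply."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    # im2col: each output position becomes one row of flattened patch pixels.
    cols = np.stack([
        image[i:i + kh, j:j + kw].ravel()
        for i in range(oh) for j in range(ow)
    ])                               # shape (oh*ow, kh*kw)
    out = cols @ kernel.ravel()      # the convolution is now a single matmul
    return out.reshape(oh, ow)

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1., 0.], [0., -1.]])
print(im2col_conv2d(image, kernel))
```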
The _create function we use here is as follows:
```python
def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    from pathlib import Path

    import torch  # needed below for torch.load (imported at module level in the original file)

    from models.common import AutoShape, DetectMultiBackend
    from models.experimental import attempt_load
    from models.yolo import ClassificationModel, DetectionModel, SegmentationModel
    from utils.downloads import attempt_download
    from utils.general import LOGGER, ROOT, check_requirements, intersect_dicts, logging
    from utils.torch_utils import select_device

    if not verbose:
        LOGGER.setLevel(logging.WARNING)
    check_requirements(ROOT / "requirements.txt", exclude=("opencv-python", "tensorboard", "thop"))
    name = Path(name)
    path = name.with_suffix(".pt") if name.suffix == "" and not name.is_dir() else name  # checkpoint path
    try:
        device = select_device(device)
        if pretrained and channels == 3 and classes == 80:
            try:  # fast path: load a fused multi-backend model
                model = DetectMultiBackend(path, device=device, fuse=autoshape)
                if autoshape:
                    if model.pt and isinstance(model.model, ClassificationModel):
                        LOGGER.warning(
                            "WARNING ⚠️ YOLOv3 ClassificationModel is not yet AutoShape compatible. "
                            "You must pass torch tensors in BCHW to this model, i.e. shape(1,3,224,224)."
                        )
                    elif model.pt and isinstance(model.model, SegmentationModel):
                        LOGGER.warning(
                            "WARNING ⚠️ YOLOv3 SegmentationModel is not yet AutoShape compatible. "
                            "You will not be able to run inference with this model."
                        )
                    else:
                        model = AutoShape(model)  # for file/URI/PIL/cv2/np inputs and NMS
            except Exception:
                model = attempt_load(path, device=device, fuse=False)  # fallback: unfused load
        else:
            cfg = list((Path(__file__).parent / "models").rglob(f"{path.stem}.yaml"))[0]  # model.yaml path
            model = DetectionModel(cfg, channels, classes)  # create model from config
            if pretrained:
                ckpt = torch.load(attempt_download(path), map_location=device)  # load checkpoint
                csd = ckpt["model"].float().state_dict()  # checkpoint state_dict as FP32
                csd = intersect_dicts(csd, model.state_dict(), exclude=["anchors"])  # keep matching keys only
                model.load_state_dict(csd, strict=False)
                if len(ckpt["model"].names) == classes:
                    model.names = ckpt["model"].names  # attach class names
        if not verbose:
            LOGGER.setLevel(logging.INFO)  # reset to default
        return model.to(device)
    except Exception as e:
        help_url = "https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading"
        s = f"{e}. Cache may be out of date, try `force_reload=True` or see {help_url} for help."
        raise Exception(s) from e
```
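For reference, a typical call mirroring how the YOLO hub entry points wrap this helper might look as follows; the weight name and image path are illustrative placeholders:

```python
model = _create("yolov3", pretrained=True)  # resolves to yolov3.pt, wrapped in AutoShape
results = model("path/to/image.jpg")        # hypothetical input path
results.print()
```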
Beyond that, we also need a compute-in-memory-oriented deep learning compiler toolchain, which differs substantially from a traditional compiler. A traditional compiler is a program that translates a high-level programming language into a low-level target language, and its pipeline divides into a frontend and a backend: during compilation, the frontend parses the high-level language and produces an abstract syntax tree and an intermediate representation, while the backend generates the target code. A deep learning compiler is similar: it is a tool for deploying deep learning network models onto hardware platforms. Sitting between the deep learning framework and the hardware, it takes the model definition described in the framework as input and produces efficient code implementations for various deep learning hardware as output. It is the bridge connecting software and hardware, which makes it significant. Our entry uses exactly such a deep learning compiler toolchain. The figure below shows a common design framework for deep learning compiler toolchains.
witin_mapper is WITMEM's in-house compiler software stack for neural network mapping. It maps quantized neural network models onto the WTM2101 MPU accelerator and forms a complete solution covering both the RISC-V core and the MPU. It performs operator-level and graph-level transformations and optimizations, arranges pretrained weights into the compute-in-memory arrays, proposes compute-in-memory optimization schemes tailored to the network structure and operators, and schedules operators unsuitable for the MPU onto the CPU, achieving whole-network scheduling. This lets neural network developers run trained algorithms on the WTM2101 chip quickly and efficiently, greatly shortening the model-porting cycle and improving algorithm development efficiency.
Link: https://github.com/witmem/Witmem-Toolchain-WTM2101
Available tools:
At the very beginning we prepared the model, which was provided officially, and downloaded the model visualization tool Netron from its website in preparation for the later steps. Next, we extracted the data dependencies in the model and the parameters of the weight matrix blocks; overall this step is not hard, but for personal reasons it cost us a lot of effort. We then matched up the weight-matrix block group information in the ONNX file with the processing order between the matrix blocks. At that point, a major step had been completed.
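As a concrete illustration of this parsing step, the following is a minimal sketch using the onnx Python package. The file name is a placeholder, and the real extraction in our flow also handles quantization details omitted here:

```python
import onnx

model = onnx.load("model.onnx")  # placeholder path for the officially provided model
graph = model.graph

# Weight matrix blocks: ONNX stores trained weights as graph initializers.
weight_shapes = {init.name: tuple(init.dims) for init in graph.initializer}

# Data dependencies: nodes in a valid ONNX graph are topologically sorted,
# so iterating them yields a legal processing order; node inputs/outputs give the edges.
for index, node in enumerate(graph.node):
    used_weights = [i for i in node.input if i in weight_shapes]
    if used_weights:
        print(index, node.op_type, [(w, weight_shapes[w]) for w in used_weights])
```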
Next, we began the compile-and-map work. We read in the results output above and used them to achieve a compact placement, as shown in the figure below.
The subsequent output reports which compute-in-memory arrays are used and the placement inside each array, as shown in the figure below. Each placement record contains the following fields (a minimal packing sketch follows the list):
- core_id: the index of the "empty rectangle" (target array)
- index: the data-dependency order of the weight matrix, obtained from the parsing in step one
- w_clo_start: the column position of the weight matrix's top-left corner inside the "empty rectangle"
- w_row_start: the row position of the weight matrix's top-left corner inside the "empty rectangle"
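To show what produces such records, here is a simplified greedy "shelf" packing sketch of our own. The array size and the packing policy are illustrative assumptions (each block is assumed to fit within one array), not the competition's reference algorithm; the record field names follow the output format above:

```python
ARRAY_ROWS, ARRAY_COLS = 512, 256  # assumed size of one "empty rectangle" (one core)

def pack_blocks(blocks):
    """Greedily place weight matrices (rows, cols), in dependency order,
    onto shelves inside successive arrays. Returns placement records."""
    placements, core_id = [], 0
    cur_row = cur_col = shelf_h = 0
    for index, (rows, cols) in enumerate(blocks):
        if cur_col + cols > ARRAY_COLS:   # shelf full: start a new shelf below
            cur_row += shelf_h
            cur_col = shelf_h = 0
        if cur_row + rows > ARRAY_ROWS:   # array full: move on to the next core
            core_id += 1
            cur_row = cur_col = shelf_h = 0
        placements.append({
            "core_id": core_id,           # which "empty rectangle"
            "index": index,               # dependency order from step one
            "w_clo_start": cur_col,       # column of the block's top-left corner
            "w_row_start": cur_row,       # row of the block's top-left corner
        })
        cur_col += cols
        shelf_h = max(shelf_h, rows)
    return placements

print(pack_blocks([(128, 100), (64, 200), (300, 80), (400, 256)]))
```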
Configuration verification:
For this step, our team mainly tested the execution efficiency of the compute-in-memory arrays and the soundness of the placement: for example, whether any block exceeds the bounds of an array, whether any matrix blocks overlap in area, and what the area utilization of the arrays is.
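These checks are mechanical, so the following is a minimal sketch of how they could be implemented over the placement records above (the array size is the same assumption as before, and sizes[index] gives the shape of the block with that index):

```python
def verify(placements, sizes, rows=512, cols=256):
    """Check bounds and pairwise overlap, and report per-core area utilization."""
    used = {}    # core_id -> occupied cell area
    rects = []   # (core_id, r0, c0, r1, c1) for overlap checking
    for p in placements:
        h, w = sizes[p["index"]]
        r0, c0 = p["w_row_start"], p["w_clo_start"]
        r1, c1 = r0 + h, c0 + w
        assert r1 <= rows and c1 <= cols, f"block {p['index']} out of bounds"
        for q, qr0, qc0, qr1, qc1 in rects:  # two rectangles on the same core overlap
            if q == p["core_id"] and r0 < qr1 and qr0 < r1 and c0 < qc1 and qc0 < c1:
                raise AssertionError(f"block {p['index']} overlaps another block")
        rects.append((p["core_id"], r0, c0, r1, c1))
        used[p["core_id"]] = used.get(p["core_id"], 0) + h * w
    return {core: area / (rows * cols) for core, area in used.items()}  # utilization

blocks = [(128, 100), (64, 200), (300, 80), (400, 256)]
print(verify(pack_blocks(blocks), blocks))  # reusing pack_blocks from the sketch above
```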
The run function below was provided by our team's coder. It can handle a variety of input sources, run inference and post-processing according to the configured parameters, and save or display the results.
```python
# Module-level imports and ROOT setup from the surrounding file:
import os
import platform
import sys
from pathlib import Path

import torch
import torch.nn.functional as F

FILE = Path(__file__).resolve()
ROOT = FILE.parents[1]  # repository root
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative path

from models.common import DetectMultiBackend
from utils.augmentations import classify_transforms
from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams
from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, colorstr, cv2,
                           increment_path, strip_optimizer)
from utils.plots import Annotator
from utils.torch_utils import select_device, smart_inference_mode


@smart_inference_mode()
def run(
    weights=ROOT / "yolov5s-cls.pt",  # model.pt path(s)
    source=ROOT / "data/images",  # file/dir/URL/glob/screen/0(webcam)
    data=ROOT / "data/coco128.yaml",  # dataset.yaml path
    imgsz=(224, 224),  # inference size (height, width)
    device="",  # cuda device, i.e. 0 or 0,1,2,3 or cpu
    view_img=False,  # show results
    save_txt=False,  # save results to *.txt
    nosave=False,  # do not save images/videos
    augment=False,  # augmented inference
    visualize=False,  # visualize features
    update=False,  # update all models
    project=ROOT / "runs/predict-cls",  # save results to project/name
    name="exp",  # save results to project/name
    exist_ok=False,  # existing project/name ok, do not increment
    half=False,  # use FP16 half-precision inference
    dnn=False,  # use OpenCV DNN for ONNX inference
    vid_stride=1,  # video frame-rate stride
):
    """Performs YOLOv3 classification inference on various input sources and saves or displays results."""
    source = str(source)
    save_img = not nosave and not source.endswith(".txt")  # save inference images
    is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
    is_url = source.lower().startswith(("rtsp://", "rtmp://", "http://", "https://"))
    webcam = source.isnumeric() or source.endswith(".streams") or (is_url and not is_file)
    screenshot = source.lower().startswith("screen")
    if is_url and is_file:
        source = check_file(source)  # download

    # Directories
    save_dir = increment_path(Path(project) / name, exist_ok=exist_ok)  # increment run
    (save_dir / "labels" if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir

    # Load model
    device = select_device(device)
    model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
    stride, names, pt = model.stride, model.names, model.pt
    imgsz = check_img_size(imgsz, s=stride)  # check image size

    # Dataloader
    bs = 1  # batch_size
    if webcam:
        view_img = check_imshow(warn=True)
        dataset = LoadStreams(source, img_size=imgsz, transforms=classify_transforms(imgsz[0]), vid_stride=vid_stride)
        bs = len(dataset)
    elif screenshot:
        dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt)
    else:
        dataset = LoadImages(source, img_size=imgsz, transforms=classify_transforms(imgsz[0]), vid_stride=vid_stride)
    vid_path, vid_writer = [None] * bs, [None] * bs

    # Run inference
    model.warmup(imgsz=(1 if pt else bs, 3, *imgsz))  # warmup
    seen, windows, dt = 0, [], (Profile(), Profile(), Profile())
    for path, im, im0s, vid_cap, s in dataset:
        with dt[0]:
            im = torch.Tensor(im).to(model.device)
            im = im.half() if model.fp16 else im.float()  # uint8 to fp16/32
            if len(im.shape) == 3:
                im = im[None]  # expand for batch dim

        # Inference
        with dt[1]:
            results = model(im)

        # Post-process
        with dt[2]:
            pred = F.softmax(results, dim=1)  # probabilities

        # Process predictions
        for i, prob in enumerate(pred):  # per image
            seen += 1
            if webcam:  # batch_size >= 1
                p, im0, frame = path[i], im0s[i].copy(), dataset.count
                s += f"{i}: "
            else:
                p, im0, frame = path, im0s.copy(), getattr(dataset, "frame", 0)

            p = Path(p)  # to Path
            save_path = str(save_dir / p.name)  # im.jpg
            txt_path = str(save_dir / "labels" / p.stem) + ("" if dataset.mode == "image" else f"_{frame}")  # im.txt
            s += "{:g}x{:g} ".format(*im.shape[2:])  # print string
            annotator = Annotator(im0, example=str(names), pil=True)

            # Print results
            top5i = prob.argsort(0, descending=True)[:5].tolist()  # top 5 indices
            s += f"{', '.join(f'{names[j]} {prob[j]:.2f}' for j in top5i)}, "

            # Write results
            text = "\n".join(f"{prob[j]:.2f} {names[j]}" for j in top5i)
            if save_img or view_img:  # Add bbox to image
                annotator.text([32, 32], text, txt_color=(255, 255, 255))
            if save_txt:  # Write to file
                with open(f"{txt_path}.txt", "a") as f:
                    f.write(text + "\n")

            # Stream results
            im0 = annotator.result()
            if view_img:
                if platform.system() == "Linux" and p not in windows:
                    windows.append(p)
                    cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO)  # allow window resize (Linux)
                    cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
                cv2.imshow(str(p), im0)
                cv2.waitKey(1)  # 1 millisecond

            # Save results (image with detections)
            if save_img:
                if dataset.mode == "image":
                    cv2.imwrite(save_path, im0)
                else:  # 'video' or 'stream'
                    if vid_path[i] != save_path:  # new video
                        vid_path[i] = save_path
                        if isinstance(vid_writer[i], cv2.VideoWriter):
                            vid_writer[i].release()  # release previous video writer
                        if vid_cap:  # video
                            fps = vid_cap.get(cv2.CAP_PROP_FPS)
                            w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                            h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                        else:  # stream
                            fps, w, h = 30, im0.shape[1], im0.shape[0]
                        save_path = str(Path(save_path).with_suffix(".mp4"))  # force *.mp4 suffix on results videos
                        vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
                    vid_writer[i].write(im0)

        # Print time (inference-only)
        LOGGER.info(f"{s}{dt[1].dt * 1E3:.1f}ms")

    # Print results
    t = tuple(x.t / seen * 1e3 for x in dt)  # speeds per image
    LOGGER.info(f"Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}" % t)
    if save_txt or save_img:
        s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ""
        LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
    if update:
        strip_optimizer(weights[0])  # update model (to fix SourceChangeWarning)
```
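A typical invocation, with placeholder weight and source paths, might look like this:

```python
run(weights=ROOT / "yolov5s-cls.pt", source=ROOT / "data/images", imgsz=(224, 224))
```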
Through the work above, we finally analyzed the computational dependencies between the weight matrices. At this point, our entry was essentially complete.
Compute-in-Memory Developer Community: 存内计算开发者社区 - CSDN Community Cloud