[NLP] Tutorial: Fine-tuning the LLaMA Model with Alpaca-LoRA


Stanford Alpaca fine-tunes the entire LLaMA model, i.e., it performs full fine-tuning of all parameters of the pretrained model. This approach still demands expensive hardware and trains inefficiently.

[NLP] Understanding Parameter-Efficient Fine-Tuning (PEFT) of Large Language Models

Alpaca-LoRA therefore uses the LoRA technique: it freezes the original LLaMA parameters, adds extra network layers to the model, and trains only the parameters of these added layers. Because the number of added parameters is small, the cost of fine-tuning drops significantly while the results remain comparable to full fine-tuning.

The principle behind LoRA is not complicated. Its core idea is to add a bypass next to the original pretrained language model that performs a down-projection followed by an up-projection, modeling the so-called intrinsic rank (when a pretrained model generalizes to downstream tasks, it is essentially optimizing a very small number of free parameters in a low-dimensional intrinsic subspace shared across tasks). During training, the pretrained model's parameters are frozen and only the down-projection matrix A and the up-projection matrix B are trained. The model's input and output dimensions stay unchanged; at the output, BA is added to the pretrained weights. A is initialized from a random Gaussian distribution and B with zeros, which guarantees that at the start of training the added path satisfies BA = 0 and therefore has no effect on the model's outputs.
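To make this concrete, here is a minimal PyTorch sketch of the idea (illustrative only; this is not the peft implementation, and the initialization scale for A is an assumption):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank bypass (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weight W (and bias)
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.empty(r, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.normal_(self.lora_A, std=0.01)  # A: random Gaussian; B: zeros, so BA = 0 at the start
        self.scaling = alpha / r

    def forward(self, x):
        # Input/output dimensions are unchanged: h = Wx + scaling * B(Ax)
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

Wrapping, say, each q_proj and v_proj layer of the model in such a module is essentially what peft does when a LoRA configuration is applied.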

At inference time, the outputs of the two branches are simply summed: h = Wx + BAx = (W + BA)x. So the trained product BA can be added to the original weight matrix W, and the result replaces the pretrained W as the new weight, adding no extra compute at inference.
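The merging identity h = Wx + BAx = (W + BA)x can be checked numerically; a small sketch with random tensors (the dimensions and scales are arbitrary):

import torch

d, r, alpha = 1024, 8, 16
W = torch.randn(d, d)            # frozen pretrained weight
A = torch.randn(r, d) * 0.01     # trained low-rank down-projection
B = torch.randn(d, r) * 0.01     # trained low-rank up-projection (zero only at init)
x = torch.randn(d)

h_bypass = W @ x + (alpha / r) * (B @ (A @ x))   # training-time path: base + bypass
W_merged = W + (alpha / r) * (B @ A)             # fold BA into W once after training
h_merged = W_merged @ x                          # inference path: a single matmul

assert torch.allclose(h_bypass, h_merged, atol=1e-4)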

LoRA's biggest advantages are faster training and lower memory use, so it can run on consumer-grade hardware.

1. Environment Setup

The basic environment configuration is as follows:

  • Operating system: CentOS 7
  • CPUs: a single node with Intel CPUs and 1 TB of RAM; 64 physical CPUs with 16 cores each
  • GPUs: 4 x A100 80GB
  • Docker image: pytorch:1.13.0-cuda11.6-cudnn8-devel

In the Alpaca-LoRA project, the authors note that they use Hugging Face's PEFT to fine-tune cheaply and efficiently. PEFT is a library for efficiently fine-tuning a wide range of Transformer-based language models (LoRA is one of the techniques it supports, alongside Prefix Tuning, P-Tuning, and Prompt Tuning). Install PEFT as follows.

 
# Install peft
git clone https://github.com/huggingface/peft.git
cd peft/
pip install .

# Install bitsandbytes
git clone git@github.com:TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=116 make cuda11x
python setup.py install
If the following error appears when installing bitsandbytes:
/usr/bin/ld: cannot find -lcudart

run the following commands:

cd /usr/lib
ln -s /usr/local/cuda/lib64/libcudart.so libcudart.so

# Download alpaca-lora
git clone git@github.com:tloen/alpaca-lora.git
cd alpaca-lora
pip install -r requirements.txt

The contents of requirements.txt are as follows:

accelerate
appdirs
loralib
bitsandbytes
black
black[jupyter]
datasets
fire
git+https://github.com/huggingface/peft.git
transformers>=4.28.0
sentencepiece
gradio

Model Format Conversion

Convert the original LLaMA weight files into the model format used by the Transformers library (the transformers repository provides a conversion script for this). Alternatively, you can download already-converted models directly from Hugging Face:

For download instructions, see: [NLP] How to Download Hugging Face Model/Dataset Files

decapoda-research/llama-7b-hf · Hugging Face

decapoda-research/llama-13b-hf · Hugging Face

Model Fine-tuning

Single-GPU training:

python finetune.py \
    --base_model '/disk1/llama-13b' \
    --data_path './alpaca_data_cleaned_archive.json' \
    --output_dir './lora-alpaca' \
    --batch_size 128 \
    --micro_batch_size 8 \
    --num_epochs 1


Multi-GPU data-parallel training on 4 cards with torchrun:

torchrun --nproc_per_node=4 --master_port=29000 finetune.py \
    --base_model '/disk1/llama-13b' \
    --data_path './alpaca_data_cleaned_archive.json' \
    --output_dir './lora-alpaca' \
    --batch_size 128 \
    --micro_batch_size 8 \
    --num_epochs 1
Training Alpaca-LoRA model with params:
base_model: /disk1/llama-13b
data_path: ./alpaca_data_cleaned_archive.json
output_dir: ./lora-alpaca
batch_size: 128
micro_batch_size: 8
num_epochs: 1
learning_rate: 0.0003
cutoff_len: 256
val_set_size: 2000
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'v_proj']
train_on_inputs: True
add_eos_token: False
group_by_length: False
wandb_project: 
wandb_run_name: 
wandb_watch: 
wandb_log_model: 
resume_from_checkpoint: False
prompt template: alpaca

Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 41/41 [00:43<00:00,  1.06s/it]
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. 
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'. 
The class this function is called from is 'LlamaTokenizer'.
You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
/opt/conda/lib/python3.9/site-packages/peft/utils/other.py:102: FutureWarning: prepare_model_for_int8_training is deprecated and will be removed in a future version. Use prepare_model_for_kbit_training instead.
  warnings.warn(
trainable params: 6,553,600 || all params: 13,022,417,920 || trainable%: 0.05032552357220002
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 49759/49759 [00:38<00:00, 1294.31 examples/s]
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 49759/49759 [00:38<00:00, 1284.04 examples/s]
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 49759/49759 [00:38<00:00, 1283.95 examples/s]
Map: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [00:01<00:00, 1221.03 examples/s]
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [localhost]:29005 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [localhost]:29005 (errno: 97 - Address family not supported by protocol).
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 49759/49759 [00:39<00:00, 1274.42 examples/s]
Map: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [00:01<00:00, 1285.16 examples/s]
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [localhost]:29005 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [localhost]:29005 (errno: 97 - Address family not supported by protocol).
Map: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [00:01<00:00, 1281.27 examples/s]
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [localhost]:29005 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [localhost]:29005 (errno: 97 - Address family not supported by protocol).
Map: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [00:01<00:00, 1290.31 examples/s]
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [localhost]:29005 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [localhost]:29005 (errno: 97 - Address family not supported by protocol).
  0%|                                                                        | 0/388 [00:00<?, ?it/s]
/opt/conda/lib/python3.9/site-packages/bitsandbytes-0.41.0-py3.9.egg/bitsandbytes/autograd/_functions.py:322: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization
  warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
{'loss': 2.249, 'learning_rate': 2.9999999999999997e-05, 'epoch': 0.03}                                                                                                                               
{'loss': 2.1927, 'learning_rate': 5.6999999999999996e-05, 'epoch': 0.05}                                                                                                                              
{'loss': 2.0813, 'learning_rate': 7.8e-05, 'epoch': 0.08}                                                                                                                                             
{'loss': 1.7206, 'learning_rate': 0.00010799999999999998, 'epoch': 0.1}                                                                                                                               
 11%|███████████▋                                                            | 42/388 [10:50<1:27:2

The output of 4-GPU training is shown above; GPU memory usage (nvidia-smi) is as follows:

+-------------------------------+----------------------+----------------------+
|   0  NVIDIA A100-SXM...  On   | 00000000:47:00.0 Off |                    0 |
| N/A   60C    P0   322W / 400W |  36944MiB / 81920MiB |     89%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-SXM...  On   | 00000000:4B:00.0 Off |                    0 |
| N/A   61C    P0   321W / 400W |  34204MiB / 81920MiB |     97%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA A100-SXM...  On   | 00000000:89:00.0 Off |                    0 |
| N/A   62C    P0   349W / 400W |  34200MiB / 81920MiB |     98%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA A100-SXM...  On   | 00000000:8E:00.0 Off |                    0 |
| N/A   63C    P0   261W / 400W |  33882MiB / 81920MiB |     89%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
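For reference, the LoRA hyperparameters printed in the training log above (lora_r, lora_alpha, lora_target_modules, lora_dropout) correspond to a peft configuration roughly like the following sketch. The real finetune.py also builds the Alpaca prompts, tokenizes the dataset, and sets up the Trainer, so treat this only as an outline:

import torch
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in int8, as suggested by the bitsandbytes messages in the log
model = LlamaForCausalLM.from_pretrained(
    "/disk1/llama-13b",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

config = LoraConfig(
    r=8,                                   # lora_r
    lora_alpha=16,                         # lora_alpha
    target_modules=["q_proj", "v_proj"],   # lora_target_modules
    lora_dropout=0.05,                     # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()   # roughly 6.5M trainable out of ~13B, matching the log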

Export to Hugging Face Format

You can download the LoRA weights from Angainor/alpaca-lora-13b · Hugging Face.

Modify the export_hf_checkpoint.py file:

import os

import torch
import transformers
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer  # noqa: F402

BASE_MODEL = os.environ.get("BASE_MODEL", "/disk1/llama-13b")
LORA_MODEL = os.environ.get("LORA_MODEL", "./alpaca-lora-13b")
HF_CHECKPOINT = os.environ.get("HF_CHECKPOINT", "./hf_ckpt")

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)

base_model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=False,
    torch_dtype=torch.float16,
    device_map={"": "cpu"},
)

# Keep a handle on one base weight so we can verify the merge later
first_weight = base_model.model.layers[0].self_attn.q_proj.weight
first_weight_old = first_weight.clone()

lora_model = PeftModel.from_pretrained(
    base_model,
    LORA_MODEL,
    device_map={"": "cpu"},
    torch_dtype=torch.float16,
)

lora_weight = lora_model.base_model.model.model.layers[
    0
].self_attn.q_proj.weight

# Loading the adapter alone must not have modified the base weights
assert torch.allclose(first_weight_old, first_weight)

# merge weights - new merging method from peft
lora_model = lora_model.merge_and_unload()

lora_model.train(False)

# did we do anything?
assert not torch.allclose(first_weight_old, first_weight)

lora_model_sd = lora_model.state_dict()
# Strip the PeftModel prefix and drop the (already merged) LoRA tensors
deloreanized_sd = {
    k.replace("base_model.model.", ""): v
    for k, v in lora_model_sd.items()
    if "lora" not in k
}

LlamaForCausalLM.save_pretrained(
    base_model, HF_CHECKPOINT, state_dict=deloreanized_sd, max_shard_size="400MB"
)

Run the export script:

python export_hf_checkpoint.py

The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 41/41 [00:26<00:00,  1.56it/s]

Check the exported model files:

hf_ckpt/
├── config.json
├── generation_config.json
├── pytorch_model-00001-of-00082.bin
├── pytorch_model-00002-of-00082.bin
├── pytorch_model-00003-of-00082.bin
├── pytorch_model-00004-of-00082.bin
├── pytorch_model-00005-of-00082.bin
├── pytorch_model-00006-of-00082.bin
├── pytorch_model-00007-of-00082.bin
├── pytorch_model-00008-of-00082.bin
├── pytorch_model-00009-of-00082.bin
├── pytorch_model-00010-of-00082.bin
├── pytorch_model-00011-of-00082.bin
├── pytorch_model-00012-of-00082.bin
├── pytorch_model-00013-of-00082.bin
├── pytorch_model-00014-of-00082.bin
├── pytorch_model-00015-of-00082.bin
├── pytorch_model-00016-of-00082.bin
├── pytorch_model-00017-of-00082.bin
├── pytorch_model-00018-of-00082.bin
├── pytorch_model-00019-of-00082.bin
├── pytorch_model-00020-of-00082.bin
├── pytorch_model-00021-of-00082.bin
├── pytorch_model-00022-of-00082.bin
├── pytorch_model-00023-of-00082.bin
├── pytorch_model-00024-of-00082.bin
├── pytorch_model-00025-of-00082.bin
├── pytorch_model-00026-of-00082.bin
├── pytorch_model-00027-of-00082.bin
├── pytorch_model-00028-of-00082.bin
├── pytorch_model-00029-of-00082.bin
├── pytorch_model-00030-of-00082.bin
├── pytorch_model-00031-of-00082.bin
├── pytorch_model-00032-of-00082.bin
├── pytorch_model-00033-of-00082.bin
├── pytorch_model-00034-of-00082.bin
├── pytorch_model-00035-of-00082.bin
├── pytorch_model-00036-of-00082.bin
├── pytorch_model-00037-of-00082.bin
├── pytorch_model-00038-of-00082.bin
├── pytorch_model-00039-of-00082.bin
├── pytorch_model-00040-of-00082.bin
├── pytorch_model-00041-of-00082.bin
├── pytorch_model-00042-of-00082.bin
├── pytorch_model-00043-of-00082.bin
├── pytorch_model-00044-of-00082.bin
├── pytorch_model-00045-of-00082.bin
├── pytorch_model-00046-of-00082.bin
├── pytorch_model-00047-of-00082.bin
├── pytorch_model-00048-of-00082.bin
├── pytorch_model-00049-of-00082.bin
├── pytorch_model-00050-of-00082.bin
├── pytorch_model-00051-of-00082.bin
├── pytorch_model-00052-of-00082.bin
├── pytorch_model-00053-of-00082.bin
├── pytorch_model-00054-of-00082.bin
├── pytorch_model-00055-of-00082.bin
├── pytorch_model-00056-of-00082.bin
├── pytorch_model-00057-of-00082.bin
├── pytorch_model-00058-of-00082.bin
├── pytorch_model-00059-of-00082.bin
├── pytorch_model-00060-of-00082.bin
├── pytorch_model-00061-of-00082.bin
├── pytorch_model-00062-of-00082.bin
├── pytorch_model-00063-of-00082.bin
├── pytorch_model-00064-of-00082.bin
├── pytorch_model-00065-of-00082.bin
├── pytorch_model-00066-of-00082.bin
├── pytorch_model-00067-of-00082.bin
├── pytorch_model-00068-of-00082.bin
├── pytorch_model-00069-of-00082.bin
├── pytorch_model-00070-of-00082.bin
├── pytorch_model-00071-of-00082.bin
├── pytorch_model-00072-of-00082.bin
├── pytorch_model-00073-of-00082.bin
├── pytorch_model-00074-of-00082.bin
├── pytorch_model-00075-of-00082.bin
├── pytorch_model-00076-of-00082.bin
├── pytorch_model-00077-of-00082.bin
├── pytorch_model-00078-of-00082.bin
├── pytorch_model-00079-of-00082.bin
├── pytorch_model-00080-of-00082.bin
├── pytorch_model-00081-of-00082.bin
├── pytorch_model-00082-of-00082.bin
└── pytorch_model.bin.index.json

0 directories, 85 files
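Once exported, the merged checkpoint loads like any Hugging Face causal LM. A quick sanity-check sketch (the Alpaca-style prompt below is just an example):

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("./hf_ckpt")
model = LlamaForCausalLM.from_pretrained(
    "./hf_ckpt", torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))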

Export as PyTorch state_dicts

Modify the export_state_dict_checkpoint.py file in the same way:
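The script ships with the alpaca-lora repo; the modification is the same idea as above, pointing BASE_MODEL and LORA_MODEL at the local paths. Below is a sketch of the load-and-merge portion only; the remainder of the script, which re-packs the merged weights into the original consolidated .pth / params.json layout, stays as shipped:

import os

import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

BASE_MODEL = os.environ.get("BASE_MODEL", "/disk1/llama-13b")
LORA_MODEL = os.environ.get("LORA_MODEL", "./alpaca-lora-13b")

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)

base_model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=False,
    torch_dtype=torch.float16,
    device_map={"": "cpu"},
)

lora_model = PeftModel.from_pretrained(
    base_model,
    LORA_MODEL,
    device_map={"": "cpu"},
    torch_dtype=torch.float16,
)

# Merge the LoRA weights into the base model and take its state_dict;
# the rest of the script converts this dict back to the original checkpoint format.
merged = lora_model.merge_and_unload()
merged.train(False)
state_dict = merged.state_dict()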

References

  • LLaMA
  • Stanford Alpaca
  • Alpaca-LoRA
