Table of Contents
- 1. Background
- 2. Fine-tuning approach
- 2.1 Key environment versions
- 2.2 Steps
- 2.2.1 Download LLaMA-Factory
- 2.2.2 Prepare the dataset
- 2.2.3 Fine-tuning modes
- 2.2.3.1 ZeRO-1 fine-tuning
- 2.2.3.2 ZeRO-2 fine-tuning
- 2.2.3.3 ZeRO-3 fine-tuning
- 2.2.3.4 Single-GPU LoRA fine-tuning
- 2.2.4 Experiments
- 2.2.4.1 Experiment 1: multi-GPU fine-tuning, ZeRO-1
- 2.2.4.2 Experiment 2: multi-GPU fine-tuning, ZeRO-2
- 2.2.4.3 Experiment 3: multi-GPU fine-tuning, ZeRO-3
- 2.2.4.4 Experiment 4: single-GPU LoRA fine-tuning
- 2.2.5 Merge the model and serve it
- 2.2.5.1 Option 1: merge with LLaMA-Factory and serve the model with ollama
- 2.2.5.2 Option 2: merge with LLaMA-Factory and serve the model with vLLM
- 3 Pitfalls
- 3.1 Fine-tuning pitfalls
- 3.1.1 Problem 1: ValueError: Undefined dataset xxxx in dataset_info.json.
- 3.1.2 Problem 2: ValueError: Target modules {'c_attn'} not found in the base model. Please check the target modules and try again.
- 3.1.3 Problem 3: RuntimeError: The size of tensor a (1060864) must match the size of tensor b (315392) at non-singleton dimension 0.
- 3.1.4 Problem 4: Training efficiency
1. Background
As described in the previous post, 【个人开发】macbook m1 Lora微调qwen大模型, the same approach works on a GPU: in the train.py script, simply switch the device to cuda (a sketch follows at the end of this section).
With larger datasets, however, single-GPU fine-tuning becomes a bottleneck, so multi-GPU fine-tuning is worth considering.
After a round of searching, the common recipe for multi-GPU fine-tuning is DeepSpeed + LLaMA-Factory.
This post records how that setup went; it is intended purely as a personal learning note.
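For reference, here is a minimal sketch of that device switch, assuming train.py selects the device through a single device variable as in the earlier post:
import torch

# Prefer CUDA on NVIDIA GPUs; fall back to Apple MPS (M1), then CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"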
2. Fine-tuning approach
2.1 Key environment versions
Module | Version |
---|---|
python | 3.10 |
CUDA | 12.6 |
torch | 2.5.1 |
peft | 0.12.0 |
transformers | 4.46.2 |
accelerate | 1.1.1 |
trl | 0.9.6 |
deepspeed | 0.15.4 |
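To reproduce this environment, the versions above can be pinned in one install line (a hypothetical one-liner; the LLaMA-Factory install in the next step may still adjust some of them):
pip install torch==2.5.1 peft==0.12.0 transformers==4.46.2 accelerate==1.1.1 trl==0.9.6 deepspeed==0.15.4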
2.2 Steps
2.2.1 Download LLaMA-Factory
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"
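A quick sanity check after installing (assuming the editable install registers the llamafactory-cli entry point; the second line confirms PyTorch can see all the GPUs):
llamafactory-cli version
python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())"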
2.2.2 Prepare the dataset
The dataset is the widely circulated 《甄嬛传》 (Empresses in the Palace) corpus.
Data source: huanhuan.json
The dataset is structured as follows.
// File name: huanhuan.json
[
{
"instruction": "小姐,别的秀女都在求中选,唯有咱们小姐想被撂牌子,菩萨一定记得真真儿的——",
"input": "",
"output": "嘘——都说许愿说破是不灵的。"
},
...
]
Next, prepare the dataset index file dataset_info.json. Because this is a local fine-tune, LLaMA-Factory first consults dataset_info.json and then resolves the actual dataset file from it; the --dataset flag in the launch scripts below selects a key from this file.
{
"identity": {
"file_name": "test_data.json"
}
}
Note that the dataset file must be valid JSON, otherwise loading will fail with an error.
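Before launching a long run, a small check is worth doing (a sketch; file names follow the examples above, adjust to your layout) to confirm the data parses as JSON and that dataset_info.json really contains the key passed via --dataset:
import json

# dataset_info.json must contain the key that --dataset will reference
with open("dataset_info.json", encoding="utf-8") as f:
    info = json.load(f)
assert "identity" in info, "--dataset identity needs an 'identity' key in dataset_info.json"

# The dataset file itself must be valid JSON, or LLaMA-Factory will refuse to load it
with open(info["identity"]["file_name"], encoding="utf-8") as f:
    data = json.load(f)

# Each record should carry the alpaca-style fields shown above
for row in data[:3]:
    assert {"instruction", "input", "output"} <= row.keys()
print(f"{len(data)} records OK")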
2.2.3 Fine-tuning modes
2.2.3.1 ZeRO-1 fine-tuning
The configuration mirrors the ZeRO-3 config shown in section 2.2.3.3; only the zero_optimization.stage parameter is changed (the leftover stage3_* keys are simply ignored at lower stages).
// File name: ds_config_zero1.json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 1,
"offload_optimizer": {
"device": "none",
"pin_memory": true
},
"offload_param": {
"device": "none",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": 4,
"gradient_clipping": "auto",
"steps_per_print": 100,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
Fine-tuning script:
# run_train_bash_zero_1.sh
#!/bin/bash
# Record the start time
START=$(date +%s.%N)
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch src/train.py \
--deepspeed ds_config_zero1.json \
--stage sft \
--do_train True \
--model_name_or_path /root/ai_project/fine-tuning-by-lora/models/model/qwen/Qwen2___5-7B-Instruct \
--finetuning_type lora \
--template qwen \
--dataset_dir /root/ai_project/fine-tuning-by-lora/dataset/ \
--dataset identity \
--cutoff_len 1024 \
--num_train_epochs 30 \
--max_samples 100000 \
--learning_rate 5e-05 \
--lr_scheduler_type cosine \
--warmup_steps 10 \
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 4 \
--max_grad_norm 1.0 \
--logging_steps 10 \
--save_steps 100 \
--neftune_noise_alpha 0 \
--lora_rank 8 \
--lora_dropout 0.1 \
--lora_alpha 32 \
--lora_target q_proj,v_proj,k_proj,gate_proj,up_proj,o_proj,down_proj \
--output_dir ./output/qwen_7b_ft/zero1/ \
--bf16 True \
--plot_loss True
# Record the end time
END=$(date +%s.%N)
# Compute the elapsed time
DUR=$(echo "$END - $START" | bc)
# Print the elapsed time
printf "Execution time: %.6f seconds\n" $DUR
2.2.3.2 ZeRO-2 fine-tuning
In the ZeRO-2 config below, AdamW is the optimizer; no DeepSpeed scheduler block is defined, so the learning-rate schedule passed on the command line takes effect and the learning rate decays gradually during training. Note also that, unlike the ZeRO-1 config, this one offloads optimizer states to CPU (offload_optimizer.device = "cpu"), trading some throughput for GPU memory.
// File name: ds_config_zero2.json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
}
},
"gradient_accumulation_steps": 4,
"gradient_clipping": "auto",
"steps_per_print": 100,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
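The ZeRO-2 launch script is not repeated in full; it is run_train_bash_zero_1.sh with two flags swapped (the output path below matches the ZeRO-2 training logs in section 2.2.4.2):
# run_train_bash_zero_2.sh (hypothetical name) differs from run_train_bash_zero_1.sh only in:
#   --deepspeed  ds_config_zero1.json        ->  --deepspeed  ds_config_zero2.json
#   --output_dir ./output/qwen_7b_ft/zero1/  ->  --output_dir ./output/qwen_7b_ft/zero2/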
2.2.3.3 ZeRO-3 fine-tuning
This run uses ZeRO-3, so add the following config file under the LLaMA-Factory directory.
For reference, LLaMA-Factory ships sample DeepSpeed configs under ./LLaMA-Factory/examples/deepspeed/.
// File name: ds_config_zero3.json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "none",
"pin_memory": true
},
"offload_param": {
"device": "none",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 100,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
Fine-tuning script:
# run_train_bash.sh
#!/bin/bash
# Record the start time
START=$(date +%s.%N)
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch src/train.py \
--deepspeed ds_config_zero3.json \
--stage sft \
--do_train True \
--model_name_or_path /root/ai_project/fine-tuning-by-lora/models/model/qwen/Qwen2___5-7B-Instruct \
--finetuning_type lora \
--template qwen \
--dataset_dir /root/ai_project/fine-tuning-by-lora/dataset/ \
--dataset identity \
--cutoff_len 1024 \
--num_train_epochs 5 \
--max_samples 100000 \
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 4 \
--lr_scheduler_type cosine \
--learning_rate 5e-04 \
--max_grad_norm 1.0 \
--logging_steps 5 \
--save_steps 100 \
--neftune_noise_alpha 0 \
--lora_rank 8 \
--lora_dropout 0.1 \
--lora_alpha 32 \
--lora_target q_proj,v_proj,k_proj,gate_proj,up_proj,o_proj,down_proj \
--output_dir ./output/qwen_7b_ds/train_2025_02_13 \
--bf16 True \
--plot_loss True
# Record the end time
END=$(date +%s.%N)
# Compute the elapsed time
DUR=$(echo "$END - $START" | bc)
# Print the elapsed time
printf "Execution time: %.6f seconds\n" $DUR
Key parameters in the scripts above:
Parameter | Description |
---|---|
--deepspeed | DeepSpeed config file selecting the acceleration/sharding mode |
--model_name_or_path | path of the base model to fine-tune |
--finetuning_type | fine-tuning method; lora here |
--template | prompt template used for training and inference; it differs per LLM, qwen here |
--dataset_dir | local dataset directory |
--dataset | the dataset key in dataset_info.json to use |
--lora_target | names of the modules LoRA is applied to |
--output_dir | model output directory |
For more fine-tuning parameters, see: Llama-Factory参数介绍
The remaining flags are the usual peft LoRA options plus standard training hyperparameters; they map onto a peft LoraConfig as below (note that LoRA scales its update by lora_alpha / r = 32 / 8 = 4).
lora_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
inference_mode=False,
r=8,
lora_alpha=32,
lora_dropout=0.1
)
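As a quick cross-check (a sketch, assuming model is the already-loaded base model), peft can report how many parameters this config makes trainable; the number should line up with the 20,185,088 printed in the training logs below:
from peft import get_peft_model

peft_model = get_peft_model(model, lora_config)  # model: the loaded Qwen2.5-7B-Instruct base
peft_model.print_trainable_parameters()          # expect roughly 20,185,088 trainable params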
2.2.3.4 Single-GPU LoRA fine-tuning
For details, see the previous post: 【个人开发】macbook m1 Lora微调qwen大模型
or the GitHub project: fine-tuning-by-Lora
The fine-tuning code is as follows.
import torch
import pandas as pd
from datasets import Dataset
from modelscope import snapshot_download
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Trainer,
    TrainingArguments,
)

# model_id, models_dir, model_path, device, dataset_file, checkpoint_dir and
# lora_dir are project-level settings defined elsewhere in the repo.

torch_dtype = torch.half

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    inference_mode=False,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1
)

def train():
    # Load the model
    model_dir = snapshot_download(model_id=model_id, cache_dir=f"{models_dir}/model", revision='master')
    if model_path != model_dir:
        raise Exception(f"model_path:{model_path} != model_dir:{model_dir}")
    model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device, torch_dtype=torch_dtype)
    model.enable_input_require_grads()  # required when gradient checkpointing is enabled

    # Load the data
    df = pd.read_json(dataset_file)
    ds = Dataset.from_pandas(df)
    print(ds[:3])

    # Preprocess the data
    tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token

    def process_func(item):
        MAX_LENGTH = 384  # the tokenizer may split one Chinese character into several tokens, so leave headroom to keep samples intact
        input_ids, attention_mask, labels = [], [], []
        # Note: this prompt uses Llama-3-style special tokens, carried over from the original
        # tutorial; for a Qwen model the Qwen chat template (<|im_start|>...) would normally be used.
        instruction = tokenizer(
            f"<|start_header_id|>user<|end_header_id|>\n\n{item['instruction'] + item['input']}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
            add_special_tokens=False)  # add_special_tokens=False: do not prepend special tokens
        response = tokenizer(f"{item['output']}<|eot_id|>", add_special_tokens=False)
        input_ids = instruction["input_ids"] + response["input_ids"] + [tokenizer.pad_token_id]
        attention_mask = instruction["attention_mask"] + response["attention_mask"] + [1]  # the eos token must be attended to as well, hence the extra 1
        labels = [-100] * len(instruction["input_ids"]) + response["input_ids"] + [tokenizer.pad_token_id]
        if len(input_ids) > MAX_LENGTH:  # truncate
            input_ids = input_ids[:MAX_LENGTH]
            attention_mask = attention_mask[:MAX_LENGTH]
            labels = labels[:MAX_LENGTH]
        return {
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "labels": labels
        }

    tokenized_id = ds.map(process_func, remove_columns=ds.column_names)
    # Spot-check: decode the label tokens of one sample (dropping the -100 mask values)
    tokenizer.decode(list(filter(lambda x: x != -100, tokenized_id[1]["labels"])))

    # Attach the LoRA adapters
    model = get_peft_model(model, lora_config)

    # Train the model
    training_args = TrainingArguments(
        output_dir=checkpoint_dir,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        logging_steps=5,
        num_train_epochs=30,
        save_steps=100,
        learning_rate=5e-04,
        save_on_each_node=True,
        gradient_checkpointing=True,
    )
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=tokenized_id,
        data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True),
    )
    trainer.train()

    # Save the adapter and tokenizer
    trainer.model.save_pretrained(lora_dir)
    tokenizer.save_pretrained(lora_dir)
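A minimal smoke test, assuming model_path and lora_dir carry the same values as in train() above, that reloads the saved adapter with peft's standard PeftModel API:
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reload the base model and attach the LoRA adapter saved by the trainer above
base = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype=torch.half)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = PeftModel.from_pretrained(base, lora_dir)

inputs = tokenizer("你是谁?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))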
2.2.4 Experiments
These runs compare the performance of multi-GPU fine-tuning against single-GPU fine-tuning.
All runs use 2,030 samples, with epochs = 30, batch size = 4, and gradient accumulation steps = 4.
Run | Setup | Steps | Wall time | Final loss |
---|---|---|---|---|
Experiment 1 | ZeRO-1 fine-tuning | 480 | 09:00 | 0.0101 |
Experiment 2 | ZeRO-2 fine-tuning | 480 | 09:59 | 0.4757 |
Experiment 3 | ZeRO-3 fine-tuning | 480 | 1:49:11 | 0.0746 |
Experiment 4 | single-GPU LoRA fine-tuning | 3810 | 1:07:57 | 0.0009 |
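The step counts in the table follow directly from the batch math; a quick arithmetic check:
import math

samples, epochs = 2030, 30
micro_batch, grad_accum, gpus = 4, 4, 8               # per-device batch, accumulation, GPU count

global_batch = micro_batch * grad_accum * gpus        # 128, matches "Total train batch size" in the logs
print(math.ceil(samples / global_batch) * epochs)                # 480 steps for the 8-GPU runs
print(math.ceil(samples / (micro_batch * grad_accum)) * epochs)  # 3810 steps for the single-GPU run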
Preliminary conclusions:
1. Comparing experiments 1 and 3, the sharp increase in wall time under ZeRO-3 comes down to inefficient resource usage: the GPUs are not fully utilized.
2. Comparing experiment 2 against experiments 1 and 3, one reason its loss falls more slowly is the learning-rate scheduler in use.
2.2.4.1 Experiment 1: multi-GPU fine-tuning, ZeRO-1
The logs are as follows:
[INFO|trainer.py:2369] 2025-02-18 09:44:50,875 >> ***** Running training *****
[INFO|trainer.py:2370] 2025-02-18 09:44:50,875 >> Num examples = 2,030
[INFO|trainer.py:2371] 2025-02-18 09:44:50,875 >> Num Epochs = 30
[INFO|trainer.py:2372] 2025-02-18 09:44:50,875 >> Instantaneous batch size per device = 4
[INFO|trainer.py:2375] 2025-02-18 09:44:50,875 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:2376] 2025-02-18 09:44:50,875 >> Gradient Accumulation steps = 4
[INFO|trainer.py:2377] 2025-02-18 09:44:50,875 >> Total optimization steps = 480
[INFO|trainer.py:2378] 2025-02-18 09:44:50,878 >> Number of trainable parameters = 20,185,088
.....
***** train metrics *****
epoch = 30.0
total_flos = 234733999GF
train_loss = 1.0322
train_runtime = 0:09:00.75
train_samples_per_second = 112.619
train_steps_per_second = 0.888
Figure saved at: ./output/qwen_7b_ft/zero1/training_loss.png
GPU utilization:
Loss curve:
2.2.4.2 Experiment 2: multi-GPU fine-tuning, ZeRO-2
Fine-tuning on 2,030 samples across 8 GPUs with the parameters below took 480 steps in total, with a wall time of 09:59.
[INFO|trainer.py:2369] 2025-02-17 12:53:54,461 >> ***** Running training *****
[INFO|trainer.py:2370] 2025-02-17 12:53:54,461 >> Num examples = 2,030
[INFO|trainer.py:2371] 2025-02-17 12:53:54,461 >> Num Epochs = 30
[INFO|trainer.py:2372] 2025-02-17 12:53:54,461 >> Instantaneous batch size per device = 4
[INFO|trainer.py:2375] 2025-02-17 12:53:54,461 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:2376] 2025-02-17 12:53:54,461 >> Gradient Accumulation steps = 4
[INFO|trainer.py:2377] 2025-02-17 12:53:54,461 >> Total optimization steps = 480
[INFO|trainer.py:2378] 2025-02-17 12:53:54,465 >> Number of trainable parameters = 20,185,088
***** train metrics *****
epoch = 30.0
total_flos = 234733999GF
train_loss = 1.6736
train_runtime = 0:09:59.38
train_samples_per_second = 101.605
train_steps_per_second = 0.801
Figure saved at: ./output/qwen_7b_ft/zero2/training_loss.png
GPU utilization:
Loss curve:
2.2.4.3 Experiment 3: multi-GPU fine-tuning, ZeRO-3
Fine-tuning on 2,030 samples across 8 GPUs with the parameters below took 480 steps in total, with a wall time of 1:49:11.
[INFO|trainer.py:2369] 2025-02-17 13:07:48,438 >> ***** Running training *****
[INFO|trainer.py:2370] 2025-02-17 13:07:48,438 >> Num examples = 2,030
[INFO|trainer.py:2371] 2025-02-17 13:07:48,438 >> Num Epochs = 30
[INFO|trainer.py:2372] 2025-02-17 13:07:48,438 >> Instantaneous batch size per device = 4
[INFO|trainer.py:2375] 2025-02-17 13:07:48,438 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:2376] 2025-02-17 13:07:48,438 >> Gradient Accumulation steps = 4
[INFO|trainer.py:2377] 2025-02-17 13:07:48,438 >> Total optimization steps = 480
[INFO|trainer.py:2378] 2025-02-17 13:07:48,442 >> Number of trainable parameters = 20,185,088
...
***** train metrics *****
epoch = 30.0
total_flos = 257671GF
train_loss = 0.3719
train_runtime = 1:49:11.88
train_samples_per_second = 9.295
train_steps_per_second = 0.073
Figure saved at: ./output/qwen_7b_ft/zero3/training_loss.png
[WARNING|2025-02-17 14:57:11] llamafactory.extras.ploting:162 >> No metric eval_loss to plot.
[WARNING|2025-02-17 14:57:11] llamafactory.extras.ploting:162 >> No metric eval_accuracy to plot.
[INFO|modelcard.py:449] 2025-02-17 14:57:11,629 >> Dropping the following result as it does not have all the necessary fields:
GPU utilization:
Loss curve:
2.2.4.4 Experiment 4: single-GPU LoRA fine-tuning
On a single GPU, the same workload takes 3,810 steps in total.
2.2.5 Merge the model and serve it
2.2.5.1 Option 1: merge with LLaMA-Factory and serve the model with ollama
Model merging
Using the LLaMA-Factory framework, write the llama3_lora_sft_qwen.yaml config below and merge the LoRA adapter into the base model.
# llama3_lora_sft_qwen.yaml
### model
model_name_or_path: /root/ai_project/fine-tuning-by-lora/models/model/qwen/Qwen2___5-7B-Instruct
adapter_name_or_path: /root/ai_project/LLaMA-Factory/output/qwen_7b_ds/zero2/
template: qwen
trust_remote_code: true
### export
export_dir: output/llama3_lora_sft_qwen
export_size: 5
export_device: gpu
export_legacy_format: false
Then run the export:
llamafactory-cli export llama3_lora_sft_qwen.yaml
Model packaging
After the merge completes, a Modelfile is generated automatically, so the model can be packaged straight into ollama.
# ollama modelfile auto-generated by llamafactory
FROM .
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ range .Messages }}{{ if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
<|im_start|>assistant
{{ else if eq .Role "assistant" }}{{ .Content }}<|im_end|>
{{ end }}{{ end }}"""
SYSTEM """You are a helpful assistant."""
PARAMETER stop "<|im_end|>"
PARAMETER num_ctx 4096
Model startup
Create the model in ollama:
ollama create llama3_lora_sft_qwen -f Modelfile
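Once created, the model can be exercised directly from the ollama CLI (or via ollama's HTTP API):
ollama run llama3_lora_sft_qwen "你是谁?"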
Reference: 大模型开发和微调工具Llama-Factory-->LoRA合并
2.2.5.2 Option 2: merge with LLaMA-Factory and serve the model with vLLM
Merge the model exactly as in option 1, then start the service with vLLM.
Start the model service with vLLM:
# vLLM ships a built-in qwen chat template.
CUDA_VISIBLE_DEVICES=1,2,3,4 python3 -m vllm.entrypoints.openai.api_server \
--model "/root/ai_project/LLaMA-Factory/output/merge/" \
--port 6006 \
--tensor-parallel-size 4 \
--served-model-name Qwen2.5-7B-sft \
--max-model-len 8192 \
--dtype half \
--host 0.0.0.0
Calling the model service API:
import requests
def chat_with_vllm(prompt, port=6006):
url = f"http://localhost:{port}/v1/chat/completions"
headers = {"Content-Type": "application/json"}
data = {
"model": "Qwen2.5-7B-sft", # 模型名称或路径
"messages": [{"role": "user", "content": prompt}],
"max_tokens": 512,
"temperature": 0.7
}
response = requests.post(url, headers=headers, json=data)
if response.status_code == 200:
result = response.json()
generated_text = result["choices"][0]["message"]["content"]
print(generated_text.strip())
else:
print("Error:", response.status_code, response.text)
# Example call
chat_with_vllm("你是谁?", port=6006)
Service log:
Note: the template in use is visible in the log.
Call result:
3 Pitfalls
3.1 Fine-tuning pitfalls
3.1.1 Problem 1: ValueError: Undefined dataset xxxx in dataset_info.json.
This appears when the launch flag --dataset identity names a key that does not exist in dataset_info.json. Make sure the key you pass is actually defined there.
3.1.2 Problem 2: ValueError: Target modules {'c_attn'} not found in the base model. Please check the target modules and try again.
This is raised when --lora_target is set to c_attn, a module name from older architectures (e.g. first-generation Qwen) that does not exist in this model. Use the module names present in the base model instead: q_proj,v_proj,k_proj,gate_proj,up_proj,o_proj,down_proj. Mind the format as well; a malformed list also raises an error.
3.1.3 Problem 3: RuntimeError: The size of tensor a (1060864) must match the size of tensor b (315392) at non-singleton dimension 0.
Tensor-size mismatches like this usually come from conflicting checkpoints, for example a run that was interrupted partway and then relaunched pointing at the same output path. Pointing --output_dir at a fresh directory fixes it.
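A simple guard is to timestamp the output directory at launch, in the spirit of the date-suffixed path used in the ZeRO-3 script above:
# Give each launch a fresh output directory so restarts never collide with old checkpoints
OUTPUT_DIR="./output/qwen_7b_ft/zero1_$(date +%Y_%m_%d_%H%M)"
# then pass: --output_dir "$OUTPUT_DIR"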
3.1.4 Problem 4: Training efficiency
With ample GPU resources, ZeRO-2 trains noticeably faster than ZeRO-3 (and in the experiments above, ZeRO-1 was faster still).
[To be continued and updated...]