# Optimizing RoBERTa: Fine-Tuning with Mixed Precision on AMD
## Introduction
In this blog, we explore how to fine-tune the Robustly Optimized BERT Pretraining Approach ([RoBERTa](https://arxiv.org/abs/1907.11692)) large language model, with a focus on PyTorch's mixed precision capabilities. Specifically, we leverage AMD GPUs for mixed precision fine-tuning to achieve faster model training without any major impact on accuracy.
RoBERTa is an advanced variant of the Bidirectional Encoder Representations from Transformers ([BERT](https://arxiv.org/abs/1810.04805)) model developed by Facebook AI. It enhances BERT by modifying key pretraining hyperparameters, such as removing the next-sentence prediction objective and training with larger mini-batches. The model demonstrates superior performance across a wide range of natural language processing (NLP) tasks. For more information about RoBERTa, see [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692).
Mixed precision training is a technique that accelerates the training phase of deep learning models by using both 16-bit and 32-bit floating-point operations. PyTorch supports automatic mixed precision (AMP) training through the `torch.cuda.amp` module. When AMP is used, operations that involve matrix multiplication are carried out in lower (float16) precision. Lower-precision computations are faster and use less memory. Model accuracy is maintained by keeping a full-precision copy of the model weights during training.
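Before diving into the full fine-tuning example, here is a minimal, self-contained sketch of the AMP pattern this blog relies on. The tiny linear model, random data, and SGD optimizer are hypothetical stand-ins used only to illustrate `autocast` and `GradScaler`:

```python
import torch
import torch.nn as nn

# Toy setup (hypothetical, only to illustrate the AMP pattern)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(128, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=device.type == 'cuda')  # scales the loss to avoid float16 underflow

for step in range(3):
    inputs = torch.randn(32, 128, device=device)
    labels = torch.randint(0, 2, (32,), device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=device.type == 'cuda'):  # run matmul-heavy ops in float16
        loss = loss_fn(model(inputs), labels)
    scaler.scale(loss).backward()   # backward pass on the scaled loss
    scaler.step(optimizer)          # unscale gradients, then call optimizer.step()
    scaler.update()                 # adjust the scale factor for the next iteration
```

The same `autocast`/`GradScaler` structure appears later in the RoBERTa fine-tuning loop.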
For more information about mixed precision training, see [Automatic Mixed Precision package - torch.amp](https://pytorch.org/docs/stable/amp.html) and the ROCm blog *Automatic mixed precision in PyTorch using AMD GPUs*.
You can find the files related to this blog post in this [GitHub folder](https://github.com/ROCm/rocm-blogs/tree/release/blogs/artificial-intelligence/roberta_amp).
## System requirements: operating system and hardware tested
- AMD GPU: See the [ROCm documentation page](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html) for supported hardware and operating systems.
- ROCm 6.1: See [ROCm installation for Linux](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/) for installation instructions.
- Docker: See [Install Docker Engine on Ubuntu](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository) for installation instructions.
- PyTorch 2.1.2: Use the official ROCm Docker image: [rocm/pytorch:rocm6.1_ubuntu22.04_py3.10_pytorch_2.1.2](https://hub.docker.com/layers/rocm/pytorch/rocm6.1_ubuntu22.04_py3.10_pytorch_2.1.2/images/sha256-f6ea7cee8aae299c7f6368187df7beed29928850c3929c81e6f24b34271d652b?context=explore).
## Running this blog
- Clone the repository and `cd` into the blog directory:

  ```bash
  git clone git@github.com:ROCm/rocm-blogs.git
  cd rocm-blogs/blogs/artificial-intelligence/roberta_amp
  ```

- Build and start the container. For details on the build process, see `roberta_amp/docker/Dockerfile`:

  ```bash
  cd docker
  docker compose build
  docker compose up
  ```

- Open http://localhost:8888/lab/tree/src/roberta_amp.ipynb in your browser and open the `roberta_amp.ipynb` notebook.
## Fine-tuning RoBERTa for an emotion classification task
We use the [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion) dataset from the [Hugging Face website](https://huggingface.co/). This dataset is designed for emotion classification tasks. It consists of a collection of English Twitter messages annotated with six basic emotions: anger, fear, joy, love, sadness, and surprise. The dataset comes in split and unsplit configurations; the split configuration contains 16,000 training examples and 2,000 examples each for the validation and test splits. We use this dataset to fine-tune a custom RoBERTa language model for emotion classification, and we evaluate RoBERTa's performance at predicting the emotion expressed in a text, comparing the results obtained with and without mixed precision training.
When fine-tuning a model with PyTorch, there are two options: the *Hugging Face Trainer API* or *native PyTorch*. We use native PyTorch fine-tuning because it gives the user more control over the training process. This approach requires manually setting up the training loop, handling backpropagation, and explicitly managing data loading and model updates. Mixed precision can be implemented in native PyTorch by manually setting up and using the `torch.cuda.amp` module.
Let's start by importing the following modules:
```python
import torch
from torch.utils.data import DataLoader
from torch.optim import AdamW
from tqdm.notebook import tqdm
from transformers import RobertaTokenizer, RobertaForSequenceClassification, set_seed
import datasets
import time
set_seed(42)
from IPython.display import display, clear_output
```
Explore the `dair-ai/emotion` dataset and load the data as follows:
```python
# Dataset description can be found at https://huggingface.co/datasets/dair-ai/emotion
# Load train validation and test splits
train_data = datasets.load_dataset("dair-ai/emotion", split="train", trust_remote_code=True)
validation_data = datasets.load_dataset("dair-ai/emotion", split="validation", trust_remote_code=True)
test_data = datasets.load_dataset("dair-ai/emotion", split="test", trust_remote_code=True)

# Show dataset number of examples and column names
print(train_data)
print(validation_data)
print(test_data, '\n')

# Print the first instance and label on the train split
print('Text:', train_data['text'][0], '| Label:', train_data['label'][0])
```
Each data split contains the following:
```
Dataset({
    features: ['text', 'label'],
    num_rows: 16000
})
Dataset({
    features: ['text', 'label'],
    num_rows: 2000
})
Dataset({
    features: ['text', 'label'],
    num_rows: 2000
})

Text: i didnt feel humiliated | Label: 0
```
In the output above, we printed the first training example of the `train` split; its corresponding label is `0: sadness`.
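As an optional check (not part of the original workflow), you can recover the emotion names behind the integer labels from the dataset's `ClassLabel` feature:

```python
# Map integer labels back to emotion names (optional inspection step)
label_names = train_data.features['label'].names
print(label_names)                           # list of the six emotion names; index 0 is 'sadness'
print(label_names[train_data['label'][0]])   # emotion of the first training example: 'sadness'
```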
### Fine-tuning with native PyTorch
Let's begin fine-tuning our custom RoBERTa model using the native PyTorch training approach. We fine-tune two versions of the same model: one with regular fine-tuning and the other with mixed precision. We then compare the two versions in terms of training time and performance metrics.
#### Regular fine-tuning and performance metrics
First, let's tokenize the data and create the corresponding dataloaders:
```python
# Get the device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load train validation and test splits
# Dataset description can be found at https://huggingface.co/datasets/dair-ai/emotion
train_data = datasets.load_dataset("dair-ai/emotion", split="train", trust_remote_code=True)
validation_data = datasets.load_dataset("dair-ai/emotion", split="validation", trust_remote_code=True)
test_data = datasets.load_dataset("dair-ai/emotion", split="test", trust_remote_code=True)

# Load the tokenizer
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

# Tokenize the dataset
def tokenize_function(examples):
    return tokenizer(examples['text'], padding='max_length', return_tensors="pt")

# Apply tokenization to each split
tokenized_train_data = train_data.map(tokenize_function, batched=True)
tokenized_validation_data = validation_data.map(tokenize_function, batched=True)
tokenized_test_data = test_data.map(tokenize_function, batched=True)

# Set type to PyTorch tensors
tokenized_train_data.set_format(type="torch")
tokenized_validation_data.set_format(type="torch")
tokenized_test_data.set_format(type="torch")

# Transform tokenized datasets to PyTorch dataloaders
train_loader = DataLoader(tokenized_train_data, batch_size=32)
validation_loader = DataLoader(tokenized_validation_data, batch_size=32)
test_loader = DataLoader(tokenized_test_data, batch_size=32)
```
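As an optional sanity check (not in the original notebook), you can peek at one batch to confirm that every example is padded to RoBERTa's maximum sequence length of 512 tokens:

```python
# Peek at a single batch to verify the tokenized shapes
batch = next(iter(train_loader))
print(batch['input_ids'].shape)        # expected: torch.Size([32, 512]) (padded to RoBERTa's max length)
print(batch['attention_mask'].shape)   # expected: torch.Size([32, 512])
print(batch['label'].shape)            # expected: torch.Size([32])
```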
Now define the remaining components and run the following code to start training:
```python
# Load Roberta model for sequence classification
num_labels = 6  # The dair-ai/emotion dataset contains 6 labels
epochs = 3
model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=num_labels)
model.to(device)

# Instantiate the optimizer with the given learning rate
optimizer = AdamW(model.parameters(), lr=5e-5)

# Training Loop
model.train()

# Train the model
torch.cuda.synchronize()  # Wait for all kernels to finish
start_time = time.time()
for epoch in range(epochs):
    model.train()  # Ensure the model is back in training mode after validation
    for batch in train_loader:
        inputs = {'input_ids': batch['input_ids'].to(model.device),
                  'attention_mask': batch['attention_mask'].to(model.device),
                  'labels': batch['label'].to(model.device)
                  }

        outputs = model(**inputs)
        loss = outputs.loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        clear_output(wait=True)
        display(f'Epoch: {epoch+1}/{epochs}. Training Loss: {loss.item()}')

    # Validation Loop
    model.eval()
    total_eval_loss = 0
    for batch in validation_loader:
        with torch.no_grad():
            inputs = {'input_ids': batch['input_ids'].to(model.device),
                      'attention_mask': batch['attention_mask'].to(model.device),
                      'labels': batch['label'].to(model.device)
                      }

            outputs = model(**inputs)
            loss = outputs.loss
            total_eval_loss += loss.item()

    avg_val_loss = total_eval_loss / len(validation_loader)
    display(f'Validation Loss: {avg_val_loss}')

torch.cuda.synchronize()  # Wait for all kernels to finish
training_time_regular = time.time() - start_time
print(f'Mixed Precision False. Training time (s):{training_time_regular:.3f}')

# Save the model
model.save_pretrained('./native_finetuned_roberta_mixed_precision_false')
```
The code above displays the training loss for each batch and the average validation loss. Once training is complete, you will see output similar to the following:
```
'Epoch: 3/3. Training Loss: 0.10250010341405869'
'Validation Loss: 0.18223475706246164'
Mixed Precision False. Training time (s):681.362
```
The output above shows the training loss for the last batch of the third epoch, the average validation loss, and the total training time for regular fine-tuning (around 680 seconds in this case).
How well does this model perform? Let's compute its precision, recall, and F1 performance metrics:
```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def roberta_finetuned_performance_metrics(saved_model_path, tokenizer):
    is_mixed_precision = saved_model_path.split('_')[-1]
    model = RobertaForSequenceClassification.from_pretrained(saved_model_path)
    model.to(device)

    # return predictions
    def inference(batch):
        inputs = {k: v.to(device) for k, v in batch.items() if k in tokenizer.model_input_names}
        with torch.no_grad():
            outputs = model(**inputs)
            predictions = torch.argmax(outputs.logits, dim=-1).cpu().numpy()
        return {'predictions': predictions}

    # Perform inference on test set
    results = tokenized_test_data.map(inference, batched=True, batch_size=32)

    # Extract predictions and true labels
    predictions = results['predictions'].tolist()
    true_labels = tokenized_test_data['label'].tolist()

    # Compute evaluation metrics
    accuracy = accuracy_score(true_labels, predictions)
    precision, recall, f1, _ = precision_recall_fscore_support(true_labels, predictions, average='weighted')

    print(f'Model mixed precision: {is_mixed_precision}.\nPrecision: {precision:.3f} | Recall: {recall:.3f} | F1: {f1:.3f}')

saved_model_path = './native_finetuned_roberta_mixed_precision_false'
roberta_finetuned_performance_metrics(saved_model_path, tokenizer)
```
The output is:
```
Model mixed precision: False.
Precision: 0.930 | Recall: 0.925 | F1: 0.919
```
#### Mixed precision fine-tuning
Now, let's make use of mixed precision when training with native PyTorch. Run the following code to start the training process:
```python
# Load Roberta model for sequence classification
num_labels = 6  # The dair-ai/emotion dataset contains 6 labels
model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=num_labels)
model.to(device)

# Define the optimizer
optimizer = AdamW(model.parameters(), lr=5e-5)

# Instantiate gradient scaler
scaler = torch.cuda.amp.GradScaler()

# Train the model
torch.cuda.synchronize()  # Wait for all kernels to finish
model.train()
start_time = time.time()
for epoch in range(epochs):
    model.train()  # Ensure the model is back in training mode after validation
    for batch in tqdm(train_loader):
        optimizer.zero_grad()
        inputs = {'input_ids': batch['input_ids'].to(model.device),
                  'attention_mask': batch['attention_mask'].to(model.device),
                  'labels': batch['label'].to(model.device)
                  }

        # Use Automatic Mixed Precision
        with torch.cuda.amp.autocast():
            outputs = model(**inputs)
            loss = outputs.loss

        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()

        clear_output(wait=True)
        display(f'Epoch: {epoch+1}/{epochs}. Training Loss: {loss.item()}')

    # Validation loop
    model.eval()
    total_eval_loss = 0
    for batch in validation_loader:
        with torch.no_grad(), torch.cuda.amp.autocast():
            inputs = {'input_ids': batch['input_ids'].to(model.device),
                      'attention_mask': batch['attention_mask'].to(model.device),
                      'labels': batch['label'].to(model.device)
                      }

            outputs = model(**inputs)
            loss = outputs.loss
            total_eval_loss += loss.item()

    avg_val_loss = total_eval_loss / len(validation_loader)
    display(f'Validation Loss: {avg_val_loss}')

torch.cuda.synchronize()  # Wait for all kernels to finish
training_time_amp = time.time() - start_time
print(f'Mixed Precision True. Training time (s):{training_time_amp:.3f}')

# Save the model
model.save_pretrained('./native_finetuned_roberta_mixed_precision_true')
```
In the code above, we explicitly use `torch.cuda.amp.GradScaler` and `torch.cuda.amp.autocast` to enable automatic mixed precision in the training loop. At the end of training, we get the following output:
```
'Epoch: 3/3. Training Loss: 0.1367110311985016'
'Validation Loss: 0.1395080569717619'
Mixed Precision True. Training time (s):457.022
```
Using PyTorch automatic mixed precision results in a shorter training time than regular fine-tuning: around 457 seconds instead of around 680 seconds.
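If you want the speedup as a single number, you can compute it from the two timing variables captured above (a small optional calculation using `training_time_regular` and `training_time_amp`):

```python
# Relative speedup of mixed precision fine-tuning over regular fine-tuning
speedup = training_time_regular / training_time_amp
print(f'Speedup: {speedup:.2f}x')   # about 1.49x for the timings reported in this blog (681 s vs 457 s)
```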
Finally, the corresponding performance metrics are:
```python
saved_model_path = './native_finetuned_roberta_mixed_precision_true'
roberta_finetuned_performance_metrics(saved_model_path, tokenizer)
```

```
Model mixed precision: True.
Precision: 0.927 | Recall: 0.928 | F1: 0.927
```
We achieve a substantially shorter training time with only a minimal impact on overall model performance.
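For convenience, here is a side-by-side view of the numbers reported above:

| Fine-tuning run | Training time (s) | Precision | Recall | F1 |
|---|---|---|---|---|
| Regular (full precision) | 681.362 | 0.930 | 0.925 | 0.919 |
| Mixed precision (AMP) | 457.022 | 0.927 | 0.928 | 0.927 |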
## Summary
In this blog, we explored fine-tuning the RoBERTa large language model with mixed precision training, with an emphasis on AMD GPUs. Using PyTorch's automatic mixed precision (AMP), we observed that AMD hardware excels at accelerating the training process while ensuring minimal loss in model performance. The integration of AMD hardware with PyTorch AMP offers an effective way to improve computational efficiency and reduce training time, making it well suited for deep learning workflows.