[Notes] Transformer


I. Prerequisites

1. What is the attention mechanism?

[Reference] Zhihu article: 一文读懂注意力机制 (an introduction to attention mechanisms)

1) What is the underlying principle, and how is it implemented?

(figure: overview of the attention mechanism)

Steps:
(1) Use a scoring function to compute the relevance between the query vector q and each input h.
(2) Normalize the scores with softmax to obtain the attention distribution.
(3) Use the attention distribution to take a weighted sum of the inputs, which gives the attention output.
[Note] Here each input h is a single vector used directly (it serves as both key and value), whereas key-value attention (and multi-head attention) uses separate key-value pairs.

The details are shown in the figure below:
(figure: step-by-step attention computation)
For more details, see the Zhihu article 一文读懂注意力机制.
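As a concrete illustration of the three steps above, here is a minimal PyTorch sketch; the dot-product scoring function and the names q and h are illustrative choices, not taken from the referenced article:

import torch
import torch.nn.functional as F

def basic_attention(q: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
    # q: query vector of shape [d]; h: inputs of shape [n, d]
    # (each input h_i serves as both key and value here)
    scores = h @ q                    # (1) score each input against the query
    alpha = F.softmax(scores, dim=0)  # (2) normalize into an attention distribution
    return alpha @ h                  # (3) weighted sum of the inputs

q = torch.randn(8)
h = torch.randn(5, 8)
out = basic_attention(q, h)           # shape [8]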

2) How is the query vector q obtained?

  1. The query vector q is usually task-dependent. For example, in a Seq-to-Seq machine translation task, q can be the decoder's output state vector from the previous time step.
  2. In self-attention, however, the query vector q can also be generated from the input itself: the input h is mapped by three weight matrices W into the query space Q, the key space K, and the value space V (see the sketch below).
    (figure: projecting the input into the query, key, and value spaces)
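A minimal sketch of this projection step; the dimensions and the names W_q, W_k, W_v are illustrative assumptions:

import torch
from torch import nn

d_model, seq_len = 8, 5               # illustrative sizes

# three learned projection matrices W
W_q = nn.Linear(d_model, d_model, bias=False)
W_k = nn.Linear(d_model, d_model, bias=False)
W_v = nn.Linear(d_model, d_model, bias=False)

h = torch.randn(seq_len, d_model)     # input vectors h_1 ... h_n

# map the same input into the query, key, and value spaces
Q, K, V = W_q(h), W_k(h), W_v(h)

# scaled dot-product self-attention over the sequence
scores = Q @ K.T / d_model ** 0.5     # [seq_len, seq_len]
attn = torch.softmax(scores, dim=-1)
out = attn @ V                        # [seq_len, d_model]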

2. What is the Transformer architecture, and how does it differ from an ordinary encoder-decoder architecture?

The Transformer is itself an encoder-decoder architecture. What sets it apart is that it relies entirely on attention mechanisms, dispensing with recurrence and convolutions altogether.

II. Transformer

A key point from the paper: the number of operations required to relate signals from any two input or output positions is reduced to a constant, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions; this effect is counteracted with Multi-Head Attention.

1. Architecture

(figure: the Transformer model architecture)

1.1 Encoder

Each encoder layer has two sub-layers: the first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. A residual connection is employed around each of the two sub-layers, followed by layer normalization.
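A minimal sketch of this sub-layer pattern as described in the original paper, i.e. post-norm x = LayerNorm(x + Sublayer(x)); the function and variable names are illustrative, and note that the labml implementation quoted later in this post uses pre-norm instead:

import torch
from torch import nn

def residual_postnorm(x, sublayer, norm, dropout):
    # residual connection around the sub-layer, followed by layer normalization
    return norm(x + dropout(sublayer(x)))

d_model, d_ff = 512, 2048
ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
norm = nn.LayerNorm(d_model)
dropout = nn.Dropout(0.1)

x = torch.randn(10, 2, d_model)       # [seq_len, batch_size, d_model]
x = residual_postnorm(x, ffn, norm, dropout)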

1.2 Decoder

In addition to the two sub-layers of each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. As in the encoder, residual connections are employed around each sub-layer, followed by layer normalization. The self-attention sub-layer in the decoder stack is also modified to prevent positions from attending to subsequent positions.
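A minimal sketch of that "no attending to subsequent positions" constraint as a lower-triangular (causal) mask; the [seq_len_q, seq_len_k, 1] layout matches what the MultiHeadAttention code below expects, and the function name is an assumption:

import torch

def subsequent_mask(seq_len: int) -> torch.Tensor:
    # mask[i, j, 0] is True when query position i may attend to key position j (j <= i)
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    return mask.unsqueeze(-1)          # [seq_len, seq_len, 1], broadcast over the batch

print(subsequent_mask(4)[:, :, 0].int())
# tensor([[1, 0, 0, 0],
#         [1, 1, 0, 0],
#         [1, 1, 1, 0],
#         [1, 1, 1, 1]], dtype=torch.int32)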

1.3 Multi-head self-attention

[Code] GitHub link (the annotated implementation from labml.ai)
Multi-head attention implementation:

"""
---
title: Multi-Headed Attention (MHA)
"""

import math
from typing import Optional, List

import torch
from torch import nn

from labml import tracker


class PrepareForMultiHeadAttention(nn.Module):
    """
    <a id="PrepareMHA"></a>
    ## Prepare for multi-head attention
    This module does a linear transformation and splits the vector into given
    number of heads for multi-head attention.
    This is used to transform **key**, **query**, and **value** vectors.
    """

    def __init__(self, d_model: int, heads: int, d_k: int, bias: bool):
        super().__init__()
        # Linear layer for linear transform
        self.linear = nn.Linear(d_model, heads * d_k, bias=bias)
        # Number of heads
        self.heads = heads
        # Number of dimensions in vectors in each head
        self.d_k = d_k

    def forward(self, x: torch.Tensor):
        # Input has shape `[seq_len, batch_size, d_model]` or `[batch_size, d_model]`.
        # We apply the linear transformation to the last dimension and split that into
        # the heads.
        head_shape = x.shape[:-1]

        # Linear transform
        x = self.linear(x)

        # Split last dimension into heads
        x = x.view(*head_shape, self.heads, self.d_k)

        # Output has shape `[seq_len, batch_size, heads, d_k]` or `[batch_size, heads, d_k]`
        return x


class MultiHeadAttention(nn.Module):
    r"""
    <a id="MHA"></a>
    ## Multi-Head Attention Module
    This computes scaled multi-headed attention for given `query`, `key` and `value` vectors.
    $$\mathop{Attention}(Q, K, V) = \underset{seq}{\mathop{softmax}}\Bigg(\frac{Q K^\top}{\sqrt{d_k}}\Bigg)V$$
    In simple terms, it finds keys that match the query and gets the values of
     those keys.
    It uses dot-product of query and key as the indicator of how matching they are.
    Before taking the $softmax$ the dot-products are scaled by $\frac{1}{\sqrt{d_k}}$.
    This is done to avoid large dot-product values causing softmax to
    give very small gradients when $d_k$ is large.
    Softmax is calculated along the axis of the sequence (or time).
    """

    def __init__(self, heads: int, d_model: int, dropout_prob: float = 0.1, bias: bool = True):
        """
        * `heads` is the number of heads.
        * `d_model` is the number of features in the `query`, `key` and `value` vectors.
        """

        super().__init__()

        # Number of features per head
        self.d_k = d_model // heads
        # Number of heads
        self.heads = heads

        # These transform the `query`, `key` and `value` vectors for multi-headed attention.
        self.query = PrepareForMultiHeadAttention(d_model, heads, self.d_k, bias=bias)
        self.key = PrepareForMultiHeadAttention(d_model, heads, self.d_k, bias=bias)
        self.value = PrepareForMultiHeadAttention(d_model, heads, self.d_k, bias=True)

        # Softmax for attention along the time dimension of `key`
        self.softmax = nn.Softmax(dim=1)

        # Output layer
        self.output = nn.Linear(d_model, d_model)
        # Dropout
        self.dropout = nn.Dropout(dropout_prob)
        # Scaling factor before the softmax
        self.scale = 1 / math.sqrt(self.d_k)

        # We store attentions so that it can be used for logging, or other computations if needed
        self.attn = None

    def get_scores(self, query: torch.Tensor, key: torch.Tensor):
        """
        ### Calculate scores between queries and keys
        This method can be overridden for other variations like relative attention.
        """

        # Calculate $Q K^\top$ or $S_{ijbh} = \sum_d Q_{ibhd} K_{jbhd}$
        return torch.einsum('ibhd,jbhd->ijbh', query, key)

    def prepare_mask(self, mask: torch.Tensor, query_shape: List[int], key_shape: List[int]):
        """
        `mask` has shape `[seq_len_q, seq_len_k, batch_size]`, where first dimension is the query dimension.
        If the query dimension is equal to $1$ it will be broadcasted.
        """

        assert mask.shape[0] == 1 or mask.shape[0] == query_shape[0]
        assert mask.shape[1] == key_shape[0]
        assert mask.shape[2] == 1 or mask.shape[2] == query_shape[1]

        # Same mask applied to all heads.
        mask = mask.unsqueeze(-1)

        # resulting mask has shape `[seq_len_q, seq_len_k, batch_size, 1]` and is broadcast across heads
        return mask

    def forward(self, *,
                query: torch.Tensor,
                key: torch.Tensor,
                value: torch.Tensor,
                mask: Optional[torch.Tensor] = None):
        """
        `query`, `key` and `value` are the tensors that store
        collection of *query*, *key* and *value* vectors.
        They have shape `[seq_len, batch_size, d_model]`.
        `mask` has shape `[seq_len, seq_len, batch_size]` and
        `mask[i, j, b]` indicates whether for batch `b`,
        query at position `i` has access to key-value at position `j`.
        """

        # `query`, `key` and `value`  have shape `[seq_len, batch_size, d_model]`
        seq_len, batch_size, _ = query.shape

        if mask is not None:
            mask = self.prepare_mask(mask, query.shape, key.shape)

        # Prepare `query`, `key` and `value` for attention computation.
        # These will then have shape `[seq_len, batch_size, heads, d_k]`.
        query = self.query(query)
        key = self.key(key)
        value = self.value(value)

        # Compute attention scores $Q K^\top$.
        # This gives a tensor of shape `[seq_len, seq_len, batch_size, heads]`.
        scores = self.get_scores(query, key)

        # Scale scores $\frac{Q K^\top}{\sqrt{d_k}}$
        scores *= self.scale

        # Apply mask
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float('-inf'))

        # $softmax$ attention along the key sequence dimension
        # $\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_k}}\Bigg)$
        attn = self.softmax(scores)

        # Save attentions if debugging
        tracker.debug('attn', attn)

        # Apply dropout
        attn = self.dropout(attn)

        # Multiply by values
        # $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_k}}\Bigg)V$$
        x = torch.einsum("ijbh,jbhd->ibhd", attn, value)

        # Save attentions for any other calculations 
        self.attn = attn.detach()

        # Concatenate multiple heads
        x = x.reshape(seq_len, batch_size, -1)

        # Output layer
        return self.output(x)
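
A quick usage sketch for the module above, assuming the labml package is installed so the tracker import resolves; the hyperparameters are illustrative and the shapes follow the [seq_len, batch_size, d_model] convention used in the code:

mha = MultiHeadAttention(heads=8, d_model=512)
x = torch.randn(10, 2, 512)            # [seq_len, batch_size, d_model]
out = mha(query=x, key=x, value=x)     # self-attention: q, k, v come from the same input
assert out.shape == (10, 2, 512)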

2. Full model code

[PyTorch version] Transformer

"""
---
title: Transformer Encoder and Decoder Models
summary: >
  These are PyTorch implementations of Transformer based encoder and decoder models,
  as well as other related modules.
---

# Transformer Encoder and Decoder Models

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/transformers/basic/autoregressive_experiment.ipynb)
"""
import math

import torch
import torch.nn as nn

from labml_nn.utils import clone_module_list
from .feed_forward import FeedForward
from .mha import MultiHeadAttention
from .positional_encoding import get_positional_encoding


class EmbeddingsWithPositionalEncoding(nn.Module):
    """
    <a id="EmbeddingsWithPositionalEncoding"></a>

    ## Embed tokens and add [fixed positional encoding](positional_encoding.html)
    """

    def __init__(self, d_model: int, n_vocab: int, max_len: int = 5000):
        super().__init__()
        self.linear = nn.Embedding(n_vocab, d_model)
        self.d_model = d_model
        self.register_buffer('positional_encodings', get_positional_encoding(d_model, max_len))

    def forward(self, x: torch.Tensor):
        pe = self.positional_encodings[:x.shape[0]].requires_grad_(False)
        return self.linear(x) * math.sqrt(self.d_model) + pe


class EmbeddingsWithLearnedPositionalEncoding(nn.Module):
    """
    <a id="EmbeddingsWithLearnedPositionalEncoding"></a>

    ## Embed tokens and add parameterized positional encodings
    """

    def __init__(self, d_model: int, n_vocab: int, max_len: int = 5000):
        super().__init__()
        self.linear = nn.Embedding(n_vocab, d_model)
        self.d_model = d_model
        self.positional_encodings = nn.Parameter(torch.zeros(max_len, 1, d_model), requires_grad=True)

    def forward(self, x: torch.Tensor):
        pe = self.positional_encodings[:x.shape[0]]
        return self.linear(x) * math.sqrt(self.d_model) + pe


class TransformerLayer(nn.Module):
    """
    <a id="TransformerLayer"></a>

    ## Transformer Layer

    This can act as an encoder layer or a decoder layer.

    🗒 Some implementations, including the original paper, differ in where
    the layer normalization is done.
    Here we do a layer normalization before the attention and feed-forward networks,
    and add the original residual vectors.
    The alternative is to do layer normalization after adding the residuals,
    but we found that to be less stable when training.
    There is a detailed discussion of this in the paper
     [On Layer Normalization in the Transformer Architecture](https://papers.labml.ai/paper/2002.04745).
    """

    def __init__(self, *,
                 d_model: int,
                 self_attn: MultiHeadAttention,
                 src_attn: MultiHeadAttention = None,
                 feed_forward: FeedForward,
                 dropout_prob: float):
        """
        * `d_model` is the token embedding size
        * `self_attn` is the self attention module
        * `src_attn` is the source attention module (when this is used in a decoder)
        * `feed_forward` is the feed forward module
        * `dropout_prob` is the probability of dropping out after self attention and FFN
        """
        super().__init__()
        self.size = d_model
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        self.dropout = nn.Dropout(dropout_prob)
        self.norm_self_attn = nn.LayerNorm([d_model])
        if self.src_attn is not None:
            self.norm_src_attn = nn.LayerNorm([d_model])
        self.norm_ff = nn.LayerNorm([d_model])
        # Whether to save input to the feed forward layer
        self.is_save_ff_input = False

    def forward(self, *,
                x: torch.Tensor,
                mask: torch.Tensor,
                src: torch.Tensor = None,
                src_mask: torch.Tensor = None):
        # Normalize the vectors before doing self attention
        z = self.norm_self_attn(x)
        # Run through self attention, i.e. keys and values are from self
        self_attn = self.self_attn(query=z, key=z, value=z, mask=mask)
        # Add the self attention results
        x = x + self.dropout(self_attn)

        # If a source is provided, get results from attention to source.
        # This is when you have a decoder layer that pays attention to 
        # encoder outputs
        if src is not None:
            # Normalize vectors
            z = self.norm_src_attn(x)
            # Attention to source. i.e. keys and values are from source
            attn_src = self.src_attn(query=z, key=src, value=src, mask=src_mask)
            # Add the source attention results
            x = x + self.dropout(attn_src)

        # Normalize for feed-forward
        z = self.norm_ff(x)
        # Save the input to the feed forward layer if specified
        if self.is_save_ff_input:
            self.ff_input = z.clone()
        # Pass through the feed-forward network
        ff = self.feed_forward(z)
        # Add the feed-forward results back
        x = x + self.dropout(ff)

        return x


class Encoder(nn.Module):
    """
    <a id="Encoder"></a>

    ## Transformer Encoder
    """

    def __init__(self, layer: TransformerLayer, n_layers: int):
        super().__init__()
        # Make copies of the transformer layer
        self.layers = clone_module_list(layer, n_layers)
        # Final normalization layer
        self.norm = nn.LayerNorm([layer.size])

    def forward(self, x: torch.Tensor, mask: torch.Tensor):
        # Run through each transformer layer
        for layer in self.layers:
            x = layer(x=x, mask=mask)
        # Finally, normalize the vectors
        return self.norm(x)


class Decoder(nn.Module):
    """
    <a id="Decoder"></a>

    ## Transformer Decoder
    """

    def __init__(self, layer: TransformerLayer, n_layers: int):
        super().__init__()
        # Make copies of the transformer layer
        self.layers = clone_module_list(layer, n_layers)
        # Final normalization layer
        self.norm = nn.LayerNorm([layer.size])

    def forward(self, x: torch.Tensor, memory: torch.Tensor, src_mask: torch.Tensor, tgt_mask: torch.Tensor):
        # Run through each transformer layer
        for layer in self.layers:
            x = layer(x=x, mask=tgt_mask, src=memory, src_mask=src_mask)
        # Finally, normalize the vectors
        return self.norm(x)


class Generator(nn.Module):
    """
    <a id="Generator"></a>

    ## Generator

    This projects the decoder output to vocabulary logits for predicting tokens
    (apply a log softmax on top if you need probabilities).
    You don't need this if you are using `nn.CrossEntropyLoss`.
    """

    def __init__(self, n_vocab: int, d_model: int):
        super().__init__()
        self.projection = nn.Linear(d_model, n_vocab)

    def forward(self, x):
        return self.projection(x)


class EncoderDecoder(nn.Module):
    """
    <a id="EncoderDecoder"></a>

    ## Combined Encoder-Decoder
    """

    def __init__(self, encoder: Encoder, decoder: Decoder, src_embed: nn.Module, tgt_embed: nn.Module, generator: nn.Module):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = src_embed
        self.tgt_embed = tgt_embed
        self.generator = generator

        # This was important from their code.
        # Initialize parameters with Glorot / fan_avg.
        for p in self.parameters():
            if p.dim() > 1:
                nn.init.xavier_uniform_(p)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor, src_mask: torch.Tensor, tgt_mask: torch.Tensor):
        # Run the source through encoder
        enc = self.encode(src, src_mask)
        # Run encodings and targets through decoder
        return self.decode(enc, src_mask, tgt, tgt_mask)

    def encode(self, src: torch.Tensor, src_mask: torch.Tensor):
        return self.encoder(self.src_embed(src), src_mask)

    def decode(self, memory: torch.Tensor, src_mask: torch.Tensor, tgt: torch.Tensor, tgt_mask: torch.Tensor):
        return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)
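
A sketch of how these pieces assemble into a small model, assuming the labml/labml_nn packages are installed. The hyperparameters are illustrative, a plain nn.Sequential stands in for labml's FeedForward module (whose exact interface is not shown in this post), and the masks use the [seq_len_q, seq_len_k, batch] layout expected by MultiHeadAttention:

import torch
from torch import nn

d_model, d_ff, heads, n_layers, n_vocab, dropout = 128, 256, 4, 2, 1000, 0.1

def make_layer(is_decoder: bool) -> TransformerLayer:
    # a plain two-layer MLP stands in for labml's FeedForward module
    ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
    return TransformerLayer(d_model=d_model,
                            self_attn=MultiHeadAttention(heads, d_model, dropout),
                            src_attn=MultiHeadAttention(heads, d_model, dropout) if is_decoder else None,
                            feed_forward=ffn,
                            dropout_prob=dropout)

model = EncoderDecoder(
    encoder=Encoder(make_layer(is_decoder=False), n_layers),
    decoder=Decoder(make_layer(is_decoder=True), n_layers),
    src_embed=EmbeddingsWithPositionalEncoding(d_model, n_vocab),
    tgt_embed=EmbeddingsWithPositionalEncoding(d_model, n_vocab),
    generator=Generator(n_vocab, d_model))

src = torch.randint(0, n_vocab, (12, 2))   # [src_len, batch_size]
tgt = torch.randint(0, n_vocab, (10, 2))   # [tgt_len, batch_size]
src_mask = torch.ones(1, 12, 1, dtype=torch.bool)                          # all source positions visible
tgt_mask = torch.tril(torch.ones(10, 10, dtype=torch.bool)).unsqueeze(-1)  # causal mask for the decoder

out = model(src, tgt, src_mask, tgt_mask)  # [tgt_len, batch_size, d_model]
logits = model.generator(out)              # [tgt_len, batch_size, n_vocab]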
