Overview

The Attention Mechanism

  • The attention mechanism has revolutionized many Natural Language Processing (NLP) and Computer Vision (CV) tasks by addressing a key limitation of traditional seq2seq models: the context vector bottleneck. Attention enables models to dynamically focus on relevant parts of the input sequence, enhancing their ability to handle long and complex sentences.
  • This improvement has been pivotal in advancing the performance and interpretability of AI models across a wide range of NLP applications, leading to significant gains in tasks such as machine translation, text summarization, and question answering.

The Bottleneck Problem

  • To understand the importance of attention, it is crucial to first grasp the bottleneck problem that attention helps to solve. In traditional sequence-to-sequence (seq2seq) models, such as those used in early neural machine translation systems, the architecture typically comprises an encoder and a decoder.
    • Encoder: Processes the input sequence (e.g., a sentence in the source language) and compresses it into a fixed-size context vector.
    • Decoder: Uses this context vector to generate the output sequence (e.g., a sentence in the target language).

The Context Vector Bottleneck

  • The main issue with this architecture is the context vector bottleneck. This bottleneck arises because the entire input sequence must be condensed into a single, fixed-size vector, regardless of the length or complexity of the input. As a result, crucial information can be lost, especially for long or complex sentences. This limitation hampers the model’s ability to capture and retain important details, leading to suboptimal performance.

How Attention Solves the Bottleneck Problem

  • The attention mechanism mitigates the context vector bottleneck by allowing the model to dynamically access different parts of the input sequence during the generation of each output element. Instead of relying on a single fixed-size context vector, the attention mechanism computes a weighted combination of all the encoder’s hidden states. This weighted sum acts as the context for each output step, enabling the model to focus on the most relevant parts of the input sequence.

Dynamic Focus on Relevant Input Parts

  • Here’s how the attention mechanism works in detail:

    1. Alignment Scores: For each decoder time step, alignment scores are computed between the current decoder hidden state and each encoder hidden state. These scores indicate how well the current part of the output aligns with different parts of the input.

    2. Attention Weights: The alignment scores are passed through a softmax function to obtain attention weights. These weights sum to 1 and represent the importance of each encoder hidden state for the current decoder time step.

    3. Context Vector: The context vector for the current decoder time step is computed as a weighted sum of the encoder hidden states, using the attention weights.

    4. Output Generation: The decoder uses this context vector, along with its own hidden state, to generate the next token in the output sequence.
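
  • To make these four steps concrete, here is a minimal sketch in PyTorch (using a simple dot-product alignment score for illustration; the tensor names and shapes are assumptions, not taken from any specific paper):

    import torch
    import torch.nn.functional as F

    # Assumed shapes: encoder_states is (T, d), one hidden state per source position;
    # decoder_state is (d,), the decoder's hidden state at the current time step.
    def attention_step(decoder_state, encoder_states):
        scores = encoder_states @ decoder_state     # (T,) alignment scores (step 1)
        weights = F.softmax(scores, dim=0)          # (T,) attention weights, sum to 1 (step 2)
        context = weights @ encoder_states          # (d,) weighted sum = context vector (step 3)
        return context, weights                     # the decoder then uses `context` (step 4)

    encoder_states = torch.randn(6, 8)              # T=6 source positions, d=8
    decoder_state = torch.randn(8)
    context, weights = attention_step(decoder_state, encoder_states)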

  • By allowing the model to focus on different parts of the input sequence as needed, attention provides several benefits:

    • Improved Handling of Long Sequences: The model can retain and utilize relevant information from any part of the input sequence, which is especially beneficial for longer sentences.
    • Better Interpretability: The attention weights offer insights into which parts of the input the model is focusing on, making the model’s decision-making process more transparent.
    • Enhanced Performance: By addressing the bottleneck problem, attention leads to more accurate and fluent translations or generated text in various NLP tasks.

Origins of Attention

Attention: Under the hood

  • As previously discussed, the role of attention in a model is to strategically focus on pertinent segments of the input sequence as and when required. This ability to tune into relevant sections enhances the model’s overall processing efficiency.
  • In a shift from traditional practices, the encoder now funnels a significantly larger amount of data to the decoder. Rather than simply transmitting the last hidden state of the encoding phase, it channels all the hidden states to the decoder, ensuring a more comprehensive data transfer.
  • A decoder utilizing attention features undertakes an additional step before generating its output. This step is designed to ensure the decoder’s focus is appropriately honed on parts of the input that are relevant to the current decoding time step. To achieve this, the following operations are performed:
    • Each hidden state is multiplied by its respective softmax score. This results in an amplification of hidden states associated with high scores and effectively diminishes the impact of those with low scores. This selective amplification technique supports the model’s ability to maintain focus on the more relevant parts of the input.
  • In an encoder, we employ the mechanism of self-attention. This technique allows each position in the input sequence to attend to every other position in the same sequence, aiding the model's overall understanding of the sequence.
  • Conversely, in a decoder, cross-attention is applied. This allows the decoder to focus on different parts of the encoder’s output, aiding in the generation of a more accurate translation or summary.
  • With each step of the decoding process, a direct connection to the encoder is utilized to strategically zero in on a specific part of the input. This connection enables the model to maintain accuracy while parsing complex sequences.

The Classic Sequence-to-Sequence Model

  • The seq2seq model is composed of two main components: an encoder and a decoder, as shown in the figure (source) below:

  • The encoder reads the input sentence, a sequence of vectors \(x = (x_{1}, \dots , x_{T})\), into a fixed-length vector \(c\). The encoder is a recurrent neural network; typical choices are GRUs or LSTMs, such that:

    \[h_{t} = f(x_{t}, h_{t-1})\] \[c = q(h_{1}, \dotsc, h_{T})\]
    • where \(h_{t}\) is a hidden state at time \(t\), and \(c\) is a vector generated from the sequence of the hidden states, and \(f\) and \(q\) are some nonlinear functions.
  • At every time-step \(t\) the encoder produces a hidden state \(h_{t}\), and the generated context vector is modeled according to all hidden states.

  • The decoder is trained to predict the next word \(y_{t}\) given the context vector \(c\) and all the previously predicted words \(\{y_{1}, \dots , y_{t-1}\}\). It defines a probability over the translation \({\bf y}\) by decomposing the joint probability:

    \[p({\bf y}) = \prod\limits_{t=1}^{T_{y}} p(y_{t} | {y_{1}, \dots , y_{t-1}}, c)\]
    • where \(\bf y = \{y_{1}, \dots , y_{T_{y}}\}\) and \(T_{y}\) is the length of the output sequence. In other words, the probability of a translation sequence is calculated by computing the conditional probability of each word given the previous words. With an LSTM/GRU, each conditional probability is computed as:
    \[p(y_{t} | {y_{1}, \dots , y_{t-1}}, c) = g(y_{t−1}, s_{t}, c)\]
    • where, \(g\) is a nonlinear function that outputs the probability of \(y_{t}\), \(s_{t}\) is the value of the hidden state of the current position, and \(c\) the context vector.
  • In a simple seq2seq model, the last output of the LSTM/GRU is the context vector, encoding context from the entire sequence. This context vector is then used as the initial hidden state of the decoder.

  • At every step of decoding, the decoder is given an input token and (the previous) hidden state. The initial input token is the start-of-string <SOS> token, and the first hidden state is the context vector (the encoder’s last hidden state).
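
  • As a minimal sketch of this classic setup (an illustrative GRU-based encoder-decoder with a single fixed-size context vector; the class name and dimensions are assumptions, not a reference implementation):

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        # The encoder's final hidden state serves as the fixed-size context vector,
        # which is used as the decoder's initial hidden state.
        def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hidden_dim=128):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, emb_dim)
            self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
            self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            self.proj = nn.Linear(hidden_dim, tgt_vocab)

        def forward(self, src_tokens, tgt_tokens):
            _, context = self.encoder(self.src_emb(src_tokens))        # context: (1, batch, hidden_dim)
            out, _ = self.decoder(self.tgt_emb(tgt_tokens), context)   # context initializes the decoder
            return self.proj(out)                                      # (batch, tgt_len, tgt_vocab) logits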

  • So, the fixed-size context vector needs to contain a good summary of the meaning of the whole source sentence, which makes it one big bottleneck, especially for long sentences. The figure below (taken from Bahdanau et al. (2015)) shows how the performance of the seq2seq model varies by sentence length:

Sequence-to-Sequence Model with Attention

  • The fixed-size context vector bottleneck was one of the main motivations for Bahdanau et al. (2015), who proposed a similar architecture with a crucial improvement:

The new architecture consists of a bidirectional RNN as an encoder and a decoder that emulates searching through a source sentence during decoding a translation

  • The encoder is now a bidirectional recurrent network with forward and backward hidden states. A simple concatenation of the two hidden states represents the encoder state at any given position in the sentence. The motivation is to include both the preceding and following words in the representation/annotation of an input word.

  • The other key element, and the most important one, is that the decoder is now equipped with a form of search: the attention mechanism, which allows it to look at the whole source sentence whenever it needs to produce an output word. The figure below (taken from Bahdanau et al. (2015)) illustrates the attention mechanism in a seq2seq model.

  • The figure above gives a good overview of this new mechanism. To produce the output word \(y_{t}\) at time step \(t\), the decoder uses its last hidden state - one can think of this as a representation of the words produced so far - and a dynamically computed context vector based on the input sequence.

  • The authors proposed to replace the fixed-length context vector with a distinct context vector \(c_{i}\): a sum of the hidden states of the input sequence, weighted by alignment scores.

  • Note that now the probability of each output word is conditioned on a distinct context vector \(c_{i}\) for each target word \(y_{i}\).

  • The new decoder is then defined as:

    \[p(y_{i} | {y_{1}, \dots , y_{i-1}}, c_{i}) = g(y_{i-1}, s_{i}, c_{i})\]
    • where \(s_{i}\) is the hidden state for time \(i\), computed by:
    \[s_{i} = f(s_{i−1}, y_{i−1}, c_{i})\]
    • that is, the new hidden state at position \(i\) depends on the previous hidden state, the representation of the previously generated word, and the context vector \(c_{i}\) for position \(i\). The lingering question now is: how do we compute the context vector \(c_{i}\)?

  • This setup is not limited to translation. In reading comprehension, for instance, instead of source and target sentences we also have two sequences: a passage and a question (whose lengths are imbalanced).
  • We need to model which words in the passage are most relevant to the question (and which question words matter most).
  • Attention is the key ingredient here, analogous to modeling which words in the source sentence are most relevant to the current target word.

Context Vector

  • “In attention, the query refers to the word we’re computing attention for. In the case of an encoder, the query vector points to the current input word (aka context). For example, if the context was the first word in the input sentence, it would have a query vector q1.” (source)
  • The context vector \(c_{i}\) is a sum of the hidden states of the input sequence, weighted by alignment scores. Each word in the input sequence is represented by a concatenation of the two (i.e., forward and backward) RNN hidden states; let's call them annotations.

  • Each annotation \(h_{j}\) contains information about the whole input sequence with a strong focus on the parts surrounding the \(j^{th}\) word of the input sequence.

  • The context vector \(c_{i}\) is computed as a weighted sum of these annotations:
\[c_{i} = \sum_{j=1}^{T_{x}} \alpha_{ij}h_{j}\]
  • The weight \(\alpha_{ij}\) of each annotation \(h_{j}\) is computed by:

    \[\alpha_{ij} = \text{softmax}(e_{ij}) = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_{x}} \exp(e_{ik})}\]
    • where:
    \[e_{ij} = a(s_{i-1}, h_{j})\]
  • \(a\) is an alignment model which scores how well the inputs around position \(j\) and the output at position \(i\) match. The score is based on the decoder hidden state \(s_{i-1}\) (just before emitting \(y_{i}\)) and the \(j^{th}\) annotation \(h_{j}\) of the input sentence:

    \[a(s_{i-1},h_{j}) = \mathbf{v}_a^\top \tanh(\mathbf{W}_{a}\ s_{i-1} + \mathbf{U}_{a}\ {h}_j)\]
    • where \(\mathbf{v}_a\) is a weight vector and \(\mathbf{W}_a\), \(\mathbf{U}_a\) are weight matrices to be learned in the alignment model.
  • The alignment model in the paper is described as a feedforward neural network whose parameters \(\mathbf{v}_a\), \(\mathbf{W}_a\), and \(\mathbf{U}_a\) are learned jointly with the whole graph/network.
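
  • As a concrete illustration of these equations, here is a minimal sketch of the alignment model and the resulting context vector (layer names, dimensions, and batching are assumptions, not the paper's code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BahdanauAlignment(nn.Module):
        def __init__(self, dec_dim, enc_dim, attn_dim):
            super().__init__()
            self.W_a = nn.Linear(dec_dim, attn_dim, bias=False)   # transforms s_{i-1}
            self.U_a = nn.Linear(enc_dim, attn_dim, bias=False)   # transforms each annotation h_j
            self.v_a = nn.Linear(attn_dim, 1, bias=False)         # maps to a scalar energy e_ij

        def forward(self, s_prev, annotations):
            # s_prev: (batch, dec_dim); annotations: (batch, T_x, enc_dim)
            energies = self.v_a(torch.tanh(self.W_a(s_prev).unsqueeze(1) + self.U_a(annotations)))
            alphas = F.softmax(energies.squeeze(-1), dim=1)                   # (batch, T_x) weights
            context = torch.bmm(alphas.unsqueeze(1), annotations).squeeze(1)  # (batch, enc_dim) = c_i
            return context, alphas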

  • The authors note:

“The probability \(\alpha_{ij}\), or its associated energy \(e_{ij}\), reflects the importance of the annotation \(h_{j}\) with respect to the previous hidden state \(s_{i−1}\) in deciding the next state \(s_{i}\) and generating \(y_{i}\). Intuitively, this implements a mechanism of attention in the decoder.”

Attention vs. fixed-length context vector

  • Let’s visually review the attention mechanism and compare it against the fixed-length context vector approach. The pictures below (credit: Nelson Zhao) help understand the difference between the two encoder-decoder approaches. The figure below illustrates the encoder-decoder architecture with a fixed-context vector.

Extensions to the Classic Attention Mechanism

  • Luong et al. (2015) proposed and compared other attention mechanisms, more specifically, alternative functions for computing the alignment score:

  • Note that the concat operation is the same as in Bahdanau et al. (2015). However, instead of a weighted average over all the source hidden states, they also proposed a mechanism of local attention which focuses only on a small subset of the source positions per target word, rather than attending to all source words for each target word. The score variants are sketched below.
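
  • The sketch below illustrates the three score functions compared by Luong et al. (2015) (dot, general, and concat); the tensor shapes and layer shapes are assumptions for illustration:

    import torch
    import torch.nn as nn

    def luong_score(h_t, h_s, variant, W_a=None, v_a=None):
        # h_t: (batch, d) current target hidden state; h_s: (batch, T, d) source hidden states
        if variant == "dot":
            return torch.bmm(h_s, h_t.unsqueeze(-1)).squeeze(-1)        # (batch, T)
        if variant == "general":
            return torch.bmm(h_s, W_a(h_t).unsqueeze(-1)).squeeze(-1)   # W_a: nn.Linear(d, d)
        if variant == "concat":
            concat = torch.cat([h_t.unsqueeze(1).expand_as(h_s), h_s], dim=-1)
            return v_a(torch.tanh(W_a(concat))).squeeze(-1)             # W_a: Linear(2d, d), v_a: Linear(d, 1)
        raise ValueError(variant)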

Self-Attention / Scaled Dot-Product Attention

  • Earlier, we looked into the “classic” attention mechanism on which subsequent techniques such as self-attention or query-key-value-attention are based.
  • After transforming the field of neural machine translation, the attention mechanism was applied to other natural language processing tasks, such as document-level classification or sequence labelling and further extended to other modalities such as vision and speech.

Why have multiple attention layers?

  • Per Eugene Yan’s Some Intuition on Attention and the Transformer blog, multiple attention layers build in redundancy (on top of having multiple attention heads). If we only had a single attention layer, that attention layer would have to do a flawless job; this design could be brittle and lead to suboptimal outcomes. We can address this via multiple attention layers, where each one uses the output of the previous layer with the safety net of skip connections. Thus, if any single attention layer messed up, the skip connections and downstream layers can mitigate the issue.
  • Stacking attention layers also broadens the model’s receptive field. The first attention layer produces context vectors by attending to interactions between pairs of words in the input sentence. Then, the second layer produces context vectors based on pairs of pairs, and so on. With more attention layers, the Transformer gains a wider perspective and can attend to multiple interaction levels within the input sentence.
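
  • As a minimal sketch of this stacking-plus-skip-connections idea (the class name, dimensions, and use of PyTorch's built-in nn.MultiheadAttention are illustrative assumptions):

    import torch
    import torch.nn as nn

    class TinyAttentionStack(nn.Module):
        # Stacked self-attention layers; each adds its output back to its input (skip connection),
        # so a poorly behaved layer can be partially bypassed by downstream layers.
        def __init__(self, d_model=64, num_heads=4, num_layers=2):
            super().__init__()
            self.layers = nn.ModuleList(
                nn.MultiheadAttention(d_model, num_heads, batch_first=True) for _ in range(num_layers)
            )

        def forward(self, x):                    # x: (batch, seq_len, d_model)
            for attn in self.layers:
                out, _ = attn(x, x, x)           # self-attention over the current representation
                x = x + out                      # skip (residual) connection
            return x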

Comparative Analysis: Additive vs. Scaled Dot-Product Attention

  • Among the various types of attention mechanisms, additive attention and scaled dot-product attention are the most commonly used. Here’s a comparison:

Origins and Definitions

  • Additive Attention:
    • Proposed by Bahdanau et al. in their 2015 paper titled “Neural Machine Translation by Jointly Learning to Align and Translate.”
    • It computes the alignment score between the query \(\mathbf{q}\) and the key \(\mathbf{k}\) using a feed-forward neural network with a single hidden layer.
    • The formula for the alignment score \(e_{ij}\) is: \(e_{ij} = \mathbf{v}^{T} \tanh(\mathbf{W}_{q}\mathbf{q}_i + \mathbf{W}_{k}\mathbf{k}_j)\)
    • Here, \(\mathbf{W}_{q}\) and \(\mathbf{W}_{k}\) are learnable weight matrices, and \(\mathbf{v}\) is a learnable vector.
  • Scaled Dot-Product Attention:
    • Introduced in the 2017 paper titled “Attention Is All You Need” by Vaswani et al.
    • It computes the alignment score by taking the dot product of the query and key vectors, scaled by the square root of the dimension of the key vectors (\(d_k\)).
    • The formula for the alignment score \(e_{ij}\) is: \(e_{ij} = \frac{\mathbf{q}_i \cdot \mathbf{k}_j}{\sqrt{d_{k}}}\)

Computational Efficiency

  • Additive Attention:
    • Involves a more complex computation due to the use of a feed-forward network.
    • While theoretically similar in complexity to dot-product attention, it is generally slower in practice because it cannot leverage highly optimized matrix multiplication libraries.
    • Requires additional parameters \(\mathbf{W}_{q}\), \(\mathbf{W}_{k}\), and \(\mathbf{v}\), increasing memory usage.
  • Scaled Dot-Product Attention:
    • Much faster and more space-efficient as it relies on matrix multiplication, which is highly optimized in modern deep learning libraries (e.g., TensorFlow, PyTorch).
    • The scaling factor \(\frac{1}{\sqrt{d_{k}}}\) helps to mitigate the issue of having large dot product values, which can lead to small gradients during backpropagation.
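    • As a quick, toy illustration of why the scaling matters (not from the paper): for large \(d_k\), unscaled dot products grow in magnitude and push the softmax toward a near one-hot distribution, which yields vanishingly small gradients.

      import torch
      import torch.nn.functional as F

      d_k = 512
      q, k = torch.randn(d_k), torch.randn(10, d_k)
      scores = k @ q                                      # unscaled: std on the order of sqrt(d_k)
      print(F.softmax(scores, dim=0).max())               # typically close to 1.0 (saturated)
      print(F.softmax(scores / d_k ** 0.5, dim=0).max())  # scaled: a much softer distribution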

Theoretical Complexity

  • Both attention mechanisms have a theoretical time complexity of \(O(n^2 \cdot d)\), where \(n\) is the sequence length and \(d\) is the dimension of the representations.
  • However, in practice:
    • Additive Attention involves additional computation for the feed-forward network, which can slow down the process.
    • Scaled Dot-Product Attention benefits from efficient matrix multiplication operations, making it faster in real-world applications.

Usage and Performance

  • Additive Attention:
    • Often used in earlier models of neural machine translation and other NLP tasks before the advent of the Transformer architecture.
    • Still useful in scenarios where the speed benefits of dot-product attention do not outweigh additive attention's simplicity and interpretability.
  • Scaled Dot-Product Attention:
    • Integral to the Transformer architecture, which has become the standard for many NLP tasks.
    • Scales better with larger datasets and more complex models, leading to state-of-the-art performance in a wide range of applications.

Implementation Details

  • Additive Attention:
    • Typically implemented with separate weight matrices for the query and key vectors, followed by a non-linear activation (e.g., \(\tanh\)) and a final linear layer to compute the score.
    • Example pseudocode:
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      def additive_attention(query, key, w_q, w_k, v):
          # query: (batch, query_dim); key: (batch, seq_len, key_dim)
          # w_q, w_k, v are nn.Linear layers created once (e.g., in a module's __init__):
          # w_q/w_k project to a shared hidden_dim, v maps hidden_dim to a scalar score.
          scores = v(torch.tanh(w_q(query).unsqueeze(1) + w_k(key)))  # (batch, seq_len, 1)
          attention_weights = F.softmax(scores, dim=1)                # normalize over seq_len
          return attention_weights
      
  • Scaled Dot-Product Attention:
    • Implemented using matrix multiplication followed by a scaling factor and softmax function to compute the attention weights.
    • Example pseudocode:
      import math
      import torch
      import torch.nn.functional as F

      def scaled_dot_product_attention(query, key, value, mask=None):
          # query, key, value: (..., seq_len, d_k); mask is 0 where attention is disallowed
          d_k = query.size(-1)
          scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
          if mask is not None:
              scores = scores.masked_fill(mask == 0, -1e9)
          attention_weights = F.softmax(scores, dim=-1)
          output = torch.matmul(attention_weights, value)
          return output
      

Conclusion

  • Additive Attention is more complex and computationally intensive but has been foundational in early NLP models.
  • Scaled Dot-Product Attention is faster, more efficient, and scalable, making it the preferred choice in modern architectures like Transformers.
  • The choice between the two often depends on the specific application requirements and the computational resources available. However, for most state-of-the-art NLP tasks, scaled dot-product attention is the go-to mechanism due to its performance and efficiency advantages.

Multi-head Attention

Cross Attention

Ghost Attention

  • The authors of Llama 2 introduced Ghost Attention (GAtt), a technique specifically designed to help the model remember and adhere to initial instructions throughout a conversation. The method extends the notion of Context Distillation, where specific details are distilled and highlighted from the broader context to enhance understanding.
    • Context Distillation is a concept that focuses on highlighting and isolating specific, crucial details from a larger and more complex context, much like distillation extracts the essential elements from a mixture. Here, it is used to introduce and retain an instruction throughout a dialogue, which helps the model consistently remember and adhere to the instruction, enhancing its ability to maintain focus and perform accurately.
  • In this technique, an instruction - a directive that must be consistently followed during the entire dialogue - is added to all user messages in a synthetic dialogue dataset. However, during the training phase, the instruction is only retained in the first turn of the dialogue and the loss (a measure of error) is set to zero for all tokens (representative units of information) from earlier turns.
  • The authors applied this unique approach across a variety of synthetic constraints, which included diverse elements like hobbies, languages, and public figures. Implementing GAtt effectively preserved attention on initial instructions for a significantly larger portion of the conversation, ensuring that the AI stayed focused on its tasks.
  • One of the notable achievements of GAtt is its ability to maintain consistency in adhering to initial instructions even over extended dialogues, comprising more than 20 turns, until it hits the maximum context length that the model can handle. While this first iteration has proven successful, the authors believe that there is ample room for further refinement and improvement, suggesting that the Ghost Attention technique can continue to evolve for enhanced performance.
  • Let’s say we are training a dialogue system to book appointments for a dental clinic, and one of the rules we want the system to follow is that it should always inquire about the patient’s dental insurance details.
  • In the synthetic dialogue dataset used for training, we append the instruction “Always ask about dental insurance” to every user message.
  • For example:
    • User: “I need an appointment.”
    • AI (with instruction): “Always ask about dental insurance. Sure, I can help you with that. Do you have a preferred date and time?”
    • User: “How about next Tuesday at 10 am?”
    • AI (with instruction): “Always ask about dental insurance. That time works. May I also ask if you have dental insurance and, if so, could you provide the details?”
  • During training, GAtt retains this instruction only in the first turn and sets the loss to zero for all tokens from earlier turns. The model will be trained to understand that asking about dental insurance is an important part of the conversation, and it should remember this instruction even in later turns.

  • For example, when the model is actually deployed:
    • User: “I need an appointment.”
    • AI: “Sure, I can help you with that. Do you have a preferred date and time?”
    • User: “How about next Tuesday at 10 am?”
    • AI: “That time works. May I also ask if you have dental insurance and, if so, could you provide the details?”
  • Notice that even though the instruction “Always ask about dental insurance” is not explicitly mentioned during the conversation after training, the AI system consistently adheres to it throughout the dialogue, as it has been trained using GAtt.
  • This technique ensures the AI model stays focused on the initial instruction, in this case, asking about dental insurance, enhancing its dialogue capabilities and making it more reliable for the task at hand.
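
  • As a rough, illustrative sketch of how such training examples could be assembled (this loss-masking scheme and the helper below are hypothetical and simplified; they are not Llama 2's actual data pipeline or tokenization):

    # Hypothetical GAtt-style example assembly: keep the instruction only in the first user
    # turn, and zero out the loss for all tokens that belong to earlier (pre-final) turns.
    def build_gatt_example(instruction, turns):
        tokens, loss_mask = [], []
        for i, (user_msg, assistant_msg) in enumerate(turns):
            user = (instruction + " " + user_msg) if i == 0 else user_msg   # instruction in turn 0 only
            is_last = (i == len(turns) - 1)
            for word in user.split():
                tokens.append(word); loss_mask.append(0)                    # never train on user tokens
            for word in assistant_msg.split():
                tokens.append(word); loss_mask.append(1 if is_last else 0)  # zero loss on earlier turns
        return tokens, loss_mask

    turns = [("I need an appointment.", "Do you have a preferred date and time?"),
             ("Next Tuesday at 10 am.", "That works. Do you have dental insurance?")]
    tokens, loss_mask = build_gatt_example("Always ask about dental insurance.", turns)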

Linear Attention

  • Proposed in Linformer: Self-Attention with Linear Complexity by Wang et al. from Facebook AI.
  • The authors propose a novel approach to optimizing the self-attention mechanism in Transformer models, reducing its complexity from quadratic to linear with respect to sequence length. This method, named Linformer, maintains competitive performance with standard Transformer models while significantly enhancing efficiency in both time and memory usage.
  • Linformer introduces a low-rank approximation of the self-attention mechanism. By empirically and theoretically demonstrating that the self-attention matrix is of low rank, the authors propose a decomposition of the original scaled dot-product attention into multiple smaller attentions via linear projections. This factorization effectively reduces both the space and time complexity of self-attention from \(O(n^2)\) to \(O(n)\), addressing the scalability issues of traditional Transformers.
  • The model architecture involves projecting key and value matrices into lower-dimensional spaces before computing the attention, which retains the model’s effectiveness while reducing computational demands. The approach includes options for parameter sharing across projections, which can further reduce the number of trainable parameters without significantly impacting performance.
  • In summary, here’s how Linformer achieves linear-time attention:

    1. Low-Rank Approximation: The core idea behind Linformer is the observation that self-attention can be approximated by a low-rank matrix. This implies that the complex relationships captured by self-attention in Transformers do not necessarily require a full rank matrix, allowing for a more efficient representation.

    2. Reduced Complexity: While standard self-attention mechanisms in Transformers have a time and space complexity of \(O(n^2)\) with respect to the sequence length (n), Linformer reduces this complexity to \(O(n)\). This significant reduction is both in terms of time and space, making it much more efficient for processing longer sequences.

    3. Mechanism of Linear Self-Attention: The Linformer achieves this by decomposing the scaled dot-product attention into multiple smaller attentions through linear projections. Specifically, it introduces two linear projection matrices \(E_i\) and \(F_i\) which are used when computing the key and value matrices. By first projecting the original high-dimensional key and value matrices into a lower-dimensional space (\(n \times k\)), Linformer effectively reduces the complexity of the attention mechanism.

    4. Combination of Operations: The combination of these operations forms a low-rank factorization of the original attention matrix. Essentially, Linformer simplifies the computational process by approximating the full attention mechanism with a series of smaller, more manageable operations that collectively capture the essential characteristics of the original full-rank attention.
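
  • A minimal, single-head sketch of this idea is shown below (the projections correspond to the \(E_i\)/\(F_i\) matrices described above; names, dimensions, and initialization are illustrative assumptions, not the paper's reference code):

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LinformerSelfAttention(nn.Module):
        # Single-head sketch: project keys/values along the sequence dimension (n -> k),
        # so the attention matrix is (n x k) instead of (n x n).
        def __init__(self, d_model, seq_len, k):
            super().__init__()
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            self.proj_E = nn.Parameter(torch.randn(k, seq_len) / math.sqrt(seq_len))  # projects keys
            self.proj_F = nn.Parameter(torch.randn(k, seq_len) / math.sqrt(seq_len))  # projects values

        def forward(self, x):                                              # x: (batch, n, d_model)
            q = self.q_proj(x)                                             # (batch, n, d)
            k = torch.einsum("kn,bnd->bkd", self.proj_E, self.k_proj(x))   # (batch, k, d)
            v = torch.einsum("kn,bnd->bkd", self.proj_F, self.v_proj(x))   # (batch, k, d)
            scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(q.size(-1))  # (batch, n, k)
            attn = F.softmax(scores, dim=-1)
            return torch.matmul(attn, v)                                   # (batch, n, d): O(n·k) cost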

  • The figure below from the paper shows: (left and bottom-right) the architecture and an example of the proposed multihead linear self-attention; (top right) inference time vs. sequence length for the various Linformer models.

  • Experimental validation shows that Linformer achieves similar or better performance compared to the original Transformer on standard NLP tasks such as sentiment analysis and question answering, using datasets like GLUE and IMDB reviews. Notably, the model offers considerable improvements in training and inference speeds, especially beneficial for longer sequences.
  • Additionally, various strategies for enhancing the efficiency of Linformer are tested, including different levels of parameter sharing and the use of non-uniform projected dimensions tailored to the specific demands of different layers within the model.
  • The authors suggest that the reduced computational requirements of Linformer not only make high-performance models more accessible and cost-effective but also open the door to environmentally friendlier AI practices due to decreased energy consumption.
  • In summary, Linformer proposes a more efficient self-attention mechanism for Transformers by leveraging the low-rank nature of self-attention matrices. This approach significantly reduces the computational burden, especially for long sequences, by lowering the complexity of attention calculations from quadratic to linear in terms of both time and space. This makes Linformer an attractive choice for tasks involving large datasets or long sequence inputs, where traditional Transformers might be less feasible due to their higher computational demands.

Attention in Today’s LLMs

  • Credits to the following section go to The AIEdge.
  • While the Transformers of 2017 implemented attention computation that scaled quadratically, this no longer holds true with recent Transformer models.
  • Significant advancements have been made in the computation of attentions since the introduction of GPT-3. Most large language models now employ sub-quadratic attention mechanisms, and many implementations have achieved constant space complexity. Innovations such as Paged-Attention and Flash Attention 1 and 2 have allowed for more efficient read-write access on hardware. Consequently, many open-source projects have moved beyond standard PyTorch implementations to accommodate enhanced hardware utilization.
  • For instance, in the Mistral-7B model, a sliding-window multi-query attention mechanism with an efficient memory implementation is employed. Below is an implementation of a fully vectorized sliding-window multihead attention mechanism. The time complexity is approximately \(O(Nw)\), where \(w\) represents the window size. This approach necessitates at least \(\frac{\text{context size}}{w}\) decoder blocks to fully encompass the entire context size.

Overview

  • The sliding window multi-head attention mechanism is a specialized variant of attention that is efficient for long sequences by focusing on local context through the use of sliding windows. This approach reduces computational complexity compared to traditional full attention mechanisms.

Components

  • Here’s a breakdown of the code’s components and functionalities:

    1. Class Definition (SlidingWindowMultiheadAttention):
      • Inherits from nn.Module, making it a PyTorch module.
      • Takes parameters such as hidden_size, num_heads, and window_size during initialization.
    2. Initialization Method (__init__):
      • Ensures that hidden_size is divisible by num_heads for equal division among the heads.
      • Sets up various attributes including num_heads, head_dim (dimension per head), window_size, and linear transformations (qkv_linear and out):
        • qkv_linear: A linear layer that projects input x into queries, keys, and values.
        • out: A linear layer to transform the concatenated output from all attention heads back to the original hidden size.
    3. Forward Method (forward):
      • Takes an input tensor x and processes it through the following steps:
        • Input Reshaping: Determines the shape parameters from the input tensor (batch_size, seq_length, hidden_size).
        • Padding: Calculates padding to be applied for the sliding window mechanism, which is half the window size.
        • Query, Key, Value Computation:
          • Uses the qkv_linear layer to produce combined queries, keys, and values.
          • Reshapes and permutes the combined tensor to separate queries, keys, and values.
        • Sliding Window Mechanism:
          • Applies padding to keys and values.
          • Unfolds keys and values to create sliding window segments.
        • Attention Computation:
          • Calculates dot product attention scores between queries and keys from the windows.
          • Applies softmax normalization on scores scaled by the square root of the head dimension.
          • Computes the weighted sum of values based on these attention scores.
        • Merging Heads and Final Linear Transformation:
          • Reshapes the context to merge the heads.
          • Passes the merged context through the out linear layer to produce the final output.
    4. Return:
      • Returns the output of the module after processing through the attention mechanism and linear transformation.
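
  • Since the code itself is not reproduced above, here is a sketch consistent with the component breakdown (using a centered, non-causal window for simplicity; this is an illustrative reconstruction, not the original implementation):

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SlidingWindowMultiheadAttention(nn.Module):
        def __init__(self, hidden_size, num_heads, window_size):
            super().__init__()
            assert hidden_size % num_heads == 0, "hidden_size must be divisible by num_heads"
            self.num_heads = num_heads
            self.head_dim = hidden_size // num_heads
            self.window_size = window_size
            self.qkv_linear = nn.Linear(hidden_size, 3 * hidden_size)  # joint Q, K, V projection
            self.out = nn.Linear(hidden_size, hidden_size)             # merges heads back to hidden_size

        def forward(self, x):
            batch_size, seq_length, hidden_size = x.size()
            pad = self.window_size // 2                                # half-window padding per side

            # Compute queries, keys, values and split heads: each is (batch, heads, seq, head_dim).
            qkv = self.qkv_linear(x).reshape(batch_size, seq_length, 3, self.num_heads, self.head_dim)
            q, k, v = qkv.permute(2, 0, 3, 1, 4)

            # Pad keys/values along the sequence dimension and unfold them into centered windows
            # of 2 * pad + 1 positions: (batch, heads, seq, head_dim, window).
            k = F.pad(k, (0, 0, pad, pad)).unfold(2, 2 * pad + 1, 1)
            v = F.pad(v, (0, 0, pad, pad)).unfold(2, 2 * pad + 1, 1)

            # Scaled dot-product attention restricted to each query's local window.
            scores = torch.einsum("bhqd,bhqdw->bhqw", q, k) / math.sqrt(self.head_dim)
            attn = F.softmax(scores, dim=-1)
            context = torch.einsum("bhqw,bhqdw->bhqd", attn, v)

            # Merge the heads and apply the final linear transformation.
            context = context.transpose(1, 2).reshape(batch_size, seq_length, hidden_size)
            return self.out(context)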

References

Citation

If you found our work useful, please cite it as:

@article{Chadha2021DistilledAttention,
  title   = {Attention},
  author  = {Jain, Vinija and Chadha, Aman},
  journal = {Aman's AI Journal},
  year    = {2021},
  note    = {\url{https://aman.ai}}
}