Overview

  • Word embeddings are a foundational component of modern computational linguistics, particularly in the domain of Natural Language Processing (NLP). They provide the basis for interpreting and processing human language in a numerical format that computers can understand and manipulate. Here, we give an overview of word embeddings, focusing on their conceptual framework and practical applications.

Motivation

  • J.R. Firth’s Insight and Distributional Semantics: The principle of distributional semantics is encapsulated in J.R. Firth’s famous quote (below), which captures the importance of contextual information in defining word meanings. This principle is a cornerstone in the development of word embeddings.

“You shall know a word by the company it keeps.”

  • Role in AI and NLP: Situated at the heart of AI, NLP aims to bridge the gap between human language and machine understanding. The primary motivation for developing word embeddings within NLP is to create a system where computers can not only recognize but also understand and interpret the subtleties and complexities of human language, thus enabling more natural and effective human-computer interactions.
  • Advancements in NLP: The evolution of NLP, especially with the integration of deep learning methods, has led to significant enhancements in various language-related tasks, underscoring the importance of continuous innovation in this field.
  • Historical Context and Evolution: With over 50 years of development, originating from linguistics, NLP has grown to embrace sophisticated models that generate word embeddings. The motivation for this evolution stems from the desire to more accurately capture and represent the nuances and complexities of human language in digital form.
  • Word embeddings as a lens for nuanced language interpretation: Word embeddings, underpinned by the concept of distributional semantics, represent word meanings through vectors of real numbers. While not perfect, this method provides a remarkably effective means of interpreting and processing language in computational systems. The ongoing developments in this field continue to enhance our ability to model and understand natural language in a digital context.

Word Embeddings

  • Word embeddings, also known as word vectors, provide a dense, continuous, and compact representation of words, encapsulating their semantic and syntactic attributes. They are essentially real-valued vectors, and the proximity of these vectors in a multidimensional space is indicative of the linguistic relationships between words.

An embedding is a point in an \(N\)-dimensional space, where \(N\) is the dimensionality chosen for the embedding (commonly a few hundred dimensions).

  • This concept is rooted in the Distributional Hypothesis, which posits that words appearing in similar contexts are likely to bear similar meanings. Consequently, in a high-dimensional vector space, vectors representing semantically related words (e.g., ‘apple’ and ‘orange’, both fruits) are positioned closer to each other compared to those representing semantically distant words (e.g., ‘apple’ and ‘dog’).

  • Word embeddings are constructed by forming dense vectors for each word, chosen in such a way that they resemble vectors of contextually similar words. This process effectively embeds words in a high-dimensional vector space, with each dimension contributing to the representation of a word’s meaning. For example, the concept of ‘banking’ is distributed across all dimensions of its vector, with its entire semantic essence embedded within this multidimensional space.

  • The term ‘embedding’ in this context refers to the transformation of discrete words into continuous vectors, achieved through word embedding algorithms. These algorithms are designed to convert words into vectors that encapsulate a significant portion of their semantic content. A classic example of the effectiveness of these embeddings is the vector arithmetic that yields meaningful analogies, such as 'king' - 'man' + 'woman' ≈ 'queen'.

  • Word embeddings are typically pre-trained on large, unlabeled text corpora. This training often involves optimizing auxiliary objectives, like predicting a word based on its contextual neighbors, as demonstrated in Word2Vec by Mikolov et al. (2013). Through this process, the resultant word vectors encapsulate both syntactic and semantic properties of words.

  • These embeddings have shown remarkable efficiency in capturing contextual similarities, analogies, and, owing to their reduced dimensionality, facilitate rapid and effective computation in various natural language processing tasks. Similarity between word vectors can be quantitatively assessed using measures such as cosine similarity.
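
As a concrete illustration of the similarity computation mentioned above, the following is a minimal sketch using NumPy. The vectors are made-up toy values rather than embeddings from a trained model.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two word vectors (1.0 = identical direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional "embeddings" (illustrative values only, not from a real model).
apple  = np.array([0.9, 0.1, 0.4, 0.0])
orange = np.array([0.8, 0.2, 0.5, 0.1])
dog    = np.array([0.1, 0.9, 0.0, 0.6])

print(cosine_similarity(apple, orange))  # relatively high: similar contexts
print(cosine_similarity(apple, dog))     # lower: dissimilar contexts
```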

  • In deep learning models, word embeddings are frequently employed as the initial layer of data processing. This foundational role of embeddings will be further elaborated upon in our discussion of advanced models like BERT.
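
To make the “embeddings as the first layer” idea concrete, here is a minimal sketch using PyTorch’s nn.Embedding; the vocabulary size, embedding dimension, and token ids are arbitrary placeholder values.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a 10,000-word vocabulary mapped to 128-dimensional vectors.
embedding = nn.Embedding(num_embeddings=10_000, embedding_dim=128)

# A "sentence" represented as a batch of token ids (indices into the vocabulary).
token_ids = torch.tensor([[12, 345, 67, 890]])

vectors = embedding(token_ids)  # shape: (1, 4, 128)
print(vectors.shape)
# Downstream layers (RNNs, Transformers, classifiers) operate on these vectors,
# and the embedding weights are updated along with the rest of the model.
```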

  • In summary, word embeddings not only efficiently encapsulate the semantic and syntactic nuances of language but also play a pivotal role in enhancing the computational efficiency of numerous natural language processing tasks.

Conceptual Framework of Word Embeddings

  1. Continuous Knowledge Representation:
    • Information can be categorized as either existing in continuous streams or in discrete chunks. Large Language Models (LLMs), such as BERT, GPT, and others, exemplify continuous knowledge representation. This approach contrasts with traditional methods that often handle data in discrete units.
  2. Nature of LLM Embeddings:
    • LLM embeddings are essentially dense, continuous, real-valued vectors situated within a high-dimensional space. For instance, in the case of BERT, these vectors are 768-dimensional. This concept can be analogized to geographical coordinates on a map. Just as longitude and latitude offer specific locational references on a two-dimensional plane, embeddings provide approximations of positions within a multi-dimensional semantic space. This space is constructed from the interconnections among words across vast internet resources.
  3. Characteristics of Embedding Vectors:
    • Since these vectors are continuous, they permit an infinite range of values within specified intervals. This continuity results in a certain ‘fuzziness’ in the embeddings’ coordinates, allowing for nuanced and context-sensitive interpretation of word meanings.
  4. Example of LLM Embedding Functionality:
    • Consider the LLM embedding for a phrase like ‘Jennifer Aniston’. This embedding would be a multi-dimensional vector leading to a specific location in a vast ‘word-space’, comprising several billion parameters. Adding another concept, such as ‘TV series’, to this vector could shift its position towards the vector representing ‘Friends’, illustrating the dynamic and context-aware nature of these embeddings. However, this sophisticated mechanism is not without its challenges, as it can sometimes lead to unpredictable or ‘hallucinatory’ outputs.
  • One of the initial attempts to digitally encapsulate a word’s meaning was through the development of WordNet. WordNet functioned as an extensive thesaurus, encompassing a compilation of synonym sets and hypernyms, the latter representing a type of hierarchical relationship among words.
  • Despite its innovative approach, WordNet encountered several limitations:
    • Inefficacy in capturing the full scope of word meanings.
    • Inadequacy in reflecting the subtle nuances associated with words.
    • An inability to incorporate evolving meanings of words over time.
    • Challenges in maintaining its currency and relevance in an ever-changing linguistic landscape.
  • In contrast to WordNet’s hand-curated structure, the distributional approach to semantics posits that a word’s meaning is largely determined by the words that frequently appear in close proximity to it.
  • Subsequently, the field of NLP witnessed a paradigm shift with the advent of word embeddings. These embeddings marked a significant departure from the constraints of traditional lexical databases like WordNet. Unlike its predecessors, word embeddings provided a more dynamic and contextually sensitive approach to understanding language. By representing words as vectors in a continuous vector space, these embeddings could capture a broader array of linguistic relationships, including semantic similarity and syntactic patterns.
  • Today, word embeddings continue to be a cornerstone technology in NLP, powering a wide array of applications and tasks. Their ability to efficiently encode word meanings into a dense vector space has not only enhanced the performance of various NLP tasks but also has laid the groundwork for more advanced language processing and understanding technologies.

Word Embedding Techniques

  • Accurately representing the meaning of words is a crucial aspect of NLP. This task has evolved significantly over time, with various techniques being developed to capture the nuances of word semantics.
  • Count-based methods like TF-IDF and BM25 focus on word frequency and document uniqueness, offering basic information retrieval capabilities. Co-occurrence based techniques such as Word2Vec, GloVe, and fastText analyze word contexts in large corpora, capturing semantic relationships and morphological details. Contextualized models like BERT and ELMo provide dynamic, context-sensitive embeddings, significantly enhancing language understanding by generating varied representations for words based on their usage in sentences. Details of the aforementioned taxonomy are as follows:

    1. Count-Based Techniques (TF-IDF and BM25): With their roots in the field of information retrieval, these methods focus on the frequency of words in documents. TF-IDF emphasizes words that are unique to a document in a corpus, while BM25 refines this approach with probabilistic modeling, considering document length and term saturation. They are foundational in information retrieval but lack semantic richness.

    2. Co-occurrence Based/Static Embedding Techniques (Word2Vec, GloVe, fastText): These techniques generate embeddings by analyzing how words co-occur in large text corpora. Word2Vec and GloVe create word vectors that capture semantic relationships, while fastText extends this by considering subword information, enhancing understanding of morphological structures.

    3. Contextualized/Dynamic Representation Techniques (BERT, ELMo): BERT and ELMo represent advanced embedding techniques, providing context-sensitive word representations. Unlike static embeddings, they generate different vectors for a word based on its surrounding context, leading to a deeper understanding of language nuances and ambiguities. These models have significantly improved performance in a wide range of NLP tasks.

Bag of Words (BoW)

Concept

  • Bag of Words (BoW) is a simple and widely used technique for text representation in natural language processing (NLP). It represents text data (documents) as vectors of word counts, disregarding grammar and word order but keeping multiplicity. Each unique word in the corpus is a feature, and the value of each feature is the count of occurrences of the word in the document.

Steps to Create BoW Embeddings:

  1. Tokenization:
    • Split the text into words (tokens).
  2. Vocabulary Building:
    • Create a vocabulary list of all unique words in the corpus.
  3. Vector Representation:
    • For each document, create a vector where each element corresponds to a word in the vocabulary. The value is the count of occurrences of that word in the document.
  • Example:

    Consider a corpus with the following two documents:

    1. “The cat sat on the mat.”
    2. “The dog sat on the log.”
  • Steps:

    1. Tokenization:
      • Document 1: [“the”, “cat”, “sat”, “on”, “the”, “mat”]
      • Document 2: [“the”, “dog”, “sat”, “on”, “the”, “log”]
    2. Vocabulary Building:
      • Vocabulary: [“the”, “cat”, “sat”, “on”, “mat”, “dog”, “log”]
    3. Vector Representation:
      • Document 1: [2, 1, 1, 1, 1, 0, 0]
      • Document 2: [2, 0, 1, 1, 0, 1, 1]
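
The same counts can be reproduced with scikit-learn’s CountVectorizer (a sketch assuming a recent scikit-learn version; note that its learned vocabulary is ordered alphabetically, so the columns are a permutation of the vocabulary listed above):

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "The cat sat on the mat.",
    "The dog sat on the log.",
]

vectorizer = CountVectorizer()          # lowercases and tokenizes by default
bow = vectorizer.fit_transform(corpus)  # sparse document-term count matrix

print(vectorizer.get_feature_names_out())  # vocabulary, sorted alphabetically
print(bow.toarray())
# Each row is one document; each column holds the count of one vocabulary word.
```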

Differences Compared to TF-IDF and BM25

  • TF-IDF considers both the frequency of terms in a document and the rarity of the term in the corpus, whereas BoW only considers term frequency.
  • BM25 incorporates length normalization and a saturation function for term frequency, providing a more nuanced scoring compared to BoW and TF-IDF.
Summary
  • BoW:
    • Simple count-based method representing word frequencies.
    • Ignores word order and context.
    • Provides a straightforward vector representation.
  • TF-IDF:
    • Enhances BoW by weighing terms based on their importance in a document relative to the corpus.
    • Combines term frequency with inverse document frequency.
    • More effective in distinguishing important words from common words.
  • BM25:
    • Advanced ranking function for information retrieval.
    • Incorporates term frequency, document length normalization, and inverse document frequency.
    • Provides more nuanced and effective document scoring for relevance to queries.
  • While BoW is a foundational technique, TF-IDF and BM25 build on its principles to provide more sophisticated and effective ways to represent and rank documents based on their content and relevance.

Limitations of BoW

  • Bag of Words (BoW) embeddings, despite their simplicity and effectiveness in some applications, have several significant limitations. These limitations can impact the performance and applicability of BoW in more complex natural language processing (NLP) tasks. Here’s a detailed explanation of these limitations:
Lack of Contextual Information
  • Word Order Ignored:
    • BoW embeddings do not take into account the order of words in a document. This means that “cat sat on the mat” and “mat sat on the cat” will have the same BoW representation, despite having different meanings.
  • Loss of Syntax and Semantics:
    • The embedding does not capture syntactic and semantic relationships between words. For instance, “bank” in the context of a financial institution and “bank” in the context of a riverbank will have the same representation.
High Dimensionality
  • Large Vocabulary Size:
    • The dimensionality of BoW vectors is equal to the number of unique words in the corpus, which can be extremely large. This leads to very high-dimensional vectors, resulting in increased computational cost and memory usage.
  • Sparsity:
    • Most documents use only a small fraction of the total vocabulary, resulting in sparse vectors with many zero values. This sparsity can make storage and computation inefficient.
Lack of Handling of Polysemy and Synonymy
  • Polysemy:
    • Polysemous words (words with multiple meanings) are treated as a single feature, failing to capture their different senses based on context.
  • Synonymy:
    • Synonyms (different words with similar meanings) are treated as completely unrelated features. For example, “happy” and “joyful” will have different vector representations even though they have similar meanings.
Fixed Vocabulary
  • OOV (Out-of-Vocabulary) Words:
    • BoW cannot handle words that were not present in the training corpus. Any new word encountered will be ignored or misrepresented, leading to potential loss of information.
Feature Independence Assumption
  • No Inter-Feature Relationships:
    • BoW assumes that the presence or absence of a word in a document is independent of other words. This independence assumption ignores any potential relationships or dependencies between words, which can be crucial for understanding context and meaning.
Scalability Issues
  • Computational Inefficiency:
    • As the size of the corpus increases, the vocabulary size also increases, leading to scalability issues. High-dimensional vectors require more computational resources for processing, storing, and analyzing the data.
No Weighting Mechanism
  • Equal Importance:
    • In its simplest form, BoW treats all words with equal importance, which is not always appropriate. Common but less informative words (e.g., “the”, “is”) are treated the same as more informative words (e.g., “cat”, “bank”).
Lack of Generalization
  • Poor Performance on Short Texts:
    • BoW can be particularly ineffective for short texts or documents with limited content, where the lack of context and the sparse nature of the vector representation can lead to poor performance.
Examples of Limitations
  • Example of Lack of Contextual Information:
    • Consider two sentences: “Apple is looking at buying a U.K. startup.” and “Startup is looking at buying an Apple.” Both would have similar BoW representations but convey different meanings.
  • Example of High Dimensionality and Sparsity:
    • A corpus with 100,000 unique words results in BoW vectors of dimension 100,000, most of which would be zeros for any given document.
Summary
  • While BoW embeddings provide a straightforward and intuitive way to represent text data, their limitations make them less suitable for complex NLP tasks that require understanding context, handling large vocabularies efficiently, or dealing with semantic and syntactic nuances. More advanced techniques like TF-IDF, word embeddings (e.g., Word2Vec, GloVe, fastText), and contextual embeddings (e.g., ELMo, BERT) address many of these limitations by incorporating context, reducing dimensionality, and capturing richer semantic information.

Term Frequency-Inverse Document Frequency (TF-IDF)

  • Term Frequency-Inverse Document Frequency (TF-IDF) is a statistical measure used to evaluate the importance of a word to a document in a collection or corpus. It is often used in information retrieval and text mining to rank the relevance of documents to a specific query. The TF-IDF value increases proportionally with the number of times a word appears in the document, but this is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

1. Term Frequency (TF):

  • Term Frequency measures how frequently a term occurs in a document. Since documents differ in length, a term is likely to appear many more times in long documents than in short ones. Thus, the term frequency is often divided by the document length (the total number of terms in the document) as a way of normalization:
\[\text{TF(t)} = \frac{\text{Number of times term }t\text{ appears in a document}}{\text{Total number of terms in the document}}\]

2. Inverse Document Frequency (IDF):

  • Inverse Document Frequency measures how important a term is. While computing TF, all terms are considered equally important. However, certain terms, like “is”, “of”, and “that”, may appear a lot of times but have little importance. Thus, we need to weigh down the frequent terms while scaling up the rare ones, by computing the following:
\[\text{IDF(t)} = \log \left( \frac{\text{Total number of documents}}{\text{Number of documents with term }t\text{ in it}} \right)\]
  • TF-IDF Example:
    • Consider the following two documents, drawn from a larger collection:
      • Doc 1: “The sky is blue.”
      • Doc 2: “The sun is bright.”
    • Let’s calculate the TF-IDF for the word “blue” in Doc 1.
      • TF for “blue” in Doc 1 = 1 / 4 (the word “blue” appears once in 4 words).
      • Assume the full collection contains 10 documents, and the word “blue” appears in 2 of these. Then, IDF for “blue” = log(10/2) = log(5).
    • Finally, the TF-IDF score for “blue” in Doc 1 = TF * IDF = (1/4) * log(5).
    • The TF-IDF score for “blue” in Doc 1 is thus a measure of its importance in that document, within the context of the given document collection. This score would be different in a different document or a different collection, reflecting the term’s varying importance. TF-IDF is a fundamental technique in text processing, often used for tasks like document classification, search engine ranking, and information retrieval.
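
The arithmetic of the worked example can be written out in a few lines of Python; the collection size (10 documents) and document frequency (2) are the assumptions stated above.

```python
import math

# Assumptions carried over from the worked example above.
tf_blue = 1 / 4          # "blue" appears once among the 4 terms of Doc 1
n_docs = 10              # assumed size of the full collection
n_docs_with_blue = 2     # assumed number of documents containing "blue"

idf_blue = math.log(n_docs / n_docs_with_blue)   # log(5)
tfidf_blue = tf_blue * idf_blue

print(f"TF = {tf_blue:.2f}, IDF = {idf_blue:.3f}, TF-IDF = {tfidf_blue:.3f}")
```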

Limitations of TF-IDF

  • TF-IDF is a widely used technique in information retrieval and text processing, but it comes with several limitations:

    1. Lack of Context and Word Order: TF-IDF treats each word in a document independently and does not consider the context in which a word appears. This means it cannot capture the meaning of words based on their surrounding words or the overall semantic structure of the text. The word order is also ignored, which can be crucial in understanding the meaning of a sentence.

    2. Does Not Account for Polysemy: Words with multiple meanings (polysemy) are treated the same regardless of their context. For example, the word “bank” would have the same representation in “river bank” and “savings bank”, even though it has different meanings in these contexts.

    3. Lack of Semantic Understanding: TF-IDF relies purely on the statistical occurrence of words in documents, which means it lacks any understanding of the semantics of the words. It cannot capture synonyms or related terms unless they appear in similar documents within the corpus.

    4. Bias Towards Rare Terms: While the IDF component of TF-IDF aims to balance the frequency of terms, it can sometimes overly emphasize rare terms. This might lead to overvaluing words that appear infrequently but are not necessarily more relevant or important in the context of the document.

    5. Vocabulary Limitation: The TF-IDF model is limited to the vocabulary of the corpus it was trained on. It cannot handle new words that were not in the training corpus, making it less effective for dynamic content or languages that evolve rapidly.

    6. Normalization Issues: The normalization process in TF-IDF (e.g., dividing by the total number of words in a document) may not always be effective in balancing document lengths and word frequencies, potentially leading to skewed results.

    7. Requires a Large and Representative Corpus: For the IDF part of TF-IDF to be effective, it needs a large and representative corpus. If the corpus is not representative of the language or the domain of interest, the IDF scores may not accurately reflect the importance of the words.

    8. No Distinction Between Different Types of Documents: TF-IDF treats all documents in the corpus equally, without considering the type or quality of the documents. This means that all sources are considered equally authoritative, which may not be the case.

    9. Poor Performance with Short Texts: In very short documents, like tweets or SMS messages, the TF-IDF scores can be less meaningful because of the limited word occurrence and context.

  • In conclusion, while TF-IDF is a powerful tool for certain applications, these limitations make it less suitable for tasks that require deep understanding of language, such as semantic search, word sense disambiguation, or processing of very short or dynamically changing texts. This has led to the development and adoption of more advanced techniques like word embeddings and neural network-based models in natural language processing.

Best Match 25 (BM25)

  • BM25 is a ranking function used in information retrieval systems, particularly in search engines, to rank documents based on their relevance to a given search query. It’s a part of the family of probabilistic information retrieval models and is an extension of the TF-IDF (Term Frequency-Inverse Document Frequency) approach, though it introduces several improvements and modifications.

Key Components of BM25

  1. Term Frequency (TF): BM25 modifies the term frequency component of TF-IDF to address the issue of term saturation. In TF-IDF, the more frequently a term appears in a document, the more it is considered relevant. However, this can lead to a problem where, beyond a certain point, additional occurrences of a term don’t really indicate more relevance. BM25 addresses this with a saturating term-frequency function, which introduces a point of diminishing returns and prevents a term’s frequency from having an unbounded impact on the document’s relevance.

  2. Inverse Document Frequency (IDF): Like TF-IDF, BM25 includes an IDF component, which helps to weight a term’s importance based on how rare or common it is across all documents. The idea is that terms that appear in many documents are less informative than those that appear in fewer documents.

  3. Document Length Normalization: BM25 introduces a sophisticated way of handling document length. Unlike TF-IDF, which may unfairly penalize longer documents, BM25 normalizes for length in a more balanced manner, reducing the impact of document length on the calculation of relevance.

  4. Tunable Parameters: BM25 includes parameters like k1 and b, which can be adjusted to optimize performance for specific datasets and needs. k1 controls how quickly an increase in term frequency leads to term saturation, and b controls the degree of length normalization.
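
The components above combine into the standard Okapi BM25 scoring formula. The sketch below is a textbook-style implementation written for clarity (using a smoothed IDF variant), not the exact formulation of any particular search library:

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of one tokenized document for a tokenized query."""
    avgdl = sum(len(d) for d in corpus) / len(corpus)     # average document length
    score = 0.0
    for term in query_terms:
        tf = doc_terms.count(term)                        # term frequency in this doc
        df = sum(1 for d in corpus if term in d)          # docs containing the term
        idf = math.log((len(corpus) - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
        denom = tf + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * tf * (k1 + 1) / denom              # saturating TF component
    return score

# Toy usage: two tokenized documents and one query.
docs = [
    "solar energy advantages for homes".split(),
    "a long history of solar panels and solar energy research".split(),
]
query = "solar energy advantages".split()
for doc in docs:
    print(bm25_score(query, doc, docs))
```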

Example

  • Imagine you have a collection of documents and a user searches for “solar energy advantages”.

    • Document A is 300 words long and mentions “solar energy” 4 times and “advantages” 3 times.
    • Document B is 1000 words long and mentions “solar energy” 10 times and “advantages” 1 time.
  • Using BM25:
    • Term Frequency: The term “solar energy” appears more times in Document B, but due to term saturation, the additional occurrences don’t contribute as much to its relevance score as the first few mentions.
    • Inverse Document Frequency: If “solar energy” and “advantages” are relatively rare in the overall document set, their appearances in these documents increase the relevance score more significantly.
    • Document Length Normalization: Although Document B is longer, BM25’s length normalization ensures that it’s not unduly penalized simply for having more words. The relevance of the terms is balanced against the length of the document.
  • So, despite Document B having more mentions of “solar energy”, BM25 will calculate the relevance of both documents in a way that balances term frequency, term rarity, and document length, potentially ranking them differently based on how these factors interplay. The final relevance scores would then determine their ranking in the search results for the query “solar energy advantages”.

Differences compared to TF-IDF

  • BM25 is a ranking function used by search engines to estimate the relevance of documents to a given search query. It’s part of the probabilistic information retrieval model and is considered an evolution of the TF-IDF (Term Frequency-Inverse Document Frequency) model. Both are used to rank documents based on their relevance to a query, but they differ in how they calculate this relevance.
BM25
  • Term Frequency Component: Like TF-IDF, BM25 considers the frequency of the query term in a document. However, it adds a saturation point to prevent a term’s frequency from disproportionately influencing the document’s relevance.
  • Length Normalization: BM25 adjusts for the length of the document, penalizing longer documents less harshly than TF-IDF.
  • Tuning Parameters: It includes two parameters, k1 and b, which control term saturation and length normalization, respectively. These can be tuned to suit specific types of documents or queries.
TF-IDF
  • Term Frequency: TF-IDF measures the frequency of a term in a document. The more times the term appears, the higher the score.
  • Inverse Document Frequency: This component reduces the weight of terms that appear in many documents across the corpus, assuming they are less informative.
  • Simpler Model: TF-IDF is generally simpler than BM25 and doesn’t involve parameters like k1 or b.
Example
  • Imagine a search query “chocolate cake recipe” and two documents:

    • Document A: 100 words, “chocolate cake recipe” appears 10 times.
    • Document B: 1000 words, “chocolate cake recipe” appears 15 times.

    Using TF-IDF:

    • The term frequency for “chocolate cake recipe” would be higher in Document A.
    • Document B, being longer, might receive a lower relevance score because the term makes up a smaller fraction of its total words.

    Using BM25:

    • The term frequency component would reach a saturation point, meaning after a certain frequency, additional occurrences of “chocolate cake recipe” contribute less to the score.
    • Length normalization in BM25 would not penalize Document B as heavily as TF-IDF, considering its length.
    • The tuning parameters k1 and b could be adjusted to optimize the balance between term frequency and document length.
  • In essence, while both models aim to determine the relevance of documents to a query, BM25 offers a more nuanced and adjustable approach, especially beneficial in handling longer documents and ensuring that term frequency doesn’t disproportionately affect relevance.

Limitations of BM25

  • BM25, while a powerful and widely-used ranking function in information retrieval, has several limitations:

    1. Parameter Sensitivity: BM25 includes parameters like k1 and b, which need to be fine-tuned for optimal performance. This tuning process can be complex and is highly dependent on the specific nature of the document collection and queries. Inappropriate parameter settings can lead to suboptimal results.

    2. Non-Handling of Semantic Similarities: BM25 primarily relies on exact keyword matching. It does not account for the semantic relationships between words. For instance, it would not recognize “automobile” and “car” as related terms unless explicitly programmed to do so. This limitation makes BM25 less effective in understanding the context or capturing the nuances of language.

    3. Ineffectiveness with Short Queries or Documents: BM25’s effectiveness can decrease with very short queries or documents, as there are fewer words to analyze, making it harder to distinguish relevant documents from irrelevant ones.

    4. Length Normalization Challenges: While BM25’s length normalization aims to prevent longer documents from being unfairly penalized, it can sometimes lead to the opposite problem, where shorter documents are unduly favored. The balance is not always perfect, and the effectiveness of the normalization can vary based on the dataset.

    5. Query Term Independence: BM25 assumes independence between query terms. It doesn’t consider the possibility that the presence of certain terms together might change the relevance of a document compared to the presence of those terms individually.

    6. Difficulty with Rare Terms: Like TF-IDF, BM25 can struggle with very rare terms. If a term appears in very few documents, its IDF (Inverse Document Frequency) component can become disproportionately high, skewing results.

    7. Performance in Specialized Domains: In specialized domains with unique linguistic features (like legal, medical, or technical fields), BM25 might require significant customization to perform well. This is because standard parameter settings and term-weighting mechanisms may not align well with the unique characteristics of these specialized texts.

    8. Ignoring Document Quality: BM25 focuses on term frequency and document length but doesn’t consider other aspects that might indicate document quality, such as authoritativeness, readability, or the freshness of information.

    9. Vulnerability to Keyword Stuffing: Like many other keyword-based algorithms, BM25 can be susceptible to keyword stuffing, where documents are artificially loaded with keywords to boost relevance.

    10. Incompatibility with Complex Queries: BM25 is less effective for complex queries, such as those involving natural language questions or multi-faceted information needs. It is designed for keyword-based queries and may not perform well with queries that require understanding of context or intent.

  • Understanding these limitations is crucial when implementing BM25 in a search engine or information retrieval system, as it helps in identifying cases where BM25 might need to be supplemented with other techniques or algorithms for better performance.

Word2Vec

  • Proposed in Efficient Estimation of Word Representations in Vector Space by Mikolov et al. (2013), the Word2Vec algorithm was a significant advancement in NLP and remains one of the best-known word embedding techniques.
  • Word2Vec is renowned for its effectiveness in learning word vectors, which are then used to decode the semantic relationships between words. It utilizes a vector space model to encapsulate words in a manner that captures both semantic and syntactic relationships. This method enables the algorithm to discern similarities and differences between words, as well as to identify analogous relationships, such as the parallel between “Stockholm” and “Sweden” and “Cairo” and “Egypt.”
  • Word2Vec’s methodology of representing words as vectors in a semantic and syntactic space has profoundly impacted the field of NLP, offering a robust framework for capturing the intricacies of language and its usage.

Motivation behind Word2Vec: The Need for Context-based Semantic Understanding

  • TF-IDF and BM25 are methods used in information retrieval to rank documents based on their relevance to a query. While they provide useful measures for text analysis, they do not offer context-based “semantic” embeddings (in the same way that Word2Vec or BERT embeddings do). Here’s why:

    1. TF-IDF: This method calculates a weight for each word in a document, which increases with the number of times the word appears in the document but decreases based on the frequency of the word across all documents. TF-IDF is good at identifying important words in a document but doesn’t capture the meaning of the words or their relationships with each other. It’s more about word importance than word meaning.

    2. BM25: An extension of TF-IDF, BM25 is a ranking function used by search engines to estimate the relevance of documents to a given search query. While it improves upon TF-IDF by incorporating probabilistic understanding of term occurrence and handling of term saturation, it still fundamentally operates on term frequency and inverse document frequency. Like TF-IDF, BM25 doesn’t inherently capture semantic relationships between words.

  • In contrast, semantic embeddings (like those from Word2Vec, BERT, etc.) are designed to capture the meanings of words and their relationships to each other. These embeddings represent words as vectors in a way that words with similar meanings are located close to each other in the vector space, enabling the capture of semantic relationships and nuances in language.
  • Therefore, while TF-IDF and BM25 are valuable tools for information retrieval and determining document relevance, they do not provide semantic embeddings of words or phrases. They are more focused on word occurrence and frequency rather than on capturing the underlying meanings and relationships of words.

Core Idea

  • Word2Vec employs a shallow neural network, trained on a large textual corpus, to predict the context surrounding a given word. The essence of Word2Vec lies in its ability to convert words into high-dimensional vectors. This representation allows the algorithm to capture the meaning, semantic similarity, and relationships with surrounding text. A notable feature of Word2Vec is its capacity to perform arithmetic operations with these vectors to reveal linguistic patterns, such as the famous analogy king - man + woman = queen.

Word2Vec Illustration
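
A minimal sketch of the Word2Vec API using the gensim library (assuming gensim 4.x is installed). The toy corpus is far too small to learn meaningful vectors or reproduce the king/queen analogy; with large pre-trained vectors, the analogy query is the most_similar call shown in the final comment.

```python
from gensim.models import Word2Vec

# Toy corpus: each "sentence" is a list of tokens (illustrative only).
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "man", "walks", "in", "the", "city"],
    ["the", "woman", "walks", "in", "the", "city"],
]

model = Word2Vec(
    sentences,
    vector_size=50,   # dimensionality of the embeddings
    window=2,         # context window size
    min_count=1,      # keep every word (only sensible for a toy corpus)
    sg=1,             # 1 = Skip-gram, 0 = CBOW
)

print(model.wv["king"].shape)                # (50,)
print(model.wv.similarity("king", "queen"))  # cosine similarity of the two vectors

# With large pre-trained vectors, the classic analogy is queried as:
# model.wv.most_similar(positive=["king", "woman"], negative=["man"])
```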

Word2Vec Architectures

  • Word2Vec offers two distinct architectures for training:

    1. Continuous Bag-of-Words (CBOW): This model predicts a target word based on its context words. The input is a combination (typically a sum or average) of the word vectors of the surrounding context words, and the output is the current (center) word.

    2. Skip-gram: Conversely, the Skip-gram model predicts the surrounding context words from a given target word. The input is the target word, and the output is a softmax classification over the entire vocabulary to predict the context words.

FAQ: What does the “Continuous” in Word2Vec’s Continuous Bag of Words (CBOW) refer to?
  • The term “Continuous” in Word2Vec’s Continuous Bag of Words (CBOW) model refers to the continuous and distributed representation of words in the vector space. This is in contrast to traditional bag-of-words models, which represent words using discrete and sparse vectors.
Traditional Bag of Words (BoW)
  • Discrete Representation:
    • In a traditional Bag of Words model, each word is represented as a unique index in a vocabulary, creating a sparse and high-dimensional vector. For example, in a vocabulary of 10,000 words, “cat” might be represented as a vector with a 1 in the position corresponding to “cat” and 0s elsewhere.
  • Sparse Vectors:
    • These vectors are sparse because most elements are zero. Each word vector is orthogonal to every other word vector, meaning there is no inherent similarity between words represented in this way.
  • No Context:
    • BoW models do not capture the context in which words appear. They only consider word frequencies within documents, ignoring word order and contextual relationships.
Continuous Bag of Words (CBOW)
  • Continuous and Distributed Representation:
    • The “Continuous” in CBOW refers to the use of continuous and dense vectors to represent words. Instead of sparse vectors, each word is mapped to a dense vector of real numbers. These vectors are typically of much lower dimensionality (e.g., 100 or 300 dimensions) and are learned through training on a large corpus.
  • Contextual Embeddings:
    • CBOW captures the context of a word by considering its surrounding words. Given a context (the words surrounding a target word), CBOW predicts the target word. For example, in the sentence “The cat sat on the mat,” the context for “sat” might be [“The”, “cat”, “on”, “the”, “mat”].
  • Training Process:
    • The model learns to maximize the probability of the target word given its context. This is done using a neural network that adjusts the word vectors to make similar words (words that appear in similar contexts) have similar vector representations.
  • Dense Vectors:
    • Each word is associated with a dense vector that captures various syntactic and semantic properties. These vectors are “continuous” in that they can take on any value in the real-number space, unlike the discrete indices used in traditional BoW models.
  • Example:
    • Suppose “cat” is represented by a 100-dimensional vector like [0.25, -0.1, 0.75, …]. This vector is learned from the contexts in which “cat” appears, and words that appear in similar contexts (like “dog”) will have similar vectors.
Key Advantages of CBOW
  1. Captures Contextual Information:
    • By predicting a target word from its context, CBOW captures the contextual relationships between words, leading to more meaningful word representations.
  2. Dense and Low-Dimensional Vectors:
    • The use of dense, continuous vectors reduces the dimensionality of the word representation, making it computationally more efficient and enabling the model to generalize better.
  3. Semantic Similarity:
    • Words with similar meanings or that appear in similar contexts will have similar embeddings, allowing for better semantic understanding.
  4. Efficient Training:
    • CBOW is generally faster to train than the Skip-gram model because it uses the entire context to predict the target word, which can be more efficient, especially for large corpora.
Summary
  • The “Continuous” in Continuous Bag of Words (CBOW) highlights the transition from discrete, sparse word representations to continuous, dense vector representations. This shift allows CBOW to capture contextual information and semantic relationships between words more effectively, leading to more powerful and meaningful word embeddings.

Training and Optimization

  • The training of Word2Vec involves representing every word in a fixed vocabulary by a vector and then optimizing these vectors to predict surrounding words accurately. This is achieved through stochastic gradient descent, minimizing a loss function that indicates the discrepancy between predicted and actual context words. The algorithm uses a sliding window approach to maximize the probability of context words given a center word, as illustrated in the accompanying diagram:

Word2Vec Training
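
The training objective just described can be sketched in a few lines of PyTorch. This is a minimal CBOW-style example (predict the center word from the averaged context embeddings, optimized by SGD with cross-entropy); real Word2Vec implementations add efficiency tricks such as negative sampling or hierarchical softmax that are omitted here.

```python
import torch
import torch.nn as nn

class CBOW(nn.Module):
    """Minimal CBOW: average the context embeddings, predict the center word."""
    def __init__(self, vocab_size: int, dim: int = 50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, context_ids: torch.Tensor) -> torch.Tensor:
        # context_ids: (batch, context_size) -> averaged to (batch, dim)
        return self.out(self.embed(context_ids).mean(dim=1))

vocab_size = 100                            # toy vocabulary; ids are arbitrary
model = CBOW(vocab_size)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

context = torch.tensor([[3, 7, 12, 25]])    # ids of the surrounding words
center = torch.tensor([9])                  # id of the center word to predict

for _ in range(10):                         # a few SGD steps on a single pair
    optimizer.zero_grad()
    loss = loss_fn(model(context), center)
    loss.backward()
    optimizer.step()
print(loss.item())                          # loss decreases as the vectors adjust
```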

Embedding and Semantic Relationships

  • Through the training process, Word2Vec places words with similar meanings in proximity within the high-dimensional vector space. For example, ‘bread’ and ‘croissant’ would have closely aligned vectors, just as ‘woman’, ‘king’, and ‘man’ would demonstrate meaningful relationships through vector arithmetic:

Word2Vec Analogies

Distinction from Traditional Models

  • A key differentiation between Word2Vec and traditional count-based language models is its reliance on embeddings. Deep learning-based NLP models, including Word2Vec, represent words, phrases, and even sentences using these embeddings, which encode much richer contextual and semantic information.

Semantic Nature of Word2Vec Embeddings

  • Word2Vec embeddings are considered semantic in the sense that they capture semantic relationships between words based on their usage in the text. The key idea behind Word2Vec is that words used in similar contexts tend to have similar meanings. This is often summarized by the phrase “a word is characterized by the company it keeps.”
  • When Word2Vec is trained on a large corpus of text, it learns vector representations (embeddings) for words such that words with similar meanings have similar embeddings. This is achieved through either of two model architectures: Continuous Bag-of-Words (CBOW) or Skip-Gram.
    1. CBOW Model: This model predicts a target word based on context words. For example, in the sentence “The cat sat on the ___”, the model tries to predict the word ‘mat’ based on the context provided by the other words.
    2. Skip-Gram Model: This model works the other way around, where it uses a target word to predict context words. For instance, given the word ‘cat’, it tries to predict ‘the’, ‘sat’, ‘on’, and ‘mat’.
  • These embeddings capture various semantic relationships, such as:
    • Similarity: Words with similar meanings have embeddings that are close in the vector space.
    • Analogy: Relationships like “man is to woman as king is to queen” can often be captured through vector arithmetic (e.g., vector('king') - vector('man') + vector('woman') is close to vector('queen')).
    • Clustering: Words with similar meanings tend to cluster together in the vector space.
  • However, it’s important to note that while Word2Vec captures many semantic relationships, it also has limitations. For example, it doesn’t capture polysemy well (the same word having different meanings in different contexts) and sometimes the relationships it learns are more syntactic than semantic. More advanced models like BERT and GPT have since been developed to address some of these limitations.

Key Limitations of Word2Vec

  • Word2Vec, a pivotal development in natural language processing, has been instrumental in advancing our understanding of semantic relationships between words through vector embeddings. However, despite its breakthrough status, Word2Vec is not without limitations, many of which have been highlighted and addressed in subsequent developments within the field, such as BERT.
  • While Word2Vec’s method of word embedding results in a context-agnostic, single representation for each word, this becomes a significant limitation when dealing with polysemous words, i.e., words that possess multiple meanings based on their context. More advanced models such as BERT and ELMo have since been developed to provide context-dependent embeddings, where the representation of a word can dynamically vary based on its usage within a sentence. Here are the specifics:
  1. Static, Non-Contextualized Nature:
    • Single Vector Per Word: Word2Vec assigns a unique vector to each word, which remains static regardless of the word’s varying context in different sentences. This results in a representation that cannot dynamically adapt to different usages of the same word.
    • Combination of Contexts: In cases where a word like “bank” appears in multiple contexts (“river bank” vs. “financial bank”), Word2Vec does not generate distinct embeddings for each scenario. Instead, it creates a singular, averaged representation that amalgamates all the contexts in which the word appears, leading to a generalized semantic representation.
    • Lack of Disambiguation: The model’s inability to differentiate between the multiple meanings of polysemous words means that words like “bank” are represented by a single vector, irrespective of the specific meaning in a given context.
    • Context Window Limitation: Word2Vec employs a fixed-size context window, capturing only local co-occurrence patterns without a deeper understanding of the word’s role in the broader sentence or paragraph.
  2. Training Process and Computational Intensity:
    • Adjustments During Training: Throughout the training process, the word vectors are continually adjusted, not to switch between meanings but to refine the word’s placement in the semantic space based on an aggregate of its various uses.
    • Resource Demands: Training Word2Vec, particularly for large vocabularies, requires significant computational resources and time. Techniques like negative sampling were introduced to alleviate some of these demands, but computational intensity remains a challenge.
  3. Handling of Special Cases:
    • Phrase Representation: Word2Vec struggles with representing phrases or idioms where the meaning is not simply an aggregation of the meanings of individual words.
    • Out-of-Vocabulary Words: The model faces challenges with unknown or out-of-vocabulary (OOV) words. This issue is better addressed in models that treat words as compositions of characters, such as character embeddings, which are especially beneficial for languages with non-segmented script.
  4. Global Vector Representation Limitations:
    • Uniform Representation Across Contexts: Word2Vec, like other traditional methods, generates a global vector representation for each word, which does not account for the various meanings a word can have in different contexts. For example, the different senses of “bank” in diverse sentences are not captured distinctively.
  5. Resulting Embedding Compromises:
    • The resulting vector for words with distinct meanings is a compromise that reflects its diverse uses, leading to less precise representations for tasks requiring accurate contextual understanding.
  • These limitations of Word2Vec have spurred advancements in the field, leading to the development of more sophisticated language models that address issues of context sensitivity, polysemy, computational efficiency, and handling of OOV words. This progression towards more robust and context-aware word representations underscores the ongoing evolution and potential of natural language processing.

Global Vectors for Word Representation (GloVe)

Overview

  • Proposed in GloVe: Global Vectors for Word Representation by Pennington et al. (2014), GloVe embeddings are a type of word representation used in NLP. They are designed to capture not just the local context of words but also their global co-occurrence statistics in a corpus, thus providing a rich and nuanced word representation.
  • By blending these approaches, GloVe captures a fuller picture of word meaning and usage, making it a valuable tool for various NLP tasks, such as sentiment analysis, machine translation, and information retrieval.
  • Here’s a detailed explanation along with an example:

How GloVe Works

  1. Co-Occurrence Matrix: GloVe starts by constructing a large matrix that represents the co-occurrence statistics of words in a given corpus. This matrix has dimensions of [vocabulary size] x [vocabulary size], where each entry \((i, j)\) in the matrix represents how often word i occurs in the context of word j.

  2. Matrix Factorization: The algorithm then applies matrix factorization techniques to this co-occurrence matrix. The goal is to map each word to a vector in a lower-dimensional embedding space while preserving the co-occurrence information.

  3. Word Vectors: The end result is that each word in the corpus is represented by a vector in this embedding space. Words with similar meanings or that often appear in similar contexts will have similar vectors.

  4. Relationships and Analogies: These vectors capture complex patterns and relationships between words. For example, they can capture analogies like “man is to king as woman is to queen” by showing that the vector ‘king’ - ‘man’ + ‘woman’ is close to ‘queen’.
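
Step 1 above (building the co-occurrence counts) can be sketched directly in Python. The window size and corpus here are toy choices; GloVe itself then fits word vectors to the logarithm of these counts with a weighted least-squares objective, which is omitted in this sketch.

```python
from collections import defaultdict
import itertools

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]
window = 2  # number of neighbors on each side counted as "context"

vocab = sorted(set(itertools.chain.from_iterable(corpus)))
index = {word: i for i, word in enumerate(vocab)}
cooc = defaultdict(float)  # (word_i, word_j) -> co-occurrence count

for sentence in corpus:
    for i, word in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[(index[word], index[sentence[j]])] += 1.0

print(vocab)
print(sorted(cooc.items())[:5])
```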

Example of GloVe in Action

  • Imagine a simple corpus with the following sentences:
    • “The cat sat on the mat.”
    • “The dog sat on the log.”
  • From this corpus, a co-occurrence matrix is constructed. For instance, ‘cat’ and ‘mat’ will have a higher co-occurrence score because they appear close to each other in the sentences. Similarly, ‘dog’ and ‘log’ will be close in the embedding space.
  • After applying GloVe, each word (like ‘cat’, ‘dog’, ‘mat’, ‘log’) will be represented as a vector. The vector representation captures the essence of each word, not just based on the context within its immediate sentence, but also based on how these words co-occur in the entire corpus.
  • In a large and diverse corpus, GloVe can capture complex relationships. For example, it might learn that ‘cat’ and ‘dog’ are both pets, and this will be reflected in how their vectors are positioned relative to each other and to other words like ‘pet’, ‘animal’, etc.
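
In practice, pre-trained GloVe vectors are usually downloaded rather than trained from scratch. The sketch below assumes the gensim-data repository is reachable and provides the “glove-wiki-gigaword-100” vectors (100-dimensional GloVe vectors trained on Wikipedia and Gigaword):

```python
import gensim.downloader as api

# Downloads the vectors on first use (roughly a hundred megabytes).
glove = api.load("glove-wiki-gigaword-100")

print(glove.most_similar("cat", topn=5))   # nearest neighbors in the vector space
print(glove.similarity("cat", "dog"))      # high: both appear in "pet" contexts
print(glove.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```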

Significance of GloVe

  • GloVe is powerful because it combines the benefits of two major approaches in word representation:
    • Local Context Window Methods (like Word2Vec): These methods look at the local context, but might miss the broader context of word usage across the entire corpus.
    • Global Matrix Factorization Methods: These methods, like Latent Semantic Analysis (LSA), consider global word co-occurrence but might miss the nuances of local word usage.

Differences compared to Word2Vec and fastText

  • Word2Vec, GloVe, and fastText are all popular methods for generating word embeddings in natural language processing, but they differ significantly in their approaches and the type of information they capture.
  • Each method has its unique strengths and applications, and the choice between them often depends on the specific requirements of the NLP task, such as the nature of the language being processed, the need to handle OOV words, and the computational resources available.
  • Here’s a detailed comparison:
GloVe (Global Vectors for Word Representation)
  • Approach: GloVe creates word embeddings by constructing a word-context co-occurrence matrix from a corpus. It then uses matrix factorization techniques on this matrix.
  • Focus: GloVe captures global statistics of word co-occurrences across the entire corpus.
  • Strengths: It’s effective at capturing both the global co-occurrence statistics and local context of words, leading to rich and nuanced word representations.
  • Limitations: GloVe does not account for word order and is not as effective in capturing morphological nuances as fastText.
fastText
  • Approach: Developed by Facebook’s AI Research lab, fastText extends the Word2Vec model by not just considering words as whole entities but also breaking them down into subword units (n-grams).
  • Focus: fastText captures morphological information by considering these sub-word units, which is especially useful for languages with rich morphology.
  • Strengths: It can generate embeddings for out-of-vocabulary (OOV) words based on their subword components, making it robust in handling uncommon or new words.
  • Limitations: While powerful in handling morphology and OOV words, fastText might generate less meaningful embeddings for entire words since it emphasizes subword information.
Word2Vec
  • Approach: Word2Vec, developed by Google, generates word embeddings using either of two models: Continuous Bag of Words (CBOW) or Skip-Gram. Both these models use local word context for learning representations.
  • Focus: Word2Vec primarily focuses on the context in which a word appears, using the surrounding words as predictors for a target word (or vice versa).
  • Strengths: It’s efficient and effective at capturing syntactic and semantic word relationships based on local context.
  • Limitations: Word2Vec does not consider global co-occurrence statistics (unlike GloVe) and cannot handle OOV words or capture word morphology (unlike fastText).
Summary of Differences
  • Global vs. Local Context: GloVe combines global word co-occurrence statistics with local context, whereas Word2Vec focuses only on local context.
  • Morphological Richness: fastText excels in capturing subword information, making it superior for understanding morphologically rich languages and handling OOV words.
  • Word Representations: GloVe provides more nuanced word representations by considering overall corpus statistics, while Word2Vec offers efficient, context-focused representations. fastText adds the dimension of subword information to the mix.

fastText

Overview

  • Proposed in Enriching Word Vectors with Subword Information by Bojanowski et al. (2017), fastText is an advanced word representation and sentence classification library developed by Facebook’s AI Research (FAIR) lab. It’s primarily used for text classification and word embeddings in NLP. fastText differs from traditional word embedding techniques through its unique approach to representing words, which is particularly beneficial for understanding morphologically complex languages or handling rare words.
  • Specifically, fastText’s innovative approach of using subword information makes it a powerful tool for a variety of NLP tasks, especially in dealing with languages that have extensive word forms and in situations where the dataset contains many rare words. By learning embeddings that incorporate subword information, fastText provides a more nuanced and comprehensive understanding of language semantics compared to traditional word embedding methods.
  • Here’s a detailed look at fastText with an example.

Core Features of fastText

  1. Subword Information: Unlike traditional models that treat words as the smallest unit for training, fastText breaks down words into smaller units - subwords or character n-grams. For instance, for the word “fast”, with a chosen n-gram range of 3 to 6, some of the subwords would be “fas”, “fast”, “ast”, etc. This technique helps in capturing the morphology of words.

  2. Handling of Rare Words: Due to its subword approach, fastText can effectively handle rare words or even words not seen during training. It generates embeddings for these words based on their subword units, allowing it to infer some meaning from these subcomponents.

  3. Efficiency in Learning Word Representations: fastText is efficient in learning representations for words that appear infrequently in the corpus, which is a significant limitation in many other word embedding techniques.

  4. Applicability to Various Languages: Its subword feature makes it particularly suitable for languages with rich word formations and complex morphology, like Turkish or Finnish.

  5. Word Embedding and Text Classification: fastText can be used both for generating word embeddings and for text classification purposes, providing versatile applications in NLP tasks.

Example of fastText

  • Consider the task of building a sentiment analysis model using word embeddings for an input sentence like “The movie was breathtakingly beautiful”. In traditional models like Word2Vec, each word is treated as a distinct unit, and if words like “breathtakingly” are rare in the training dataset, the model may not have a meaningful representation for them.
  • With fastText, “breathtakingly” is broken down into character n-grams (e.g., “bre”, “brea”, “reath”, “taking”, “ingly”, and so on, within the configured n-gram range). fastText then learns vectors for these subwords. When computing the vector for “breathtakingly”, it aggregates the vectors of its subwords. This approach allows fastText to handle rare words more effectively, as it can utilize the information from common subwords to understand less common or even out-of-vocabulary words.
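
The subword decomposition can be illustrated with a short sketch that extracts character n-grams using fastText-style boundary markers (“<” and “>”); the exact set depends on the configured n-gram range, and in fastText the word’s vector is the sum of the vectors of these n-grams (plus a whole-word token).

```python
def char_ngrams(word: str, n_min: int = 3, n_max: int = 6) -> list:
    """Character n-grams of a word with fastText-style boundary markers."""
    marked = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(marked) - n + 1):
            grams.append(marked[i:i + n])
    return grams

print(char_ngrams("breathtakingly")[:10])
# fastText represents "breathtakingly" as the sum of the embeddings of these
# n-grams (plus the whole-word token), so rare or unseen words still get vectors.
```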

Differences compared to Word2Vec and GloVe

  • fastText, Word2Vec, and GloVe are all popular methods for generating word embeddings in NLP. However, they differ significantly in their approach and underlying principles. The choice between these three models depends on the specific requirements of the NLP task, the characteristics of the language data, and the importance of accurately representing rare or morphologically complex words.

fastText
  1. Subword Information: Developed by Facebook’s AI Research lab, fastText represents words as bags of character n-grams. This means that it breaks down words into smaller components (subwords) and learns representations for these subword units. This approach is particularly beneficial for handling morphologically rich languages and rare or out-of-vocabulary words.

  2. Handling of Rare Words: fastText excels in handling rare words by inferring their meanings based on the subword components, offering a unique advantage in dealing with words not seen in training data.

  3. Morphological Awareness: Ideal for languages with complex word formations, as it captures the nuances of word morphology through subwords.

GloVe
  1. Global Matrix Factorization: GloVe, developed at Stanford, operates on word co-occurrence matrices. It focuses on constructing a global co-occurrence matrix of words from a corpus and then applying matrix factorization to derive word vectors.

  2. Global Context: GloVe considers the entire corpus to ascertain word co-occurrence statistics, thereby capturing global word-word relationships.

  3. Emphasis on Semantic Relationships: Known for capturing fine-grained semantic and syntactic regularities using vector arithmetic, GloVe is effective in revealing word associations based on the global context.

Word2Vec
  1. Local Context Window: Developed by researchers at Google, Word2Vec employs a sliding window to capture local contextual information. It uses two main architectures: Continuous Bag of Words (CBOW), where context words are used to predict a target word, and Skip-Gram, where a word is used to predict its context words.

  2. Word-Level Features: Word2Vec treats each word as a distinct entity without considering its internal structure (unlike fastText). It does not inherently capture subword information or morphological patterns.

  3. Less Effective with Rare Words: Due to its word-level approach, Word2Vec is not as effective as fastText in dealing with rare or unseen words.

Key Differences
  • Word Representation: fastText’s use of subword information, as opposed to the word-level approaches of GloVe and Word2Vec.
  • Contextual Analysis: Word2Vec’s reliance on local context windows, contrasted with GloVe’s global co-occurrence matrix and fastText’s focus on internal word structure.
  • Handling Rare Words: fastText’s superior ability to handle rare and unseen words thanks to its subword approach, a capability that GloVe and Word2Vec lack.
  • Computational Complexity: GloVe and Word2Vec are generally more computationally efficient than fastText, especially for larger datasets, due to the latter’s more complex model involving subwords.

FAQ: How are Word2Vec, GloVe, and fastText Co-occurrence-based Embedding Techniques?

  • Word2Vec, GloVe, and fastText are all co-occurrence-based embedding techniques, but they differ in their approaches to leveraging co-occurrence information to learn word embeddings. Here’s a detailed explanation of each method and how it utilizes co-occurrence information:

Word2Vec

  • Description:
    • Word2Vec, developed by Google, includes two model architectures: Continuous Bag of Words (CBOW) and Skip-gram.
  • Co-occurrence Information:
    • CBOW: Predicts a target word based on the context words (words surrounding the target word within a fixed window size). This approach implicitly leverages word co-occurrence within the context window to learn embeddings.
    • Skip-gram: Predicts context words given a target word. This method also relies on co-occurrence information within a fixed window around the target word.
    • Training Objective: Both CBOW and Skip-gram train a shallow neural network to optimize the embeddings so that words appearing in similar contexts have similar vectors; concretely, the models learn embeddings by maximizing the probability of predicting context words given a target word (Skip-gram) or a target word given its context words (CBOW).
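    • Objective (for illustration): the Skip-gram model (Mikolov et al., 2013) maximizes the average log-probability of the context words within a window of size \(c\) around each target word, \(\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\, j \ne 0} \log p(w_{t+j} \mid w_t)\), where \(p(w_{t+j} \mid w_t)\) is typically approximated via negative sampling or hierarchical softmax rather than a full softmax.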

GloVe

  • Description:
    • GloVe, developed by researchers at Stanford, is explicitly designed to capture global statistical information from a corpus by factorizing the word-word co-occurrence matrix.
  • Co-occurrence Information:
    • Co-occurrence Matrix: GloVe constructs a large sparse matrix where each cell represents the co-occurrence frequency of a pair of words within a specific context window.
    • Objective Function: GloVe’s training objective is to factorize this co-occurrence matrix to produce word vectors. It aims to ensure that the dot product of word vectors approximates the logarithm of the words’ co-occurrence probabilities.
    • Global Context: Unlike Word2Vec, which focuses on local context within a sliding window, GloVe captures global co-occurrence statistics across the entire corpus.
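    • Objective (for reference): GloVe’s weighted least-squares loss (Pennington et al., 2014) makes the role of co-occurrence explicit: \(J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2\), where \(X_{ij}\) counts how often word \(j\) appears in the context of word \(i\), \(w_i\) and \(\tilde{w}_j\) are word and context vectors, \(b_i\) and \(\tilde{b}_j\) are biases, and \(f\) is a weighting function that caps the influence of very frequent co-occurrences.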

fastText

  • Description:
    • fastText, developed by Facebook, extends Word2Vec by incorporating subword information, representing words as bags of character n-grams.
  • Co-occurrence Information:
    • Subword-Level Co-occurrence: fastText builds on the Skip-gram model of Word2Vec but adds a layer of granularity by considering subwords (character n-grams). This means that it leverages co-occurrence information at both the word level and the subword level.
    • Training Objective: Similar to Skip-gram, fastText predicts context words from a target word, but it enriches the embeddings with subword information, allowing it to better handle rare and morphologically rich words.
    • Enhanced Co-occurrence Handling: By incorporating subword information, fastText captures more detailed co-occurrence patterns, which is especially beneficial for languages with rich morphology or for handling out-of-vocabulary words.
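    • Scoring Function (for reference): in Bojanowski et al. (2017), the Skip-gram score between a word \(w\) and a context word \(c\) is a sum over the word’s subword vectors, \(s(w, c) = \sum_{g \in \mathcal{G}_w} \mathbf{z}_g^\top \mathbf{v}_c\), where \(\mathcal{G}_w\) is the set of character n-grams of \(w\) (plus the word itself), \(\mathbf{z}_g\) are the n-gram vectors, and \(\mathbf{v}_c\) is the context vector.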

Summary of Co-occurrence Based Techniques

  • Word2Vec: Uses local co-occurrence information within a context window around each word. It learns embeddings by optimizing the prediction of target-context word pairs through neural networks (CBOW and Skip-gram models).
  • GloVe: Utilizes global co-occurrence statistics from the entire corpus by factorizing a co-occurrence matrix. It explicitly captures how frequently words co-occur across the corpus, aiming to directly model the co-occurrence probabilities.
  • fastText: Extends the Skip-gram model to include subword information, leveraging both word-level and subword-level co-occurrence information. This approach helps capture more fine-grained co-occurrence patterns and improves the handling of rare or complex words.

  • Each of these methods leverages co-occurrence information to learn word embeddings, but they do so in different ways and with varying levels of granularity, ranging from local context windows (Word2Vec) to global co-occurrence matrices (GloVe) and subword-level details (fastText).

Handling Polysemous Words: Key Limitations of Word2Vec, GloVe and fastText

  • Word2Vec, GloVe, and fastText have limitations in handling polysemous words (words with multiple meanings). Here’s a detailed examination of how each method deals with polysemy:

Word2Vec

  • Description:
    • Word2Vec includes two model architectures: Continuous Bag of Words (CBOW) and Skip-gram. Both learn word embeddings by predicting target words from context words (CBOW) or context words from a target word (Skip-gram).
  • Handling Polysemy:
    • Single Vector Representation:
      • Word2Vec generates a single embedding for each word in the vocabulary, regardless of its context. This means that all senses of a polysemous word are represented by the same vector.
    • Context Averaging:
      • The embedding of a polysemous word is an average representation of all the contexts in which the word appears. For example, the word “bank” will have a single vector that averages contexts from both financial institutions and river banks.
    • Limitations:
      • This single-vector approach fails to capture distinct meanings accurately, leading to less precise embeddings for polysemous words.

GloVe

  • Description:
    • GloVe is a count-based model that constructs word embeddings using global word-word co-occurrence statistics from a corpus. It learns embeddings by factorizing the co-occurrence matrix.
  • Handling Polysemy:
    • Single Vector Representation:
      • Like Word2Vec, GloVe assigns a single embedding to each word in the vocabulary.
    • Global Context:
      • The embedding captures the word’s overall statistical context within the corpus. Thus, the different senses of polysemous words are combined into one vector.
    • Limitations:
      • Similar to Word2Vec, this blending of senses can dilute the quality of embeddings for polysemous words.

fastText

  • Description:
    • fastText, developed by Facebook, extends Word2Vec by incorporating subword information. It represents words as bags of character n-grams, which allows it to generate embeddings for words based on their subword units.
  • Handling Polysemy:
    • Single Vector Representation:
      • Although fastText incorporates subword information and can better handle rare words and morphologically rich languages, it still produces a single vector for each word.
    • Subword Information:
      • The inclusion of character n-grams can capture some nuances of polysemy, especially when different meanings have distinct morphological patterns. However, this is not a complete solution for polysemy.
    • Limitations:
      • While slightly better at representing polysemous words than Word2Vec and GloVe due to subword information, fastText still merges multiple senses into a single embedding.

Summary

  • Word2Vec, GloVe, and fastText all generate a single embedding per word, leading to a blended representation of different senses for polysemous words. This approach averages the contexts, which can dilute the specific meanings of polysemous words.

Handling Synonymous Words: Word2Vec, GloVe and fastText

  • GloVe, fastText, and Word2Vec are designed to capture the semantic similarity of words based on their context in large text corpora. While they do not explicitly handle synonyms, they can generate similar embeddings for synonymous words due to the way they process and represent text. Here’s a detailed explanation of how each method deals with synonymous words:

Word2Vec

  • Description:
    • Word2Vec uses the Continuous Bag of Words (CBOW) and Skip-gram models to learn word embeddings by predicting target words from context words or vice versa.
  • Handling Synonymy:
    • Context Similarity: Word2Vec embeddings are based on the distributional hypothesis, which posits that words appearing in similar contexts tend to have similar meanings. Therefore, synonymous words, which often appear in similar contexts, will have similar embeddings.
    • Training Process:
      • CBOW: Predicts a target word based on its surrounding context words. Words used in similar contexts will have similar embeddings.
      • Skip-gram: Predicts the context words given a target word, which also ensures that words appearing in similar contexts have similar embeddings.
    • Limitations:
      • Context Averaging: While synonymous words often have similar embeddings, the method does not explicitly account for nuances and degrees of synonymy. The quality of synonym handling depends on the quality and quantity of the training data.

GloVe

  • Description:
    • GloVe (Global Vectors for Word Representation) is a count-based model that learns word embeddings by factorizing the co-occurrence matrix of a corpus.
  • Handling Synonymy:
    • Co-occurrence Statistics: GloVe captures the global statistical information of word co-occurrences. Words that frequently co-occur with similar sets of words will have similar embeddings.
    • Training Process:
      • Co-occurrence Matrix: Constructed from the corpus, capturing how often words appear together within a context window.
      • Matrix Factorization: Factorizes the co-occurrence matrix to learn word vectors such that words with similar co-occurrence patterns have similar embeddings.
    • Limitations:
      • Global Context: While GloVe captures co-occurrence information effectively, it does not explicitly differentiate synonyms beyond their co-occurrence patterns. This can lead to less precise synonym handling compared to more context-aware methods.

fastText

  • Description:
    • fastText extends Word2Vec by incorporating subword information, representing words as bags of character n-grams, which allows for better handling of rare and morphologically complex words.
  • Handling Synonymy:
    • Subword Information: By breaking words into character n-grams, fastText captures morphological information, which helps in generating similar embeddings for synonyms, especially those with similar roots or stems.
    • Context Similarity: Like Word2Vec, fastText embeddings are based on context, so words appearing in similar contexts will have similar embeddings.
    • Training Process:
      • Subword Representation: Each word is represented as a bag of character n-grams, and the word vector is the sum of its subword vectors.
      • Enhanced Context Handling: This method allows fastText to capture nuances of meaning and similarity better than Word2Vec.
    • Limitations:
      • Single Embedding: While fastText improves synonym handling through subword information, it still assigns a single embedding per word, which can blur subtle differences between synonyms.

Summary

  • GloVe, fastText, and Word2Vec capture word similarity based on context, allowing them to generate similar embeddings for synonyms. However, they do not explicitly differentiate synonyms beyond their co-occurrence and context patterns.
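  • As a quick illustration of this context-driven similarity, here is a minimal sketch assuming the gensim library and its downloadable glove-wiki-gigaword-50 vectors; nearest neighbors in embedding space typically include near-synonyms:

```python
import gensim.downloader as api

# Load pretrained 50-dimensional GloVe vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-50")

# Words used in similar contexts end up with similar vectors,
# so nearest neighbors tend to include near-synonyms.
print(vectors.most_similar("happy", topn=5))
print(vectors.similarity("happy", "glad"))
```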

Handling Polysemy and Synonymy: Contextual and Multi-Sense Embeddings

  • Given the limitations of Word2Vec, GloVe, and fastText in handling polysemy and synonymy, more advanced techniques have been developed:

    1. Contextual Word Embeddings:
      • ELMo (Embeddings from Language Models):
        • Generates context-dependent embeddings by considering the entire sentence. Each word’s vector changes based on its usage in the sentence, effectively distinguishing different senses.
      • BERT (Bidirectional Encoder Representations from Transformers):
        • Produces embeddings that vary with context, enabling it to handle polysemy by providing different representations for different senses based on the surrounding words.
    2. Multi-Sense Embeddings:
      • sense2vec:
        • An extension of Word2Vec that assigns different vectors to different senses of a word. It relies on pre-tagged data to differentiate between senses.
      • Multi-Sense Skip-Gram:
        • Extends the Skip-gram model to learn multiple embeddings per word, each corresponding to a different sense. This involves clustering contexts and learning separate vectors for each cluster.

Summary

  • Advanced Techniques like ELMo, BERT, and multi-sense embeddings provide more sophisticated methods to handle polysemy by generating context-dependent embeddings or multiple embeddings for different senses, thereby offering a more nuanced understanding of words with multiple meanings and their synonyms.
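  • As a minimal sketch of what “context-dependent” means in practice (assuming the Hugging Face transformers and torch packages and the bert-base-uncased checkpoint), the same surface word “bank” receives noticeably different vectors in different sentences:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence):
    """Return BERT's final hidden state for the token 'bank' in the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]          # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index("bank")]

v_river = bank_vector("He sat on the bank of the river.")
v_money = bank_vector("She deposited cash at the bank.")
print(torch.cosine_similarity(v_river, v_money, dim=0))  # noticeably below 1.0
```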

Example: TF-IDF, BM25, Word2Vec, and GloVe Embeddings

  • Let’s expand on the example involving the word “cat” to illustrate how different embedding techniques (TF-IDF, BM25, Word2Vec, and GloVe) might represent it. We’ll consider the same documents as before:
    • Document 1: “Cat sat on the mat.”
    • Document 2: “Dog sat on the log.”
    • Document 3: “Cat chased the dog.”

TF-IDF Embedding for “Cat”

  • In TF-IDF, each word in a document is assigned a weight. This weight increases with the number of times the word appears in the document but is offset by the frequency of the word in the corpus.
  • TF-IDF assigns a weight to a word in each document, reflecting its importance. The steps are:
    • Calculate Term Frequency (TF): Count of “cat” in each document divided by the total number of words in that document.
    • Calculate Inverse Document Frequency (IDF): Logarithm of the total number of documents divided by the number of documents containing “cat”.
    • Multiply TF by IDF for each document.
  • For instance, the TF-IDF weight for the word “cat” in Document 1 would be calculated as follows (simplified calculation):
    • Term Frequency (TF) of “cat” in Document 1 = 1/5 (it appears once out of five words).
    • Inverse Document Frequency (IDF) of “cat” = log(3/2) (it appears in 2 out of 3 documents, and we use the logarithm to dampen the effect).
    • TF-IDF for “cat” in Document 1 = TF * IDF = (1/5) * log(3/2).
  • Final TF-IDF weights for “cat” across the three documents: approximately [0.081, 0, 0.101], using the natural logarithm and no normalization (Document 3 scores slightly higher than Document 1 because it is shorter); the computation is sketched below.
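  • The following minimal Python sketch reproduces these numbers from scratch, using the same TF \(\times\) log(N/df) formula as above (natural logarithm, no smoothing or normalization; punctuation is dropped for simplicity):

```python
import math

docs = [
    "Cat sat on the mat".lower().split(),
    "Dog sat on the log".lower().split(),
    "Cat chased the dog".lower().split(),
]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)                  # term frequency in this document
    df = sum(1 for d in corpus if term in d)         # number of documents containing the term
    idf = math.log(len(corpus) / df)                 # inverse document frequency
    return tf * idf

print([round(tf_idf("cat", d, docs), 3) for d in docs])  # [0.081, 0.0, 0.101]
```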

BM25 Embedding for “Cat”

  • BM25 builds on top of TF-IDF and is thus more complex. It considers term frequency, document frequency, document length, and two tuning parameters, \(k_1\) and \(b\) (see the formula below). The final BM25 score for “cat” in each document might look like this (assuming certain values for \(k_1\) and \(b\)):
  • Final BM25 Score for “Cat”: [2.5, 0, 2.3] (hypothetical values).
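  • For reference, the standard per-term BM25 score is \(\text{score}(D, q) = \text{IDF}(q) \cdot \frac{f(q, D)\,(k_1 + 1)}{f(q, D) + k_1 \left(1 - b + b \cdot \frac{|D|}{\text{avgdl}}\right)}\), where \(f(q, D)\) is the frequency of term \(q\) in document \(D\), \(|D|\) is the document length, \(\text{avgdl}\) is the average document length in the corpus, and typical settings are \(k_1 \in [1.2, 2.0]\) and \(b = 0.75\).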

Word2Vec Embedding for “Cat”

  • Word2Vec provides a dense vector for each word. This vector is learned based on the context in which the word appears across the entire corpus, not just our three documents as in the example above.
  • The model might represent the word “cat” as a vector, such as [0.76, -0.21, 0.58, ...] (assuming a 3-dimensional space for simplicity, but in reality, these vectors often have hundreds of dimensions).

GloVe Embedding for “Cat”

  • GloVe, like Word2Vec, provides a dense vector for each word based on the aggregate global word-word co-occurrence statistics from a corpus.
  • Hypothetical GloVe Embedding for “Cat”: In a 3-dimensional space, [0.81, -0.45, 0.30]. As with Word2Vec, real-world GloVe embeddings would have a much higher dimensionality.

In these examples, it’s important to note that the TF-IDF and BM25 scores depend on the context of the specific documents, whereas the Word2Vec and GloVe embeddings are more general, trained on a larger corpus and representing a word’s meaning in a broader context. On the flip side, Word2Vec, GloVe, and fastText embeddings lack contextualized representations (so they cannot represent polysemous words effectively); models such as ELMo and BERT overcome that limitation using contextualized embeddings. The specific values used here for TF-IDF, BM25, Word2Vec, and GloVe are illustrative and would vary based on the actual computation and dimensions used.

fastText Embedding for “Cat”

  • fastText, like Word2Vec and GloVe, is a method for learning word embeddings, but it differs in its treatment of words. fastText treats each word as a bag of character n-grams, which allows it to better represent rare words or words not seen during training by breaking them down into smaller units.
  • Hypothetical fastText Embedding for “Cat”: Assuming a 3-dimensional space, [0.72, -0.25, 0.63]. Like the others, real fastText embeddings typically have a much higher dimensionality.
  • In this expanded example, the key addition of fastText is its ability to handle out-of-vocabulary words by breaking them down into n-grams, offering a more flexible representation, especially for languages with rich morphology or many word forms. The specific values for fastText, like the others, are illustrative and depend on the actual training corpus, model parameters, and dimensionality used.

BERT Embeddings

  • For more details about BERT embeddings, please refer to the BERT primer.

Comparative Analysis

Count-Based Techniques (TF-IDF and BM25)

Pros
  1. Simplicity and Efficiency: Easy to implement and computationally efficient, suitable for basic information retrieval tasks.
  2. Effectiveness in Document Retrieval: Particularly good at identifying documents relevant to specific terms, thanks to their focus on term frequency.
Cons
  1. Lack of Semantic Understanding: They don’t capture deeper semantic relationships between words, leading to limited contextual interpretation.
  2. Sparse Representations: Can result in high-dimensional and sparse vectors, which are less efficient for complex NLP tasks.

Co-occurrence Based/Static Embedding Techniques (Word2Vec, GloVe, fastText)

Pros
  1. Semantic Relationship Modeling: Capable of capturing complex semantic relationships between words, offering richer representations.
  2. Subword Information (fastText): fastText’s consideration of subword elements aids in understanding morphology and handling out-of-vocabulary words.
Cons
  1. Fixed Context: Static embeddings assign a single, context-independent representation to each word, limiting their effectiveness in contextually varied scenarios.
  2. Computational Intensity: Requires significant computational resources for training on large corpora.

Contextualized Representation Techniques (BERT, ELMo)

Pros
  1. Context-Sensitive: They provide dynamic word representations based on context, leading to a more nuanced understanding of language.
  2. State-of-the-Art Performance: Excel in a wide range of NLP tasks, offering superior performance compared to previous models.
Cons
  1. Computational Requirements: Demand extensive computational power and larger datasets for training.
  2. Complexity in Implementation: More complex to implement and integrate into applications compared to simpler models like TF-IDF.

Summary: Types of Embeddings

  • In the field of NLP, a variety of embedding techniques have been developed, each suited to specific applications and use cases. This article categorizes and delves into different types of word embeddings and their functionalities.

Bag-of-Words-based Embeddings

  • These embeddings do not consider the order of words.
    • Bag of Words (BoW): The simplest text representation method, BoW is a count-based approach that tallies the occurrences of each word in a document. However, it disregards any information about the order or structure of words, treating the text as a mere “bag” of words: only word counts matter, not their positioning within the document.
    • TF-IDF (Term Frequency-Inverse Document Frequency): An advanced version of count vectors, TF-IDF considers the frequency of words in a document as well as their overall frequency in the corpus. Common words like “the” have lower TF-IDF scores, while unique or rare words have higher scores, reflecting their relative importance.

Predictive Word Embeddings

  • These models predict words based on their context.
    • Word2Vec: A neural network-based model that learns to represent words as vectors in a high-dimensional space. Words with similar meanings are represented by proximate vectors. Word2Vec facilitates capturing meanings, semantic similarities, and relationships within text, exemplified by analogies like king - man + woman = queen.
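  • As an illustration of such vector arithmetic, here is a minimal sketch assuming the gensim library and its downloadable word2vec-google-news-300 vectors (a large download on first use):

```python
import gensim.downloader as api

# Pretrained 300-dimensional Word2Vec vectors trained on Google News.
vectors = api.load("word2vec-google-news-300")

# king - man + woman ~= queen, expressed via most_similar's positive/negative lists.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# Expected to rank 'queen' at or near the top.
```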

Contextual and Sequential Data Embeddings

  • Representing order and context of words, and suited for sequential data like text.
    • Recurrent Neural Networks (RNNs): RNNs, and their advanced variants like LSTMs (Long Short-Term Memory), are adept at handling sequential data. They process inputs in a sequence, with each step’s output feeding into the next, capturing information from previous steps.
    • Transformer: A model that revolutionized NLP with its encoder-decoder architecture, leveraging self-attention mechanisms. Transformers excel in learning long-range dependencies, allowing them to focus on specific parts of the input sequence and better understand sentence meanings.

Contextual Embeddings

  • These consider the order and context of words.

    • ELMo (Embeddings from Language Models): Generates contextual embeddings from the internal states of a bi-directional LSTM.
    • BERT (Bidirectional Encoder Representations from Transformers) Embeddings: Provides contextual embeddings based on the entire context of word usage.

Sentence/Document Embeddings

  • For broader textual units like sentences or documents.

    • Doc2Vec: Extends Word2Vec to represent entire documents.
    • Sentence-BERT: Adapts BERT for sentence-level embeddings.
    • Universal Sentence Encoder: Encodes sentences into vectors for various tasks.

Positional Embeddings

  • Encodes the position of words within sequences.

    • Absolute Positional Embeddings: Used in Transformers to encode the absolute position of each word (the sinusoidal formulation is given after this list).
    • Relative Positional Embeddings: Focuses on relative distances between words, beneficial in models like Transformer-XL and T5.
    • Rotary Positional Embeddings/RoPE (Rotary Positional Encoding): Employs rotational operations to encode relative positions.
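  • As an example of an absolute scheme, the sinusoidal encoding from the original Transformer paper assigns position \(pos\) and dimension pair \((2i, 2i+1)\) the values \(PE_{(pos,\,2i)} = \sin\!\left(pos / 10000^{2i/d_{\text{model}}}\right)\) and \(PE_{(pos,\,2i+1)} = \cos\!\left(pos / 10000^{2i/d_{\text{model}}}\right)\); the resulting vector is added to the token embedding.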

Relative Embeddings

  • Capture relative positions between word pairs in sequences.

    • Relative Positional Embeddings: Encodes the relative positioning of words, like in the sentence “Alice threw the ball to Bob,” where “ball” has a relative position to other words. In Transformer models, the difference between positions \(i\) and \(j\) in the input sequence is used to retrieve corresponding embedding vectors, enhancing the model’s ability to generalize to new sequence lengths.
  • This categorization of embedding techniques underscores the diversity and evolution of approaches in representing linguistic elements in NLP, each with distinct advantages and suited for specific applications.

Matryoshka Representation Learning

  • Proposed in Matryoshka Representation Learning by Kusupati et al. from UW, Matryoshka Representation Learning (MRL) is a novel approach for adaptive and efficient representation learning. This technique, adopted in OpenAI’s latest embedding update, text-embedding-3-large, is characterized by its ability to encode information at multiple granularities within a single high-dimensional vector. Drawing an analogy from the Russian Matryoshka dolls, MRL encapsulates details at various levels within a single embedding structure, allowing for adaptability to the computational and statistical needs of different tasks.
  • The essence of MRL lies in its ability to create coarse-to-fine representations, where earlier dimensions in the embedding vector store more crucial information and subsequent dimensions add finer details. A helpful analogy is classifying an image at multiple resolutions: the lower resolutions give high-level information and the higher resolutions add finer details; human perception of the natural world likewise has a naturally coarse-to-fine granularity.

  • MRL achieves this by modifying the loss function in the model, where the total loss is the sum of losses computed over nested prefixes of the embedding vector: \(Loss_{\text{Total}} = L(\text{up to } 8d) + L(\text{up to } 16d) + L(\text{up to } 32d) + \ldots + L(\text{up to } 2048d)\). As a result, MRL incentivizes the model to capture essential information in each nested subsection of the vector. Notably, this technique allows for the use of any subset of the embedding dimensions, offering flexibility beyond fixed dimension slices like 8, 16, 32, etc. (a minimal sketch of this nested-loss idea appears at the end of this section).
  • As the paper illustrates, MRL is adaptable to any representation learning setup and begets a Matryoshka Representation \(z\) by optimizing the original loss \(L(\cdot)\) at \(O(\log(d))\) chosen representation sizes. The resulting Matryoshka Representation can be utilized effectively for adaptive deployment across environments and downstream tasks.

  • MRL’s adaptability extends to a wide range of modalities, including vision, vision+language, and language models (such as ViT, ResNet, ALIGN, and BERT). The method has shown remarkable results in various applications, such as adaptive classification and retrieval, robustness evaluations, few-shot and long-tail learning, and analyses of model disagreement. In practical terms, MRL facilitates up to 14x smaller embedding sizes for tasks like ImageNet-1K classification without compromising accuracy, up to 14x real-world speed-ups for large-scale retrieval, and up to 2% accuracy improvements in long-tail few-shot classification.
  • One of the striking outcomes of using MRL is demonstrated in OpenAI’s text-embedding-3-large model, which, when trimmed to 256 dimensions, outperforms the full-sized text-embedding-ada-002 with 1536 dimensions on the MTEB benchmark. This indicates a significant reduction in size (to about 1/6th) while maintaining or even enhancing performance.
  • Importantly, MRL integrates seamlessly with existing representation learning pipelines, requiring minimal modifications and imposing no additional costs during inference and deployment. Its flexibility and efficiency make it a promising technique for handling web-scale datasets and tasks. The authors have made pretrained models and code for MRL publicly available, underlining the method’s potential as a game-changer in the field of representation learning.
  • Code; OpenAI Blog
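  • To make the nested-loss idea concrete, here is a minimal PyTorch-style sketch (not the authors’ implementation; the classifier heads, dimension list, and variable names are hypothetical) that sums a task loss over prefixes of the embedding vector:

```python
import torch
import torch.nn.functional as F

def matryoshka_loss(embeddings, labels, heads, dims=(8, 16, 32, 64)):
    """Sum a classification loss over nested prefixes of the embedding.

    embeddings: (batch, d) tensor produced by some encoder (hypothetical).
    heads: dict mapping each prefix size to a linear classifier whose
           input dimension matches that prefix (hypothetical setup).
    """
    total = embeddings.new_zeros(())
    for d in dims:
        logits = heads[d](embeddings[:, :d])   # use only the first d dimensions
        total = total + F.cross_entropy(logits, labels)
    return total

# Toy usage: a 64-d encoder output, 10 classes, one head per prefix size.
heads = {d: torch.nn.Linear(d, 10) for d in (8, 16, 32, 64)}
emb = torch.randn(4, 64)
labels = torch.randint(0, 10, (4,))
print(matryoshka_loss(emb, labels, heads))
```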

Citation

If you found our work useful, please cite it as:

@article{Chadha2021Distilled,
  title   = {Word Vectors},
  author  = {Jain, Vinija and Chadha, Aman},
  journal = {Distilled AI},
  year    = {2021},
  note    = {\url{https://aman.ai}}
}