Natural Language Processing • NLP Tasks
- Overview
- Named Entity Recognition (NER)
- Dependency Parsing
- Sentiment Analysis
- Text Summarization
- Question Answering
- Text Classification
- Citation
Overview
- In this article, we look at several core tasks that can be solved with NLP: named entity recognition, dependency parsing, sentiment analysis, text summarization, question answering, and text classification, along with the model architectures commonly used for each.
Named Entity Recognition (NER)
- Named Entity Recognition (NER) is an integral process in information extraction, aiming to locate and categorize named entities within a text into predefined categories. These categories can include names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, and more. In essence, the task of NER is to find and classify names within a text, making it a critical tool in many NLP applications.
- An entity in NER can be any word or sequence of words that consistently refers to the same thing. Each identified entity is classified into a predetermined category; for instance, an NER model might identify the term “OpenAI” within a text and classify it as a “Company”. This classification is typically achieved by analyzing a context window of neighboring words around each word in the text.
- An NER model typically works in two steps: first detect a named entity, then categorize it. This is achieved by building a context window of word vectors, which feeds into a neural network layer followed by a logistic classifier that identifies the specific entity type, such as “location”.
- This process is illustrated in slides from the Stanford CS224n course.
- For NER, a Bidirectional LSTM (BiLSTM) combined with a Conditional Random Field (CRF) layer is a commonly used architecture. The BiLSTM captures context from both directions for each token in the sentence, while the CRF layer models dependencies between adjacent labels, so each token's predicted class takes its neighbors' predictions into account. More recently, Transformer-based models like BERT have shown strong performance on NER, given their ability to model the context of each word in a sentence.
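- To make this concrete, below is a minimal sketch of Transformer-based NER using the Hugging Face transformers pipeline API. The checkpoint name (dslim/bert-base-NER, a public BERT model fine-tuned for NER) is an illustrative assumption, not a canonical choice; any token-classification checkpoint would work in its place.

```python
# Minimal NER sketch using the Hugging Face `transformers` pipeline.
# Assumes `pip install transformers` plus a token-classification checkpoint;
# "dslim/bert-base-NER" is one publicly available BERT fine-tune for NER.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

text = "OpenAI is headquartered in San Francisco."
for entity in ner(text):
    # Each result carries the span text, its predicted type, and a confidence score.
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
# Expected output along the lines of:
#   OpenAI        ORG  0.99...
#   San Francisco LOC  0.99...
```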
Dependency Parsing
- Dependency parsing is an NLP task that aims to extract a sentence's dependency parse, representing its grammatical structure and defining the relationships between “head” words and the words that modify those heads. Essentially, it examines the dependencies between the words of a sentence to establish its grammatical structure.
- A sentence is dissected into components based on the interdependencies among its linguistic units. This rests on the idea that every linguistic unit in a sentence is connected to another by a direct link; these links are termed dependencies. Parsing a sentence thus involves identifying, for each word, which other word it depends on.
- A neural dependency parser, like the one proposed by Chen and Manning in 2014, feeds dense embeddings of words, part-of-speech tags, and dependency labels from the current parser configuration into a feed-forward network that predicts the next parsing transition, ultimately producing a structured representation of the sentence's grammatical dependencies.
- This task is illustrated in slides from the Stanford CS224n course.
- For dependency parsing, architectures traditionally used include transition-based parsers and graph-based parsers. Transition-based parsers use a stack to process the sentence left-to-right and create the parse, while graph-based parsers consider all possible parses and select the highest scoring one.
- With the advent of deep learning, BiLSTM-based dependency parsers have also been proposed, which consider the entire sentence context for making parsing decisions.
- More recently, Transformer-based models have also been utilized for this task: the Transformer encodes the sentence, and a separate parser predicts the dependencies. This architecture captures sentence-wide dependencies better, improving parsing accuracy.
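- As a quick illustration, the sketch below uses spaCy (assuming spaCy and its small English model en_core_web_sm are installed), whose parser is a transition-based neural parser in the spirit described above. It prints each word, its dependency label, and its head.

```python
# Dependency-parsing sketch with spaCy.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog.")

for token in doc:
    # token.dep_ is the dependency label; token.head is the governing word.
    print(f"{token.text:>6} --{token.dep_}--> {token.head.text}")
# e.g. "fox --nsubj--> jumps" and "dog --pobj--> over"
```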
Sentiment Analysis
- Sentiment analysis, also known as opinion mining, refers to the application of natural language processing, text analysis, and computational linguistics to identify and extract subjective information from source materials. It aims to determine the attitude of a speaker, writer, or text towards a particular topic, product, or overall contextual polarity of a document.
- Sentiment analysis can be performed at three levels: the document level, the sentence level, and the entity and aspect level. It is widely used in areas such as brand management, product analytics, and understanding customer sentiment.
- For sentiment analysis, one popular architecture is the Long Short-Term Memory (LSTM) network, a type of Recurrent Neural Network (RNN) that can capture long-term dependencies in text. Other architectures include Convolutional Neural Networks (CNNs), which efficiently extract local features. More recently, Transformer-based models such as BERT, GPT, or RoBERTa, which capture complex contextual relationships between words, have shown superior performance on this task.
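- As a minimal sketch, the snippet below runs a Transformer-based sentiment classifier through the transformers pipeline API. With no explicit model argument, the pipeline falls back to a default English sentiment checkpoint, so pinning a specific model is advisable in practice.

```python
# Sentiment-analysis sketch using a Transformer via the `transformers` pipeline.
# Without a model argument, the pipeline loads a default English sentiment
# checkpoint; pin a specific model for reproducible results.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

reviews = [
    "The battery life on this laptop is outstanding.",
    "The screen cracked within a week. Very disappointed.",
]
for review, result in zip(reviews, sentiment(reviews)):
    print(f"{result['label']:8} ({result['score']:.3f})  {review}")
# Expected: POSITIVE for the first review, NEGATIVE for the second.
```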
Text Summarization
- Text summarization is another significant task in NLP that involves producing a succinct summary of lengthy texts while preserving key information content and overall meaning. There are two primary types of summarization: extractive and abstractive.
- Extractive summarization involves extracting key phrases and sentences from the original text to create the summary, essentially creating a subset of the original content. Abstractive summarization, on the other hand, generates a new summary, potentially using words and phrases not present in the original text, much like a human would summarize a document.
- Text summarization is often approached with sequence-to-sequence models, such as those based on LSTM or GRU (Gated Recurrent Units) networks. These models read the input text as a sequence and generate the summary as another sequence. For abstractive summarization, Transformer-based models like T5 or BART have shown strong performance due to their capacity for understanding and generating complex text.
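- The sketch below illustrates abstractive summarization with a BART checkpoint via the transformers pipeline; facebook/bart-large-cnn is a publicly available model fine-tuned on news summarization and is used here purely as an example, not as a canonical choice.

```python
# Abstractive-summarization sketch with a BART checkpoint via `transformers`.
# "facebook/bart-large-cnn" is a public model fine-tuned for news summarization.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Natural language processing has advanced rapidly in recent years. "
    "Transformer models pre-trained on large corpora can be fine-tuned for "
    "tasks such as summarization, translation, and question answering, often "
    "outperforming earlier recurrent architectures by a wide margin."
)
# Length limits are in tokens; greedy decoding keeps the output deterministic.
summary = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```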
Question Answering
- Question answering is an NLP task where a system accurately answers a human-posed question. This task can range from answering simple factoid questions, like “Who is the president of the United States?” to more complex ones that require reasoning and understanding context, such as “What factors led to World War II?”
- The goal of a question answering system is to provide accurate, succinct, and relevant answers to user queries. The development of such systems involves a deep understanding of both natural language understanding and generation, making it a challenging but impactful task in the field of NLP.
- Question answering tasks have seen great advancements with the introduction of Transformer architectures, especially BERT and its variants. These models are pre-trained on a large corpus of text and fine-tuned for the specific question answering task, making them powerful tools for understanding context and generating precise answers.
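- As an illustration, the sketch below performs extractive question answering with the transformers pipeline, which selects an answer span from a supplied context paragraph. The default SQuAD-tuned checkpoint it loads is an assumption here and can be swapped for any question-answering model.

```python
# Extractive question-answering sketch using a BERT-style model via `transformers`.
# The pipeline extracts the answer as a span of the supplied context paragraph.
from transformers import pipeline

qa = pipeline("question-answering")  # loads a default SQuAD-tuned checkpoint

context = (
    "Stanford CS224n covers natural language processing with deep learning, "
    "including word vectors, dependency parsing, and Transformer architectures."
)
result = qa(question="What does Stanford CS224n cover?", context=context)
print(result["answer"], f"(score: {result['score']:.3f})")
```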
Text Classification
- Text classification is one of the fundamental tasks in NLP. It involves categorizing text into groups based on its content, and is widely used in applications such as spam filtering, sentiment analysis, and topic labeling. By predicting a class for each piece of text, large collections of documents become more manageable and easier to interpret.
- Text classification can be tackled with a variety of architectures depending on the complexity of the task. Traditional approaches include CNNs and RNNs, including their gated variants like LSTMs and GRUs, which can capture the sequential information in text. For more complex tasks, Transformer-based models like BERT or XLNet can be used, offering superior performance by leveraging self-attention mechanisms to understand the context of each word in a text.
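- As a simple baseline sketch (with made-up toy data), the snippet below combines TF-IDF features with logistic regression in scikit-learn for a spam-vs-ham classifier. A real system would need a labeled corpus and, for harder tasks, one of the Transformer models noted above.

```python
# Classical text-classification baseline: TF-IDF features + logistic regression.
# Toy spam-filtering example with made-up data; real use needs a labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Win a free prize now, click here!",
    "Limited offer: claim your reward today",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review my pull request this afternoon?",
]
labels = ["spam", "spam", "ham", "ham"]

# The pipeline vectorizes raw strings, then fits a linear classifier on top.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Claim your free reward now"]))     # likely ['spam']
print(clf.predict(["Agenda for tomorrow's meeting"]))  # likely ['ham']
```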
Citation
If you found our work useful, please cite it as:
```bibtex
@article{Chadha2021Distilled,
  title   = {Neural Nets},
  author  = {Jain, Vinija and Chadha, Aman},
  journal = {Distilled Notes for Stanford CS224n: Natural Language Processing with Deep Learning},
  year    = {2021},
  note    = {\url{https://aman.ai}}
}
```