Named Entity Recognition

  • Named entity recognition is a first step towards information extraction: it seeks to locate and classify named entities in text into pre-defined categories such as the names of persons, organizations, locations, expressions of time, quantities, monetary values, percentages, etc.
  • The task in named entity recognition is to find and classify names in text.
  • Named entity recognition (NER) — sometimes referred to as entity chunking, extraction, or identification — is the task of identifying and categorizing key information (entities) in text. An entity can be any word or series of words that consistently refers to the same thing.
  • Every detected entity is classified into a predetermined category. For example, an NER machine learning (ML) model might detect the word “super.AI” in a text and classify it as a “Company”.
  • At its core, it can be framed as a simple NLP classification task: each word is classified using a context window of its neighboring words.
  • How does it work?
    • An NER model is a two-step process (illustrated in the spaCy example after this list):
      • Detect a named entity
      • Categorize the entity
    • Concretely: build a context window of word vectors around each word, feed it through a neural network layer, and apply a logistic classifier to score the center word for a specific entity type such as location (a minimal sketch of this pipeline follows below).
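
A quick way to see both steps in practice is an off-the-shelf tagger. The snippet below is a minimal illustration, assuming spaCy and its small English model (en_core_web_sm) are installed; the sentence and the resulting labels are only examples.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple opened a new office in Seattle in 2023.")

for ent in doc.ents:
    # Step 1: the span was detected as an entity; step 2: it was assigned a category.
    print(ent.text, "->", ent.label_)   # e.g. Apple -> ORG, Seattle -> GPE, 2023 -> DATE
```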
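The window-based classifier described above can be sketched as follows. This is an illustrative sketch in PyTorch, assuming a window of two words on each side and a binary decision (is the center word a location?); all sizes, names, and ids are placeholders.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, WINDOW = 10_000, 50, 2   # 2 neighboring words on each side

class WindowNER(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)            # word vectors
        self.hidden = nn.Linear((2 * WINDOW + 1) * EMBED_DIM, 100)  # neural network layer
        self.classifier = nn.Linear(100, 1)                         # logistic classifier

    def forward(self, window_ids):
        # window_ids: (batch, 2 * WINDOW + 1) token ids centered on the word to classify
        x = self.embed(window_ids).flatten(start_dim=1)   # concatenate the window's word vectors
        h = torch.relu(self.hidden(x))
        return torch.sigmoid(self.classifier(h))          # P(center word is a location)

# Example: score one 5-word window (the ids are placeholders).
model = WindowNER()
window = torch.tensor([[12, 845, 3021, 9, 77]])
print(model(window))   # probability that the center token is a location
```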

Dependency Parsing

  • Dependency parsing is the task of extracting a dependency parse of a sentence that represents its grammatical structure and defines the relationships between “head” words and the words that modify those heads.
  • It is the process of examining the dependencies between the phrases of a sentence in order to determine its grammatical structure.
  • Based on this, a sentence is divided into parts. The process rests on the assumption that there is a direct link between each linguistic unit in a sentence; these links are called dependencies.
  • A sentence is parsed by choosing, for each word, which other word it depends on, i.e., its head (see the spaCy example after this list).
  • The neural dependency parser (Chen and Manning, 2014) represents each parser configuration with embeddings of words, part-of-speech tags, and dependency labels, which are fed to a feed-forward network that predicts the next transition (a rough sketch follows below).
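
The “one head per word” view is easy to see in a parser's output. The snippet below is a small illustration, assuming spaCy and en_core_web_sm are installed; the sentence is arbitrary.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog")

for token in doc:
    # Every word is attached to exactly one head word via a labeled dependency.
    print(f"{token.text:>6} --{token.dep_}--> {token.head.text}")
```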
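The Chen and Manning (2014) style scorer can be sketched roughly as below. This is an illustrative sketch, not the reference implementation: it assumes a transition-based (arc-standard) parser, omits feature extraction from the stack and buffer, collapses the transitions to unlabeled shift/left-arc/right-arc, and uses placeholder sizes; the cube activation follows the paper.

```python
import torch
import torch.nn as nn

N_WORDS, N_TAGS, N_LABELS = 18, 18, 12    # feature slots taken from the parser configuration
EMB, HIDDEN, N_TRANSITIONS = 50, 200, 3   # shift, left-arc, right-arc (unlabeled here)

class NeuralDepParser(nn.Module):
    def __init__(self, vocab, tags, labels):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, EMB)     # word embeddings
        self.tag_emb = nn.Embedding(tags, EMB)       # part-of-speech tag embeddings
        self.label_emb = nn.Embedding(labels, EMB)   # dependency label embeddings
        in_dim = (N_WORDS + N_TAGS + N_LABELS) * EMB
        self.hidden = nn.Linear(in_dim, HIDDEN)
        self.out = nn.Linear(HIDDEN, N_TRANSITIONS)

    def forward(self, word_ids, tag_ids, label_ids):
        # Concatenate embeddings of the words, POS tags, and arc labels drawn
        # from the current configuration (positions on the stack and buffer).
        x = torch.cat([self.word_emb(word_ids).flatten(1),
                       self.tag_emb(tag_ids).flatten(1),
                       self.label_emb(label_ids).flatten(1)], dim=1)
        h = torch.pow(self.hidden(x), 3)   # cube activation, as in the paper
        return self.out(h)                 # unnormalized scores over the next transitions

# Example: score one configuration (all ids are placeholders).
parser = NeuralDepParser(vocab=5_000, tags=45, labels=40)
w = torch.randint(0, 5_000, (1, N_WORDS))
t = torch.randint(0, 45, (1, N_TAGS))
lab = torch.randint(0, 40, (1, N_LABELS))
print(parser(w, t, lab))   # scores for shift / left-arc / right-arc
```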

Citation

If you found our work useful, please cite it as:

@article{Chadha2021Distilled,
  title   = {Neural Nets},
  author  = {Jain, Vinija and Chadha, Aman},
  journal = {Distilled Notes for Stanford CS224n: Natural Language Processing with Deep Learning},
  year    = {2021},
  note    = {\url{https://aman.ai}}
}