
Coreference Resolution

In computational linguistics, coreference (sometimes written co-reference) occurs when two or more expressions in a text refer to the same person, thing, or situation [1]. For example: "FC Barcelona president Joan Laporta has warned Chelsea off star striker Lionel Messi. This warning has generated discouragement in Chelsea." Here, "This warning" refers back to the first sentence.
The goal of coreference resolution is to determine the correct interpretation of a text, or even to estimate the relative importance of the various subjects mentioned, by connecting pronouns and referring expressions to the right individuals [7].
Humans associate these references naturally, but for a computer program this is difficult. The main types of coreference are:

  1. Anaphora - "The house is not for sale. We do not want to let it go."
  2. Cataphora - "We do not want to let it go. The house is not for sale."
  3. Exophora - "Spitting is not allowed here"
In this blog, we are mainly going to focus on anaphora. An anaphor is an expression whose interpretation depends on another expression, called the antecedent. In the example above, "The house" is the antecedent and "it" is the anaphor.
The basic approach to resolving coreference starts with finding mentions: words or phrases that potentially refer to real-world entities.

For example, consider a short conversation containing the mentions:
"My brother", "a friend named Alice", "me", "her", "he", "she".

Mentions are generally detected by recursively visiting the parse tree and selecting pronouns, noun phrases, and proper names [6].
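The recursive traversal above can be sketched as follows. This is a toy illustration, not a real parser: the parse tree is assumed to be a nested `(label, children-or-word)` structure with hypothetical Penn-style tags, and we collect noun phrases and pronouns as candidate mentions (proper names surface inside noun phrases here).

```python
def iter_leaves(node):
    """Yield (POS tag, word) pairs from a toy parse tree."""
    label, payload = node
    if isinstance(payload, str):
        yield label, payload
    else:
        for child in payload:
            yield from iter_leaves(child)

def collect_mentions(tree):
    """Recursively visit the parse tree, gathering candidate mentions."""
    mentions = []

    def visit(node):
        label, payload = node
        if isinstance(payload, str):           # leaf: (POS tag, word)
            if label in ("PRP", "NNP"):        # lone pronouns and proper names
                mentions.append(payload)
            return
        if label == "NP":                      # whole noun phrase is one mention
            mentions.append(" ".join(w for _, w in iter_leaves(node)))
        for child in payload:
            visit(child)

    visit(tree)
    return list(dict.fromkeys(mentions))       # drop duplicates, keep order

# "My brother met Alice."
tree = ("S", [
    ("NP", [("PRP$", "My"), ("NN", "brother")]),
    ("VP", [("VBD", "met"),
            ("NP", [("NNP", "Alice")])]),
])
print(collect_mentions(tree))  # ['My brother', 'Alice']
```

Real systems would run a constituency parser first and apply filtering rules on top of this raw candidate list.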

Earlier approaches were heuristic, grounded in linguistic theories. For each mention and each pair of mentions, we compute a set of features encoding syntactic constraints (e.g., the two mentions should agree in gender) and semantic/pragmatic constraints (e.g., they should concern the same topic).
We then find the most likely antecedent for each mention, if one exists, based on these features.
These features are generally handcrafted, and there can be many of them. Hobbs [3] proposed a tree-search algorithm that walks the syntactic parse tree to find the first mention satisfying the given constraints.
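A minimal sketch of such a rule-based resolver is shown below. Each mention carries hand-crafted agreement features (gender, number), and a pronoun is linked to the most recent preceding non-pronoun mention whose features agree. The tiny feature lexicon is made up for illustration only.

```python
# Hand-crafted agreement features for a handful of mentions (illustrative only).
FEATURES = {
    "my brother": {"gender": "m", "number": "sg"},
    "alice":      {"gender": "f", "number": "sg"},
    "he":         {"gender": "m", "number": "sg"},
    "her":        {"gender": "f", "number": "sg"},
}

PRONOUNS = {"he", "she", "him", "her", "it", "they"}

def resolve(mentions):
    """Return {pronoun_index: antecedent_index} using agreement constraints."""
    links = {}
    for i, m in enumerate(mentions):
        if m.lower() not in PRONOUNS:
            continue
        for j in range(i - 1, -1, -1):         # scan backwards: most recent first
            cand = mentions[j]
            if cand.lower() in PRONOUNS:
                continue                        # prefer a full noun phrase
            if FEATURES[m.lower()] == FEATURES[cand.lower()]:
                links[i] = j
                break
    return links

mentions = ["My brother", "Alice", "He", "her"]
print(resolve(mentions))  # {2: 0, 3: 1}: "He" -> "My brother", "her" -> "Alice"
```

Real heuristic systems stack many more constraints (number, animacy, binding, salience) on top of this backward scan.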

Unsupervised learning techniques based on Bayesian models with latent Dirichlet processes were also developed, but their results turned out to be less satisfactory.

Now come modern NLP techniques like word vectors and neural networks. These approaches let the model learn much of what was previously hand-crafted, reducing the number of features while maintaining good accuracy.
For example:
"Her is a feminine pronoun and is more likely to refer to Alice than to my brother, which is masculine."

This way, we can either define such rules for our model, or represent each word in the vocabulary as a vector (word2vec) and let the model train on a well-annotated corpus without supplying any prior information about gender.
The accuracy of the model, however, depends heavily on the training corpus. Most NLP corpora are built from news articles, which are formal text; they do not provide the informal language that a chatbot is generally expected to handle.
We can also feed our own text into the model, but it will then work on some mentions and fail on others that are less formal.
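The word-vector idea can be illustrated with cosine similarity. The 3-dimensional vectors below are made-up numbers; real word2vec vectors are learned from a corpus and have hundreds of dimensions. The point is that a trained model can place "her" distributionally closer to feminine names without ever seeing an explicit gender feature.

```python
import math

# Toy "word vectors" (values invented for illustration).
VEC = {
    "her":     [0.9, 0.1, 0.3],
    "alice":   [0.8, 0.2, 0.4],
    "brother": [0.1, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# "her" is closer to "alice" than to "brother" in this toy space.
print(cosine(VEC["her"], VEC["alice"]) > cosine(VEC["her"], VEC["brother"]))  # True
```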

Now comes the state-of-the-art technique for resolving coreference, which is none other than neural networks. Lee et al. [4] introduce an end-to-end coreference resolution model that, they claim, significantly outperforms all previous work without using a syntactic parser or a hand-engineered mention detector. "The key idea is to directly consider all spans in a document as potential mentions and learn distributions over possible antecedents for each" [4]. The model also incorporates contextual knowledge when scoring spans.
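The "distribution over possible antecedents" can be sketched schematically. In the model of [4], each candidate antecedent of a span gets a score and a dummy antecedent ε (meaning "no antecedent") gets a fixed score of 0; a softmax over these scores gives the distribution. The pair scores below are made-up numbers standing in for the neural network's output.

```python
import math

def antecedent_distribution(pair_scores):
    """Softmax over candidate antecedents plus the dummy antecedent (score 0)."""
    scores = pair_scores + [0.0]                 # last slot is the dummy "no antecedent"
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Candidate antecedents for the span "her": ["My brother", "Alice"] (scores invented).
dist = antecedent_distribution([0.5, 2.5])
best = max(range(len(dist)), key=dist.__getitem__)
print(best)  # 1 -> "Alice" is the most probable antecedent
```

Training then maximizes the marginal likelihood of all correct antecedents, which lets the model learn mention detection and linking jointly.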

[5] also uses neural networks for coreference resolution. In this model, some simple context information, obtained by averaging the word vectors surrounding each mention, is added. "The first neural net gives a score for each pair of a mention and a possible antecedent while a second neural net gives a score for a mention having no antecedent (sometimes a mention is the first reference to an entity in a text). Then simply comparing all these scores together and taking the highest score to determine whether a mention has an antecedent and which one it should be" [5].
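That comparison step can be sketched as follows. One function scores each (mention, candidate-antecedent) pair and another scores the "no antecedent" option; the highest score wins. In the real system both scorers are neural networks over word-vector features; the lambdas and scores here are illustrative stand-ins.

```python
def resolve_mention(mention, candidates, pair_score, single_score):
    """Pick the highest-scoring antecedent, or None if 'no antecedent' wins."""
    best, best_s = None, single_score(mention)   # None means "no antecedent"
    for cand in candidates:
        s = pair_score(mention, cand)
        if s > best_s:
            best, best_s = cand, s
    return best

# Made-up scores: "her" pairs strongly with "Alice"; "Alice" is a first mention.
pair = lambda m, c: {("her", "Alice"): 3.0, ("her", "My brother"): -1.0}.get((m, c), -5.0)
single = lambda m: 0.5

print(resolve_mention("her", ["My brother", "Alice"], pair, single))  # Alice
print(resolve_mention("Alice", [], pair, single))                     # None
```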

References:

[1] https://en.wikipedia.org/wiki/Coreference
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3226856/
[3] Jerry R. Hobbs. Resolving pronoun references. Lingua. 1978;44(4):311–338. doi:10.1016/0024-3841(78)90006-2.
[4] Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. End-to-end Neural Coreference Resolution. arXiv:1707.07045 [cs.CL].
[5] https://medium.com/huggingface/state-of-the-art-neural-coreference-resolution-for-chatbots-3302365dcf30
[6] http://www.cs.upc.edu/~ageno/anlp/coreference.pdf
[7] Daniel Jurafsky and James H. Martin. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition.
