
WIKIFIER


Recent trends have shown text classification to be an important aspect of Natural Language Processing. Wikification is one such task: recognizing mentions in text and associating them with their relevant Wikipedia pages.
This blog post describes Wikifier, a software tool by the Cognitive Computation Group that takes text as input and returns the same text enhanced with links to encyclopedic resources such as Wikipedia. In doing so, the Wikifier can assist human readers and automated systems in three key ways:
  • By resolving entities and clarifying ambiguous and variable text.
  • By providing background knowledge about unfamiliar topics.
  • By grounding controversial claims from partisan sources in impartial encyclopedic resources.
A snapshot of the Wikifier, with a sample input and its wikified output, is shown below:

[Screenshot: the Wikifier interface showing an example input text and the resulting wikified output]
In the wikified text, entities are highlighted. For instance, "Mubarak" could refer either to the song "Ishq Mubarak" or to Suzanne Mubarak, wife of former Egyptian President Hosni Mubarak. Based on context, the Wikifier clearly disambiguates the mention and links it to Suzanne Mubarak.

APPROACH

The Wikifier takes a document as input and considers its set of mentions (the highlighted entities), M = {m1, m2, …, mn}. Following the approach of Ratinov et al. (2011), the system maps this set of mentions to a corresponding set of Wikipedia titles, W = {w1, w2, …, wk}. Since some mentions have no Wikipedia page at all, a special NULL Wikipedia title is added to handle this case.
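As a rough illustration of this setup, the sketch below builds a candidate title list for a mention and appends the NULL title. The CANDIDATES dictionary and the candidate_titles function are hypothetical stand-ins for the anchor-text statistics a real system would mine from Wikipedia dumps, not the Wikifier's actual implementation:

    # Toy candidate generator: maps a mention string to candidate
    # Wikipedia titles, appending NULL for the "no page" case.
    # CANDIDATES is a hypothetical stand-in for anchor-text statistics.

    NULL = None  # special title for mentions without a Wikipedia page

    CANDIDATES = {
        "Mubarak": ["Suzanne_Mubarak", "Hosni_Mubarak", "Ishq_Mubarak"],
        "Washington": ["George_Washington", "Washington,_D.C."],
    }

    def candidate_titles(mention):
        """Return the candidate Wikipedia titles for a mention, plus NULL."""
        return CANDIDATES.get(mention, []) + [NULL]

    print(candidate_titles("Mubarak"))
    # -> ['Suzanne_Mubarak', 'Hosni_Mubarak', 'Ishq_Mubarak', None]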
Visualising this as a bipartite graph, with the mentions as one set of nodes and the Wikipedia titles as the other, the approach performs a two-level optimization for disambiguation:
  • Local Disambiguation
  • Global Disambiguation
The graph below illustrates the optimization, with dark edges marking the correct title for each mention.
[Figure: bipartite graph between mentions and candidate Wikipedia titles; dark edges indicate the correct assignments]
In the local approach, each mention m_i is disambiguated independently using a score function f(m_i, t_j) that measures how likely it is that title t_j correctly disambiguates mention m_i, for instance by comparing the title's content with the document. The assignment Γ with the highest total score is chosen:

$\Gamma^*_{\text{local}} = \arg\max_{\Gamma} \sum_{i=1}^{N} f(m_i, t_i)$
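A minimal sketch of this local argmax is shown below, assuming f(m_i, t_j) is a simple bag-of-words cosine between the document and some text associated with each candidate title; the real Wikifier combines far richer features, so this only illustrates the per-mention maximization:

    # Local disambiguation sketch: each mention is resolved on its own
    # by the candidate title whose (assumed) associated text best
    # matches the document, using a plain bag-of-words cosine as f.
    from collections import Counter
    import math

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def local_disambiguate(doc_words, candidates, title_words):
        """Independently pick argmax_t f(m, t) for every mention m."""
        doc_vec = Counter(doc_words)
        gamma = {}
        for mention, titles in candidates.items():
            gamma[mention] = max(
                titles,
                key=lambda t: cosine(doc_vec, Counter(title_words.get(t, []))),
            )
        return gamma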
Picking each title independently, however, ignores how well the chosen titles cohere with one another as a set. Global optimization therefore adds a coherence function ψ over the full assignment:

$\Gamma^* = \arg\max_{\Gamma} \left[ \sum_{i=1}^{N} f(m_i, t_i) + \psi(\Gamma) \right]$
This global optimization problem is NP-hard, so in practice ψ(Γ) is approximated by a sum of pairwise relatedness scores ψ(t_i, t_j) between candidate titles, yielding an approximately optimal Γ*.
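A toy version of this objective is sketched below: it exhaustively searches all assignments and scores each with the local terms plus pairwise coherence. Exhaustive search is only feasible on tiny inputs, which is exactly why the NP-hard problem is approximated in practice; the f and psi arguments are assumed to be scoring callables supplied by the caller:

    # Global disambiguation sketch: brute-force search over assignments
    # Gamma, maximizing sum_i f(m_i, t_i) + sum_{i<j} psi(t_i, t_j).
    from itertools import combinations, product

    def global_disambiguate(mentions, candidates, f, psi):
        """Return the assignment maximizing local scores plus coherence."""
        best, best_score = None, float("-inf")
        for assignment in product(*(candidates[m] for m in mentions)):
            score = sum(f(m, t) for m, t in zip(mentions, assignment))
            score += sum(psi(a, b) for a, b in combinations(assignment, 2))
            if score > best_score:
                best, best_score = assignment, score
        return dict(zip(mentions, best))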

LIMITATION

Unlike mentions taken from Wikipedia itself, which are likely to have corresponding Wikipedia pages, mentions drawn from general text often do not. The primary challenge is therefore deciding when a mention has no Wikipedia page at all and should map to NULL.
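One common way to handle this case, sketched below as an assumption rather than the Wikifier's exact mechanism, is to link a mention to NULL whenever its best candidate score falls below a confidence threshold:

    # NULL-handling sketch (assumed strategy, not the Wikifier's exact
    # mechanism): fall back to NULL when no candidate title scores
    # above a threshold. The threshold value here is illustrative and
    # would be tuned on labelled data in a real system.
    NULL = None

    def link_or_null(mention, titles, f, threshold=0.2):
        scored = [(f(mention, t), t) for t in titles if t is not NULL]
        if not scored:
            return NULL
        best_score, best_title = max(scored)
        return best_title if best_score >= threshold else NULL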

REFERENCES

1. L. Ratinov, D. Roth, D. Downey, and M. Anderson, "Local and Global Algorithms for Disambiguation to Wikipedia", ACL (2011).
2. X. Cheng and D. Roth, "Relational Inference for Wikification", EMNLP (2013).
