
Topic Modelling in NLP

Topic Modelling is essentially a method for discovering the topics in a document. These topics are sets of words that best describe the document, and they help us understand and organize large amounts of information.
In the era of exponentially increasing data, it is very difficult to understand and summarize large collections of unstructured textual documents. Topic Modelling not only helps us understand the semantics of documents, it also helps us annotate them.
It recovers hidden and recurring patterns in text and can be considered a text mining tool.


There are multiple topic models, such as Latent Dirichlet Allocation (LDA), Probabilistic Latent Semantic Analysis (PLSA), and TextRank.

Latent Dirichlet Allocation (LDA)
LDA is a statistical model. The intuition behind LDA is that every document contains a mixture of topics, and every word in the document is attributable to one of those topics. It differs from PLSA in assuming that each document is a mixture of only a small number of topics and that each topic uses only a small set of words very frequently.
Topics are identified on the basis of the likelihood of term co-occurrence. A word appearing with probability p in topic A may occur with probability p’ in topic B in the same document; however, the word's set of neighbours would be different under the two topics.



LDA is a generative model. To find the topics in a document, we need to specify the number of topics, i.e., the value of k.
Working steps :
  • Iterate over each word in every document and assign it randomly to one of the k topics
  • This random assignment gives an initial distribution, which will not be accurate
  • Until the assignments reach a steady state :
    • For each word w in every document d, calculate
    • p ( topic t | document d ) = number of words in doc d assigned topic t / number of words in document d
    • p ( word w | topic t ) = number of times word w is assigned topic t in all docs / number of words assigned topic t in all docs
    • Reassign w to topic t’ with probability p ( topic t’ | doc d ) * p ( word w | topic t’ )
  • Thus we have the probability that topic t’ is associated with word w, as the sketch below illustrates
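
Below is a minimal Python sketch of this iterative reassignment (a simplified, collapsed-Gibbs-sampling-style update). The toy corpus, the value of k, the add-one smoothing, and the fixed number of sweeps are all illustrative assumptions, not part of the algorithm's definition.

import random
from collections import defaultdict

docs = [["apple", "banana", "fruit"],
        ["goal", "match", "team"],
        ["fruit", "team", "apple"]]
k = 2
random.seed(0)

# Step 1: assign every word randomly to one of the k topics
assignments = [[random.randrange(k) for _ in doc] for doc in docs]

def counts():
    doc_topic = [defaultdict(int) for _ in docs]  # words in doc d with topic t
    word_topic = defaultdict(int)                 # times word w assigned topic t
    topic_total = defaultdict(int)                # words assigned topic t overall
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = assignments[d][i]
            doc_topic[d][t] += 1
            word_topic[(w, t)] += 1
            topic_total[t] += 1
    return doc_topic, word_topic, topic_total

# Step 2: repeatedly reassign each word (50 sweeps as a stand-in for
# "until steady state")
for _ in range(50):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            doc_topic, word_topic, topic_total = counts()
            old = assignments[d][i]
            # exclude the current word from the counts before scoring
            doc_topic[d][old] -= 1
            word_topic[(w, old)] -= 1
            topic_total[old] -= 1
            scores = []
            for t in range(k):
                p_t_d = (doc_topic[d][t] + 1) / (len(doc) - 1 + k)       # p(t|d), smoothed
                p_w_t = (word_topic[(w, t)] + 1) / (topic_total[t] + 1)  # p(w|t), smoothed
                scores.append(p_t_d * p_w_t)
            # reassign w in proportion to p(t'|d) * p(w|t')
            assignments[d][i] = random.choices(range(k), weights=scores)[0]

for d, doc in enumerate(docs):
    print(doc, assignments[d])

In a full implementation, the counts would be weighted by the Dirichlet priors (commonly written alpha and beta) rather than plain add-one smoothing, and iteration would stop when the assignments stop changing.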


Number of Topics
But how do we find the correct number of topics? One common approach uses perplexity: we train our model for a range of topic counts and calculate the perplexity for each. Where the downward curve makes an elbow bend, it marks the optimum number of topics.
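
As a rough sketch of such a sweep, gensim's LdaModel exposes a log_perplexity method that can be evaluated for each candidate k; the tiny texts below are placeholders, and in practice the bound should be computed on a held-out set rather than the training corpus.

from gensim.corpora import Dictionary
from gensim.models import LdaModel

texts = [["apple", "banana", "fruit"],
         ["goal", "match", "team"],
         ["fruit", "apple", "juice"]]  # toy stand-in corpus

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

for k in range(2, 6):
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=k, passes=10, random_state=0)
    bound = lda.log_perplexity(corpus)  # per-word likelihood bound
    print(k, 2 ** (-bound))             # gensim reports perplexity as 2^(-bound)

Plotting these values against k and picking the elbow of the curve gives the number of topics.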


Applications
Topic Modelling finds application in a wide range of areas. From information retrieval, networks, genetics, and images to bioinformatics, it is no longer confined to the domain of text analysis.
Its other applications include dimensionality reduction, recommendation systems, and text summarisation.


There are a lot of libraries we can use to train our topic models, such as gensim, scikit-learn, and MALLET.
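
As one brief, hedged example, here is how a small model could be fitted with scikit-learn's LatentDirichletAllocation; the three-document corpus and the choice of two topics are illustrative only.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["apple banana fruit juice",
        "goal match team player",
        "fruit juice apple smoothie"]

vec = CountVectorizer()
X = vec.fit_transform(docs)  # bag-of-words counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# print the top 3 words for each learned topic
terms = vec.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:3]
    print("Topic", idx, ":", [terms[i] for i in top])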

