Latent Dirichlet Allocation

Introduction

With the growth of large-scale data, learning methods that automate data analysis have also been on the rise. One of the active areas in NLP is topic modelling. LDA, developed by David Blei, Andrew Ng, and Michael Jordan, is one of the most commonly used topic modelling algorithms. It is a generative model focused on information retrieval, and it can also be viewed as a dimensionality reduction technique.

The basic idea of LDA is that each document is a mixture of latent topics and each topic is a distribution over words. Given a corpus, LDA tries to discover the following:
  1. The set of topics.
  2. The distribution of words within each topic.
  3. The distribution of topics within each document.
Figure 1: Plate notation for LDA
Figure 1 above shows the plate notation for LDA. The outer plate represents documents, while the inner plate represents the choice of topics and words within a document. M is the number of documents and N is the number of words in a document. θ denotes the topic distribution for a document; α is the parameter of the Dirichlet prior on the per-document topic distributions, and β is the parameter of the Dirichlet prior on the per-topic word distributions. z denotes a topic and w denotes a word.
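To make the notation concrete, here is a minimal sketch of the generative process LDA assumes, written in Python with NumPy. The topic count, vocabulary size, and hyperparameter values are illustrative assumptions, not values prescribed by the model:

```python
import numpy as np

rng = np.random.default_rng(0)

K, V, N = 2, 6, 8           # topics, vocabulary size, words per document (illustrative)
alpha = np.full(K, 0.5)     # Dirichlet prior on the per-document topic distribution
beta = np.full(V, 0.1)      # Dirichlet prior on the per-topic word distribution

phi = rng.dirichlet(beta, size=K)   # per-topic word distributions (K x V)
theta = rng.dirichlet(alpha)        # topic distribution θ for one document

doc = []
for _ in range(N):
    z = rng.choice(K, p=theta)      # draw topic z for this word position
    w = rng.choice(V, p=phi[z])     # draw word w from topic z's distribution
    doc.append(w)
print(doc)                          # word ids of the generated document
```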

When applying LDA, you need to choose how many topics to extract and how many words to show for each topic. Let us look at an example of LDA.
  1. I like to eat broccoli and bananas.
  2. I ate a banana and spinach smoothie for breakfast.
  3. Chinchillas and kittens are cute.
  4. My sister adopted a kitten yesterday.
  5. Look at this cute hamster munching on a piece of broccoli.
Treat these as five different documents for which you want to generate topics. Applying LDA (Latent Dirichlet Allocation) to these documents produces results like the following.
  1. Sentences 1 and 2: 100% Topic A
  2. Sentences 3 and 4: 100% Topic B
  3. Sentence 5: 60% Topic A, 40% Topic B
Topic A: 30% broccoli, 15% bananas, 10% breakfast, 10% munching
Topic B: 20% chinchillas, 20% kittens, 20% cute, 15% hamster
From the generated topics you can interpret that Topic A is about food and Topic B is about animals.
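As a sketch of how this example could be reproduced in practice (using the gensim library here; the exact topic proportions and top words will vary with the library, the preprocessing, and the random seed):

```python
from gensim import corpora, models

docs = [
    "I like to eat broccoli and bananas",
    "I ate a banana and spinach smoothie for breakfast",
    "Chinchillas and kittens are cute",
    "My sister adopted a kitten yesterday",
    "Look at this cute hamster munching on a piece of broccoli",
]
stop = {"i", "to", "and", "a", "for", "are", "my", "at", "this", "on", "of", "the"}
texts = [[w for w in doc.lower().split() if w not in stop] for doc in docs]

dictionary = corpora.Dictionary(texts)                 # word <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words counts

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=50, random_state=1)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)          # top words per topic
print(lda[corpus[4]])               # topic mixture of sentence 5
```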

LDA is a bag-of-words model. It can be very useful for discovering the general theme of documents, and the model it learns can also be used to assign topics to documents outside the training corpus. LDA is often used in recommendation systems, document classification, data exploration, and document summarization.

Workflow

In the first step, data preprocessing is done: stopwords and other unnecessary words are removed from the documents, and the remaining words are stemmed before being given as input to LDA. This leaves a corpus of stemmed words. For learning, the LDA model mainly looks at the frequency of words in the corpus, and each word is treated as independent of the other words. This is known as the bag-of-words approach, as sketched below.
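A minimal preprocessing sketch along these lines, using NLTK's stopword list and Porter stemmer (one common choice among several):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)   # fetch the stopword list once

stemmer = PorterStemmer()
stop = set(stopwords.words("english"))

def preprocess(doc):
    tokens = doc.lower().split()                 # simple whitespace tokenization
    tokens = [t.strip(".,!?") for t in tokens]   # strip basic punctuation
    return [stemmer.stem(t) for t in tokens if t and t not in stop]

print(preprocess("Look at this cute hamster munching on a piece of broccoli."))
# e.g. ['look', 'cute', 'hamster', 'munch', ...]
```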

The inputs to LDA are the documents, the number of topics, and the number of words per topic. LDA assigns each word of a document to a topic and computes a score proportional to P(word|topic) * P(topic|document). This score is computed for every topic, and the word is then reassigned to a topic according to these probabilities. Repeating this for every word in every document constitutes one iteration; the more iterations you run, the more accurate the results become.
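This iterative reassignment is essentially collapsed Gibbs sampling. The following is a compact, self-contained sketch under the usual assumptions (documents given as lists of word ids, symmetric priors with illustrative values):

```python
import numpy as np

def gibbs_lda(docs, K, V, alpha=0.1, beta=0.01, iterations=100, seed=0):
    """docs: list of documents, each a list of word ids in [0, V)."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))   # topic counts per document
    nkw = np.zeros((K, V))           # word counts per topic
    nk = np.zeros(K)                 # total words per topic
    z = [[rng.integers(K) for _ in doc] for doc in docs]   # random initial topics
    for d, doc in enumerate(docs):   # count the initial assignments
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1
    for _ in range(iterations):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]          # remove this word's current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # P(topic|document) * P(word|topic), up to a constant
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())   # resample the topic
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return z, ndk, nkw
```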

Interpretation of results

After running LDA on the corpus, each topic is characterised by the words most strongly associated with it. Each document is assigned a topic or a mixture of topics as shown in Figure 1, and each word in a topic carries some weight of contribution to that particular topic.
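Continuing the Gibbs-sampling sketch above, the counts it returns can be turned into exactly these interpretable outputs: the most probable words per topic and a topic mixture per document. The helper names below are hypothetical:

```python
import numpy as np

def top_words(nkw, id2word, n=5, beta=0.01):
    """Most probable words for each topic, from the topic-word counts."""
    phi = (nkw + beta) / (nkw + beta).sum(axis=1, keepdims=True)
    return [[id2word[w] for w in np.argsort(-phi[k])[:n]] for k in range(len(phi))]

def doc_topics(ndk, alpha=0.1):
    """Per-document topic mixtures, from the document-topic counts."""
    return (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
```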

References

  1. https://edlab.tc.columbia.edu/blog/13139-Topic-Modeling-with-LDA-in-NLP-data-mining-in-Pressible
  2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5028368/
  3. "Statistical Topic Modelling for news articles" by Suganya C, and Vijaya M S.
  4. "Latent Diriclet Alloation" by David M. Blei, Andrew Y. Ng, and Michael I. Jordan.
  5. http://pythonhosted.org/trustedanalytics/LdaNewPlugin_Summary.html
  6. http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation/
