
Language Modelling using Convolutional and Recurrent Neural Networks

Introduction

In this post I will tell you about the use of various Neural Network architectures for Language Modelling. We have already seen in the lectures how Naïve Bayes can be used as a unigram Language Model and how, in general, N-gram models capture the probability of a sentence. Given a context (the words from position n-N+1 to n-1), these models predict which words are most likely to occur next.
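To make this concrete, here is a minimal count-based bigram (N=2) sketch in Python; the toy corpus and the relative-frequency estimate are illustrative assumptions, not part of any particular system:

# A minimal count-based bigram model, just to make the N-gram idea concrete.
# The corpus below is made up purely for illustration.
from collections import Counter, defaultdict

corpus = "today i will make a cake . let us make a salad .".split()

bigram_counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1

def next_word_probs(context_word):
    """P(w | context_word) estimated by relative frequency."""
    counts = bigram_counts[context_word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("make"))  # {'a': 1.0} -- "a" always follows "make" here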

A previous post on this blog [1] explains the basics of Neural Networks and why Multi Layer Perceptrons are helpful in NLP: they can learn useful features/representations from the training set and hence lead to a stronger Language Model.

Here are a few nice tutorials which you can go through before proceeding if you want to know how CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks) work: [2], [3], [4].

This post is divided into three sections:
The first explains how words/suffixes/etc. can be fed to a Neural Network using vector representations.
The second presents how CNNs have been used for the related problem of sentence modelling and classification.
The third presents how RNNs have been used for Language Modelling.

Neural Networks for NLP

Neurons in modern networks no longer imitate neurons in the human brain; they are simply computational units which take a vector as input and produce a scalar as output, by taking a weighted sum of the inputs and applying an activation function (such as ReLU). Neural Nets share the same mathematics as Logistic Regression, but are more powerful: the latter requires carefully chosen and engineered features, while the former is able to learn these automatically.
A typical Multi Layer Perceptron; Image credits: [6]
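As a minimal sketch of this computation, here is a single neuron and a layer of such neurons in NumPy; the sizes and random weights below are placeholders chosen purely for illustration:

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def neuron(x, w, b):
    """One computational unit: a weighted sum of the inputs plus a bias,
    passed through an activation function (here ReLU)."""
    return relu(np.dot(w, x) + b)

def mlp_layer(x, W, b):
    """A layer is just many neurons sharing the same input vector."""
    return relu(W @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)          # 4-dimensional input vector
W = rng.standard_normal((3, 4))     # 3 neurons, each with 4 weights
b = rng.standard_normal(3)
print(mlp_layer(x, W, b))           # 3 scalar outputs, one per neuron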

The input to a Neural Network needs to be a vector of real values, hence for a Language Model (LM) we need a way to represent words as vectors. This is done by embedding each word into a vector space of dimension d, typically in the range 50 to 500 [5], depending on the system. The advantage of this representation (called an embedding) is that similar words can be assigned similar vectors, for example "cake" and "mousse": both have the same PoS tag and both are food items. So if the training set contains the sentence "Today, I will make a cake" and "mousse" occurs only in a different sentence, an N-gram model would never generate the sentence "Let's make a mousse", while with vector representations "mousse" would have a high chance of occurring after "make a".
Example of word embeddings, (a) is a sparse representation while (b) is dense; Image credits: [6]
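A toy illustration of why dense embeddings help: cosine similarity lets us measure how alike two words are. The vectors below are invented for illustration, not real embeddings (real systems use d between 50 and 500):

import numpy as np

# Hand-made 4-dimensional "embeddings", purely illustrative.
emb = {
    "cake":   np.array([0.9, 0.8, 0.1, 0.0]),
    "mousse": np.array([0.8, 0.9, 0.2, 0.1]),
    "car":    np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(emb["cake"], emb["mousse"]))  # high: similar words
print(cosine(emb["cake"], emb["car"]))     # low: dissimilar words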

Further details on vector semantics can be found in chapter 15 of [5].

CNNs for Sentence Modelling and Classification

A Sentence Model analyses and represents the semantic structure of a sentence. Since any individual sentence is rarely encountered more than once (considering all possible sentences of a particular length), we have to represent a sentence in terms of features which depend on the short N-grams that occur in the training set. The process of extracting these features is at the core of a sentence model. Previous approaches to this include exploiting co-occurrence statistics, building extracted logical forms, etc. As shown in [7], CNNs can be used to model sentences. The proposed network handles sentences of varying length and is composed of 1D Convolutional Layers, interleaved with what the authors call Dynamic k-Max Pooling Layers. This pooling layer is much like regular max pooling, except that instead of the single maximum value it outputs the k largest values. Each layer produces multiple feature maps, as a result of convolving the filters of the neurons with the vector representation of the sentence.
Basic CNN components for NLP; Image credits: [9]
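The k-max pooling operation itself is easy to sketch. The version below keeps the k largest activations of a 1-D feature map while preserving their left-to-right order, which is the key property described in [7]; the example values are made up:

import numpy as np

def kmax_pooling(feature_map, k):
    """k-max pooling over one 1-D feature map: keep the k largest
    activations, preserving their original (left-to-right) order."""
    top_k_idx = np.sort(np.argpartition(feature_map, -k)[-k:])
    return feature_map[top_k_idx]

fm = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.8])  # output of a 1-D convolution
print(kmax_pooling(fm, k=3))                   # [0.9 0.7 0.8]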

This leads to the induction of a structured feature map over the sentence which is akin to a parse tree, but also captures semantic relationships, and is internal to the network.
Structured feature map induced on the sentence; Image credits: [7]

The exact specifications of the network (dubbed DCNN) can be looked up in [7], and other architectures can be found in [9]. An interesting result in this paper is the accuracy of the DCNN on the six-way Question Classification problem, where a sentence needs to be classified into 6 coarse categories (such as location, person, numeric information, etc.) and 50 fine-grained categories, see [8]. The key point is that methods such as SVMs require up to 60 hand-crafted features, unigrams, PoS tags and so on, whereas the DCNN learns all the features itself and achieves similar accuracy!
Accuracies on Question Classification task; Image credits: [7]
Features (7-grams) learned in the first layer of the network; Image credits: [7]

RNNs for Language Modelling

An RNN is essentially a feed-forward Neural Network with a feedback loop. Unfolded over time, this leads to a structure that looks like a series of copies of the network, where the copy at time t takes as input both the input vector x(t) and the representations computed by the copy at time t-1. See this figure:
A simple RNN; Image credits: [10]
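A minimal sketch of this unfolding in NumPy; the tanh nonlinearity is a common choice, and all sizes and weights below are illustrative assumptions:

import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    """One unfolded time step: the new hidden state mixes the current
    input x(t) with the previous hidden state h(t-1)."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

rng = np.random.default_rng(0)
d_in, d_hid = 5, 8                          # illustrative sizes
W_xh = rng.standard_normal((d_hid, d_in)) * 0.1
W_hh = rng.standard_normal((d_hid, d_hid)) * 0.1
b = np.zeros(d_hid)

h = np.zeros(d_hid)                         # h(0): empty context
for x_t in rng.standard_normal((4, d_in)):  # a sequence of 4 input vectors
    h = rnn_step(x_t, h, W_xh, W_hh, b)     # h carries context forward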

Having understood the idea of the DCNN, it is easy to think of RNNs as Language Models. The network at time t has access to the context (features/representations) of the previous t-1 steps, which parallels the rationale behind an N-gram model predicting the nth word from the context of the previous n-1 words. Details of the network can be looked up in [10]. The result of using RNNs is a massive 50% drop in perplexity compared to state-of-the-art backoff models with interpolation, and an 18% reduction in word error rate on the WSJ task.
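As a rough sketch of how such a network acts as a Language Model, the toy code below turns the hidden state into a distribution over the next word. The one-hot inputs, tanh recurrence, tiny vocabulary, and all sizes are illustrative assumptions, not the exact architecture of [10]:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_lm_step(x_t, h_prev, W_xh, W_hh, W_hy):
    """One step of a simple RNN language model: update the hidden state,
    then map it to a probability distribution over the vocabulary."""
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev)
    return h_t, softmax(W_hy @ h_t)

vocab = ["let", "us", "make", "a", "mousse"]
V, d = len(vocab), 8
rng = np.random.default_rng(1)
W_xh = rng.standard_normal((d, V)) * 0.1   # input: one-hot word vector
W_hh = rng.standard_normal((d, d)) * 0.1
W_hy = rng.standard_normal((V, d)) * 0.1

h = np.zeros(d)
for word in ["let", "us", "make", "a"]:
    x = np.eye(V)[vocab.index(word)]       # one-hot encoding of the word
    h, p_next = rnn_lm_step(x, h, W_xh, W_hh, W_hy)
# p_next estimates P(w | "let us make a"); untrained weights give
# near-uniform probabilities -- training would sharpen them.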

Conclusion

Neural Networks can be used for NLP tasks by converting words to embeddings. CNNs and RNNs outperform or match the accuracy of traditional methods on a variety of NLP tasks, without the need for manually engineered features. The main drawback of these models currently is the time it takes to train them, which may be on the order of weeks!


References:
[1] Use of Neural Networks in Natural Language Processing, Himani Kaira
[2] Convolutional Neural Networks, CS231n Stanford
[3] Recurrent Neural Networks, DL4J
[4] The Unreasonable Effectiveness of Recurrent Neural Networks, Andrej Karpathy
[5] Vector Semantics, Speech and Language Processing, Dan Jurafsky
[6] A Primer on Neural Network Models for Natural Language Processing, Goldberg
[7] A Convolutional Neural Network for Modelling Sentences, Kalchbrenner et al.
[8] Learning Question Classifiers, Li & Roth
[9] Convolutional Neural Networks for Sentence Classification, Kim
[10] Recurrent Neural Network based Language Model, Mikolov et al.
