
Word embeddings and an application in SMT

We are all aware of the (not so) recent advancements in word representation, such as Word2Vec, GloVe, etc., for various NLP tasks. Let's dig a little deeper into how they work, and why they are so helpful!

The basics: what is a word vector?

We need a mathematical way of representing words in order to process them. We call this representation a word vector. It can be as simple as a one-hot encoded vector with the size of the vocabulary. For example, if we had 3 words in our vocabulary {man, woman, child}, we could generate word vectors in the following manner:

Man : {0, 0, 1}
Woman : {0, 1, 0}
Child : {1, 0, 0}
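
In code, a one-hot encoding is just an indicator vector over the vocabulary. A minimal sketch in Python (the vocabulary ordering below is arbitrary and only chosen to match the toy example above):

```python
import numpy as np

# Toy vocabulary from the example above; the index order is arbitrary.
vocab = ["child", "woman", "man"]
word_to_index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    """Return a one-hot vector of length |vocabulary| for the given word."""
    vec = np.zeros(len(vocab))
    vec[word_to_index[word]] = 1.0
    return vec

print(one_hot("man"))    # [0. 0. 1.]
print(one_hot("woman"))  # [0. 1. 0.]
```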

Such an encoding cannot be used for any meaningful comparison, other than checking for equality. In embeddings such as Word2Vec, a word is instead represented as a distribution over some number of dimensions: each word is assigned a particular weight along each dimension. Picking up the previous example, the vectors could now look like this (assuming a 2-dimensional space):

Man : {0.9, 0.1}
Woman : {0.1, 0.9}
Child : {0.5, 0.5}

The different dimensions can be thought of as latent features, which are learnt by the model. In this case, it could be that the first dimension is related to "masculinity", and the second to "femininity".
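
Unlike one-hot vectors, these dense vectors can be compared meaningfully, for example with cosine similarity. A small sketch using the toy vectors above (the numbers are just the illustrative values from this post):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

man, woman, child = np.array([0.9, 0.1]), np.array([0.1, 0.9]), np.array([0.5, 0.5])

print(cosine_similarity(man, woman))  # low (~0.22): the two point in different directions
print(cosine_similarity(man, child))  # higher (~0.78): "child" sits between the other two
```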

Let's talk a little more about Word2Vec

Probably the most famous of the word embedding methods, Word2Vec was developed at Google. It first initialises the embeddings randomly. Then, using raw text as input, it learns the embedding for each word by predicting that word's context. These embeddings are trained using backpropagation.
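
In practice these embeddings are rarely trained from scratch by hand; a library such as gensim can learn skip-gram vectors from raw tokenised text. A hedged sketch (parameter names assume gensim 4.x, and the toy corpus is far too small to learn anything meaningful):

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens. A real corpus would have millions of tokens.
corpus = [
    ["the", "man", "went", "to", "the", "store"],
    ["the", "woman", "went", "to", "the", "market"],
    ["the", "child", "played", "outside"],
]

# sg=1 selects the skip-gram objective (predict context words from the centre word).
model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

print(model.wv["man"][:5])                   # first few components of the learnt vector
print(model.wv.similarity("man", "woman"))   # cosine similarity between two learnt vectors
```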

These vectors capture semantic relations!

The exciting part about these learnt vectors is that they capture both semantic and syntactic relationships between different tokens. This means that the vectors of similar words are themselves similar to one another. Not only that: word analogies can also be expressed through addition and subtraction of word vectors!

Quoting from Mikolov et al.

We find that the learned word representations, in fact, capture meaningful syntactic and semantic regularities in a very simple way ... For example, if we denote the vector for word i as x_i, and focus on the singular/plural relation, we observe that x_apple − x_apples ≈ x_car − x_cars, x_family − x_families ≈ x_car − x_cars, and so on.
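
This analogy arithmetic is easy to try with pretrained vectors. A sketch using gensim's downloader (the dataset name below is assumed to be available through gensim-data; any large pretrained embedding set would do):

```python
import gensim.downloader as api

# Download a set of pretrained vectors (assumed to be available via gensim's downloader).
vectors = api.load("glove-wiki-gigaword-100")

# king - man + woman ~= queen: analogies as vector arithmetic.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The singular/plural regularity from the Mikolov et al. quote: apples - apple + car ~= cars.
print(vectors.most_similar(positive=["apples", "car"], negative=["apple"], topn=3))
```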




Image from Turian et al. (2010)

An interesting application in Machine Translation

Socher et al. (2013) present a very interesting application of word embeddings in machine translation. They show that we can translate by learning to embed the word embeddings learnt from two different languages (English and Chinese) into the same space.

We can do this by first learning two different sets of embeddings, Wen and Wch, using the respective monolingual corpora. Additionally, we know a small curated set of translated word pairs. So while training, we can optimise for an additional constraint: the embeddings of the English and Chinese words in each known pair should lie close together in the shared space.
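
A heavily simplified sketch of this idea, assuming the monolingual embeddings are already trained and that only the alignment constraint is optimised here (in the actual setup this term is combined with the usual monolingual objectives; the words, dimensions and random vectors below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Hypothetical pre-trained monolingual embeddings (random stand-ins for Wen and Wch).
W_en = {w: rng.normal(size=dim) for w in ["dog", "cat", "house"]}
W_ch = {w: rng.normal(size=dim) for w in ["gou", "mao", "fangzi"]}

# Small curated dictionary of known translation pairs (English, Chinese).
seed_pairs = [("dog", "gou"), ("cat", "mao"), ("house", "fangzi")]

def alignment_loss():
    """Sum of squared distances between the embeddings of each known pair."""
    return sum(np.sum((W_en[e] - W_ch[c]) ** 2) for e, c in seed_pairs)

# A few gradient-descent steps that pull each known pair together in the shared space.
lr = 0.05
for step in range(100):
    for e, c in seed_pairs:
        diff = W_en[e] - W_ch[c]
        W_en[e] -= lr * diff  # gradient of the squared distance w.r.t. W_en[e] (constant folded into lr)
        W_ch[c] += lr * diff
    if step % 25 == 0:
        print(f"step {step}: alignment loss = {alignment_loss():.3f}")
```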

As expected, the embeddings of the known translation pairs ended up close together; after all, this was the constraint. Interestingly, words whose translations were not known during training also ended up close together, hence creating a mapping between English and Chinese words with similar meaning!



Image from Socher et al. (2013)

The above image is a t-SNE plot of the embeddings in a 2-D space. The green points are Chinese embeddings, and the yellow points are English embeddings.

