
Google's Smart Reply : a summary of NLP techniques and insights used

One of the more recent additions to Google's plethora of services is Smart Reply. As of now, Smart Reply humbly seeks to provide short, crisp responses for replying to emails on the go. Interestingly enough, a lot has gone into developing the system into a deployable, efficient package, which has been found useful by a large fraction of the user base.




Google's Smart Reply system can be understood as an amalgam of four key components:

1. Response Selection
2. Response Set Generation
3. Suggestion Diversity
4. Triggering Model

Response Selection:

Google uses a neural network with LSTM (Long Short-Term Memory) cells to generate responses.
The email corpus used for training the network was extracted from Google's own mail database after anonymization. It consists of around 238 million messages, of which 153 million have no response.

Given an email e and the set of all possible responses, the score of a response r is the probability P(r | e) assigned by the model.
The top k responses are then taken for further processing.
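The selection step above can be sketched as a simple top-k ranking over model scores. The log-probabilities below are invented placeholders standing in for real LSTM outputs; only the ranking logic is the point.

```python
# Hypothetical scores log P(r | e) that the LSTM might assign to each
# candidate response r for an incoming email e. The numbers are
# illustrative, not real model outputs.
candidate_scores = {
    "Thanks for the update!": -1.2,
    "Sounds good to me.": -1.9,
    "I'll take a look.": -2.4,
    "See you then!": -5.8,
}

def top_k_responses(scores, k=3):
    """Rank candidate responses by log-probability and keep the top k."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [response for response, _ in ranked[:k]]

print(top_k_responses(candidate_scores, k=3))
```

In the real system the scoring itself is the expensive part; this sketch assumes the scores are already available.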

Response Set Generation:

To ensure better quality of responses, i.e. to reduce redundant responses such as "Thanks for the update.", "Thank you for the update!" and "Thanks for the status update!", Google uses something called semantic intent clustering. The sentences are parsed using a dependency parser and a canonicalized representation is created. Thereafter, each response is assigned to a semantic cluster, a broad category capturing the intent of the message. For example, "Haha", "LOL!", etc. would be categorised as funny.
To achieve this task of semantic clustering, Google uses a semi-supervised learning algorithm based on scalable graph algorithms, which can learn automatically from the data and a few human-annotated samples.
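A toy illustration of the canonicalize-then-cluster idea: lowercase and strip punctuation so surface variants collapse together, then look the result up in a small hand-labelled seed set. Google's production system instead uses a dependency parser and a scalable semi-supervised graph algorithm; the seed labels and rules here are invented for illustration.

```python
import string

# Tiny hand-labelled seed set mapping canonical forms to intent clusters
# (hypothetical; the real clusters are learned from data).
SEED_CLUSTERS = {
    "thanks for the update": "thanks",
    "thank you for the update": "thanks",
    "haha": "funny",
    "lol": "funny",
}

def canonicalize(response):
    """Strip punctuation and case so surface variants collapse together."""
    table = str.maketrans("", "", string.punctuation)
    return response.lower().translate(table).strip()

def assign_cluster(response):
    """Map a response to its intent cluster, or 'unknown' if unseen."""
    return SEED_CLUSTERS.get(canonicalize(response), "unknown")

print(assign_cluster("Thanks for the update!"))  # thanks
print(assign_cluster("LOL!"))                    # funny
```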

Suggestion Diversity:

The idea behind this is to present no two responses with the same intent to the user. The more variety there is in response intents, the more utility for the user. This is done by checking intents and enforcing both positive and negative variations of intent, by filtering the response space suitably. The filtering mechanism classifies candidate responses as affirmative, missing (indirect) negatives, and exclusive negatives, and picks the best-suited response from each.
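The diversity filter can be sketched as keeping only the highest-scoring response per intent cluster, so no two suggestions carry the same meaning. The scores and cluster labels below are illustrative placeholders, not real system values.

```python
# Each candidate: (response text, intent cluster, model score).
# Values are invented for illustration.
scored_candidates = [
    ("Thanks for the update!", "thanks", -1.2),
    ("Thank you for the update!", "thanks", -1.4),
    ("Sounds good to me.", "affirmative", -1.9),
    ("I can't make it, sorry.", "negative", -2.6),
]

def diversify(candidates):
    """Keep the best-scoring response from each intent cluster."""
    best = {}
    for response, intent, score in candidates:
        if intent not in best or score > best[intent][1]:
            best[intent] = (response, score)
    # Return one suggestion per intent, highest score first.
    return [r for r, _ in sorted(best.values(), key=lambda p: -p[1])]

print(diversify(scored_candidates))
```

Note that the two "thanks" variants collapse into one suggestion, freeing a slot for a negative-intent reply.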

Triggering Model:

The triggering model gates the response generator by deciding whether a response should be suggested for the mail at all. This decision is taken with respect to factors such as whether the mail is auto-generated, and whether short replies are appropriate for it (since it could be a sensitive letter demanding more composition).

The model was built using a feed-forward neural network that produces a probability score for each mail. If the score is below a threshold, it doesn't trigger the response mechanism, and hence no response is generated.


Conclusion:

The efforts have surely paid off. Google has reported that around 10% of mobile replies use Smart Reply, which is surely a good sign. Also, the system is language-agnostic and hence can be extended to other languages in the future.
From this we see that Google attacked each of the sub-problems, which were primarily in the NLP domain, individually and in a novel way, and combined everything to create a market-ready product. Some interesting challenges that one can think of to further this work are:
1. How to compose longer and, at the same time, legitimate mails?
2. How to take references to the real world into context and use them in the response? (In other words, grasp a proper noun, say Los Angeles, as a location and not a simple token, and possibly craft a better-fitted response.)













