
Google's Smart Reply: a summary of the NLP techniques and insights used

One of the more recent additions to Google's plethora of services is Smart Reply. As of now, Smart Reply humbly seeks to provide short, crisp responses for replying to emails on the go. Interestingly enough, a lot has gone into developing the system into a deployable, efficient package, which has been found useful by a large fraction of the user base.




Google's Smart Reply system can be understood as an amalgam of four key components:

1. Response Selection
2. Response Set Generation
3. Suggestion Diversity
4. Triggering Model

Response Selection:

Google uses a recurrent neural network with LSTM (Long Short-Term Memory) cells to generate responses.
The email corpus used for training the network was extracted from Google's own mail database after anonymization. It consists of around 238 million messages, which include 153 million messages that have no response.

Given an email e and the set of all possible responses, the score of a response r is defined as P(r|e), the probability of the response given the email.
The top k responses are then taken for further processing.
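The ranking step can be sketched as follows. Note that `toy_logprob` is a stand-in for the trained LSTM's conditional log-probability, not Google's actual scorer; the function names and the crude "reward overlapping tokens" model are assumptions for illustration only.

```python
import heapq
import math

def score_response(email_tokens, response_tokens, logprob):
    # Hypothetical log P(r | e): sum per-token log-probabilities from a
    # (stand-in) conditional language model.
    return sum(logprob(tok, email_tokens) for tok in response_tokens)

def top_k_responses(email_tokens, candidates, logprob, k=3):
    # Score every candidate response and keep the k highest-scoring ones.
    return heapq.nlargest(
        k, candidates,
        key=lambda r: score_response(email_tokens, r.split(), logprob))

# Toy stand-in model: favors tokens that also appear in the email.
def toy_logprob(token, email_tokens):
    return 0.0 if token in email_tokens else math.log(0.1)

email = "thanks for the update".split()
candidates = ["thanks for the update", "see you tomorrow", "sounds good"]
print(top_k_responses(email, candidates, toy_logprob, k=2))
# → ['thanks for the update', 'sounds good']
```

In the real system the candidate set is not the space of all strings but the curated response set described next, which keeps this ranking tractable.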

Response Set Generation:

To ensure better quality of responses, i.e. to reduce redundant responses such as "Thanks for the update.", "Thank you for the update!" and "Thanks for the status update!", Google uses something called semantic intent clustering. The sentences are parsed using a dependency parser and a canonicalized representation is created. Thereafter, every response is assigned to a semantic cluster, which is a broad category of what the intent of the message is. For example, "Haha", "LOL!", etc. would be categorised as funny.
To achieve this task of semantic clustering, Google uses a semi-supervised learning algorithm built on scalable graph algorithms, which can learn automatically from the data plus a few human-annotated samples.
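A minimal sketch of the semi-supervised idea: canonicalize responses, connect similar ones in a graph, then propagate a handful of human-given cluster labels to the rest. The hand-built graph, the naive canonicalizer, and the majority-vote propagation are all simplifying assumptions; the paper uses dependency-parse canonicalization and a scalable graph-learning algorithm (EXPANDER), not this toy loop.

```python
from collections import Counter

def canonicalize(sentence):
    # Crude canonical form (lowercase, punctuation stripped); the real
    # system derives this from a dependency parse instead.
    return tuple(w.strip("!.,?").lower() for w in sentence.split())

def propagate_labels(graph, seeds, iterations=5):
    # Toy label propagation over a response-similarity graph: each
    # unlabeled node repeatedly takes the majority label among its
    # labeled neighbours, starting from a few human-annotated seeds.
    labels = dict(seeds)
    for _ in range(iterations):
        for node, neighbours in graph.items():
            if node in seeds:
                continue
            votes = Counter(labels[n] for n in neighbours if n in labels)
            if votes:
                labels[node] = votes.most_common(1)[0][0]
    return labels

# Hand-built similarity graph over canonicalized responses (hypothetical).
graph = {
    "thanks for the update": ["thank you for the update"],
    "thank you for the update": ["thanks for the update"],
    "haha": ["lol"],
    "lol": ["haha"],
}
seeds = {"thanks for the update": "thanks", "lol": "funny"}
print(propagate_labels(graph, seeds))
```

With only two seed labels, the remaining two responses inherit "thanks" and "funny" from their neighbours, which is exactly the leverage semi-supervision buys: a few annotations label the whole cluster.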

Suggestion Diversity:

The idea behind this is to never present two responses with the same intent to the user. The more variety in the response intents, the more useful the suggestions are. This is done by checking intents and enforcing both negative and positive variations of intent, filtering the response space suitably. The filtering mechanism classifies candidate intents as affirmative, missing (indirect) negatives, or exclusive negatives, and picks the best-suited responses from each.
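The two rules above, one suggestion per intent cluster and at least one response of each polarity, can be sketched like this. The cluster and polarity assignments are hand-made stand-ins for what the semantic clustering step would produce.

```python
def diversify(ranked, cluster_of, k=3):
    # Keep only the highest-ranked response from each intent cluster,
    # so no two suggestions say the same thing.
    chosen, seen = [], set()
    for resp in ranked:
        cluster = cluster_of[resp]
        if cluster not in seen:
            chosen.append(resp)
            seen.add(cluster)
        if len(chosen) == k:
            break
    return chosen

def enforce_polarity(chosen, ranked, polarity_of):
    # If every suggestion has the same polarity (all affirmative or all
    # negative), swap the last one for the best response of the missing
    # polarity so the user sees both options.
    polarities = {polarity_of[r] for r in chosen}
    if len(polarities) == 1:
        want = "negative" if "positive" in polarities else "positive"
        for resp in ranked:
            if polarity_of[resp] == want:
                return chosen[:-1] + [resp]
    return chosen

# Hypothetical ranked candidates with hand-assigned clusters/polarities.
ranked = ["Sure!", "Sounds good.", "Yes, I can.", "No, I can't."]
cluster_of = {"Sure!": "yes", "Sounds good.": "ok",
              "Yes, I can.": "yes", "No, I can't.": "no"}
polarity_of = {"Sure!": "positive", "Sounds good.": "positive",
               "Yes, I can.": "positive", "No, I can't.": "negative"}

suggestions = enforce_polarity(diversify(ranked, cluster_of, k=2),
                               ranked, polarity_of)
print(suggestions)  # → ['Sure!', "No, I can't."]
```

Without the polarity pass, both suggestions would be affirmative; the swap guarantees the user can decline as easily as accept.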

Triggering Model:

The triggering model gates the response generator by deciding whether a response should be suggested for the mail at all. This decision is taken with respect to factors like whether the mail is auto-generated, and whether short replies are appropriate for it (since it could be a sensitive letter demanding more careful composition).

The model was built using a feed-forward neural network, which produces a probability-based score for each mail. If the score is below a threshold, the response mechanism is not triggered, and hence no suggestions are generated!
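A minimal sketch of such a gate: a one-hidden-layer network over a few mail features, thresholded to a yes/no decision. The feature names and all the weights here are invented for illustration; the real model is trained on the mail corpus and uses a much richer feature set.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def trigger_score(features, w_hidden, w_out, b_out):
    # One-hidden-layer feed-forward network: feature vector -> hidden
    # layer -> probability that suggesting a short reply is appropriate.
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

def should_trigger(features, w_hidden, w_out, b_out, threshold=0.5):
    # Below the threshold, the response mechanism stays silent.
    return trigger_score(features, w_hidden, w_out, b_out) >= threshold

# Illustrative weights and features, not learned values:
# features = [is_auto_generated, looks_personal, short_reply_fits]
w_hidden = [[-2.0, 1.5, 1.0], [-1.0, 2.0, 0.5]]
w_out = [1.5, 1.5]
b_out = -2.0

print(should_trigger([0.0, 1.0, 1.0], w_hidden, w_out, b_out))  # True
print(should_trigger([1.0, 0.0, 0.0], w_hidden, w_out, b_out))  # False
```

Running the full suggestion pipeline only when this cheap gate fires saves compute and, more importantly, avoids suggesting glib one-liners on mails that deserve a real reply.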


Conclusion:

The efforts have surely paid off. Google has reported that around 10% of mobile replies use Smart Reply, which is surely a good sign. Also, the system is language-agnostic and hence can be extended to other languages in the future.
From this we see that Google attacked each of the sub-problems, which were primarily in the NLP domain, individually and in a novel way, and combined everything together to create a market-ready product. Some interesting challenges that one can think of to further this work are:
1. How to compose longer and, at the same time, legitimate mails?
2. How to take references to the real world into context and use them in the response? (In other words, grasp a proper noun, say Los Angeles, as a location and not a simple token, and possibly produce a better-fitted response.)












