Posts

Showing posts from October, 2017

NLP in Video Games

Over the last few decades, NLP (Natural Language Processing) has achieved a high level of success in Computer Science, Artificial Intelligence and Computational Linguistics. NLP can also be used in video games; in fact, it is a very interesting domain for NLP, since genres such as Serious Games include a strong communication aspect. In video games, communication carries linguistic information through either spoken or written content. The question, then, is why and where we can use NLP in video games. Some games are built around pedagogy or teaching (Serious Games), and NLP can help such games achieve those objectives in a real sense. In other games, NLP can power speech control, so that the player can play by concentrating only on the visuals rather than on I/O. All of this ultimately increases the realism of the game, and that is the reason for using NLP in games. We can use NLP to impr

NLU analysis for question generation

How do we communicate? Not a minute passes without our asking or answering a question. (At IIITD, not even a second. :P) As machines have become part of our day-to-day communication, they also need to ask us questions. Question generation uses ideas from natural language processing as its backbone. It has applications in many fields, such as IVR systems, tutoring systems, and requirement-elicitation activities before system development. It can be used to enhance student learning through dialogue-based systems, which promote deeper learning and understanding, and even to generate questions for our quizzes. Techniques: 1. Syntax analysis or phrase-structure analysis. The most common approach is to convert complex sentences into simple ones, identifying the parts of speech and the entities in each sentence using a syntactic parser. Depending on the relationships between the parts of speech and the entities, grammatical rules can
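As a minimal sketch of the rule-based idea above, the toy function below fronts the copula of a simple "X is Y" sentence to form a yes/no question. The single hand-written rule is an illustrative stand-in for the grammatical rules a real system would derive from a full syntactic parse:

```python
def generate_question(sentence):
    """Turn a simple 'X is Y' declarative into a yes/no question
    by moving the copula to the front; returns None when no
    copula is found (a real system would use a parser instead)."""
    words = sentence.rstrip(".").split()
    for i, w in enumerate(words):
        if w.lower() in ("is", "was", "are", "were"):
            subject = " ".join(words[:i])
            rest = " ".join(words[i + 1:])
            return f"{w.capitalize()} {subject} {rest}?"
    return None

print(generate_question("Delhi is the capital of India."))
# -> Is Delhi the capital of India?
```

A production pipeline would first simplify complex sentences and tag entities, so that wh-questions ("What is the capital of India?") can be generated as well.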

Google's Smart Reply : a summary of NLP techniques and insights used

One of the more recent additions to Google's plethora of services is Smart Reply. As of now, Smart Reply humbly seeks to provide short, crisp responses for replying to emails on the go. Interestingly enough, a lot has gone into developing the system into a deployable, efficient package, which a large fraction of the user base has found useful. Google's Smart Reply system can be understood as an amalgam of four key components: 1. Response Selection, 2. Response Set Generation, 3. Suggestion Diversity, 4. Triggering Model. Response Selection: Google uses a neural network with LSTM (Long Short-Term Memory) cells to generate responses. The email corpus used for training the network was extracted from Google's own mail database after anonymization. It consists of around 238 million messages, of which 153 million have no response. Given an email e and the set of all possible responses, the score of a response r is defined as P
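To make the response-selection step concrete, here is a hedged sketch: the model assigns each candidate response a score given the email, and a softmax turns those scores into probabilities. The raw scores below are made-up numbers standing in for the LSTM's outputs, not a real model:

```python
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [x / total for x in exps]

# Hypothetical candidate responses and stand-in LSTM scores.
candidates = ["Sounds good!", "I'll get back to you.", "No, thanks."]
raw_scores = [2.0, 1.0, 0.1]

probs = softmax(raw_scores)
best = candidates[max(range(len(probs)), key=probs.__getitem__)]
print(best)  # -> Sounds good!
```

In the deployed system, the candidate set is not free-form: it comes from the Response Set Generation component, and Suggestion Diversity then picks a varied trio to show the user.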

Dbpedia Datasets

WHAT IS DBpedia? It is a project aiming to extract structured content from the information created in the Wikipedia project. This structured information is made available on the World Wide Web. DBpedia allows users to semantically query relationships and properties of Wikipedia resources, including links to other related datasets. BUT? Why am I talking about DBpedia? How is it related to natural language processing? The DBpedia data set contains 4.58 million entities, of which 4.22 million are classified in a consistent ontology, including 1,445,000 persons, 735,000 places, 123,000 music albums, 87,000 films, 19,000 video games, 241,000 organizations, 251,000 species and 6,000 diseases. The data set features labels and abstracts for these entities in up to 125 languages, 25.2 million links to images and 29.8 million links to external web pages. In addition, it contains around 50 million links

Story Generation

We all grow up reading stories, some of which even form the basis of our etiquette. They help us enjoy ourselves and relieve stress. Automating story generation has been a long-researched problem. The major difficulty is that neither the inputs nor the features of the output are clearly defined, so evaluating the generated story becomes problematic. How do you say whether your algorithm performs well if you can't judge the output? How do you design an algorithm for generating a story when the desired result is not clearly defined? Many techniques have been used to build such systems: the planning approach, where the start state of the characters, the world, etc., and the goals (character goals or author goals) are given, and a story is generated by constructing a plan from the start state to an end state that satisfies the goal; the case-based approach, where a database holds all previous stories and a new story is generated using a preexisting ontology; and the recurrent-neural-network approach, by training the

Cross Language Plagiarism Detection

Plagiarism: the practice of taking someone else's work and presenting it as one's own. Its reverse, plagiarism detection, is the task of detecting whether a given document is plagiarised or not. Detecting plagiarism was naive in earlier times: people used to just compare strings across two documents. Then, as the technology grew, people grew smarter too; they started coming up with ways to copy a document while going undetected. In this post I will talk about a particular type of plagiarism detection that is essentially one of the hardest in this domain: Cross-Language Plagiarism Detection (CLPD). People take a document, convert it into some other language, and then post it; our task is to detect whether it was copied from another language or not. In this post I will limit myself to documents, not programming code, though there are also many ways to detect whether a piece of code is converted into some
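One common CLPD strategy is translation followed by monolingual comparison: translate the suspicious document back into the source language (the translation step is assumed to happen elsewhere), then measure surface overlap. The sketch below uses character n-gram Jaccard similarity with an arbitrary threshold; it is a toy illustration, not a full detector:

```python
def char_ngrams(text, n=3):
    """Set of character n-grams of a lowercased text."""
    text = text.lower()
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 0))}

def jaccard(a, b):
    """Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def looks_plagiarised(original, translated_suspect, threshold=0.5):
    """Flag the pair when n-gram overlap exceeds the (arbitrary) threshold."""
    return jaccard(char_ngrams(original),
                   char_ngrams(translated_suspect)) >= threshold

print(looks_plagiarised("the cat sat on the mat",
                        "the cat sat on a mat"))
```

Real systems replace the exact-overlap measure with cross-lingual similarity models, since machine translation rarely reproduces the original wording verbatim.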

Identification of Sarcasm in Tweets

Sarcasm means expressing our feelings as the opposite of what we actually feel. It can also be defined as satirical wit intended to insult, mock, or amuse, but it has to be identified during natural language processing. Looking at Twitter usage, we observe that many sarcastic tweets share a common structure that creates a positive/negative contrast between a sentiment and a situation. Specifically, sarcastic tweets often express a positive sentiment in reference to a negative activity or state. Consider the tweets below, where the positive sentiment terms are underlined and the negative activity/state terms are italicized. (a) Wow! I feel happy when he denied my payment. (b) Oh how I love being ignored. (c) Absolutely adore it when my bus is late. (d) I'm so pleased mom woke me up with vacuuming my room this morning. The sarcasm in these tweets arises when a positive sentiment word (e.g., love, adore, pleased) co-occurs with a negative activity (e.g., den
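The contrast heuristic can be sketched in a few lines: flag a tweet as possibly sarcastic when a positive sentiment word co-occurs with a negative activity/state phrase. Both word lists below are tiny illustrative stand-ins for the learned lexicons a real system would bootstrap from data:

```python
# Illustrative stand-in lexicons, not learned from a corpus.
POSITIVE = {"love", "adore", "happy", "pleased"}
NEGATIVE_SITUATIONS = ("ignored", "denied", "late", "woke me up")

def maybe_sarcastic(tweet):
    """Flag the positive-sentiment / negative-situation contrast.
    Negative phrases are matched as substrings, so this toy will
    also fire on e.g. 'chocolate' containing 'late'."""
    text = tweet.lower()
    has_positive = bool(set(text.split()) & POSITIVE)
    has_negative = any(phrase in text for phrase in NEGATIVE_SITUATIONS)
    return has_positive and has_negative

print(maybe_sarcastic("Oh how I love being ignored"))  # -> True
```

Actual approaches learn the positive-sentiment and negative-situation phrase sets jointly from sarcasm-tagged tweets rather than fixing them by hand.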

Word Embeddings

In this post, I will talk about word embeddings and their role in the success of many deep learning models in natural language processing. Other interesting reads on this topic can be found in [1], [2]. [Figure: word embeddings obtained from English and German.] A word embedding is a representation of a word w in an n-dimensional vector space (from words to real numbers). Suppose there are only five words in our vocabulary: king, queen, man, woman, child. Queen can be encoded as shown. Word embeddings allow an efficient and expressive representation of words. Such a representation captures semantic and syntactic similarities and can identify relationships between words in a very simple manner. [Figure: word embeddings capturing the gender relation; the arrows are mathematical vectors denoting the relationship.] How are word embeddings generated? A word embedding matrix J is generated by training an unsupervised algorithm on a very large corpus and then
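The gender-relation figure can be reproduced numerically with a toy example. The 2-D vectors below are hand-picked numbers, not trained embeddings; they are arranged so that the offset king − man + woman lands nearest to queen under cosine similarity:

```python
import math

# Hand-picked toy vectors for the five-word vocabulary (not trained).
vectors = {
    "king":  [0.9, 0.8],
    "queen": [0.9, 0.2],
    "man":   [0.5, 0.8],
    "woman": [0.5, 0.2],
    "child": [0.3, 0.5],
}

def cosine(a, b):
    """Cosine similarity of two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# king - man + woman, computed component-wise.
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

best = max((w for w in vectors if w not in ("king", "man", "woman")),
           key=lambda w: cosine(vectors[w], target))
print(best)  # -> queen
```

With real embeddings trained on a large corpus, the same vector arithmetic recovers many such relations (gender, capital-of, plural forms) in spaces of hundreds of dimensions.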

Contextual Encryption with Natural Language Processing

Various symmetric and asymmetric cryptography algorithms secure the millions of documents being transferred against man-in-the-middle attacks. Most of these algorithms rest on a task that is computationally infeasible for any attacker trying to gain access to the information. However, with the advent of post-quantum cryptography, existing security algorithms may turn out to be breakable. A prominent mechanism for hiding a message from an attacker in the post-quantum setting would be to prevent the attacker from knowing that the message is encrypted at all. Principles: one way to contextually encrypt sentences is to alter the semantics of a sentence without changing the sentence structure. This is usually followed by encryption with a Pretty Good Privacy (PGP) algorithm. We keep a precomputed database of public-domain articles such as news, blogs, columns and magazines, stored in an online database. We can then replace sentences
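A minimal sketch of the substitution principle, under the assumption that both parties share the same precomputed table: each sentence of the message is swapped for an unrelated but natural-sounding sentence drawn from the article database, so the cover text does not look encrypted at all. The one-entry table here is a made-up stand-in for that database:

```python
# Made-up stand-in for the shared, precomputed article database.
SUBSTITUTION_TABLE = {
    "meet me at noon": "the market closed higher today",
}
REVERSE_TABLE = {v: k for k, v in SUBSTITUTION_TABLE.items()}

def hide(sentence):
    """Replace a sentence with its innocuous cover sentence."""
    return SUBSTITUTION_TABLE.get(sentence, sentence)

def reveal(sentence):
    """Invert the substitution using the shared table."""
    return REVERSE_TABLE.get(sentence, sentence)

cover = hide("meet me at noon")
print(cover)  # reads as ordinary text, not ciphertext
assert reveal(cover) == "meet me at noon"
```

In the scheme described above, this semantic substitution is only the outer layer; the table lookup would be combined with conventional PGP-style encryption underneath.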