

Showing posts from September, 2017

A Neural Attention Model for Abstractive Sentence Summarization

In natural language processing, summarization plays a vital role in understanding and interpreting a document, shaping how we perceive the information it contains. The crux of summarization is to produce a condensed representation of an input text that preserves the core meaning of the original document. Many summarization systems use “extractive” approaches, which crop out and stitch together portions of the input text to form a condensed version of it. In contrast, the paper “A Neural Attention Model for Abstractive Sentence Summarization” focuses on abstractive, sentence-level summarization. The underlying technique it uses is a neural language model with a contextual input encoder, and the resulting approach is called “Attention-Based Summarization”. The paper illustrates the model with a heatmap showing a soft alignment between the input sentence and the generated summary.
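To make the soft-alignment idea concrete, here is a minimal NumPy sketch of an attention-style encoder in the spirit of the paper; the dimensions, the random parameter matrix P, and the variable names are made-up placeholders, not the paper's actual configuration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy dimensions (hypothetical, not the paper's settings):
# 5 input words, embeddings of size 8, summary context of size 8.
rng = np.random.default_rng(0)
x_embed = rng.normal(size=(5, 8))   # encoded input words
y_context = rng.normal(size=(8,))   # encoding of recent summary words
P = rng.normal(size=(8, 8))         # alignment matrix (learned in practice, random here)

# Soft alignment: score each input word against the summary context,
# normalize with softmax, then average input embeddings by those weights.
scores = x_embed @ P @ y_context    # one score per input word
weights = softmax(scores)           # one row of the alignment "heatmap"
enc = weights @ x_embed             # attention-weighted input encoding
print(weights.round(3))
```

Stacking the weight vectors from successive decoding steps gives exactly the kind of input-to-summary heatmap the paper shows.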

Discourse Analysis

NLP enables machines to understand human language, but we still face issues like word ambiguity, sarcasm in sentiment analysis, and many more. One such issue is correctly predicting the relation between words, as in “Patrick went to the club last Friday. He met Richard.” Here, ‘He’ refers to ‘Patrick’. Issues of this kind make discourse analysis one of the important applications of Natural Language Processing. What is Discourse Analysis? In linguistic terms, the word discourse means language in use. Discourse analysis may be defined as the process of performing text or language analysis, which involves interpreting the text and understanding the social interactions behind it. Discourse analysis may involve dealing with morphemes, n-grams, tenses, verbal aspects, page layouts, and so on. It is often used to refer to the analysis of conversations or verbal discourse. It is useful for performing tasks like Anaphora Resolution (AR) and Named Entity Recognition (NER)…
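As a toy illustration of anaphora resolution, the sketch below resolves a pronoun to the most recent preceding person name; the tiny gazetteer and the heuristic itself are deliberate simplifications for this example, not how production coreference systems work.

```python
# Naive anaphora-resolution heuristic (illustrative only): link each
# pronoun to the most recently seen known name. Real resolvers use
# gender, number, syntax, and salience features.
KNOWN_NAMES = {"Patrick", "Richard"}   # toy gazetteer for this example
PRONOUNS = {"he", "she", "him", "her", "his"}

def resolve_pronouns(tokens):
    """Return (pronoun_index, pronoun, antecedent) triples."""
    last_name, links = None, []
    for i, tok in enumerate(tokens):
        if tok.lower() in PRONOUNS and last_name is not None:
            links.append((i, tok, last_name))
        elif tok in KNOWN_NAMES:
            last_name = tok
    return links

sentence = "Patrick went to the club last Friday . He met Richard .".split()
print(resolve_pronouns(sentence))   # [(8, 'He', 'Patrick')]
```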

Paragraph Vectors and Word Vectors

Word prediction models that use bag-of-words representations or n-gram models can be limiting. A bag-of-words representation of a text disregards the linguistic context of each word; the semantics of a given word are not taken into consideration. N-gram models can capture dependencies and relations over a short distance but fail to capture them over long distances. In a bag-of-words representation, words such as “small”, “little” and “white” would all be treated alike, that is, considered equidistant from each other; but according to linguistic context, “small” and “little” should be considered closer than “small” and “white”. Word Vectors: word vectors are distributed representations of the words in a given text. Each word is represented by a unique vector, and each vector is a set of features. Word vectors keep the linguistic dependencies and semantic structure of words intact: the vectors of “small” and “little” are a lot closer in the vector space than the vectors of “small” and “white”.
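The closeness claim can be checked with cosine similarity. The sketch below uses hand-picked three-dimensional vectors purely for illustration; real word vectors (e.g. from word2vec) are learned from corpora and typically have hundreds of dimensions.

```python
import numpy as np

# Hand-picked toy "embeddings" chosen to mirror the example above.
vectors = {
    "small": np.array([0.9, 0.8, 0.1]),
    "little": np.array([0.85, 0.75, 0.2]),
    "white": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["small"], vectors["little"]))  # high: near-synonyms
print(cosine(vectors["small"], vectors["white"]))   # low: unrelated words
```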

Machine Translation

What is machine translation? It is the translation of text from one language to another without human involvement; it is also called automated translation, i.e. “translation performed by a computer”. Why? Communication is our life. Without communication we can’t express our feelings or share our thoughts. But there are about 6,909 languages in the world, and hardly anyone knows more than ten of them, so machine translation, which converts one language to another using software, is needed to understand other languages. It also helps in translating web content and web pages. Types of Translation: 1. Machine Translation: sometimes the general meaning of a text is all you need from your translation, and machine translation provides a rapid, trusted and cost-effective option when getting the general meaning across is sufficient. 2. Community Translation: …

Music Modeling and Generation

Music is the ultimate language. Many amazing composers throughout history have composed pieces that were both creative and deliberate. Composers such as Bach were well known for crafting precise pieces with a great deal of underlying musical structure. Is it possible, then, for a computer to also learn to create such musical structure? Automatic music generation is one of the hot topics in AI research nowadays, with big companies like Sony investing in reviving old classics from the Beatles, Michael Jackson, etc. The problem of music generation is similar to that of language/text generation but is much more difficult: it is hard to generate likable, good-sounding music, and generating music with long-term structure is one of the main challenges in the field of automatic composition. Over the years, many different techniques have been proposed, some relying on standard NLP techniques like N-grams and Hidden Markov Models, and others…
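To give a flavour of the N-gram approach mentioned above, here is a minimal bigram (first-order Markov) melody generator; the toy melody and note set are invented for this sketch, and real systems model pitch, duration and harmony over much larger corpora.

```python
import random

random.seed(7)

# A made-up training melody (note names only, no durations).
melody = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]

# Count bigram transitions: note -> list of observed successor notes.
transitions = {}
for a, b in zip(melody, melody[1:]):
    transitions.setdefault(a, []).append(b)

# Generate by repeatedly sampling a successor of the current note.
note, generated = "C", ["C"]
for _ in range(10):
    note = random.choice(transitions[note])
    generated.append(note)
print(" ".join(generated))
```

Sampling from observed successor counts is exactly the bigram language-model trick from text generation, applied to notes instead of words.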

Natural language interactions for learning

For many years, humans have constantly been trying to make machines more like themselves, to make their lives easier and more advanced. Natural Language Processing was one big step towards making computers understand human language: it made computers able to convert a set of natural language rules into computer code. Now computers do more than that. Yes! With Natural Language Interaction, computers not only understand what we say but also what we mean. Natural Language Interaction is the advancement of NLP that allows computers and humans to communicate using natural language. Today, NLIs on machines are often trained once and then deployed, and the user is bound to their limitations. Research suggests that when learning a language, rather than consciously analyzing increasingly complex linguistic structures (e.g. sentence forms, word conjugations), humans advance their linguistic ability through meaningful interactions [1]. The standard machine…

Jumping NLP Curves: A Review of Natural Language Processing Research

In the current internet age, civilization has undergone rapid changes, and NLP research has produced many great things related to artificial intelligence, e.g., Google, IBM’s Watson and Apple’s Siri. In this blog we will discuss the evolution of NLP research, presented as the intersection of three overlapping curves, namely the Syntactics, Semantics and Pragmatics curves. Poising on the Syntactics Curve (Bag of Words): syntax-centered NLP is still very popular for managing tasks like information retrieval and extraction, topic modeling, auto-categorization, etc. It is broadly grouped into three main categories: keyword spotting, lexical affinity and statistical methods. Keyword spotting is the most popular approach due to its cost-effectiveness, and it can be used for text classification. Some of the most popular projects on keyword spotting include: (a) Ortony’s Affective Lexicon: it groups words into affective categories…
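A keyword-spotting classifier can be only a few lines long; the sketch below labels text by counting category keywords. The categories and word lists are invented stand-ins, far smaller than a real lexicon such as Ortony’s Affective Lexicon.

```python
# Minimal keyword-spotting text classifier (illustrative only).
KEYWORDS = {
    "happy": {"happy", "joy", "delighted", "glad"},
    "sad": {"sad", "unhappy", "gloomy", "miserable"},
    "angry": {"angry", "furious", "annoyed"},
}

def classify(text):
    """Label text by the category whose keywords appear most often."""
    tokens = text.lower().split()
    counts = {cat: sum(t in words for t in tokens)
              for cat, words in KEYWORDS.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "unknown"

print(classify("I am so glad and delighted today"))  # happy
print(classify("That gloomy weather made me sad"))   # sad
```

Its cost-effectiveness is obvious here, and so is its main weakness: it spots surface words only, so negation (“not happy”) and implicit affect slip straight past it.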