Introduction
============
Have you ever wondered why video lectures are much more effective at making us understand a topic than a text document containing the same information (sometimes even more)? Or why a Skype call is a better way to communicate than text messages?
The idea is simple: communication between humans is much more than informative text, regardless of whether it is spoken or written. Many things besides the words do the talking on our behalf. Some researchers claim that almost half of our communication relies on things that aren't words at all: facial expressions, gestures, posture, prosody, and tone of voice, none of which is conveyed by text. This explains why so many emails are misunderstood.
Videos are a good example of the importance of body language, and this is supported by the fact that face-to-face meetings are the most productive.
Designers of robots must take these aspects of communication into consideration. A well-known English idiom captures the essence of the non-verbal: "Speech is silver, but silence is golden." In essence, non-verbal attributes (body language) can be more powerful tools of communication than words.
So what is body language, and what does it include? Body language is the unspoken, non-verbal mode of communication present in every aspect of our interaction with another person. It is like a mirror that tells us what the other person thinks and feels in response to our words or actions. In real-life situations, 60% to 80% of the messages we convey to other people are transmitted through body language, while the actual verbal communication accounts for only 7% to 10%.
Modelling
=========
Just as Natural Language Processing has language understanding and generation, body language also has these two components. Generating body language means coordinating with the textual components: a gesture or animation has to have the same duration, happen at the same moment, and carry the same emotional content, or affect, as the message conveyed. For instance, "Hi" should, of course, be accompanied by a gesture that is about one second long: a friendly-looking signifier that is commonly understood. Raising the hand and wagging it back and forth usually gets the job done. But building this can be tricky.
There are three variables associated with modelling natural language text:
1. Duration : How long is the sound or string of text we’re dealing with?
2. Affect/Emotion : What is the emotional value of that source string of text? This is the second factor we need in order to calculate an animation, and it is harder than just measuring the letters in a line: it requires real-time sentiment analysis, a pre-built library that identifies the emotional content of a word, or both.
3. Signifiers/Specific Animations : Is there a common gesture that normally accompanies the text? The wave of a hand that goes with the word “Hi” isn’t normally used in conversation as much as, say, nodding, or showing our palms when we speak.
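The three variables above can be sketched in code. The following is a minimal, hypothetical illustration (the lexicon, gesture library, speaking rate, and all names are invented stand-ins, not a real system): given a snippet of text, it estimates a duration, an affect score, and a conventional signifier.

```python
# Toy stand-ins for a real sentiment model and a real gesture library
# (hypothetical values for illustration only).
SENTIMENT_LEXICON = {"hi": 0.8, "great": 0.9, "sorry": -0.5, "no": -0.3}
SIGNIFIER_LIBRARY = {"hi": "wave", "yes": "nod", "no": "head_shake"}

WORDS_PER_SECOND = 2.5  # rough speaking rate; tune for the actual voice


def plan_gesture(text):
    words = [w.strip("!?.,").lower() for w in text.split()]
    # 1. Duration: approximate from speaking rate, with a floor of one
    #    second so even a short "Hi" gets a full wave.
    duration = max(1.0, len(words) / WORDS_PER_SECOND)
    # 2. Affect: average sentiment of the known words (0.0 = neutral).
    scores = [SENTIMENT_LEXICON[w] for w in words if w in SENTIMENT_LEXICON]
    affect = sum(scores) / len(scores) if scores else 0.0
    # 3. Signifier: first word with a conventional gesture, if any.
    signifier = next(
        (SIGNIFIER_LIBRARY[w] for w in words if w in SIGNIFIER_LIBRARY), None
    )
    return {"duration_s": duration, "affect": affect, "signifier": signifier}


print(plan_gesture("Hi!"))
# → {'duration_s': 1.0, 'affect': 0.8, 'signifier': 'wave'}
```

In a real system the lexicon would be replaced by a sentiment model and the signifier lookup by an animation database, but the three outputs are exactly the variables a gesture engine would need to schedule against the speech.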
Problem
========
Some of the most talented natural language engineers are some of the least talented communicators. They understand the language of code better than the code of language. There is a terminal amputation that cuts the text of language from the language of the body. That's because important elements of natural language, like body language, are easily overlooked if we focus too much on the code and not enough on the people.
Because ultimately, that’s what roboticists are designing: a kind of people.
http://robohub.org/in-corporating-body-language-into-nlp-or-more-notes-on-the-design-of-automated-body-language/
http://www.successlearned.com/neuro-linguistic-programming-nlp/reading-body-language-and-nlp-techniques/