
Natural Language Processing in Indian Languages

Have you ever realized how many languages and dialects are spoken in India? Have you thought about how these languages are distributed across a population of 1.3 billion people?


There is no denying that less than one percent of the Indian population speaks English; the majority speaks other Indian languages. Current NLP tools are establishing their usability, but mostly for English, which is rarely spoken in India. This is why developing applications for markets that depend on linguistic functionality, such as call centers, social listening, research, and virtual agents, turns out to be a challenging task.

Can you think of the challenges in developing NLP for Indian languages?

Applications that parse text need to learn and memorize the rules of a language in order to achieve good precision, so they must be supported linguistically. We have already succeeded in developing linguistic tools for lemmatization, text categorization, POS tagging, entity extraction, parsing, and more, but these tools work for English. The next step is to enable them for Indian languages.
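To give a feel for what one of these tools must encode, here is a toy Python sketch of lemmatization by naive suffix stripping. Real lemmatizers rely on dictionaries and morphological rules, which is exactly the kind of resource that is scarce for Indian languages; the suffix list below is invented purely for illustration:

```python
# Toy suffix-stripping "lemmatizer" for English.
# Real lemmatization uses dictionaries and morphological analysis;
# these few rules are illustrative only and will over- and under-strip.
SUFFIXES = ["ing", "ed", "es", "s"]

def naive_lemma(word):
    for suffix in SUFFIXES:
        # require a minimum stem length so "sing" does not become "s"
        if word.endswith(suffix) and len(word) >= len(suffix) + 3:
            return word[: -len(suffix)]
    return word

print(naive_lemma("playing"))  # play
print(naive_lemma("walked"))   # walk
```

Even this trivial rule set assumes knowledge of the language's suffixes, which is why written documentation of grammar and spelling matters so much.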


Language complexity, the lack of language documentation standards, differences in scripts, and the difficulty of obtaining data constitute the main challenges in developing an NLP platform for Indian languages.

Although Indian languages vary to a great extent, there are not sufficient resources describing them. This leaves developers with one of the most tenacious issues: the lack of literature on spelling and grammar. Each language has its own alphabet, which an application must learn before it can perform NLP. Indian languages do not use the Latin alphabet; instead they use diverse scripts of their own. Most of these are Brahmi-derived alphabets, but there are many differences between the scripts used in north and south India. This further increases the difficulty of understanding these languages for linguists.
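Because each of these scripts occupies its own Unicode block, a simple first step toward handling them programmatically is script identification. Here is a minimal Python sketch (the function name and approach are our own, not from any particular tool) that guesses a text's script from Unicode character names:

```python
import unicodedata

def guess_script(text):
    # Unicode character names encode the script, e.g. "DEVANAGARI LETTER KA"
    # or "TAMIL LETTER VA"; tally the leading script word for each letter.
    counts = {}
    for ch in text:
        if ch.isalpha():
            script = unicodedata.name(ch, "UNKNOWN").split(" ")[0]
            counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else None

print(guess_script("नमस्ते"))    # DEVANAGARI
print(guess_script("வணக்கம்"))  # TAMIL
print(guess_script("hello"))    # LATIN
```

Note that this only identifies the script, not the language: Hindi, Marathi, and Sanskrit all share Devanagari, which is part of the difficulty described above.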

[Image: variations in the alphabets of some Indian languages]


Previous Study in Indian Languages:

Many researchers and developers have attempted to make progress in Indian languages, and some have succeeded. Here is a glimpse of this previous work.
A few of the text categorization techniques that have been studied and explored are discussed below:
  • Decision Tree: Decision trees take comparatively more time to perform text categorization. Language studied: Bengali.
  • K-Nearest Neighbor: This technique is more efficient when training sets are small and well organized, but it is not a very capable approach overall. Languages studied: Bengali, Telugu, Marathi.
  • Naïve Bayes: Naïve Bayes is well suited to preparing training sets and gives the best results after SVM. Languages studied: Bengali, Punjabi, Urdu, Telugu, Marathi.
  • Support Vector Machine: Among all the techniques studied (decision trees, Naïve Bayes, and others), support vector machines give the highest F-score. Languages studied: Bengali, Urdu.
  • Centroid algorithm: The efficiency of the centroid algorithm in terms of F-score is relatively low. Language studied: Punjabi.
Many other NLP tasks have also been studied in these languages.
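To make the Naïve Bayes entry above concrete, here is a minimal from-scratch multinomial Naïve Bayes text categorizer in Python with Laplace smoothing; the toy English training set is invented purely for illustration:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (token_list, label) pairs."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict_nb(model, tokens):
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best_label, best_logprob = None, float("-inf")
    for label in class_counts:
        # log prior plus Laplace-smoothed log likelihood of each token
        logprob = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in tokens:
            logprob += math.log((word_counts[label][tok] + 1) / denom)
        if logprob > best_logprob:
            best_label, best_logprob = label, logprob
    return best_label

# Invented toy data, for illustration only.
docs = [
    ("cricket match score".split(), "sports"),
    ("football goal team".split(), "sports"),
    ("election vote parliament".split(), "politics"),
    ("minister policy vote".split(), "politics"),
]
model = train_nb(docs)
print(predict_nb(model, "cricket team goal".split()))  # sports
```

The same algorithm applies unchanged to tokens in any script, which is why Naïve Bayes has been tried across so many Indian languages; the hard part is obtaining the labeled training data, as discussed above.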

Current Scenario:

Currently, Bharat Operating System Solutions (BOSS) is working on Indian languages, processing 18 of them, including Bodo, Assamese, Bengali, Gujarati, Kannada, Maithili, Hindi, Konkani, Manipuri, Kashmiri, Malayalam, Oriya, Urdu, Tamil, Sanskrit, Punjabi, and Telugu. BOSS is a GNU/Linux distribution developed by C-DAC Chennai as free and open source software for use in India.

It can be expected that a Natural Language Processing platform for Indian languages will soon evolve with great success.
