
Programming Language Naturalization


Have you ever thought of a programming language that could be written in natural language? We come across many kinds of applications in which graphs have to be plotted, data has to be stored, or complex actions have to be performed on Internet of Things devices or other data. Such requirements are usually accomplished with a programming language, which has to be written precisely, following all of its rules.
   
On the other hand, there is a method that can convert natural language into a formal language: semantic parsing. On its own, the ability of such a parser is limited and not as powerful as implementing the logic directly through programming. An example of this is “Voxelurn”.


Example of natural language programming

This concept is called “naturalization”. It bridges the gap between natural language and a core language. To build such an application, we select a core language and train the system with rules for converting natural language to the core language. A rule has the form “‘A’ means ‘M’”. Initially, the user has to teach the system these rules; after training, the system understands the natural language chosen by the users and converts it to the core language so that it can be executed.
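As a rough illustration of this idea, here is a minimal Python sketch of user-defined rewrite rules, assuming a made-up core syntax (the `add(color='red')` and `repeat 3` forms below are hypothetical, not Voxelurn's actual core language):

```python
# A minimal sketch (not Voxelurn's actual grammar engine): users register
# rewrite rules that map natural-language phrases to core-language snippets,
# and the system applies them to translate an utterance.

class Naturalizer:
    def __init__(self):
        self.rules = {}  # natural-language phrase -> core-language snippet

    def define(self, phrase, core):
        """User teaches the system: 'phrase' means 'core'."""
        self.rules[phrase] = core

    def translate(self, utterance):
        """Apply longest-match rewriting so the result is core language."""
        for phrase in sorted(self.rules, key=len, reverse=True):
            utterance = utterance.replace(phrase, self.rules[phrase])
        return utterance

nat = Naturalizer()
nat.define("add red", "add(color='red')")   # hypothetical core syntax
nat.define("3 times", "repeat 3")           # hypothetical core syntax
print(nat.translate("3 times add red"))     # -> repeat 3 add(color='red')
```

Real systems parse with a grammar rather than string substitution, but the core loop is the same: the user defines a rule once, and the system reuses it for every later utterance.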

“Voxelurn” was implemented by naturalizing “Voxel world”, a core language with many loops, structures, and conditionals. A group of people started training the system, and the trained system can then be used by other users. Many short forms and alternative syntaxes were added, so the system works with natural language instead of the core language.

Some rules used to train the system:


Modeling and Learning:

·         ‘X’ is an utterance to be mapped to the core language.
·         ‘Y’ is the sequence of tokens in the core language.
·         ‘S’ is the set of natural-language tokens we already know how to map to the core language.
·         ‘S’’ is the new set, reconstructed after learning new items.

The conversion of ‘X’ to ‘Y’ is done with a derivation tree, which is constructed and scored using a feature set, as the sketch below illustrates.
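As an illustrative sketch (the feature names and candidate programs below are assumptions, not taken from the paper), derivations can be ranked with a simple linear model over their features:

```python
# Illustrative sketch of linear scoring over derivations: each candidate
# derivation of an utterance is scored by the dot product of its feature
# vector with learned weights, and candidates are ranked by score.

def score(features, weights):
    """Linear score: weighted sum of the features a derivation fires."""
    return sum(weights.get(f, 0.0) * v for f, v in features.items())

def best_derivation(candidates, weights):
    """Pick the highest-scoring candidate derivation."""
    return max(candidates, key=lambda d: score(d["features"], weights))

# Hypothetical weights and candidates, just to show the ranking mechanics.
weights = {"rule:id=42": 1.3, "rule:type=core": 0.5, "social:self": 0.8}
candidates = [
    {"program": "repeat 3 [add red top]",
     "features": {"rule:id=42": 1.0, "social:self": 1.0}},
    {"program": "add red; add red; add red",
     "features": {"rule:type=core": 1.0}},
]
print(best_derivation(candidates, weights)["program"])  # repeat 3 [add red top]
```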

Features:
Derivations are ranked by the weights given to different types of features.

Rule Features:
“Id” identifies the specific rule that fired.
“Type” identifies whether a rule belongs to the core language or was induced from users.
Social Features: These features capture linguistic similarities and dissimilarities among users.
“Author” generalizes over rule authors, capturing whose rules tend to be accepted.
“Friends” maps authors to users: users tend to accept the rules defined by authors in their own community.
“Self” specifies whether the rule was written by the user themselves.
Span Features:
These consider the tokens adjacent to a span’s borders, so that rules are applied in the right context.
Scope Features:
These capture how each community prefers its scoping.
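Putting these feature types together, here is a hedged sketch of what extracting rule, social, and span features for one derivation step might look like; the feature templates and data layout are assumptions for illustration, not the paper's exact templates:

```python
# Illustrative feature extraction: rule, social, and span features fired by
# one derivation step for a given user. Template names are made up here.

def extract_features(rule, user, left_token, right_token):
    feats = {}
    # Rule features: which rule fired, and whether it is core or induced.
    feats[f"rule:id={rule['id']}"] = 1.0
    feats[f"rule:type={rule['type']}"] = 1.0
    # Social features: who authored the rule, and its relation to this user.
    feats[f"social:author={rule['author']}"] = 1.0
    if rule["author"] == user["name"]:
        feats["social:self"] = 1.0
    elif rule["author"] in user["friends"]:
        feats["social:friend"] = 1.0
    # Span features: tokens adjacent to the span's borders, for context.
    feats[f"span:left={left_token}"] = 1.0
    feats[f"span:right={right_token}"] = 1.0
    return feats

rule = {"id": 42, "type": "induced", "author": "alice"}
user = {"name": "bob", "friends": {"alice"}}
print(extract_features(rule, user, "repeat", "red"))
```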

Parameter Estimation:

After each utterance, the user is shown all the possible next states and chooses one of them. The system then performs an online AdaGrad update on the parameters, following the gradient of the loss function.
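A minimal sketch of that online AdaGrad step, assuming a toy gradient (the real system derives the gradient from the parsing loss described in [1]):

```python
import math

# Minimal sketch of one online AdaGrad step: per-feature learning rates
# shrink with the accumulated squared gradient of that feature. The toy
# gradient below is a placeholder, not the paper's actual loss gradient.

def adagrad_update(weights, grad_squares, gradient, lr=0.1, eps=1e-8):
    for f, g in gradient.items():
        grad_squares[f] = grad_squares.get(f, 0.0) + g * g
        weights[f] = weights.get(f, 0.0) - lr * g / (math.sqrt(grad_squares[f]) + eps)

weights, grad_squares = {}, {}
# Negative gradient pushes up a feature on the derivation the user chose;
# positive gradient pushes down the same feature on a rejected derivation.
adagrad_update(weights, grad_squares, {"rule:id=42": -1.0, "rule:id=7": 1.0})
print(weights)  # rule:id=42 rises toward 0.1, rule:id=7 falls toward -0.1
```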

Conclusion:

In the future, this kind of conversion could be implemented for other programming languages. Semantic parsing and grammars play an important role in making that possible.

References:

[1] S. I. Wang, S. Ginn, P. Liang, and C. D. Manning. 2017. Naturalizing a programming language via interactive learning.

[2] L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars.

[3] L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form.

