Plagiarism is using someone's work or ideas without giving the original author or researcher any credit for it. It is easy to detect when the whole text is copied from somewhere, but when the source text was written in an entirely different language the task becomes much harder. This is still an open area of research. There are papers proposing a plagiarism detection method for texts written in Malay that have been copied from English sources, but the method has not been implemented.
Like any plagiarism detector, one of the major drawbacks is the lack of availability of all the potential source documents in a local database, and here we need such documents in two or more languages. We could use paid Google searches ($5 for every 1000 queries) to find the documents and then apply plagiarism detection, but this method is very expensive.
In [1] a new method has been proposed for automatic cross-language plagiarism detection. In this method, using a supervised learning technique (SVM), the document to be checked is separated into two parts.
The first part contains the portions of the document that were originally written in language N (the native language). The second part contains the portions that were originally written in language F and then translated into language N. We can apply a normal plagiarism checker over the first part. The second part we translate back to language F, so that normal plagiarism detection can run over the text in language F. This is essentially pre-processing the text so that plagiarism becomes easier to find.
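The routing described above can be sketched as a small pipeline. Everything here is illustrative: `looks_translated` stands in for the SVM classifier described in the Translation Detection phase, and the marker-word test is just a toy stand-in, not anything from [1].

```python
def split_by_origin(paragraphs, looks_translated):
    """Route each paragraph to the native (N) bucket or the
    translated-from-F bucket, based on a classifier predicate."""
    native, translated = [], []
    for p in paragraphs:
        (translated if looks_translated(p) else native).append(p)
    return native, translated

# Toy stand-in for the real classifier: flag paragraphs containing "marker".
doc = ["original text in language N", "texto traducido marker", "more native prose"]
native, translated = split_by_origin(doc, lambda p: "marker" in p)
```

The native bucket would go straight to a normal plagiarism checker; the translated bucket would first be machine-translated back to language F.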
This method is divided into 3 phases:
- Translation Detection
- Internet Search Stage: searching for the documents on the internet
- Generating Report
Translation Detection:
In this phase we check whether the text was originally written in language F or in language N; if it was written in F, we translate it back to language F. For the detection we use a binary classifier.
Here an SVM is used to detect the original language of the text T. We first convert each paragraph of T into a vector and then use the SVM to classify the language of that paragraph. The SVM was implemented using the library LIBSVM [2] and trained on the corpus generated from the transcriptions of European Parliament sessions [3].
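The paper does not spell out the feature representation, but a common way to turn a paragraph into a vector for language identification is relative character n-gram frequencies. The trigram choice below is an assumption for illustration, not a detail taken from [1]; the resulting vector is what would be fed to an SVM such as LIBSVM.

```python
from collections import Counter

def trigram_vector(paragraph):
    """Map a paragraph to its relative character-trigram frequencies,
    a typical feature vector for an SVM language classifier."""
    text = paragraph.lower()
    grams = [text[i:i + 3] for i in range(len(text) - 2)]
    counts = Counter(grams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

v = trigram_vector("the cat")  # 5 trigrams: "the", "he ", "e c", " ca", "cat"
```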
Internet Search Stage:
In this stage we divide each paragraph into sentences, remove all irrelevant words, and use a search engine to find the relevant URLs.
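A minimal sketch of this stage, assuming a naive punctuation-based sentence splitter and a tiny hand-picked stop-word list (the paper specifies neither); each resulting keyword string would be issued as one search-engine query.

```python
import re

# Illustrative stop-word list; a real system would use a full one.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in"}

def build_queries(paragraph):
    """Split a paragraph into sentences, drop irrelevant (stop) words,
    and return one keyword query per sentence."""
    sentences = [s for s in re.split(r"[.!?]+\s*", paragraph) if s]
    queries = []
    for s in sentences:
        keywords = [w for w in s.lower().split() if w not in STOP_WORDS]
        queries.append(" ".join(keywords))
    return queries

qs = build_queries("The cat sat. A dog barked in the yard.")
```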
Generating report:
We download the documents from these URLs, divide them into sentences, and finally match the sentences of T against the sentences of these documents to find similarities. In this method, similarity between sentences is evaluated using cosine similarity; sentence pairs with similarity higher than 0.6 are considered likely sources of plagiarism.
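The matching step can be sketched with bag-of-words cosine similarity and the 0.6 threshold from the paper; the whitespace tokenization here is a simplification of whatever preprocessing the authors actually used.

```python
import math
from collections import Counter

def cosine_similarity(s1, s2):
    """Cosine similarity between the bag-of-words vectors of two sentences."""
    v1, v2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1)
    norm = (math.sqrt(sum(c * c for c in v1.values()))
            * math.sqrt(sum(c * c for c in v2.values())))
    return dot / norm if norm else 0.0

def is_likely_plagiarism(s1, s2, threshold=0.6):
    """Flag a sentence pair when similarity exceeds the paper's 0.6 cutoff."""
    return cosine_similarity(s1, s2) > threshold
```

Identical sentences score 1.0 and are flagged; sentences sharing no words score 0.0 and pass.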
Conclusions:
When this method was applied to some sample cases, significant cases of undetected plagiarism were observed. One of the major factors was the internet search, which is not always able to discover and identify the original text or similar copied information. The adverse impact is greater when the translation was done by a human, since human translation involves a personalized lexicon and expression, contrary to automatic translation done by machines. This can degrade the performance of the system and sometimes leads to non-detection of large amounts of copied text.
Another factor is the type of translation and the inclusion of symbols, which can also cause plagiarism to go undetected. Hence we conclude that the present situation accentuates the need for further research on cross-language copy detection.
References:
[1] http://ieeexplore.ieee.org/document/6138189/
[2] C.-C. Chang and C.-J. Lin, LIBSVM, http://www.csie.ntu.edu.tw/~cjlin/libsvm/
[3] P. Koehn, "Europarl: A Parallel Corpus for Statistical Machine Translation", MT Summit 2005.