With the extensive use of e-commerce platforms, online reviews have emerged as a major criterion for critical decisions about product purchases, design, and business. At the same time, reviews have also become a way to promote or defame a target product or service by posting fake reviews and giving unfair ratings. Such reviewers are called opinion spammers, and their activity is called opinion spamming.
In recent years, researchers have studied the problem and proposed several techniques. However, the problem is still wide open. Unlike many other forms of spamming, the key difficulty in solving the opinion spam problem is that it is hard to obtain gold-standard data of fake and non-fake reviews for building a model. Since it is also difficult to manually label each review as genuine or fake, some automation is needed. This blog describes a few tactics for filtering fake reviews out of the bulk.
Various heuristic approaches, divided into three phases, can be used to detect low-quality and untruthful reviews.
Phase I : Spammer Detection
This phase focuses on reviews posted by each reviewer in order to detect spammers.
- Excessive reviews per day : a reviewer who posts a large number of reviews per day, and does so repeatedly over the considered time period, is most likely a spammer.
- Very short reviews : a reviewer who frequently posts very short reviews over the considered time period might be a spammer.
- Extreme ratings : a reviewer who always gives extreme ratings (i.e. either the minimum or the maximum allowed rating) might be a spammer trying to promote or defame the product/service.
All the reviews posted by such spammers are either low-quality or fake and should therefore be discarded. A minimal sketch of these reviewer-level checks follows.
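The sketch below is one possible way to implement these reviewer-level heuristics; it is an illustration, not the exact method from the cited papers. The record fields (`reviewer_id`, `date`, `text`, `rating`) and all thresholds are assumptions and would need to be tuned on the actual dataset.

```python
from collections import defaultdict

# Assumed review record: {"reviewer_id", "date" (YYYY-MM-DD), "text", "rating"}.
# All thresholds below are illustrative only and should be tuned per dataset.
MAX_REVIEWS_PER_DAY = 5      # "excessive reviews per day"
MIN_WORDS = 10               # "very short" review, in words
SHORT_REVIEW_FRACTION = 0.8  # fraction of a reviewer's posts that are very short
EXTREME_FRACTION = 0.9       # fraction of ratings that are 1 or 5 (1-5 scale assumed)

def flag_spammers(reviews):
    """Return the set of reviewer_ids flagged by the Phase I heuristics."""
    by_reviewer = defaultdict(list)
    for r in reviews:
        by_reviewer[r["reviewer_id"]].append(r)

    spammers = set()
    for reviewer, posts in by_reviewer.items():
        # 1. Excessive reviews per day
        per_day = defaultdict(int)
        for r in posts:
            per_day[r["date"]] += 1
        bursty = max(per_day.values()) > MAX_REVIEWS_PER_DAY

        # 2. Mostly very short reviews
        short = sum(len(r["text"].split()) < MIN_WORDS for r in posts)
        mostly_short = short / len(posts) >= SHORT_REVIEW_FRACTION

        # 3. Mostly extreme ratings
        extreme = sum(r["rating"] in (1, 5) for r in posts)
        mostly_extreme = extreme / len(posts) >= EXTREME_FRACTION

        if bursty or mostly_short or mostly_extreme:
            spammers.add(reviewer)
    return spammers
```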
Phase II : Review Text Analysis
This phase focuses on the text content and the other features of the review data listed below.
- Very short reviews : if the review is very short, it may be fake or of low quality. If the writer just wants to affect the overall score, their main intent may simply be to vote through the rating feature and boost or lower it.
- Reviews with similar content : a large number of duplicate and near-duplicate reviews are written by the same reviewer or by different reviewers (possibly different user ids of the same person) for the same or different products. These can be detected with a cosine similarity measure between each pair of reviews (see the similarity sketch after this list).
- Reviews with low objectivity : objectivity reflects how much real content is present in the text. If the review carries little real content, it is not useful to customers and should be discarded. SentiWordNet can be used to obtain an objectivity score for the text (see the SentiWordNet sketch after this list).
- Rating inconsistency with review text : one would expect the sentiment expressed in the review to be consistent with the rating, but this is often not the case. A mismatch between rating and review text can be marked as a potential spam review, or at the very least an inconsistent review that other readers should not put too much stock in.
- With links : reviewers sometimes leave a link to their own website in order to promote a product. Such reviews are most likely biased and aim to advertise a specific product.
- In CAPITAL letters : some reviews are written mainly to grab attention or as advertisements, which is often done with capital letters (for example, “SELLING CHEAP!!!!!!”). Researchers have consistently found that an unusually high number of capital letters in a sentence is a strong indicator of spam.
- With questions : reviews that contain questions are called non-reviews. Questions in the review text usually indicate that the “reviewer” is more interested in getting information from other reviewers or the seller, so these reviews cannot be counted as opinions about the product.
Reviews falling under any of the above categories are better treated as spam. The sketches below illustrate how these text-level checks could be implemented.
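For the duplicate and near-duplicate check, one common approach (not necessarily the exact one used in the cited work) is cosine similarity over TF-IDF vectors, for example with scikit-learn. The 0.9 threshold is an illustrative assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def near_duplicate_pairs(texts, threshold=0.9):
    """Return index pairs (i, j) of reviews whose cosine similarity exceeds the threshold."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
    sims = cosine_similarity(tfidf)          # dense n x n similarity matrix
    pairs = []
    for i in range(len(texts)):
        for j in range(i + 1, len(texts)):
            if sims[i, j] >= threshold:
                pairs.append((i, j))
    return pairs
```

The pairwise comparison is quadratic in the number of reviews, so in practice it is usually restricted to reviews of the same product or the same reviewer.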
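The surface-level signals (very short text, links, excessive capitals, questions) can be checked with simple string and regex tests. The thresholds below are assumptions chosen for illustration only.

```python
import re

MIN_WORDS = 10        # "very short" threshold, illustrative
CAPS_FRACTION = 0.5   # fraction of alphabetic characters in upper case

def surface_flags(text):
    """Return the Phase II surface-level flags raised by a single review text."""
    flags = set()
    if len(text.split()) < MIN_WORDS:
        flags.add("too_short")
    if re.search(r"https?://|www\.", text, re.IGNORECASE):
        flags.add("contains_link")
    letters = [c for c in text if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > CAPS_FRACTION:
        flags.add("mostly_capitals")
    if "?" in text:
        flags.add("contains_question")
    return flags
```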
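The objectivity and rating-consistency checks can both be built on SentiWordNet, which is available through NLTK. The sketch below uses a deliberately naive word-level lookup (first sense only, no POS tagging or word-sense disambiguation), and the rating thresholds in `rating_text_mismatch` are assumptions.

```python
from nltk import word_tokenize
from nltk.corpus import sentiwordnet as swn
# Requires: nltk.download('punkt'), nltk.download('wordnet'),
#           nltk.download('sentiwordnet')

def senti_scores(text):
    """Average positive, negative and objectivity scores over the words that
    have at least one SentiWordNet synset (naive first-sense lookup)."""
    pos = neg = obj = 0.0
    counted = 0
    for token in word_tokenize(text.lower()):
        synsets = list(swn.senti_synsets(token))
        if not synsets:
            continue
        s = synsets[0]                 # crude: take the first sense
        pos += s.pos_score()
        neg += s.neg_score()
        obj += s.obj_score()
        counted += 1
    if counted == 0:
        return 0.0, 0.0, 1.0
    return pos / counted, neg / counted, obj / counted

def rating_text_mismatch(text, rating, scale_max=5):
    """Flag a review whose rating and text sentiment point in opposite directions.
    The polarity and rating thresholds are illustrative assumptions."""
    pos, neg, _ = senti_scores(text)
    polarity = pos - neg
    if rating >= scale_max - 1 and polarity < 0:   # high rating, negative text
        return True
    if rating <= 2 and polarity > 0:               # low rating, positive text
        return True
    return False
```

The objectivity score returned by `senti_scores` can also serve the low-objectivity rule: a review whose average objectivity falls below a chosen cut-off can be flagged as carrying too little real content.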
Phase III : Sentiment Analysis
To determine the reviewer's state of mind from the review text, sentiment analysis can be performed. A standard dictionary of sentiment-bearing words, SentiWordNet, can be used to calculate a sentiment score for each review.
Reviews whose sentiment score deviates strongly from the average score for the product can be considered spam; a sketch of this deviation check is shown below.
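As a rough illustration of the deviation check, the sketch below flags reviews whose sentiment polarity lies more than a chosen number of standard deviations from the product's mean polarity. It reuses the `senti_scores` helper sketched in Phase II; the z-score threshold and the `product_id` field are assumptions.

```python
from statistics import mean, pstdev

def sentiment_outliers(reviews, z_threshold=2.0):
    """Flag reviews whose sentiment polarity deviates strongly from the average
    polarity of the same product. Each review dict is assumed to carry
    'product_id' and 'text'; senti_scores() is the SentiWordNet helper above."""
    by_product = {}
    for r in reviews:
        by_product.setdefault(r["product_id"], []).append(r)

    flagged = []
    for product, posts in by_product.items():
        polarities = []
        for r in posts:
            pos, neg, _ = senti_scores(r["text"])
            polarities.append(pos - neg)
        mu, sigma = mean(polarities), pstdev(polarities)
        if sigma == 0:
            continue  # all reviews agree; nothing to flag
        for r, p in zip(posts, polarities):
            if abs(p - mu) / sigma > z_threshold:
                flagged.append(r)
    return flagged
```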
References:
- N. Jindal and B. Liu. Opinion spam and analysis. In Proc. WSDM '08. ACM, New York, NY, 2008, 219-230.
- Ott, M., Choi, Y., Cardie, C. and Hancock, J.T. 2011. Finding Deceptive Opinion Spam by Any Stretch of the Imagination. ACL (2011), 309–319.
- J. M. Gómez Hidalgo, G. Cajigas Bringas, E. Puertas Sanz, and F. Carrero García. 2006. Content based SMS spam filtering. In Proceedings of the 2006 ACM Symposium on Document Engineering, pp. 107-114.
- J. M. Gómez Hidalgo, M. Maña López, and E. Puertas Sanz. Combining text and heuristics for cost-sensitive spam filtering. In Proceedings of CoNLL-2000 and LLL-2000, pages 99-102, Lisbon, Portugal.
- A. Esuli and F. Sebastiani. SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining. In Proceedings of LREC 2006.