
Towards Making Document Ranking Effective

While making the best use of Big Data, data science also aims at avoiding information overload. Search engines, for instance, mine information based on a user query, and recommendation engines, such as Amazon's, help narrow down the information the user may be seeking.
Such recommender systems are broadly built on two types of filtering: collaborative filtering and content-based filtering.


This blog focuses on the latter. Content-based recommenders make use of keywords associated with products or services, together with the user profile (including the user's past preferences and the information, product, or service currently being examined). For instance, if a user follows news updates on Federer and Nadal, (s)he may be interested in further updates on these tennis players or on tennis in general. This is where techniques to process natural language come into play. With a wealth of information available, the TF-IDF term weighting scheme is one such technique for retrieving an appropriate set of documents based on the presence of certain key terms. Building such a recommender involves obtaining scores for the terms and deciding on a threshold score above which terms are assigned as tags to a document, as sketched below.
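
To make the tagging step concrete, here is a minimal sketch (my own illustration, not from the post or from Paik's paper) using scikit-learn's TfidfVectorizer; the toy corpus and the 0.3 threshold are assumptions chosen for demonstration only.

```python
# Minimal sketch of the tagging step: score terms with TF-IDF and keep
# those above a threshold as tags. Corpus and threshold are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Federer beats Nadal in the Wimbledon final",
    "Nadal wins the French Open on clay",
    "Stock markets rally as tech shares climb",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)          # shape: (n_docs, n_terms)
terms = vectorizer.get_feature_names_out()

THRESHOLD = 0.3  # hypothetical cut-off; tune on real data
for i in range(len(docs)):
    row = tfidf[i].toarray().ravel()
    tags = [terms[j] for j in row.argsort()[::-1] if row[j] >= THRESHOLD]
    print(f"Doc {i} tags: {tags}")
```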

This blog discusses the TF-IDF scheme proposed by Paik [1]. In the context of a recommender, given an input document (which acts as a query Q) and a collection of documents D1, D2, ..., DN, the task is to assign a score to each document with respect to Q, quantifying the salience of the query terms in the documents. Such scoring can then be used to extract the most relevant documents for recommendation.

Most information retrieval models make use of three factors to infer the importance of a term in a document (computed for a toy corpus in the sketch after this list):
  1. Term frequency (tf) of the term in the document
  2. Length of the document
  3. Document frequency (df) of the term, which gives higher importance to documents containing terms that are rare in the collection
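
The following small illustration (my own, over an assumed toy corpus) computes the three factors for a single term:

```python
# Compute term frequency, document length, and document frequency
# for one term over a two-document toy corpus.
from collections import Counter

docs = [
    "nadal wins the french open".split(),
    "federer and nadal meet in the final".split(),
]

term = "nadal"
tf = [Counter(d)[term] for d in docs]          # term frequency per document
doc_len = [len(d) for d in docs]               # document lengths
df = sum(1 for d in docs if term in d)         # document frequency of the term

print(f"tf={tf}, doc_len={doc_len}, df={df}")  # tf=[1, 1], doc_len=[5, 7], df=2
```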

Most term weighting models apply either term frequency based normalization or document length based normalization to the terms. Both of these approaches have limitations. For instance, length based normalization may penalize the retrieval of long documents.
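
For reference, the classical remedy proposed in [2] is pivoted document length normalization, which rotates the normalizer around the average document length. Below is a minimal sketch; the slope of 0.2 is an assumed (commonly quoted) default, not a value from the post.

```python
# Sketch of pivoted document length normalization [2]: the normalizer is
# pivoted at the average document length, so long documents are penalized
# less aggressively than by dividing by raw length. slope=0.2 is assumed.
def pivoted_norm(doc_len: float, avg_doc_len: float, slope: float = 0.2) -> float:
    return (1.0 - slope) + slope * (doc_len / avg_doc_len)

# A tf weight divided by this factor is boosted for short documents
# and dampened for long ones.
print(pivoted_norm(50, 100))   # 0.9  (short document: weight boosted)
print(pivoted_norm(200, 100))  # 1.2  (long document: weight dampened)
```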

Paik proposes a two aspect term frequency normalization scheme that is a weighted combination of
  1. relative tf weighting within a document (RITF, Relative Intra-document Term Frequency) and
  2. tf normalization based on document length (LRTF, Length Regularized Term Frequency)

to compute a TF-IDF score for terms, resulting in the formulation:

TF(t, D) = w * RITF(t, D) + (1 - w) * LRTF(t, D), where 0 < w < 1

Such a scheme maintains a balance between preferring long and short documents. The task then comes down to determining an appropriate value for the weight w, which depends on the query length. As has been observed, for a long query RITF prefers long documents, since the number of term matches is proportional to the length of the document [2]. On the other hand, as discussed before, LRTF, being a length based normalization, prefers short documents. Hence, to balance out the two effects, more weight should be assigned to the RITF factor for short queries, and more weight to LRTF for long queries. A sketch combining these pieces follows.
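
Putting the pieces together, below is a minimal sketch of the two aspect scheme in Python. The RITF, LRTF, and query-length weight formulas follow my reading of Paik's paper [1] and should be treated as assumptions; the published MATF model additionally smooths these factors and multiplies in an IDF-like term discrimination factor, which is omitted here.

```python
# Sketch of the two-aspect TF normalization (my reading of [1]; the exact
# smoothing and IDF-like factor of the published MATF model are omitted).
import math
from collections import Counter

def ritf(tf: int, avg_tf: float) -> float:
    # Relative intra-document TF: tf relative to the document's average tf.
    return math.log2(1 + tf) / math.log2(1 + avg_tf)

def lrtf(tf: int, doc_len: int, avg_doc_len: float) -> float:
    # Length regularized TF: raw tf dampened for long documents.
    return tf * math.log2(1 + avg_doc_len / doc_len)

def query_weight(query_len: int) -> float:
    # Query length factor as I recall it from [1]: w = 1 for a one-term
    # query (all weight on RITF) and decreases for verbose queries,
    # shifting weight to LRTF.
    return 2.0 / (1 + math.log2(1 + query_len))

def tf_score(term: str, doc_tokens: list, query_tokens: list,
             avg_doc_len: float) -> float:
    counts = Counter(doc_tokens)
    tf = counts[term]
    if tf == 0:
        return 0.0
    avg_tf = sum(counts.values()) / len(counts)   # mean tf in this document
    w = query_weight(len(query_tokens))
    return w * ritf(tf, avg_tf) + (1 - w) * lrtf(tf, len(doc_tokens), avg_doc_len)
```

To rank documents for a query Q, one would multiply TF(t, D) by an IDF-style factor, sum over the query terms t, and sort the documents by the resulting score.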

While MATF (Multi Aspect TF), as Paik calls the resulting model, is a TF-IDF based model, language models and probabilistic models are other options for generating content-based recommendation rankings.

References:
[1] Paik, Jiaul H. "A novel TF-IDF weighting scheme for effective ranking." Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval. ACM, 2013.
[2] Singhal, Amit, Chris Buckley, and Mandar Mitra. "Pivoted document length normalization." ACM SIGIR Forum. Vol. 51. No. 2. ACM, 2017.
