Finding relevance judgements in the wild


We recently heard that our poster on online forum search was accepted to SIGIR 09, and I've been wanting to post something about the test setup we used in that study.

There's no existing IR test collection for such a task, although some similar datasets do exist. For various reasons we weren't able to create a traditional test collection, with user-issued queries and deep pools of relevance judgements. But this particular dataset, and possibly other online dialog archives, can be mined to produce a ready-made IR test collection.

The users of the online forum we've been looking at frequently include links in their forum posts, often to previous messages and threads in the same forum. These links are often posted in response to a new user's question, referring the user to a previous instance of the same (or similar) question and an answer contributed by another user. Here are a few examples to illustrate my point. Each such interaction among forum users can be treated as a query/relevance-judgement pair: the new question is the query, and the linked thread is a judged-relevant result. See the paper for more details on how we characterize the presence of a question-post/answer-link pair.
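To make the idea concrete, here is a minimal sketch of how question-post/answer-link pairs might be mined from a forum dump. This is not the exact procedure from the paper; the field names (thread_id, posts, body) and the link pattern are assumptions for illustration only.

```python
import re

# Hypothetical link pattern; a real forum would have its own URL scheme.
THREAD_LINK = re.compile(r"https?://forum\.example\.com/thread/(\d+)")

def mine_judgement_pairs(threads):
    """Yield (query_text, linked_thread_id) pairs from a list of threads.

    The first post of each thread is treated as the query (the new user's
    question); any link to a different thread appearing in a later reply is
    treated as a relevance judgement for that linked thread.
    """
    for thread in threads:
        question = thread["posts"][0]["body"]
        for reply in thread["posts"][1:]:
            for target_id in THREAD_LINK.findall(reply["body"]):
                if target_id != thread["thread_id"]:  # skip self-references
                    yield question, target_id
```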

This type of test collection creation does have some distinct advantages over the typical retrieval test collections used at TREC. First, the queries represent real information needs of real users of the online forum. Many TREC queries are pulled from search engine logs, but frequently (as in the Blog Track's Feed Distillation task) the queries are invented by participants or assessors. The information needs expressed in the online forum posts are also much more verbose than typical keyword queries on a web search engine, giving a retrieval system more evidence to use in relevance scoring. The "relevance judgement", provided by another forum user linking to a previous thread, also presents in-situ relevance information: it is sensitive not only to the original question, but also to the overall nature of the forum and the time when the question was asked.

There are several drawbacks inherent in this type of corpus creation, most importantly with regard to the exhaustiveness of the relevance assessment. Typically in TREC-style collection development, ranked results from several retrieval systems are pooled and the pooled documents are assessed for relevance. When the systems' output is sufficiently diverse and relevance assessment is sufficiently deep, this produces a reasonably complete set of relevance judgements for each query: if a relevant document is in the collection, it will most likely be retrieved by one of the systems, admitted into the pool, and judged. The method of collecting relevance judgements we use in our SIGIR poster, on the other hand, will not produce anything close to an exhaustive set of relevant threads. In the great majority of cases, only a single thread is linked to in a subsequent reply message, and there is no guarantee that this thread is the best or only relevant thread in the collection. For this reason, we must take care not to assume that non-judged threads are irrelevant.
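Because only one linked thread is judged per question, one simple way to score a ranking is the reciprocal rank of that known linked thread. This is just a sketch, not necessarily the metric used in the poster, and it keeps in mind that unjudged threads ranked above the linked one are not known to be irrelevant.

```python
def mean_reciprocal_rank(rankings, linked_thread):
    """Mean reciprocal rank of the single judged-relevant (linked) thread.

    rankings:      dict mapping query_id -> list of thread_ids, best first.
    linked_thread: dict mapping query_id -> the one thread linked in a reply.

    Unjudged threads ranked above the linked one are not known to be
    irrelevant, so this is a conservative estimate, not exhaustive truth.
    """
    total = 0.0
    for qid, ranked in rankings.items():
        target = linked_thread[qid]
        if target in ranked:
            total += 1.0 / (ranked.index(target) + 1)
        # a query whose linked thread is never retrieved contributes 0
    return total / len(rankings)

# e.g. mean_reciprocal_rank({"q1": ["t9", "t3", "t7"]}, {"q1": "t3"}) -> 0.5
```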

There are plenty of datasets that seem to be ready-made for classification or regression tasks without any need for annotation: the classic 20 Newsgroups corpus for text classification and Yahoo! Answers for a number of prediction tasks, for example. For relevance ranking, however, I haven't seen any ready-made datasets with real relevance judgements, as opposed to noisy interaction signals such as click-through statistics. Conversation archives like the one we use offer one way to mine behavioral data for relevance judgements, providing ground truth that is preferable in many ways to post-hoc relevance assessment.
