TREC 2010 Roundup

Back from another successful TREC conference on the NIST campus. 2010 is a transition year, seeing the end of old tracks and the proposal of new ones. Indeed, TREC is moving with the times, looking at new data sources and test collections, as well as new evaluation strategies.
Outwith the old . . .
For example, TREC 2010 marks the end of the Relevance Feedback and Blog tracks. While TREC 2010 will be the last year of the Relevance Feedback track, the Blog track, which has been running for the last 5 years, is now morphing into a new Microblog track, investigating real-time and social search tasks in Twitter. A brand new test collection, possibly containing 2 months of tweets, is planned, with linked web pages and a partial follower graph. Join the Microblog track Google group to receive the latest updates, and follow the Microblog track on Twitter.

TREC 2011 will also witness the initiation of the new Medical Records track, dedicated to investigating approaches for accessing the free-text fields of electronic medical records.
On the test collection front, the Web track is also planning ahead for a new large-scale dataset to replace ClueWeb09. Indications are that this new dataset will be about the same scale as ClueWeb09, but might provide more temporal information (multiple versions of a page or site over time). Moreover, we have suggested that this might be the heart of a larger dataset comprising multiple parallel/aligned corpora, for example blogs and news feeds covering the same timeframe.
TREC Assessors, Relevant?
In terms of evaluation, 2010 marks the first year in which evaluation judgments were crowdsourced using an online worker marketplace, as opposed to relying on TREC assessors, the participants themselves, or a select group of experts. Indeed, both the Blog track and the Relevance Feedback track crowdsourced some of their evaluation (although the Relevance Feedback track suffered many setbacks, and its crowdsourcing process is still incomplete). Furthermore, to investigate the challenges in this new field of crowdsourcing, a specific Crowdsourcing track has been created and will run in 2011. More details can be found here.
Themes
As usual, themes emerged within the various tracks. Learned approaches were far more prevalent this year, now that training data was available for the ClueWeb09 dataset. Indeed, the Web track was dominated by trained models, mostly based on link and proximity search features. Diversification, on the other hand, remains a challenging task, with the top groups leaving their initial rankings as is. An outstanding exception is our own approach using the xQuAD framework under a selective diversification regime, which further improves our strongly performing ad hoc baseline. Craig Macdonald presented our work in the Web track plenary session.
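For readers unfamiliar with xQuAD, the core greedy re-ranking idea can be sketched as follows. This is a simplified illustration rather than our actual Web track configuration: the relevance probabilities, sub-query weights, and the interpolation parameter below are made-up toy values.

```python
# Sketch of xQuAD-style greedy diversification: a document's score mixes
# its relevance to the original query with how well it covers sub-queries
# (query aspects) not yet covered by the documents selected so far.
# All probabilities here are illustrative assumptions.

def xquad(rel, subqueries, candidates, k, lam=0.5):
    """rel: dict mapping (doc, query_or_subquery) -> relevance probability.
    subqueries: dict mapping sub-query id -> P(sub-query | query).
    Greedily builds a diversified ranking of length k."""
    selected = []
    pool = list(candidates)
    # Per sub-query, the probability that none of the already-selected
    # documents satisfies it (all start fully uncovered).
    not_covered = {s: 1.0 for s in subqueries}
    while pool and len(selected) < k:
        def score(d):
            diversity = sum(w * rel.get((d, s), 0.0) * not_covered[s]
                            for s, w in subqueries.items())
            return (1 - lam) * rel.get((d, "q"), 0.0) + lam * diversity
        best = max(pool, key=score)
        pool.remove(best)
        selected.append(best)
        # Selecting `best` reduces how uncovered each sub-query remains.
        for s in subqueries:
            not_covered[s] *= 1 - rel.get((best, s), 0.0)
    return selected
```

With two documents covering one aspect and a third covering another, the greedy step promotes the document for the uncovered aspect above a higher-scoring redundant one, which is the behaviour that distinguishes a diversified ranking from the initial ad hoc one.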
In the Blog track, voting model-based and language modelling approaches proved popular for blog distillation. For faceted blog ranking, participants employed variants of facet dictionaries, either to train a classifier or as features for learning. For the top news task, participants deployed a wide variety of methods to rank news stories in a real-time setting, from probabilistic modelling to blog post voting with historical evidence. Richard McCreadie presented our work on the Blog track as a poster during TREC 2010, which attracted very interesting discussions.
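The voting intuition behind blog distillation can be sketched in a few lines: retrieve posts for the query, let each retrieved post cast a vote for its parent blog, and aggregate the votes into a blog ranking. The CombMNZ-style aggregation below (summed post scores scaled by the number of voting posts) is one illustrative choice among the voting techniques in the literature, not the exact configuration used in any particular track run.

```python
from collections import defaultdict

# Hedged sketch of a voting model for blog distillation: posts retrieved
# for a query vote for the blogs they belong to, and blogs are ranked by
# an aggregate of those votes (here, a CombMNZ-style aggregation).

def vote_for_blogs(post_ranking, post_to_blog):
    """post_ranking: list of (post_id, retrieval_score), best first.
    post_to_blog: dict mapping post_id -> blog_id.
    Returns (blog_score, blog_id) pairs, highest score first."""
    scores = defaultdict(float)   # summed scores of a blog's voting posts
    votes = defaultdict(int)      # number of posts that voted for the blog
    for post, score in post_ranking:
        blog = post_to_blog[post]
        scores[blog] += score
        votes[blog] += 1
    # CombMNZ-style: total score multiplied by the number of voters, so
    # blogs with many on-topic posts are preferred over one-hit blogs.
    return sorted(((scores[b] * votes[b], b) for b in scores), reverse=True)
```

A blog with two moderately scoring posts can thus outrank a blog with a single higher-scoring post, capturing the distillation goal of finding blogs with a recurring interest in the topic rather than a single relevant entry.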
During the TREC conference, Iadh Ounis, Richard McCreadie and others did a fair amount of tweeting. You can follow some bits of the TREC conference through the #trec2010 hashtag.