# A weighted similarity measure for non-conjoint rankings

Having an article in submission sharpens one’s appreciation for situations to which the article applies; and the ACM’s leisurely eighteen-month-plus turnaround on journal articles gives plenty of time for that appreciation to develop. Since early 2009, it has seemed to me that every third retrieval paper either has, or should have, compared non-conjoint rankings for similarity. The most common instances of such rankings are the document lists produced by different retrieval methods, but numerous other examples appear: the frequency-ordered vocabularies of different corpora; the most-common queries from different logs or periods; the weighted connections of different individuals in a social network; and so forth. Indeed, large-domain rankings having the features of top-weightedness, non-conjointness, and arbitrary truncation — what I term indefinite rankings — seem more common than the unweighted, conjoint, complete rankings assumed by traditional rank similarity measures such as Kendall’s $\tau$. And yet there are few if any suitable measures described in the literature for comparing such rankings.
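For concreteness, RBO scores the agreement $A_d = |S_{:d} \cap T_{:d}|/d$ between the depth-$d$ prefixes of the two rankings, geometrically discounted by a persistence parameter $p$: $\mathrm{RBO} = (1-p)\sum_{d \geq 1} p^{d-1} A_d$. The following is a minimal C sketch of the sum truncated at depth $k$ — not the implementation announced here; the function names are my own, and the $O(k^2)$ prefix scans are chosen for clarity rather than efficiency:

```c
#include <string.h>

/* Illustrative helper (not part of the released code): is `item`
 * among the first `depth` entries of `list`? */
static int in_prefix(const char *const *list, int depth, const char *item)
{
    for (int i = 0; i < depth; i++)
        if (strcmp(list[i], item) == 0)
            return 1;
    return 0;
}

/* Truncated RBO: (1-p) * sum_{d=1..k} p^{d-1} * A_d, where A_d is the
 * size of the intersection of the depth-d prefixes, divided by d.
 * The overlap count is maintained incrementally: at depth d the only
 * elements that can newly enter the intersection are s[d-1] and t[d-1]. */
double rbo_truncated(const char *const *s, const char *const *t,
                     int k, double p)
{
    double sum = 0.0, weight = 1.0; /* weight = p^{d-1} */
    int overlap = 0;
    for (int d = 1; d <= k; d++) {
        if (in_prefix(t, d, s[d - 1]))
            overlap++;
        if (in_prefix(s, d, t[d - 1]) && strcmp(s[d - 1], t[d - 1]) != 0)
            overlap++; /* avoid double-counting when both new items match */
        sum += weight * (double)overlap / d;
        weight *= p;
    }
    return (1.0 - p) * sum;
}
```

Note that the truncated sum deliberately leaves the tail weight $\left(\sum_{d > k} p^{d-1}\right)$ unassigned, which is why even two identical depth-$k$ rankings score below one; the article discusses how to bound or extrapolate that residual.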

I have put together an implementation of RBO (in C). The implementation is somewhat unpolished, although efficient. I hope to add other rank similarity metrics in future; included so far are only Kendall’s $\tau$ (in an $O(n \log n)$ implementation, compared to R’s $O(n^2)$) and a metric we called Average Overlap, developed in parallel by two different research groups (under different names). Comments and suggestions are, as ever, welcome.
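Average Overlap is, in effect, the undiscounted counterpart of RBO: the plain mean of the prefix agreements $A_d$ over depths $1 \ldots k$. A sketch under the same assumptions as above (my own function names, not the package’s interface):

```c
#include <string.h>

/* Illustrative helper: is `item` among the first `depth` entries of `list`? */
static int contains(const char *const *list, int depth, const char *item)
{
    for (int i = 0; i < depth; i++)
        if (strcmp(list[i], item) == 0)
            return 1;
    return 0;
}

/* Average Overlap to depth k: (1/k) * sum_{d=1..k} A_d, with
 * A_d = |S_{:d} intersect T_{:d}| / d.  As with the RBO sketch, the
 * overlap is updated incrementally from the two items new at depth d. */
double average_overlap(const char *const *s, const char *const *t, int k)
{
    double sum = 0.0;
    int overlap = 0;
    for (int d = 1; d <= k; d++) {
        if (contains(t, d, s[d - 1]))
            overlap++;
        if (contains(s, d, t[d - 1]) && strcmp(s[d - 1], t[d - 1]) != 0)
            overlap++; /* avoid double-counting an item new to both prefixes */
        sum += (double)overlap / d;
    }
    return sum / k;
}
```

Unlike RBO, Average Overlap gives every depth equal weight, so two identical rankings score exactly one at any evaluation depth; the cost is that it has no principled way of handling the unseen tail beyond the truncation point.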