Why training and review (partly) break control sets

A technology-assisted review (TAR) process frequently begins with the creation of a control set—a set of documents randomly sampled from the collection, and coded by a human expert for relevance. The control set can then be used to estimate the richness (proportion relevant) of the collection, and also to gauge the effectiveness of a predictive […] → Read More: Why training and review (partly) break control sets
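As a quick illustration of the excerpt above, here is a minimal Python sketch of estimating collection richness from a coded control set, with a normal-approximation confidence interval. The function name and the example figures are hypothetical, not taken from the post.

```python
import math

def estimate_richness(control_labels, z=1.96):
    """Estimate collection richness from a randomly sampled control set.

    control_labels: list of 0/1 relevance codes from the human expert.
    Returns the point estimate and a normal-approximation confidence interval.
    """
    n = len(control_labels)
    p_hat = sum(control_labels) / n                 # estimated proportion relevant
    se = math.sqrt(p_hat * (1 - p_hat) / n)         # standard error of the proportion
    return p_hat, (max(0.0, p_hat - z * se), min(1.0, p_hat + z * se))

# Example: 38 relevant documents found in a 1,000-document control set
print(estimate_richness([1] * 38 + [0] * 962))
```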

Total assessment cost with different cost models

In my previous post, I found that relevance and uncertainty selection needed similar numbers of document relevance assessments to achieve a given level of recall. I summarized this by saying the two methods had similar cost. The number of documents assessed, however, is only a very approximate measure of the cost of a review process, […] → Read More: Total assessment cost with different cost models
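A hedged sketch of what a non-uniform cost model might look like, assuming (purely for illustration) that training assessments by a senior reviewer cost more per document than first-pass review assessments. The unit costs and protocol figures below are placeholders, not numbers from the post.

```python
def total_assessment_cost(n_training, n_review,
                          cost_training=5.0, cost_review=1.0):
    """Total cost under a simple two-tier cost model.

    Assumes training documents are coded by a more expensive expert and the
    remaining review documents by cheaper reviewers; unit costs are
    illustrative placeholders.
    """
    return n_training * cost_training + n_review * cost_review

# Two hypothetical protocols that reach the same recall target
print(total_assessment_cost(n_training=2000, n_review=8000))   # 18000.0
print(total_assessment_cost(n_training=500,  n_review=11000))  # 13500.0
```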

Total review cost of training selection methods

My previous post described in some detail the conditions of finite population annotation that apply to e-discovery. To summarize, what we care about (or at least should care about) is not maximizing classifier accuracy in itself, but minimizing the total cost of achieving a target level of recall. The predominant cost in the review stage […] → Read More: Total review cost of training selection methods
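To make the finite-population objective concrete, here is a small sketch that counts how deep a ranked review must go to reach a target level of recall. The function and its parameters are illustrative assumptions, not code from the post.

```python
import math

def docs_to_target_recall(ranking, labels, target_recall=0.8):
    """Count how many documents must be reviewed, in ranked order,
    to reach a target level of recall over the whole collection.

    ranking: document ids sorted by descending predicted relevance.
    labels:  dict mapping document id -> true relevance (0/1).
    """
    total_relevant = sum(labels.values())
    needed = math.ceil(target_recall * total_relevant)
    found = 0
    for depth, doc_id in enumerate(ranking, start=1):
        found += labels[doc_id]
        if found >= needed:
            return depth
    return len(ranking)   # target not reachable within the ranking
```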

Finite population protocols and selection training methods

In a previous post, I compared three methods of selecting training examples for predictive coding—random, uncertainty and relevance. The methods were compared on their efficiency in improving the accuracy of a text classifier; that is, the number of training documents required to achieve a certain level of accuracy (or, conversely, the level of accuracy achieved […] → Read More: Finite population protocols and selection training methods
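For readers unfamiliar with the three selection methods, the sketch below shows one common way each rule can be expressed over a model's predicted relevance scores. It is an illustrative reconstruction, not the experimental code behind the post.

```python
import random

def select_batch(scores, method="uncertainty", batch_size=100, seed=0):
    """Pick the next training batch from the unlabelled documents.

    scores: dict of document id -> predicted probability of relevance.
    method: 'random' (uniform sample), 'uncertainty' (scores nearest 0.5),
            or 'relevance' (highest-scoring documents).
    """
    doc_ids = list(scores)
    if method == "random":
        random.seed(seed)
        return random.sample(doc_ids, batch_size)
    if method == "uncertainty":
        return sorted(doc_ids, key=lambda d: abs(scores[d] - 0.5))[:batch_size]
    if method == "relevance":
        return sorted(doc_ids, key=lambda d: scores[d], reverse=True)[:batch_size]
    raise ValueError(f"unknown method: {method}")
```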

Research topics in e-discovery

Dr. Dave Lewis is visiting us in Melbourne on a short sabbatical, and yesterday he gave an interesting talk at RMIT University on research topics in e-discovery. We also had Dr. Paul Hunter, Principal Research Scientist at FTI Consulting, in the audience, as well as research academics from RMIT and the University of Melbourne, including […] → Read More: Research topics in e-discovery

Random vs active selection of training examples in e-discovery

The problem with agreeing to teach is that you have less time for blogging, and the problem with a hiatus in blogging is that the topic you were in the middle of discussing gets overtaken by questions of more immediate interest. I hope to return to the question of simulating assessor error in a later […] → Read More: Random vs active selection of training examples in e-discovery

Can you train a useful model with incorrect labels?

On this blog, we are in the middle of a series of simulation experiments on the effect of assessor error on text classifier reliability. There is still some way to go with these experiments, but in the meantime the topic has attracted some attention in the blogosphere. Ralph Losey has forcefully reiterated his characterization of […] → Read More: Can you train a useful model with incorrect labels?

Assessor error and term model weights

In my last post, we saw that randomly swapping training labels, in a (simplistic) simulation of the effect of assessor error, leads as expected to a decline in classifier accuracy, with the decline being greater for lower prevalence topics (in part, we surmised, because of the primitive way we were simulating assessor errors). In this […] → Read More: Assessor error and term model weights
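The label-swapping scheme described above can be sketched in a few lines. Note that symmetric flipping on a low-prevalence topic mostly turns irrelevant documents into spurious positives, which may be part of why the post calls the simulation simplistic. The function below is an illustrative assumption, not the post's code.

```python
import random

def flip_labels(labels, error_rate, seed=0):
    """Simulate assessor error by flipping each training label with
    probability `error_rate` (a simplistic, symmetric noise model)."""
    rng = random.Random(seed)
    return [1 - y if rng.random() < error_rate else y for y in labels]

# e.g. 10% simulated error on a low-prevalence (5% richness) topic
noisy = flip_labels([0] * 95 + [1] * 5, error_rate=0.10)
```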

Annotator error and predictive reliability

There has been some interesting recent research on the effect of using unreliable annotators to train a text classification or predictive coding system. Why would you want to do such a thing? Well, the unreliable annotators may be much cheaper than a reliable expert, and by paying for a few more annotations, you might be […] → Read More: Annotator error and predictive reliability

Repeated testing does not necessarily invalidate stopping decision

Thinking recently about the question of sequential testing bias in e-discovery, I’ve realized an important qualification to my previous post on the topic. While repeatedly testing an iteratively trained classifier against a target threshold will lead to optimistic bias in the final estimate of effectiveness, it does not necessarily lead to an optimistic bias in […] → Read More: Repeated testing does not necessarily invalidate stopping decision
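To see where the optimistic bias in the effectiveness estimate comes from, here is a hedged Monte Carlo sketch: recall is repeatedly estimated from fresh samples, and the process stops the first time an estimate clears the threshold, so the estimate recorded at stopping tends to overshoot the true recall. All parameters are illustrative, and the post's argument about the stopping decision itself is not reproduced here.

```python
import random

def sequential_test_simulation(true_recall=0.75, threshold=0.8,
                               sample_size=200, rounds=10,
                               trials=10000, seed=1):
    """Monte Carlo sketch of sequential-testing bias.

    Each trial draws up to `rounds` fresh recall estimates (binomial samples
    of `sample_size` relevant documents) and stops at the first estimate
    at or above `threshold`. Returns the fraction of trials that stopped and
    the mean estimate recorded at stopping, which exceeds the true recall.
    """
    rng = random.Random(seed)
    stops, estimates = 0, []
    for _ in range(trials):
        for _ in range(rounds):
            hits = sum(rng.random() < true_recall for _ in range(sample_size))
            est = hits / sample_size
            if est >= threshold:
                stops += 1
                estimates.append(est)
                break
    mean_est = sum(estimates) / len(estimates) if estimates else float("nan")
    return stops / trials, mean_est

print(sequential_test_simulation())
```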