Addendum: Time falloff

Tracking badness through academic publishing
6 Mar 2014, 10 a.m.

It's just occurred to me that some of the potential issues with the Badness score could be reduced by having the algorithm take into account how far apart in time connections are.

For example, if a journal published a bunch of terrible papers back in 1980, this is arguably a lot less damning than if they did so in 2012. Editorial boards change, standards can improve, and what was once considered acceptable (such as non-evidence-based medicine) may now be frowned upon.
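To make that concrete, here's a minimal sketch of the sort of falloff I have in mind, assuming each Badness contribution comes with a year attached; the half-life value is entirely made up:

```python
def decayed_badness(badness, event_year, current_year, half_life=10.0):
    """Discount a Badness contribution by how long ago it happened.
    The half-life (a made-up 10 years here) controls how quickly
    old sins are forgiven."""
    age = max(0, current_year - event_year)
    return badness * 0.5 ** (age / half_life)

# A terrible paper from 1980 counts for far less in 2014 than one from 2012:
print(decayed_badness(1.0, 1980, 2014))  # ~0.095
print(decayed_badness(1.0, 2012, 2014))  # ~0.871
```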

Also, to what degree should the algorithm contaminate things backwards in time? The specific scenario that has me worried is that respected researcher Xavier may be wary of collaborating with fresh-on-the-scene researcher Yvonne, for fear that Yvonne is later implicated in something bad, perhaps by co-authoring a paper with researcher Zeno that turns out to be based on data Zeno faked.

Restricting the algorithm to contaminating nodes only forward in time averts this scenario. On the other hand, this may be too restrictive: if Yvonne goes on to have a malodorous career of dubious data and incorrect conclusions, it may turn out that her contribution to the work with Xavier was actually also rotten.
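A sketch of how forward-only contamination might work, assuming (simplistically) that the network is just a pile of dated collaboration edges:

```python
from collections import deque

def propagate_forward(edges, seeds):
    """edges: (node_a, node_b, year) collaborations; seeds: {node: year
    its badness arose}. Badness spreads only across edges dated at or
    after the year the source node became tainted."""
    tainted = dict(seeds)  # node -> earliest year it became tainted
    queue = deque(seeds.items())
    while queue:
        node, since = queue.popleft()
        for a, b, year in edges:
            if year < since:
                continue  # edge predates the taint: no backward spread
            for u, v in ((a, b), (b, a)):
                if u == node and (v not in tainted or year < tainted[v]):
                    tainted[v] = year
                    queue.append((v, year))
    return tainted

# Zeno's 2013 faked data taints Yvonne, but not the 2010 paper with Xavier:
edges = [("Xavier", "Yvonne", 2010), ("Yvonne", "Zeno", 2013)]
print(propagate_forward(edges, {"Zeno": 2013}))
# {'Zeno': 2013, 'Yvonne': 2013} -- Xavier stays clean
```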

Other, more complex schemes are possible, of course. A threshold could be introduced, where a small amount of Badness is essentially ignored; if the collaboration with Zeno was her only mistake, Yvonne's record stays spotless. Or contamination backwards in time could stop at two hops across the network: the bad Zeno paper contaminates Yvonne, and Yvonne contaminates the Xavier paper, but that paper does not contaminate Xavier, as that would be a third hop.
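Both tweaks are easy enough to sketch together; the threshold, per-hop damping, and graph shape below are pulled out of thin air:

```python
THRESHOLD = 0.1      # badness below this is essentially ignored
DAMPING = 0.5        # each hop carries half the badness onward
MAX_BACK_HOPS = 2    # backward contamination stops after two hops

def contaminate_backward(graph, source, badness):
    """graph: {node: set of neighbours}. Spread badness backward from a
    bad paper, stopping past MAX_BACK_HOPS or below THRESHOLD."""
    scores = {}
    frontier = [(source, badness, 0)]
    while frontier:
        node, amount, hops = frontier.pop()
        if amount < THRESHOLD or hops > MAX_BACK_HOPS:
            continue
        scores[node] = scores.get(node, 0.0) + amount
        for nb in graph.get(node, ()):
            if nb not in scores:
                frontier.append((nb, amount * DAMPING, hops + 1))
    return scores

# The bad Zeno paper taints Yvonne (hop 1) and the Xavier paper (hop 2),
# but not Xavier himself, which would be a third hop:
graph = {
    "zeno_paper": {"Yvonne"},
    "Yvonne": {"xavier_paper"},
    "xavier_paper": {"Xavier"},
}
print(contaminate_backward(graph, "zeno_paper", 1.0))
# {'zeno_paper': 1.0, 'Yvonne': 0.5, 'xavier_paper': 0.25}
```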

I guess the thing to do would be to try this out on a well-defined set of papers from some field where there's a well-known set of Bad Stuff to seed the algorithm from...