Friday 13 June 2014

Tragedy of the Anticommons: reviewer veto edition

Shaun Hendy reports on an accidental experiment in how the number of referees affects grant decisions.
In mid-2012, the newly formed Ministry of Business, Innovation and Employment (MBIE) conducted such a trial, albeit by accident. When it assessed the quality of the funding proposals it had received, the Ministry failed to ensure that each proposal received an equal number of external peer reviews. Some proposals received just a single peer review, while others received up to four.
As I wrote in a post last year, this exposed its funding decisions to a potential bias. Even if two proposals were of equal merit, the proposal that by chance received more reviews would also be more likely to attract at least one negative review. A cautious Ministry might be reluctant to fund proposals that received a negative review, even if all the others were positive. Proposals that received more reviews would then be less likely to be funded than equally good proposals that, by chance, received fewer.
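To make the mechanism concrete, here is a minimal sketch (mine, not Hendy's), assuming a hypothetical 20% chance that any single review comes back negative and a Ministry that declines anything with at least one negative review:

```python
# Reviewer-veto sketch: the 20% negative-review rate is an assumed figure,
# not something reported by MBIE or Hendy.
p_negative = 0.20

for n_reviews in range(1, 5):
    # Probability that a proposal attracts at least one negative review.
    p_vetoed = 1 - (1 - p_negative) ** n_reviews
    print(f"{n_reviews} review(s): chance of at least one negative = {p_vetoed:.0%}")
```

Under those assumed numbers, an otherwise identical proposal draws a veto 20% of the time with one review but 59% of the time with four, which is exactly the bias at issue.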
Indeed, more than a third of the proposals that received only one review were funded, while only a quarter of those that received two or more were successful. Was MBIE too conservative in its funding decisions?
To answer this question, we need to know how likely it is that such a gap could have arisen by chance in the absence of bias. It turns out that, without bias, about one in every twelve funding rounds would produce a result this skewed, so while one might be suspicious, the data does not allow us to draw a solid statistical conclusion. Nevertheless, this example illustrates how we might use randomness to evaluate the effectiveness of our decision-making processes.
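Hendy's one-in-twelve is the kind of number you can check with a simple permutation test: hold fixed how many reviews each proposal received, reshuffle which proposals were funded, and count how often luck alone produces a funding-rate gap at least as large as the one observed. A rough sketch, using made-up proposal counts because the post doesn't report the raw numbers:

```python
import random

# Hypothetical counts -- the post does not report the actual MBIE numbers.
reviews = [1] * 60 + [2] * 120     # 60 proposals with one review, 120 with two or more
n_funded = 50                      # total proposals funded across the round

def success_gap(funded_flags):
    """Funding-rate gap: single-review proposals minus multi-review proposals."""
    single = [f for r, f in zip(reviews, funded_flags) if r == 1]
    multi = [f for r, f in zip(reviews, funded_flags) if r > 1]
    return sum(single) / len(single) - sum(multi) / len(multi)

observed_gap = 1 / 3 - 1 / 4       # roughly the gap reported above

flags = [1] * n_funded + [0] * (len(reviews) - n_funded)
exceed = 0
trials = 20_000
for _ in range(trials):
    random.shuffle(flags)          # funding assigned with no regard to review count
    if success_gap(flags) >= observed_gap:
        exceed += 1

print(f"Share of shuffles with a gap at least this large: {exceed / trials:.1%}")
```

With the real MBIE counts plugged in, the share of shuffles clearing that bar is what the one-in-twelve figure corresponds to.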
Hendy's post is excellent throughout, including some attempts to assess the productivity of Marsden grants. He writes:
Closer to home, Adam Jaffe, Director of Wellington-based Motu Economic and Public Policy Research, is currently undertaking a similar study of Marsden-funded projects. Using the regression discontinuity method, Jaffe is comparing the subsequent academic performance of those who just made it over the threshold for funding to that of those who just missed out, on the assumption that differences in quality between the proposals and teams being compared will be small. Proposals that just missed out on funding are effectively being used as a control group for those that just made it.
Once the study is complete, Jaffe will be in a position to estimate the scientific impact that a Marsden grant generates. If he finds that the Marsden allocation process suffers from the same problems as that of the NIH, the fund may be able to take steps to improve this process and thereby increase its impact.
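The regression discontinuity idea is simple enough to sketch in a few lines. This is my illustration of the general technique, not Jaffe's code or data: compare average subsequent output for proposals scoring just above the funding cutoff with those just below it, within a narrow band around the threshold.

```python
# Regression-discontinuity sketch with toy data; the scores, cutoff, and
# output measure are all illustrative, not from the Marsden study.
def rd_estimate(proposals, cutoff, bandwidth):
    """proposals: (panel score, subsequent output) pairs; returns the jump at the cutoff."""
    near = [(s, y) for s, y in proposals if abs(s - cutoff) <= bandwidth]
    funded = [y for s, y in near if s >= cutoff]
    unfunded = [y for s, y in near if s < cutoff]
    return sum(funded) / len(funded) - sum(unfunded) / len(unfunded)

toy = [(4.7, 5), (4.8, 6), (4.9, 7), (5.0, 9), (5.1, 10), (5.2, 11)]
print(rd_estimate(toy, cutoff=5.0, bandwidth=0.3))  # jump in output at the threshold
```

A serious implementation fits local regressions on each side of the cutoff rather than comparing raw means, but the logic is the same: proposals just below the line stand in as the control group for those just above it.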
While that could tell us the output effect of winning a Marsden grant, it would be a big mistake to scale up from that figure to the overall effect of the Marsden system. Why? The also-rans invested enormous effort in the grant-seeking process, effort that is sunk when the bid fails and that drags down their subsequent output. Measuring winners against that control group therefore starts by overestimating the grant's effect: the relevant counterfactual ought to be equally capable research teams that didn't bother applying for Marsden at all.

Second, even if there were a strong productivity effect for the teams that do receive funding, the system as a whole could still be a waste: if the total investment in grant-seeking exceeds the returns from the funded projects, the system does net harm. One study found that Canada's Natural Science and Engineering Research Council grants do net harm on this kind of basis.

I've watched colleagues put weeks of work into Marsden applications that went nowhere. I've seen the College run huge, time-consuming workshops on how to improve your Marsden application. Outcomes, at least in Marsden's Economics section, seem to be a random draw weighted towards prior recipients. After being part of three unsuccessful Marsden applications from the Department and seeing how the process seemed to work, I concluded it was a mug's game and not worth the effort. But because universities put so much more weight on Marsden grants than on industry funding, people still invest in those lottery tickets.

It would be interesting to calculate NZ academia's collective person-hours invested in Marsden applications, both by applicants and by the reviewers. That time cost has to be part of any honest assessment of the system.
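A back-of-envelope version of that calculation might look like the sketch below; every figure in it is a guess for illustration, not a measured number.

```python
# Back-of-envelope estimate of the time sunk into one Marsden round.
# Every figure here is an assumption for illustration, not measured data.
applications = 1200          # proposals in a round (guess)
hours_per_application = 80   # applicant-team hours per proposal (guess)
reviews_per_application = 3  # external reviews plus panel reading per proposal (guess)
hours_per_review = 6         # reviewer hours per proposal (guess)

applicant_hours = applications * hours_per_application
reviewer_hours = applications * reviews_per_application * hours_per_review
total_hours = applicant_hours + reviewer_hours

print(f"Applicant hours: {applicant_hours:,}")
print(f"Reviewer hours:  {reviewer_hours:,}")
print(f"Total person-hours per round: {total_hours:,}")
print(f"Full-time person-years (1,800 h): {total_hours / 1800:.0f}")
```

Under those assumed figures a single round consumes the equivalent of several dozen full-time researcher-years, which is the kind of cost any honest assessment has to set against whatever output effect Jaffe's study finds.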
