The problem of evaluating automated large-scale evidence aggregators
In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how this evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the appropriate balance between explicitly coded algorithms and the implicit reasoning involved, for instance, in the packaging of input evidence? In short: What is the optimal degree of ‘automation’? On the positive side: We propose the ability to perform an adequate robustness analysis (which depends on the nature of the input variables and parameters of the aggregator) as the focal criterion, primarily because it directs efforts to what is most important, namely, the structure of the algorithm and the appropriate extent of automation. Moreover, where there are resource constraints on the aggregation process, one must also consider what balance between volume of evidence and sophistication of the aggregation method should be struck.
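To make the focal criterion concrete, the following is a minimal sketch of a robustness analysis over an aggregator’s parameters. The aggregator here is a toy inverse-variance pooler with a hypothetical quality-weight exponent as its free parameter; it is not Hunter and Williams’ method, and the study data are invented for illustration. The point is only the structure of the check: sweep the aggregator’s parameter settings and ask whether the policy-relevant conclusion survives all of them.

```python
# Hypothetical sketch: robustness analysis of a toy evidence aggregator.
# Not Hunter and Williams' method; all names and data are illustrative.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Study:
    effect: float    # estimated effect size (e.g., log odds ratio)
    variance: float  # variance of the estimate
    quality: float   # assumed study-quality score in (0, 1]

def aggregate(studies: List[Study], quality_exponent: float) -> float:
    """Pool effects with inverse-variance weights, discounted by a
    quality score raised to a tunable exponent (the free parameter)."""
    weights = [(1.0 / s.variance) * (s.quality ** quality_exponent)
               for s in studies]
    return sum(w * s.effect for w, s in zip(weights, studies)) / sum(weights)

def robustness_check(studies: List[Study],
                     exponents: List[float]) -> Tuple[bool, List[float]]:
    """Sweep the free parameter and report whether the policy-relevant
    conclusion (here: pooled effect > 0) is the same at every setting."""
    pooled = [aggregate(studies, e) for e in exponents]
    conclusions = [p > 0 for p in pooled]
    stable = all(conclusions) or not any(conclusions)
    return stable, pooled

studies = [
    Study(effect=0.30, variance=0.02, quality=0.9),
    Study(effect=0.10, variance=0.05, quality=0.6),
    Study(effect=-0.20, variance=0.04, quality=0.3),  # discordant study
]

stable, pooled = robustness_check(studies, exponents=[0.0, 0.5, 1.0, 2.0, 4.0])
print("pooled estimates:", [round(p, 3) for p in pooled])
print("conclusion robust across parameter settings:", stable)
```

Note that whether such a check is even possible depends on the aggregator exposing its parameters: the more of the reasoning that is packaged implicitly into the input evidence, the less there is to sweep, which is one way of seeing why the degree of automation matters for evaluability.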
| Item Type | Article |
|---|---|
| Keywords | evidence aggregation, evidence-based medicine, statistical meta-analysis, robustness analysis |
| Departments | Philosophy, Logic and Scientific Method |
| DOI | 10.1007/s11229-017-1627-1 |
| Date Deposited | 23 Jan 2018 11:50 |
| URI | https://researchonline.lse.ac.uk/id/eprint/86497 |
