21 May 2020
20 May 2020
Large-scale replication projects in medicine and the social sciences have shown that studies often yield different results when they are repeated. This problem is driven by problematic aspects of academic culture: a bias towards publishing results that are novel and ‘find something’, together with the pressure to publish frequently, drives researchers to engage in questionable research practices. These practices increase the number of results in the literature that don’t represent real-world effects or relationships. Conducting replication studies allows us to investigate which portions of the literature could form a reliable basis for decisions and which need further study. Hannah will discuss some of her findings on researchers’ use of questionable research practices and their views of replication studies, as well as some of her ongoing and prospective work on replication in ecology.
4 May 2020
(Lecture, Science and Pseudoscience subject, University of Melbourne)
22 Feb 2020
7 Sep 2019
P-values are frequently misinterpreted. Confidence intervals are too. So are Bayesian statistics. Sometimes this simple equivalence is used as an argument that statistical cognition shouldn’t play a role in deciding which analysis approach to adopt in practice, or to teach to students. But are misinterpretations of these different representations of statistical evidence equally severe? Do they have the same consequences in practice? In this talk I’ll present the limited empirical evidence we have on these questions so far, and suggest that, at the very least, we don’t know enough to assume Abelson’s law yet, i.e., “Under the law of the diffusion of idiocy, every foolish application of significance testing is sooner or later going to be translated into a corresponding foolish practice for confidence limits” (Abelson, 1997, p. 130). There may be other sound reasons, technical or philosophical, to reject one approach or another, but we shouldn’t (yet) consider them cognitively equivalent.