Dept. of Psychology and Cognitive Sciences, University of Trento

Methodological workshop - May 20, 2016

How to get it right: why you should think twice before planning your next study

Marco Perugini and Luigi Lombardi

Workshop description:

In the first part of this workshop, a basic introduction is provided to the different types of power analysis (post hoc power analysis, a priori power analysis, sensitivity analysis, criterion analysis), together with some dangerous and pervasive fallacies about power calculations and their interpretation in applied data analysis. Next, a comprehensive framework based on generalized linear mixed-effects models for efficiently computing empirical effect sizes is introduced. Finally, a simple statistical procedure for assessing Type S (Sign) and Type M (Magnitude) errors is described. All the procedures presented are illustrated with dedicated R scripts.
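The Type S/Type M assessment follows the design-analysis logic of Gelman and Carlin (2014). The workshop illustrates it with dedicated R scripts; as a rough sketch of the same idea, the hypothetical Python function below (the name `retrodesign` is borrowed from Gelman and Carlin's paper, but this implementation is mine) estimates power, the Type S rate, and the Type M (exaggeration) factor by simulation, assuming a normally distributed estimate with known standard error.

```python
import random
from statistics import NormalDist

def retrodesign(true_effect, se, alpha=0.05, n_sims=100_000, seed=1):
    """Monte Carlo design analysis in the spirit of Gelman & Carlin (2014).

    Given a hypothesized true effect and the standard error of its estimate,
    return statistical power, the Type S rate (probability that a
    statistically significant estimate has the wrong sign), and the Type M
    factor (expected exaggeration of a significant estimate)."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. ~1.96 for alpha = .05
    # Simulate repeated estimates scattered around the true effect.
    estimates = [rng.gauss(true_effect, se) for _ in range(n_sims)]
    significant = [e for e in estimates if abs(e / se) > z_crit]
    power = len(significant) / n_sims
    # Type S: among significant results, how often is the sign wrong?
    type_s = sum(e * true_effect < 0 for e in significant) / len(significant)
    # Type M: by how much does a significant estimate exaggerate the effect?
    type_m = sum(abs(e) for e in significant) / len(significant) / abs(true_effect)
    return power, type_s, type_m

# A small true effect (0.1) measured with a noisy design (se = 0.5):
# low power, a non-trivial Type S risk, and a large exaggeration factor.
power, type_s, type_m = retrodesign(true_effect=0.1, se=0.5)
```

The point of such a design analysis is that it can be run before data collection, using only a plausible effect size and the expected standard error, to see what a "significant" result from the planned design would actually be worth.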

In the second part of the workshop, the main focus is on how and when an inference from results is more likely to be correct. Starting from the replicability crisis in psychology, conditions for replicable results are set out. Basic statistical concepts are reviewed, with an emphasis on their implications for correct inference. After clarifying the pivotal role of power and precision for correct inferences, strategies to increase both in a specific study are reviewed. These strategies include both a priori choices (i.e., methodological features of the study) and post hoc choices (i.e., statistical analyses). Appropriate use of this methodological and statistical toolbox should allow researchers to increase the likelihood that a given inference is correct, that is, that they get it right.
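As a minimal illustration of the a priori side of this toolbox, the sketch below (Python rather than the R used in the workshop; the function name `n_per_group` is mine) computes the per-group sample size needed to detect a standardized effect d in a two-group comparison, using the standard normal-approximation formula n = 2 * ((z_alpha + z_beta) / d)^2.

```python
import math
from statistics import NormalDist

def n_per_group(d, power=0.80, alpha=0.05):
    """Per-group sample size for a two-sample comparison of standardized
    effect size d, via the normal approximation n = 2*((z_a + z_b)/d)**2.
    (An exact t-test calculation, e.g. in G*Power, gives a slightly
    larger n.)"""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided criterion
    z_beta = NormalDist().inv_cdf(power)           # quantile for target power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "medium" effect needs far fewer participants than a "small" one:
n_medium = n_per_group(0.5)  # d = 0.5
n_small = n_per_group(0.2)   # d = 0.2
```

Because published and pilot effect sizes are often overestimated, Perugini, Gallucci, and Costantini (2014) propose planning on the lower bound of a confidence interval around the estimated effect ("safeguard power") rather than on the point estimate; with a helper like the one above, that simply means passing the lower bound as `d`.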

Slides:

Part 1 (pdf), Part 2 (pdf)

References - part one

Hoenig J.M. & Heisey D.M. (2001). The abuse of power: The pervasive fallacy of power calculations for data analysis. The American Statistician, 55, 19-24. (pdf copy).

Faul F., Erdfelder E., Lang A.-G. & Buchner A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175-191. (pdf copy).

Nakagawa S. & Cuthill I.C. (2007). Effect size, confidence interval and statistical significance: a practical guide for biologists. Biological Reviews, 82, 591-605. (pdf copy).

Gelman A. & Carlin J. (2014). Beyond power calculations: assessing Type S (Sign) and Type M (Magnitude) errors. Perspectives on Psychological Science, 9, 641-651. (pdf copy).

A list of papers about fake data analysis according to the SGR approach can be found here: Fake data analysis project.

References - part two

Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76, 105-110. (pdf copy).

Perugini, M., Gallucci, M., & Costantini, G. (2014). Safeguard power as a protection against imprecise power estimates. Perspectives on Psychological Science, 9, 319-332. (pdf copy).

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304-1312. (pdf copy).

Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609-612. (pdf copy).