Evaluating research on its own merits
How can we tell which scientific findings are credible? Peer-reviewed journals, even prestigious ones, do not provide much assurance regarding the credibility of any individual report. Ideally, we would read each report carefully when deciding what to trust, but this is often impossible (e.g., when we lack the expertise to evaluate the methods) or impractical (e.g., when we need to evaluate research at scale). Moreover, rather than each of us making private judgments, we would all benefit from collecting and sharing evaluations from a range of experts with different areas of expertise and different blind spots and biases.

The ideal would be to validate a rubric for eliciting structured quantitative ratings of quality along a wide range of dimensions, and to collect and make publicly available ratings from many different and diverse experts. These scores could be combined into a variety of metrics, or “Quality Factors” (QFs), that vary in the weight placed on different qualities. These QFs would provide easily digestible and flexible quality ratings of individual scientific papers that could be useful to other scientists, to journalists and policymakers, and to the public. QFs would also help incentivize authors to “get it right” rather than just get published in prestigious journals, because rewards and recognition could be tied to these more transparent, accountable, and valid metrics rather than to journal prestige.

In this talk, I discuss what this could look like for my home discipline of psychology, and describe some progress toward producing Quality Factors for psychology papers.
Simine Vazire is an associate professor in the department of psychology at the University of Melbourne. She is the director of the Personality and Self-Knowledge laboratory. She is the co-founder and current president of the Society for the Improvement of Psychological Science, a senior editor at Collabra: Psychology, and editor-in-chief of Social Psychological and Personality Science. Her research is funded by the National Science Foundation and examines accuracy and bias in people’s perceptions of their own behavior and personality. She also conducts metascientific research examining how people interpret scientific findings and tracking trends in the methods and results of published studies in psychology over time. She teaches and blogs about research methods and reproducibility.