When are people willing to pay for environmental protection? How can students be motivated to perform better? Is there a connection between poverty and the reporting of criminal offences? Social scientists collect and analyse data in surveys or field studies for such investigations. However, the results are not always replicable. This was revealed in a recent publication in the scientific journal Nature, written by 500 experts of the SCORE collaboration. Co-authors Hilmar Brohmer and Ziva Korda from the Department of Psychology explain why this is the case and how studies can be improved.
How reliable are studies in the social and economic sciences?
Ziva Korda: In our current analysis, we arrived at exactly the same results in only 27 per cent of the newly evaluated studies. However, we were able to draw the same conclusions as the authors of the original publications in three quarters of them.
What is the reason for the different results?
Hilmar Brohmer: We only had the original studies with their conclusions and the underlying data; that was clearly not enough to fully understand the scientists' thought processes. If, for example, they had also provided their specific analysis scripts, it would have been much easier to reconstruct how their decisions came about.
How do you decide which result is the correct one? Or are they all equally valid?
Korda: Many analysis options produce justifiable and sensible results. Do you take outliers into account or not? Do you combine similar variables or look at them individually? That is precisely the problem: how do you avoid an arbitrary result? In our research project, our answer to this question is, in principle, to report a spectrum of several possible alternatives.
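The "spectrum of alternatives" idea can be illustrated with a minimal sketch, often called a multiverse or specification-curve analysis. The data, the outlier rule, and the analysis choices below are entirely hypothetical; the point is only that every defensible combination of choices is run and reported, rather than a single arbitrary one.

```python
from itertools import product
from statistics import mean

# Hypothetical survey data: two similar items, one suspiciously large value.
item_a = [2.1, 2.4, 1.9, 2.2, 2.0, 8.5]   # 8.5 is a possible outlier
item_b = [2.3, 2.6, 2.1, 2.4, 2.2, 7.9]

def estimate(drop_outliers: bool, combine_items: bool) -> float:
    """One 'universe': a specific combination of analysis choices."""
    a, b = item_a[:], item_b[:]
    if drop_outliers:
        # Simple illustrative rule: drop values more than 3 above the item mean.
        a = [x for x in a if x <= mean(item_a) + 3]
        b = [x for x in b if x <= mean(item_b) + 3]
    if combine_items:
        # Average the two similar items into one composite score.
        return mean((x + y) / 2 for x, y in zip(a, b))
    return mean(a)  # analyse item A on its own

# Report the full spectrum of results instead of one arbitrary choice.
results = {
    (drop, comb): round(estimate(drop, comb), 2)
    for drop, comb in product([True, False], repeat=2)
}
for choices, value in sorted(results.items()):
    print(choices, value)
```

Because the estimate here ranges from about 2.1 to 3.2 depending on the choices, reporting the whole spectrum makes clear how much the conclusion depends on defensible analytical decisions.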
Can your findings also be transferred to other disciplines - such as the natural sciences or medicine?
Korda: There is a crisis of reproducibility in many areas. This means that repeat studies produce different results. One reason for this is that only a minority of researchers disclose their data and the specific steps they take to analyse them. This is of course particularly critical when, for example, new procedures are being tested in the health sciences or when biomedical data are later to serve as the basis for the development of drugs. It is precisely in such fields that we want the evidence from basic research to be truly reliable.
Can consumers rely on scientific studies at all?
Brohmer: Basically, scientifically generated findings are the best-verified knowledge we have. Before a new drug is launched on the market, it undergoes several years of extensive testing. Before the discovery of a new elementary particle is announced, its existence has to be confirmed several times through precise experiments. In the social sciences, however, standards are often too lax. Findings from individual studies based on small samples should therefore not be regarded as the final word until they have been properly replicated.
What conclusions can be drawn from your study?
Brohmer: There needs to be significantly more transparency throughout the entire research process. Scientists should make their materials and methods accessible to colleagues in repositories. They should also disclose their ideas and planned analysis steps in advance. Less than a fifth of early career researchers do this systematically, so the low reproducibility rates are not really surprising. At the university, we are discussing these issues in the Graz Open Science Initiative. We would be happy if more interested parties joined in and contributed their experiences.
How did the idea for this study come about and how did you both come to work on it?
Korda: We had already been active in the community for some time through various open science projects. That's why we were asked whether we wanted to test the robustness of the research in a large-scale reanalysis. That seemed sensible to us. It should also be noted that the organisation of large study projects has improved significantly since Covid.
Publications:
Aczel, B., et al. (2026). Investigating the analytical robustness of the social and behavioural sciences. Nature, 652. DOI: 10.1038/s41586-025-09844-9
Brohmer, H., & Hoffmann, M. F. (2025). The struggle to make transparency mainstream: initial evidence for a slow uptake of open science practices in PhD theses. Royal Society Open Science, 12(10), 250826.