Findings from a landmark study that could have major implications for the future of academic research assessment have been published by researchers in the University of Glasgow’s Adam Smith Business School.

The study challenges mainstream thinking that academic judges are best suited to evaluating research outputs. It aims to inspire methods for using metrics effectively, taking into account the diverse communities that require clear evaluation criteria.

In an article published in the Research Policy journal, the research team used data from the UK's 2021 Research Excellence Framework (REF) to examine the interplay between metrics and expert judgment in evaluating research outputs from 108 institutions, covering 13,973 publications in business and management - one of the evaluation’s largest and most heterogeneous fields.

The study offers the first conclusive evidence on post-DORA research assessment, with valuable lessons for research evaluations worldwide.

The research finds a strong correlation between journal rankings and expert scores, particularly for top-tier journals, suggesting that the perceived prestige of a journal may be influencing the expert review process, despite claims of journal ranking blindness.

Despite the widespread endorsement of the Declaration on Research Assessment (DORA), which advocates for responsible assessment practices, DORA affiliation does not seem to make a difference in this correlation.

Many institutions publicly endorse DORA and wider responsible assessment principles, yet their REF submissions reveal an implicit reliance on traditional metrics such as journal rankings. This "institutional peacocking" suggests a superficial adherence to principles that does not necessarily change the status quo.

Anna Morgan Thomas is Professor of Digital Management and Innovation at the Adam Smith Business School and the lead author of the study. She said: “This research comes at an extremely challenging time for the higher education sector, with uncertainty over funding and an increasing number of academics experiencing unsustainable workloads and high amounts of stress. Early career researchers are particularly vulnerable, with the current method of academic evaluation relying on knowledge of the system and more experienced voices dominating.

“Concerns over a knowledge gap in responsible research assessment have been raised for a while now, and we were keen to contribute to the ongoing debate by providing empirical evidence that challenges the application of these principles in practice and highlights the need for genuine commitment to change.

“We hope our findings will stimulate further discussion and lead to more transparent and effective research evaluation practices worldwide.”

Adina Dudau, Professor of Public Management and co-author of the paper, said: “Ultimately, as academics we all have a responsibility to build a research environment in which everyone can thrive and undertake impactful research that could improve society and humanity.”

Dr Beth Cloughton is an early career researcher in the Adam Smith Business School. Reflecting on the findings of this study, and on existing evaluation structures, she describes the current situation as “affecting oppressed and early career academics, who are perhaps more modest in their self-evaluations of novelty and contributions to a body and practice of work which neither reflects, nor offers, a promising career horizon.”

Recommendations for policymakers and academic institutions include reassessing evaluation practices. Institutions need to examine their assessment practices critically to ensure they align with the principles they publicly endorse. There is a need for genuine adherence to responsible assessment beyond mere declarations, blending qualitative and quantitative assessment methods as appropriate.

Greater transparency and accountability in the evaluation process are important. Both REF panels and higher education institutions should provide clear evidence of how they implement responsible assessment principles in practice. In addition, REF panels are encouraged to make their evaluation outcomes available to researchers, enabling service improvement and open research into evaluation practices.


First published: 4 October 2024