Quality indicators in the social sciences

This advisory report offers organisations that evaluate social science research a set of guidelines for defining sound evaluation instruments, in particular quality indicators.

The disciplines that fall within the social sciences are much more heterogeneous than those in the natural sciences or humanities. The work of criminal lawyers, for example, is obviously very different from that of neuropsychologists. So far, however, research evaluation procedures have taken very little account of this heterogeneity, making a fair evaluation of the quality of social science research impossible.

The problem that arises when evaluating such research can be broken down into two parts:

  • Publication culture
    How can we do justice to differences between publication cultures in the various disciplines when evaluating research, and prevent a one-size-fits-all evaluation from becoming dominant? Not only do the research products (articles, books, annotations, and so on) vary greatly, but so do the relevant quality criteria.
  • Social relevance
    How can we measure and evaluate the quality of the impact that social science research has on society?

Scientific knowledge finds its way into society through many different channels, from professional journals and authoritative advisory councils to contributions to the op-ed pages of newspapers. There are differences between disciplines in that respect: business and government use economic knowledge differently from psychological knowledge. There are even differences within disciplines; human rights lawyers do not address the same forums as attorneys specialising in administrative law.

Research funding bodies and university administrators have indicated that they require better evaluation instruments for social science research. They recognise that different disciplines within the social sciences have differing publication cultures, but the indicators used in ex ante and ex post evaluation of social science research are still largely the same across the board.

The advisory report will consider criteria for developing sound evaluation instruments for social science research. In doing so, the Royal Netherlands Academy will need to expand on its 2005 advisory report Judging research on its merits by examining the social sciences in greater depth. Other questions that must be considered are ‘How do we measure the effects of social science research on society?’ and ‘What mechanisms contribute to the social impact of social science research?’


2011-2012. The advisory report is available as of 19 March 2013.