Mind & Society, 10:169

Disaggregating quality judgements

  • Bruce Edmonds


Abstract

The notion of quality is analysed in terms of its functional roots as a social heuristic for reusing others’ quality judgements and hence aiding choice. This analysis is applied to academic publishing, where the costs of publishing have fallen sharply but the problem of finding the papers one wants has become harder. The paper suggests that, instead of relying on generic quality judgements such as those delivered by journal reviewers, the maximum amount of judgemental information be preserved and then made available to potential readers, to help them find papers that meet their particular needs. The suggestion is that multidimensional quality data be captured when papers are reviewed, stored in a database, and then used to filter papers according to criteria set by the searcher, thus personalising the quality filter. In other words, the quality judgements and their subsequent use are maintained in a disaggregated form, preserving the maximum informational context of the judgements for future use. The advantages, disadvantages, challenges and possible variations of this proposal are discussed.
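The capture–store–filter scheme the abstract proposes can be sketched in code. The following is a minimal illustration only, not the paper's implementation: the dimension names, score scale and data shapes are all hypothetical, chosen simply to show how disaggregated per-reviewer scores can be filtered against searcher-set thresholds rather than collapsed into a single quality verdict.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    """One reviewer's disaggregated judgement of a paper.

    `scores` maps a quality dimension to a rating, e.g. {"rigour": 4}.
    Dimension names and the 1-5 scale are illustrative assumptions.
    """
    reviewer: str
    scores: dict

@dataclass
class Paper:
    title: str
    reviews: list = field(default_factory=list)

    def dimension_mean(self, dimension):
        """Average score on one dimension across reviews that rated it."""
        vals = [r.scores[dimension] for r in self.reviews
                if dimension in r.scores]
        return sum(vals) / len(vals) if vals else None

def personalised_filter(papers, criteria):
    """Keep papers meeting every searcher-set (dimension, minimum) pair.

    Unlike a single accept/reject verdict, each searcher supplies their
    own thresholds, so the same review data yields different filters.
    """
    def meets(paper):
        for dim, minimum in criteria.items():
            mean = paper.dimension_mean(dim)
            if mean is None or mean < minimum:
                return False
        return True
    return [p for p in papers if meets(p)]

# Illustrative data (reviewer names and scores are invented)
papers = [
    Paper("A", [Review("r1", {"rigour": 5, "novelty": 2}),
                Review("r2", {"rigour": 4, "novelty": 3})]),
    Paper("B", [Review("r1", {"rigour": 2, "novelty": 5})]),
]

# A searcher who prioritises rigour keeps paper A only
print([p.title for p in personalised_filter(papers, {"rigour": 4})])  # ['A']
```

A searcher who instead set `{"novelty": 3}` would retrieve paper B, illustrating how the same stored judgements serve different needs once they are kept disaggregated.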


Keywords: Quality · Heuristic · Bottlenecks · Commonality · Search · Filtering · Judgement · Customisation · Academic publishing



Acknowledgements

Thanks to all with whom I have discussed these ideas, including the participants of the “Quality Commons” workshop in Paris, as well as David Hales, Dirk Helbing and Mark Jelasity. Stevan Harnad must be credited with launching the debate on changing how academic ideas are disseminated (Harnad 1998), though he disagrees with me concerning the importance of the traditional editorial process.


References

  1. Edmonds B (2000) A proposal for the establishment of review boards—a flexible approach to the selection of academic knowledge. J Electron Publ 5(4)
  2. Ginsparg P (1994) First steps towards electronic research communication. Comput Phys 8:390–396
  3. Harnad S (1996) Implementing peer review on the net: scientific quality control in scholarly electronic journals. In: Peek R, Newby G (eds) Scholarly publication: the electronic frontier. MIT Press, Cambridge, MA, pp 103–108
  4. Harnad S (1998) On-line journals and financial fire-walls. Nature 395(6698):127–128
  5. Helbing D, Balietti S (2011) How to create an innovation accelerator. Eur Phys J Special Top 195:101–136
  6. Henderson M (2010) Problems with peer review. Br Med J 340:1409
  7. Horton R (2000) Genetically modified food: consternation, confusion, and crack-up. MJA 172(4):148–149
  8. Peters DP, Ceci SJ (1983) Peer-review practices of psychological journals: the fate of published articles, submitted again. In: Harnad S (ed) Peer commentary on peer review: a case study in scientific quality control. CUP, New York, p 3
  9. Rothwell PM (2000) Reproducibility of peer review in clinical neuroscience: is agreement between reviewers any greater than would be expected by chance alone? Brain 123(9):1964
  10. Shatz D (2004) Peer review: a critical inquiry. Rowman & Littlefield, New York
  11. Tenopir C, Grayson M, Zhang Y, Ebuen M, King DW, Boyce P (2003) Patterns of journal use by scientists through three evolutionary phases. D-Lib Mag 9(5)
  12. Van Noorden R (2010) A profusion of measures. Nature 465:864–866
  13. Williamson A (2002) What happens to peer review? Paper presented at an ALPSP International Learned Journals Seminar, London, UK, 12 April 2002

Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  1. Centre for Policy Modelling, Manchester Metropolitan University, Manchester, UK
