Responsible Metrics

Statement on the Use of Quantitative Indicators in the Assessment of Research Quality

At the University of Glasgow we apply fair and transparent mechanisms for monitoring and reporting research performance. These principles underpin the institutional 2015–2020 key performance indicators (KPIs) for the quality of our research [1]. As we explain below, these principles are also applied in our processes for recruiting staff and assessing their research performance. 

The University uses both qualitative and quantitative indicators to assess individual and institutional performance. We acknowledge the limitations of using either approach alone: qualitative indicators allow the application of expert disciplinary judgement but can be perceived as subjective, whereas quantitative indicators allow the application of assessment methodologies that are transparent and consistent but can be viewed as unsophisticated. 

Both approaches are important, and indeed both are used successfully in the assessment processes of the UK Research Excellence Framework (REF) [2]. The University additionally recognises the ever-increasing role of quantitative indicators in external assessments of our reputation by various league tables and funding agencies. 

Below we list the principles by which the University uses quantitative indicators, and then describe how we apply them specifically in assessing research outputs (e.g. journal articles, book chapters, monographs), income, postgraduate research (PGR) supervision, and in recruitment, performance management and promotion. 

1 - Guiding Principles for the Use of Quantitative Indicators in Research Assessment

The University will:

  1. Adopt assessment procedures that are evidence-based and, as such, will use quantitative indicators only in tandem with qualitative indicators to assess the quality of research.
  2. Apply quantitative indicators responsibly by using a defined and balanced set of measures that are normalised by subject. We will also take account of potential sources of bias, and aim to reduce them: such a consideration applies, for example, to the chosen source of assessment data, the career stage and full-time equivalent (FTE) status of the individual being assessed, and their race, gender or disability status. It is acknowledged, for example, that the most widely used citation databases are not equally representative of all our disciplines or output types (e.g. monographs), and that publishing practices vary by gender [3].
  3. Declare the quantitative indicators used, and apply them fairly and consistently. Fairness and transparency of the methodology will be exercised by ensuring that metrics are simple and open, and therefore available for scrutiny by those being assessed.
  4. Evaluate researchers based on performance across different dimensions, with expectations set in advance and clearly communicated to researchers on the University’s webpages, and in line with the values outlined in the University strategy [4].
  5. Undertake regular review of the quantitative indicators used, so that they are appropriate and up-to-date. The University’s Research Planning and Strategy Committee will undertake such a review on a biannual basis, drawing on expert knowledge and evidence across the sector.

For the avoidance of doubt, the University will consequently not use single, non-normalised metrics (e.g. raw citation counts) in research assessment. It will also not apply indicators that are opaque or that are decontextualised (e.g. from citation practices in a subject area). It is acknowledged, for example, that variation between disciplines both in citation practices and in their representation in publication databases affects the degree to which citation metrics can be used as indicators of output quality.

2 - Applications of Quantitative Indicators in Research Assessment

Research Outputs.

High-quality research outputs are central to the University’s vision and to the careers of our individual researchers.

To inform the assessment of individual outputs, article-level metrics are more appropriate than journal-level metrics, and consequently the University will not use a Journal Impact Factor as an indicator of output quality. Although article-level citation counts can inform the peer-review assessment of output quality, all such indicators will be normalised to account for both publication dates and sub-discipline variations. Such normalisation is possible within several publication databases for many hundreds of sub-disciplines (e.g. [5] [6]).
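By way of illustration only, the normalisation described above can be sketched as follows. This is not University tooling, and the fields, years and citation counts below are invented; the sketch simply shows why a raw count of 4 citations in one discipline can indicate the same relative standing as 40 in another, once each count is divided by the mean for its (subject field, publication year) group.

```python
from collections import defaultdict

def field_normalised_scores(outputs):
    """Divide each output's raw citation count by the mean count for its
    (subject field, publication year) group, so outputs are compared only
    against peers with similar citation practices and citation windows."""
    totals = defaultdict(lambda: [0, 0])  # (field, year) -> [citation sum, output count]
    for o in outputs:
        key = (o["field"], o["year"])
        totals[key][0] += o["citations"]
        totals[key][1] += 1

    scores = {}
    for o in outputs:
        total, count = totals[(o["field"], o["year"])]
        mean = total / count
        # A score above 1.0 means "more cited than the field/year average".
        scores[o["id"]] = o["citations"] / mean if mean else 0.0
    return scores

# Hypothetical outputs: raw counts differ tenfold between the two fields,
# but the normalised scores are directly comparable.
outputs = [
    {"id": "a", "field": "history", "year": 2016, "citations": 4},
    {"id": "b", "field": "history", "year": 2016, "citations": 2},
    {"id": "c", "field": "physics", "year": 2016, "citations": 40},
    {"id": "d", "field": "physics", "year": 2016, "citations": 20},
]
scores = field_normalised_scores(outputs)
```

Here outputs "a" and "c" receive the same normalised score despite a tenfold difference in raw citations, which is the point of subject normalisation.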

To inform the assessment of outputs at an institutional level it is legitimate to rely on aggregated data that is based on journal-level information. For example, various international league tables (e.g. [7]) record the institution-wide number of journal articles published in a small number of specific journals.

Research Income and Postgraduate Research Student Supervision.

The volume of research income and the number of postgraduate research students supervised per staff FTE are primary research KPIs for the University. These measures are also important indicators of the quality and vibrancy of the research environment as captured in the REF and in many international league tables. When the University applies such metrics at a more granular level, to units and/or individuals, they will always be normalised to account for discipline variations and career stage. Discipline normalisation can be made through HESA cost centres, using data that higher education institutions report annually and that are openly available [8].
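The per-FTE, discipline-normalised comparison described above can be sketched as follows. This is an illustrative example only: the unit names, income figures and cost-centre labels are invented, and a real implementation would draw FTE and cost-centre data from HESA returns rather than hard-coded values.

```python
def income_per_fte_ratio(units):
    """Express each unit's research income per staff FTE relative to the
    average for its cost centre, so that units in disciplines with very
    different funding volumes can be compared on a common scale."""
    # Aggregate income and FTE per cost centre.
    by_centre = {}
    for u in units:
        income, fte = by_centre.get(u["cost_centre"], (0.0, 0.0))
        by_centre[u["cost_centre"]] = (income + u["income"], fte + u["fte"])

    ratios = {}
    for u in units:
        total_income, total_fte = by_centre[u["cost_centre"]]
        centre_avg = total_income / total_fte  # discipline income per FTE
        # A ratio above 1.0 means income per FTE above the discipline average.
        ratios[u["name"]] = (u["income"] / u["fte"]) / centre_avg
    return ratios

# Hypothetical units in two disciplines with very different income volumes.
units = [
    {"name": "Unit A", "cost_centre": "Physics", "income": 900_000, "fte": 10},
    {"name": "Unit B", "cost_centre": "Physics", "income": 300_000, "fte": 5},
    {"name": "Unit C", "cost_centre": "History", "income": 120_000, "fte": 6},
]
ratios = income_per_fte_ratio(units)
```

In this sketch Unit C scores 1.0 despite attracting far less income than either physics unit, because it is compared only against its own discipline's income per FTE.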

Staff Recruitment, Performance Management, and Promotion.

The use of metrics in any process should be declared in advance of the process commencing, and any indication they give should be considered alongside other metrics and more qualitative assessments. Any quantitative indicator that is used will be based upon published formulae and will rely on openly available data, such that other experts in the field can reproduce the quantification of the metric.

We encourage practices that combine quantitative with qualitative indicators: the role of the metric is to inform assessment within a broader context, and not to dictate it. To support the application of this principle, job or promotion candidates will be asked to provide a narrative that highlights their best outputs and justifies their contribution to the advancement of the field.

3 - Context and Implementation

The policies of the University of Glasgow for the use of quantitative indicators in assessing research comply with and extend the principles outlined in the San Francisco Declaration on Research Assessment (DORA, 2012) [9], The Metric Tide (2015) [10], and the Leiden Manifesto for Research Metrics (2015) [11].

Colleges, Institutes, and Schools at the University of Glasgow are invited to develop local, more detailed policies provided that they are consistent with the institutional framework outlined in this document, and to make these widely known to staff.

Approved by the Research Planning and Strategy Committee, 13 December 2018


  1. https://www.gla.ac.uk/about/strategy/kpi/
  2. http://www.ref.ac.uk
  3. See, for example: https://arxiv.org/abs/1607.00376 and http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2001003
  4. http://www.gla.ac.uk/about/strategy/
  5. https://www.elsevier.com/solutions/scopus
  6. http://clarivate.libguides.com/home/
  8. Higher Education Statistics Agency (HESA): https://www.hesa.ac.uk/support/documentation/cost-centres
  9. https://sfdora.org
  10. http://www.hefce.ac.uk/pubs/rereports/year/2015/metrictide
  11. http://www.leidenmanifesto.org

Use of quantitative indicators in the assessment of research quality, v1.1 (17 December 2018).

Office of the Vice Principals (viceprincipalsoffice@glasgow.ac.uk)