Responsible metrics

At Maastricht University Library, we are committed to the responsible use of metrics. We believe that research intelligence should support, not replace, qualitative evaluations. Analyses should be based on a careful selection of indicators and methods tailored to the particular goals of the evaluation.

Research intelligence should be question-driven; in other words, it should measure what you want to know rather than what you can measure. Excellence and performance are multidimensional concepts, of which metrics can capture only some aspects.

Developments like the signing of the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto, the Strategy Evaluation Protocol 2021-2027 (SEP) and Recognition & Rewards all concern responsible metrics. What all these initiatives have in common is:

  1. Greater flexibility in the aspects on which both scientists and research units can be assessed;
  2. The use of a narrative supported by responsible metrics, instead of a focus on excellence (and thus competition) and metrics like the JIF or the H-index.
DORA

On October 25, 2019, rector magnificus Rianne Letschert signed the DORA declaration on behalf of UM.

In line with DORA, the Research Intelligence team does not use the Journal Impact Factor (JIF) or other journal-based metrics to assess individuals or individual publications.

Furthermore, DORA states that consideration should be given to the value and impact of all research outputs (including datasets and software) in addition to research publications, and that a broad range of impact measures should be considered, including qualitative indicators of research impact, such as influence on policy and practice.

For organizations that supply metrics, such as the Research Intelligence team, DORA states that they should:

  • Be open and transparent by providing data and methods used to calculate all metrics.
  • Provide the data under a license that allows unrestricted reuse, and provide computational access to data, where possible.
  • Be clear that inappropriate manipulation of metrics will not be tolerated; be explicit about what constitutes inappropriate manipulation and what measures will be taken to combat this.
  • Account for the variation in article types (e.g., reviews versus research articles) and in different subject areas when metrics are used, aggregated or compared.

The Research Intelligence team guarantees these principles as follows:

  • The data and methods used are described in the justification/methods section of our reports;
  • Pure and InCites data are available, and use of the Dashboard guarantees reproducibility;
  • We check how the analyses or figures we deliver are included in the final report and for what purpose journal-based metrics are requested. Journal-based metrics are not provided for the assessment of individual scientists or individual articles;
  • Normalization takes the variation in article types into account. Furthermore, the Dashboard can automatically include only reviews and articles in the InCites numbers to prevent outliers. There is also a filter for peer-reviewed publications on the Open Access tab in the Dashboard.
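To make the idea of normalization concrete, here is a minimal, simplified sketch of a field-normalized citation indicator (in the spirit of InCites' category-normalized impact; the actual InCites methodology is more elaborate, and the records and numbers below are purely hypothetical): a publication's citation count is divided by the average citations of publications of the same field, year and document type.

```python
from statistics import mean

# Hypothetical publication records (field, year, document type, citations)
publications = [
    {"field": "Oncology", "year": 2020, "type": "Article", "citations": 12},
    {"field": "Oncology", "year": 2020, "type": "Article", "citations": 4},
    {"field": "Oncology", "year": 2020, "type": "Article", "citations": 8},
    {"field": "History",  "year": 2020, "type": "Article", "citations": 3},
    {"field": "History",  "year": 2020, "type": "Article", "citations": 1},
]

def normalized_impact(pub, reference_set):
    """Citations divided by the mean citations of the matching
    (field, year, document type) reference set."""
    peers = [p["citations"] for p in reference_set
             if (p["field"], p["year"], p["type"])
             == (pub["field"], pub["year"], pub["type"])]
    return pub["citations"] / mean(peers)

# Raw counts differ by a factor of four, yet both papers sit
# equally far above their own field's average:
print(normalized_impact(publications[0], publications))  # 1.5
print(normalized_impact(publications[3], publications))  # 1.5
```

This is why a history paper with 3 citations and an oncology paper with 12 can represent the same relative performance once publication and citation practices of the field are taken into account.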
Leiden Manifesto

The Leiden Manifesto contains 10 principles that are intended as a guideline for research evaluation.

The following principles from this manifesto are relevant to our Research Intelligence services:

  • Quantitative evaluation supports qualitative assessment by experts;
  • Measure performance against the research missions of an institute, group or researcher;
  • Give those who are being evaluated the opportunity to verify data and analyses;
  • Take into account variations between disciplines in publication and citation practices;
  • Base the assessment of individual scientists on a qualitative assessment of their portfolio (and not on their H-index);
  • Prevent misplaced concreteness and false precision;
  • Recognise the effects assessments and indicators have on the system;
  • Regularly review the indicators used and update them where necessary.
Why not the Journal Impact Factor (JIF)?

The Journal Impact Factor (JIF) is frequently used as the primary parameter to compare the scientific output of individuals and institutions.

The Journal Impact Factor was created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has several well-documented deficiencies as a tool for research assessment.

These limitations include:

  • Citation distributions within journals are highly skewed;
  • The properties of the Journal Impact Factor are field-specific and therefore not comparable between fields: it is a composite of multiple, highly diverse article types, including primary research papers and reviews;
  • Journal Impact Factors can be manipulated (or “gamed”) by the editorial policy;
  • Data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public.
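The first limitation, skewed citation distributions, can be illustrated with a toy calculation (the citation counts below are invented for illustration, not real journal data). A few highly cited papers can pull a journal's mean citation rate, which is what the JIF essentially is, far above what a typical article in that journal receives:

```python
from statistics import mean, median

# Illustrative citation counts for ten articles in a hypothetical journal:
# most papers are cited a few times, two are cited very often.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 60, 124]

print(mean(citations))    # 20  -> the JIF-style average, inflated by two outliers
print(median(citations))  # 2.5 -> what a typical article actually receives
```

The mean (20) is eight times the median (2.5), so the journal-level average says little about any individual article, which is precisely why DORA rejects the JIF for assessing individual publications.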

Source: https://sfdora.org/read

For more information on the limitations of the Journal Impact Factor as a tool for research assessment:

 

Why not the H-index?

CWTS: Halt the H-index - Infographic

 

Support for UM faculties/research groups and support staff

We provide our support in the context of a standard research evaluation (SEP) or on a question-driven basis.

While standard research evaluations are part of the university library’s general service for all UM faculties, charges may apply for other analyses, depending on the scope of the question. To find out more, please get in touch with us via research-i@maastrichtuniversity.nl.

If you are affiliated with FHML, please contact Cecile Nijland of the FHML office first.

Research evaluations (SEP)

The Research Intelligence team supports UM faculties or research units in preparing periodic research evaluations based on the Strategy Evaluation Protocol 2021-2027.

Together with the unit under assessment, we decide on suitable indicators that fit the unit’s aims and strategy. These indicators may allow for accounting of past performance and strategic decision-making for the future. Please get in touch with us well in advance for your unit’s SEP (midterm) evaluation.

UM Research Intelligence Dashboard

The Dashboard provides insights by linking the research output registered in Pure to data from sources such as InCites, Web of Science, Unpaywall and Altmetric. It offers the possibility to apply various filters directly to the available data so that you can easily adapt the arrangement of the data and the level of detail to your needs. 

Access to and use of the Dashboard is on request. A dataset is prepared, and access is provided to the particular dataset in the Dashboard via authentication.

Other analyses and/or visualisations

The data and plots in the Dashboard can be regarded as our standard service, especially aimed at research evaluation for the Strategy Evaluation Protocol (SEP). Your evaluation may require additional analyses and/or visualisations. Depending on the nature and scope of additional services, costs may be charged for this.

Alternative metrics

Altmetric is one of the platforms that gather alternative metrics, or 'altmetrics' for short: metrics that aim to capture the societal impact of research output, in contrast to traditional metrics such as citation counts.

As altmetrics are volatile and never fully captured by any single database, we recommend not focusing on the numbers themselves. Altmetric data can, however, be a valuable source for identifying who is building on certain research work commercially and politically, and for exploring the general public’s sentiment.

Altmetric provides free data under certain restrictions. It also provides a paid platform called the Explorer for Institutions (EFI). This Altmetric Explorer allows the user to browse data by author, group, or department. As of July 2020, the UM has a license for the Altmetric Explorer, allowing us to make Altmetric data available on the ‘Social attention’ tab in the Dashboard. This Altmetric Explorer can be accessed by UM employees using their institutional e-mail address and password.

When using Altmetric data for evaluation purposes, be cautious: quantitative indicators should support qualitative, expert assessment and should not be used in isolation (see: Social media metrics for new research evaluation and The Leiden Manifesto for research metrics).

For any guidance on using Altmetric responsibly, please get in touch with the Research Intelligence Team via research-i@maastrichtuniversity.nl.

Examples of faculty and research group questions
  • How can non-bibliometric sources, such as news and policy documents be studied?
  • Which publications address topics relevant to one or more of the UN’s Sustainable Development Goals (SDGs)?
  • What is the position of the research work relative to other work in the subject area?
  • Which scholars are potential collaborators (inside/outside UM)?
  • Which journals are suitable to publish certain research in?
  • How strong/productive is the collaboration with other research units?
  • What academic and/or societal impact did the research have?
  • How can potential reviewers for a paper/proposal be recommended?
  • Which topics does a certain research unit investigate most frequently?
  • What were the main accomplishments in terms of research impact over a certain period of time?
  • Which opportunities to increase impact are potentially neglected?
  • Who are the co-authors of a certain researcher?
  • What is the position of a certain research unit in relation to comparable research units?
  • What is the position of our work in relation to our strategy?
  • What could be the most strategic choice in terms of funding application, based on the previous impact of our research?
  • Which journals do the researchers of a certain research unit usually publish in?
  • What could be a suitable publication strategy based on the vision and mission of the research unit?
  • Which research areas are potentially neglected?
  • What are inspiring best practices that have gained frequent attention?
  • Which researcher fits a certain profile based on their research outputs?
  • How does a certain candidate compare to other candidates in terms of network and/or impact?

 

Support for individual UM researchers

There can be various reasons why, as a researcher, you want to gain insight into the (potential) impact of your work: for example, when applying for a grant, when preparing for an evaluation meeting with your supervisor/manager, or to support a narrative about the impact of your work on your profile page or personal website.

The services of the Research Intelligence team can help you to gain this insight. To find out more, have a look at the items on this webpage or get in touch with us via research-i@maastrichtuniversity.nl.

If you are affiliated with FHML, please contact Cecile Nijland of the FHML office first.

Subject guides & workshops

We provide more detailed information on impact-related issues through subject guides and workshops.

Have a look at these guides or workshops if you want to learn more about research impact and what you can do to improve the (potential) impact of your research work.

Subject guides

Workshops

Toolkit Societal impact 

As one of the results of a strategic partnership between Springer Nature and The Association of Universities in the Netherlands (VSNU), a toolkit has been created to help you understand how other researchers view societal impact and how they have been successful in creating it.
It is filled with plenty of advice and insights from researcher interviews, as well as further reading resources to help you find out more about societal impact and how to create it for your own research.

Alternative metrics

Altmetric is one of the platforms that gather alternative metrics, or 'altmetrics' for short: metrics that aim to capture the societal impact of research output, in contrast to traditional metrics such as citation counts.

As altmetrics are volatile and never fully captured by any single database, we recommend not focusing on the numbers themselves. Altmetric data can, however, be a valuable source for identifying who is building on certain research work commercially and politically, and for exploring the general public’s sentiment.

Altmetric provides free data under certain restrictions. It also provides a paid platform called the Explorer for Institutions (EFI). This Altmetric Explorer allows the user to browse data by author, group, or department. As of July 2020, the UM has a license for the Altmetric Explorer. This Altmetric Explorer can be accessed by UM employees using their institutional e-mail address and password.

When using Altmetric data for evaluation purposes, be cautious: quantitative indicators should support qualitative, expert assessment and should not be used in isolation (see: Social media metrics for new research evaluation and The Leiden Manifesto for research metrics).

For any guidance on using Altmetric responsibly, please get in touch with the Research Intelligence Team via research-i@maastrichtuniversity.nl.

Examples of researcher questions
  • How can non-bibliometric sources, such as news and policy documents be studied?
  • Which publications address topics relevant to one or more of the UN’s Sustainable Development Goals (SDGs)?
  • What is the position of the research work relative to other work in the subject area?
  • Which scholars are potential collaborators (inside/outside UM)?
  • Which journals are suitable to publish certain research in?
  • How strong/productive is the collaboration with other research units?
  • What academic and/or societal impact did the research have?
  • How can potential reviewers for a paper/proposal be recommended?