
Open Access: Measuring Impact

What are bibliometrics?

Bibliometrics are quantitative measures of the citation impact of publications. They can be sourced from a variety of products such as SciVal, InCites and Dimensions. The University subscribes to SciVal, and citation data from Dimensions can be accessed freely by anyone.

The best bibliometric measures are those that have a transparent methodology and a clearly defined dataset.

 

Guiding Principles

The University of Liverpool is a signatory to the San Francisco Declaration on Research Assessment (DORA), which outlines a number of recommendations around the use of metrics. As an institution we have therefore committed to avoiding journal-based metrics as surrogate measures of research quality when making decisions about funding, appointing and promoting staff, and when assessing the contributions of individual researchers, and instead to assessing research on its own merits. In practice, this means that when we use data we should:

  • Use metrics related to publications (article-based metrics, e.g. Field Weighted Citation Impact) rather than the venue of publication (journal-based metrics, e.g. Journal Impact Factor™, SJR or SNIP) or the author (e.g. h-index).
  • Be clear and transparent about the methodology of the metrics you use. If a source does not give information about the origins of its dataset (e.g. Google Scholar), it is not considered reliable.
  • Be explicit about any criteria or metrics being used and make it clear that the content of the paper is more important than where it has been published.
  • Use metrics consistently - don’t mix and match the same metric from different products in the same statement.

For example: don’t use article metrics from Scopus for one set of researchers and article metrics from Web of Science for another set of researchers.

  • Compare Like with Like - an early career researcher’s output profile will not be the same as that of an established professor, so raw citation numbers are not comparable.

For example: the h-index does not compare like-for-like as it favours researchers who have been working in their field for a long time with no career breaks.

  • Consider the value and impact of all research outputs, such as datasets, rather than focussing solely on research publications, and consider a broad range of impact, such as influencing policy.

The overarching advice is not to use any one metric in isolation – research performance should only be evaluated using a number of metrics together – and to think about what you are measuring and why, so that you can identify the most appropriate indicator to act as a proxy for the quality you are assessing. The Bibliomagician blog post on understanding what and why you are measuring, and the associated levels of risk, provides some further commentary.

What metrics should I use, and why?

For each metric, the usage, pros and cons are set out below.

FWCI - Field Weighted Citation Impact

Usage: The number of citations received by a document, divided by the expected number of citations for similar documents (outputs of the same age, type and subject area). An FWCI of 1 means that the output performs exactly as expected against the global average; an FWCI of 1.44 means it has been cited 44% more than expected. Can be sourced from SciVal, using data from Scopus.

Pros:

  • It measures the citation impact of the output itself, not the journal in which it is published.
  • It attempts to compare like-with-like by comparing an output’s citations with those of other outputs of the same age and type, classed by Scopus as being in the same main subject area. This side-steps the problems inherent in using one measure to compare articles in different disciplines - an FWCI of 1.44 is just as good in History as in Oncology.

Cons:

  • It could be seen as disadvantaging work that is purposefully multi- and cross-disciplinary.
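
As a worked illustration of the calculation, here is a minimal sketch with hypothetical figures (not taken from SciVal):

    # Hypothetical illustration of how an FWCI value is arrived at.
    # "Expected" citations are the average received by comparable outputs
    # (same publication year, document type and Scopus subject area).
    citations_received = 13      # citations to this article so far (assumed)
    expected_citations = 9.0     # average for comparable outputs (assumed)

    fwci = citations_received / expected_citations
    print(round(fwci, 2))        # 1.44, i.e. cited 44% more than expected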

Publications in top percentiles of cited publications (field-weighted)

Usage: The number of publications of a selected entity that are highly cited, having reached a particular threshold of citations received. Measures include the top 1%, 5%, 10% or 25% of most cited documents worldwide. Citations are counted and ranked for all outputs worldwide covered by the Scopus dataset, and percentile boundaries are calculated for each year, so an output is compared against the boundaries for its publication year. Can be sourced from SciVal, using data from Scopus. Should be field-weighted from within SciVal to benchmark groups of researchers, and the ‘percentage of papers in top percentile(s)’ option should be used rather than ‘total value of papers in top percentile(s)’ when benchmarking entities of different sizes.

Pros:

  • Can be used to distinguish between entities where other metrics, such as number of outputs or citations per output, are similar.
  • Data are more robust as the sample size increases; comparing a unit to one of a similar size is more meaningful than comparing one researcher to another.
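
A minimal sketch of how a top-percentile threshold works, using invented citation counts rather than Scopus data:

    # Hypothetical sketch: flag outputs in the top 10% of cited documents
    # for their publication year. The citation counts below are invented.
    counts_2021 = [0, 1, 1, 2, 3, 3, 4, 5, 8, 40]    # all outputs from one year

    # Boundary = the citation count sitting at the 90th percentile position
    # (a deliberately simple percentile rule for illustration).
    ranked = sorted(counts_2021)
    boundary = ranked[int(0.9 * len(ranked))]

    top_decile = [c for c in counts_2021 if c >= boundary]
    print(boundary, top_decile)                      # 40 [40]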

Altmetric scores

Usage: Captures online attention surrounding academic content, e.g. Twitter, Facebook and other social media activity; mentions in policy documents and registered patents; media coverage, etc. Can be sourced from altmetric.com’s Explorer for Institutions; any UoL user can access this resource. These metrics are also displayed in Liverpool Elements and the Institutional Repository for any publication with a DOI.

Pros:

  • Can give an indication of the wider impact of outputs, tracking their use in policy documents, news items, and so on.
  • Can provide an early indicator of the likely impact of a paper, before it has had time to accrue citations - there is a correlation between the number of Mendeley readers saving a paper (which can be tracked via Altmetric) and its eventual number of citations.

Cons:

  • Open to being artificially influenced. Altmetric Explorer will discard repeated tweets about a piece of research from the same account, for example, but may not be sophisticated enough to detect multiple accounts tweeting a DOI just to inflate an Altmetric score.

Which metrics should I avoid, and why?

Again, for each metric the usage, pros and cons are set out below.

h-index

Usage: An author has an h-index of n where they have published n or more articles that have each been cited n or more times by other items indexed by the particular product being used (Scopus, Web of Science, etc.). In external material it can be sourced from Scopus to cover the full range of a career, rather than SciVal, which only covers publications from 1996 onwards. Other sources of h-indices are Web of Science and Google Scholar.

Pros:

  • Is focused on the impact of an individual researcher, rather than on the venue of publication.
  • Is not skewed by a single highly-cited paper, nor by a large number of poorly-cited documents.

Cons:

  • Not recommended as an indicator of research performance because of its bias against early career researchers and those who have had career breaks.
  • The h-index is meaningless without context within the author’s discipline.
  • There is too much temptation to pick and choose h-indices from different sources to select the highest one. h-indices can differ significantly between sources due to their different datasets – there is no such thing as a definitive h-index.
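
A minimal sketch of the definition above, using invented per-paper citation counts:

    # Hypothetical sketch: compute an h-index from a list of citation counts
    # (invented numbers, not taken from Scopus, Web of Science or Google Scholar).
    def h_index(citation_counts):
        # Sort descending; h is the largest rank at which the paper in that
        # position still has at least that many citations.
        ranked = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 8, 5, 4, 3, 1]))   # 4: four papers each cited 4+ times

Note that adding further poorly-cited papers leaves the value unchanged, and one extra highly-cited paper can raise it by at most one - which is why it is not skewed by outliers, but also why it works against short or interrupted careers.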

Journal Impact Factor™ 

Usage: A journal’s JIF for year X is the sum of all citations indexed in Web of Science during year X to articles the journal published in years X-1 and X-2, divided by the total number of articles the journal published in years X-1 and X-2. Only available from Clarivate Analytics. The dataset is the journals indexed by the Web of Science citation indices (Science Citation Index, Social Science Citation Index, Arts and Humanities Citation Index), and only articles and reviews are counted.

Pros:

  • May be useful for identifying journals to which to submit work for a larger readership.

Cons:

  • Citation distributions within journals are extremely skewed - the average number of citations an article in a specific journal might receive can be very different from the typical number.
  • The JIF is nothing more than the mean average number of citations to articles in a journal, and is thus highly susceptible to outliers.
  • Journal metrics do not reflect new/emerging fields of research well.
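
A worked illustration of the definition above, with hypothetical figures (not taken from Web of Science):

    # Hypothetical sketch of a 2023 Journal Impact Factor calculation.
    # All figures are invented for illustration.
    citations_2023_to_2021_2022_items = 600   # citations indexed in year X
    articles_published_2021 = 150             # articles in year X-2
    articles_published_2022 = 150             # articles in year X-1

    jif_2023 = citations_2023_to_2021_2022_items / (
        articles_published_2021 + articles_published_2022
    )
    print(jif_2023)   # 2.0, i.e. two citations per recent article on average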

CiteScore

Usage: A journal’s CiteScore for year X is the sum of all citations indexed in Scopus during year X to items the journal published in years X-1, X-2 and X-3, divided by the total number of items the journal published in years X-1, X-2 and X-3. Only available from Elsevier. The dataset is the journals indexed by Scopus, and all output types are covered.

Pros:

  • Covers a wider range of item types than the Impact Factor.
  • May be useful for identifying journals to which to submit work for a larger readership.

Cons:

  • Citation distributions within journals are extremely skewed – the average number of citations an article in a specific journal might receive can be very different from the typical number.
  • As with the JIF, the CiteScore is nothing more than the mean average number of citations to articles in a journal, and is thus highly susceptible to outliers.
  • Journal metrics do not reflect new/emerging fields of research well.
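
A brief hypothetical illustration, mirroring the JIF sketch above but over a three-year window (figures are invented):

    # Hypothetical sketch of a 2023 CiteScore calculation.
    citations_2023_to_2020_2021_2022_items = 900   # citations indexed in Scopus
    items_published_2020_to_2022 = 450             # items from the three prior years

    citescore_2023 = citations_2023_to_2020_2021_2022_items / items_published_2020_to_2022
    print(citescore_2023)   # 2.0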

Source Normalized Impact per Paper (SNIP)

Usage: A journal’s SNIP is the number of citations given in the present year to publications from the past three years, divided by the total number of publications in the past three years. SNIP citations are normalised in order to correct for differences in citation practices between scientific fields. Produced by CWTS (Leiden University), based on Scopus data, and covers articles, conference papers and reviews.

Pros:

  • SNIP corrects for differences in citation practices between scientific fields, thereby allowing for more accurate between-field comparisons of citation impact.
  • SNIP comes with a ‘stability interval’ which reflects the reliability of the indicator - the wider the stability interval, the less reliable the indicator.

Cons:

  • Although steps are taken to correct for differences between fields, SNIP is still a journal-based metric, and so it reflects the place an output is published rather than the merits of the output itself.
  • Journal metrics do not reflect new/emerging fields of research well.

SCImago Journal Rank (SJR)

Usage: A journal’s SJR is the average number of weighted citations received in a year, divided by the total number of publications in the past three years. SJR citations are weighted depending on the source they come from. Owned by SCImago Institutions Rankings and based on Scopus data. Covers articles, conference papers and reviews.

Pros:

  • Citations are weighted based on the source they come from: the subject field, quality and reputation of the citing journal directly affect the value of a citation.

Cons:

  • The SJR is a journal-based metric, and so it reflects the place an output is published rather than the merits of the output itself.
  • Journal metrics do not reflect new/emerging fields of research well.

Raw citation count

Usage: The raw number of citations an output has received. Can be sourced from SciVal/Scopus, Web of Science, PubMed, etc. Most publishers’ websites will display a citation count, either sourced from a provider such as Scopus or from their own databases. Not of value unless comparing like-for-like.

Pros:

  • A simple-to-read measure of attention when comparing outputs of the same type and age within the same field.

Cons:

  • Citation practice varies across fields; the same number of citations could be considered low in one field (e.g. immunology) but high in another (e.g. maths).
  • Certain output types, such as review articles, will frequently be more highly cited than other types.
  • Citation counts can be artificially inflated - as an example, the paper “Effective Strategies for Increasing Citation Frequency” lists 33 different ways to increase citations.