
Bibliometrics: How do I use bibliometrics responsibly?

DMU's commitment to responsible and fair use of bibliometrics

As a signatory to the San Francisco Declaration on Research Assessment, DMU is committed to the responsible and fair use of bibliometrics. Our Policy Statement on the Responsible Use of Bibliometrics details this commitment. Below is guidance for any DMU researchers who may be using bibliometrics on how to do so fairly and responsibly.

Key principles for responsible and fair use of bibliometrics

If you are using bibliometrics in any form of research assessment, the following guiding principles will help you to ensure assessment is responsible and fair:


Appropriateness 
Bibliometrics should only be used when absolutely necessary and where valid comparisons are possible; in some disciplines, suitable bibliometrics are simply not available. If used, don't rely on a single bibliometric: use multiple metrics, and use them in conjunction with expert judgement rather than as a sole source of information. Situate them within a broader range of evidence of impact (for example, influence on policy).

Any bibliometrics used should be tailored to the question being asked. For example, don't use a journal metric (e.g. the Journal Impact Factor or other journal rankings) to infer the quality of an individual or of an individual output.


Transparency
Have a defined question you want to answer before selecting any bibliometrics you think necessary. Be explicit about which bibliometrics you are using and how the data have been collected. Outline the reliability of those bibliometrics and any limitations. Other people should be able to reproduce your results using the explanation you provide.


Equality 

Individuals with protected characteristics may not be directly comparable, due to a number of factors such as length of service and career breaks. Disciplines also have different publication practices and citation norms, as well as different perspectives on what constitutes research quality.

Normalisation (also referred to as "field-weighting" or "field-normalisation") provides more context when looking at citation performance and should be used in any comparisons. For example, when comparing papers, a normalised approach would only compare papers of the same publication year, the same type of publication and the same subject area. A normalised metric indicates whether a paper is generating the expected number of citations, or more or fewer, for a paper of that age, type and discipline.
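To make the idea concrete, below is a minimal illustrative sketch in Python. It is not the exact calculation used by Scopus's Field-Weighted Citation Impact or any other specific tool, and the paper records and function name are invented for illustration: each paper's citations are simply divided by the average citations of papers sharing its year, type and subject area.

```python
from collections import defaultdict

# Hypothetical records: each paper has a citation count plus the three
# attributes used for normalisation (publication year, document type, subject).
papers = [
    {"id": "A", "citations": 12, "year": 2020, "type": "article", "subject": "Chemistry"},
    {"id": "B", "citations": 3,  "year": 2020, "type": "article", "subject": "Chemistry"},
    {"id": "C", "citations": 5,  "year": 2020, "type": "article", "subject": "History"},
    {"id": "D", "citations": 1,  "year": 2020, "type": "article", "subject": "History"},
]

def normalised_scores(papers):
    # Group papers that share the same year, type and subject area,
    # then compare each paper's citations with its group average.
    groups = defaultdict(list)
    for p in papers:
        groups[(p["year"], p["type"], p["subject"])].append(p["citations"])

    scores = {}
    for p in papers:
        group = groups[(p["year"], p["type"], p["subject"])]
        expected = sum(group) / len(group)
        # A score of 1.0 means the paper is cited as expected for its age,
        # type and discipline; above 1.0 means more than expected.
        scores[p["id"]] = p["citations"] / expected if expected else 0.0
    return scores

print(normalised_scores(papers))
# History paper C scores 5/3 ≈ 1.67 against other History articles, even though
# its raw citation count is far lower than Chemistry paper A's.
```

The point of the sketch is that raw citation counts favour high-citation disciplines; only after normalisation can papers from different fields be placed on a comparable footing.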


Consistency

Use bibliometrics consistently - only use bibliometrics from the same source if making comparisons.

For example, if comparing two sets of scholars, don't use bibliometrics from Scopus for some and from Google Scholar for others.


Continual Reassessment
Continually assess commonly used bibliometrics, especially concerning appropriateness and equality. If a metric is no longer fit for purpose, it should not be used.


Frequently Asked Questions

Can I use journal metrics to decide where to publish?

Journal impact metrics (e.g. the Journal Impact Factor or SNIP) and journal ranking systems (e.g. the CABS Academic Journal Guide) do not tell you whether a journal is the most appropriate venue for your research. Field-normalised journal metrics can help inform a decision, but other factors need to be taken into account. Think about:

  • "Will the people I want to reach be able to access the article? What is the journal's Open Access policy?"
  • "Is the journal indexed in internet search engines?"
  • "Has the journal published similar research recently?"
  • "Who tends to read the journal? Are they the audience I want to reach?"

My department has a recommended list of journals in which to publish. Is this against the policy?

A recommended list is not in itself an infringement of the policy, provided it has been developed in accordance with the policy. However, you should be able to publish in other venues if they would be more appropriate for your output.

If anyone is developing a recommended list of journals in which to publish, they should use multiple metrics to confirm the findings, alongside expert judgement. There should be no consequences for authors who choose to publish elsewhere.

Should I use the H-index to assess the quality of an individual's research?

No. The H-index is a flawed indicator of both the quality and the quantity of an individual's research. An individual's h-index is the largest number h such that h of their articles have each received at least h citations. It is strongly influenced by discipline, publication volume and career length.

It is very difficult to use responsibly and consistently, especially when assessing Early Career Researchers or individuals with protected characteristics. The H-index has been severely criticised by some funders, including UKRI.
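To make the definition above concrete, here is a minimal illustrative sketch in Python (the citation counts are invented for illustration). It also shows one reason the measure disadvantages early career researchers: the h-index can never exceed the number of papers published, however highly cited they are.

```python
def h_index(citations):
    # The h-index is the largest h such that the researcher has h papers
    # with at least h citations each.
    citations = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(citations, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# An early career researcher with three highly cited papers...
print(h_index([250, 180, 90]))         # -> 3: capped by the number of papers
# ...scores lower than a long-serving researcher with many modestly cited papers.
print(h_index([12] * 12 + [3, 2, 1]))  # -> 12
```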

Good practice guidance

Are you an academic leader or manager, shortlisting for a position? Or assessing an applicant for promotion?  Are you assessing a funding bid (internal or external) or an application for research time?

Are you a researcher, applying for funding (internal or external)? Or for research time (internal)?

Are you assessing potential REF outputs, either as a UoA Co-ordinator or REF peer-assessor?

Are you a Research Institute Director assessing Institute membership?