What different bibliometrics are there?

There are several different types of bibliometric measures, including:

  • Publication bibliometrics
  • Journal bibliometrics
  • Author bibliometrics
  • Altmetrics

If you are interested in finding out more about these, and about other measures, check out the Metrics Toolkit.

Journal Bibliometrics

Journal Impact Factor (JIF)

Definition: The Journal Impact Factor is a measure reflecting the annual average (mean) number of citations to recent articles published in a journal. It is a measure of the frequency with which the “average article” in a journal has been cited in a particular year or period.
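
As a rough illustration only (all figures below are hypothetical; real Impact Factors are published in Journal Citation Reports), the standard two-year calculation can be sketched as:

    # Hypothetical figures for illustration only
    citations_in_2024_to_2022_23_items = 1500   # citations received in 2024 by items published in 2022-23
    citable_items_2022_23 = 400                 # articles and reviews published in 2022-23

    jif_2024 = citations_in_2024_to_2022_23_items / citable_items_2022_23
    print(f"2024 Journal Impact Factor: {jif_2024:.2f}")   # 3.75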

Sources: Web of Science Database and Journal Citation Reports (not subscribed to by DMU); journals' webpages

Usefulness: The JIF can be useful in comparing the relative influence of journals within a discipline, as measured by citations.

Main limitations: The JIF is not a proxy for the quality of a journal or its contents. Because citation patterns differ within disciplines and across publication sources, and because the distribution of citations is highly skewed, it is not a good predictor of whether an individual article will be highly cited.


Eigenfactor Score

Definition: The Eigenfactor Score measures the number of times articles from the journal published in the past five years have been cited in the Journal Citation Reports (JCR) year.

Like the Impact Factor, the Eigenfactor Score is essentially a ratio of the number of citations to the total number of articles. However, unlike the Impact Factor, the Eigenfactor Score:

  • Counts citations to journals in both the sciences and social sciences.
  • Eliminates self-citations. Every reference from one article in a journal to another article from the same journal is discounted.
  • Weights each reference according to a stochastic measure of the amount of time researchers spend reading the journal (a simplified sketch of this weighting idea follows below).
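
The full Eigenfactor algorithm is considerably more involved, but the core idea, that a citation from an influential journal counts for more, can be sketched roughly as follows. All journal names and citation counts here are hypothetical, and the published method adds further normalisation (for example by article counts) and a teleportation step:

    import numpy as np

    # Hypothetical citation matrix: C[i, j] = citations from journal j to journal i
    journals = ["Journal A", "Journal B", "Journal C"]
    C = np.array([[0.0, 4.0, 2.0],
                  [3.0, 0.0, 6.0],
                  [1.0, 5.0, 0.0]])

    np.fill_diagonal(C, 0.0)      # self-citations are discounted, as in the Eigenfactor
    P = C / C.sum(axis=0)         # normalise each column: how each journal's outgoing citations are shared

    # Power iteration: a journal gains influence by being cited by influential journals
    influence = np.full(len(journals), 1.0 / len(journals))
    for _ in range(100):
        influence = P @ influence

    for name, score in zip(journals, influence / influence.sum()):
        print(f"{name}: {score:.3f}")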

Sources: The Eigenfactor journal ranking is freely available to search, or can be accessed via Journal Citation Reports (not subscribed to by DMU). Data is curated from the Web of Science.

Usefulness: The Eigenfactor includes a built-in evaluation period of five years and attempts to give a more accurate representation of the merit of citations than raw citation counts do.

Limitations: The Eigenfactor assigns journals to a single category, making it more difficult to compare across disciplines. Some argue that Eigenfactor scores are not very different from raw citation counts.


SCImago Journal Rank (SJR)

Definition: The SCImago Journal Rank (SJR) is developed from Elsevier's Scopus database and is made available through the free SCImago Journal & Country Rank portal, which includes journal and country scientific indicators. SJR is a prestige metric inspired by the Google PageRank algorithm and the idea that not all citations are the same: the subject field, quality and reputation of the citing journal have a direct effect on the value of a citation and the impact it conveys.

The indicator expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years.
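
For illustration only (figures hypothetical): if the documents a journal published in 2021-2023 received weighted citations worth 900 in 2024, and the journal published 300 documents across those three years, the indicator would work out at 900 / 300 = 3.0. In the real calculation the citation weights are derived iteratively from the whole Scopus citation network, in a similar spirit to the Eigenfactor sketch above.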

Sources: SCImago Journal & Country Rank (SJR) can be freely accessed. Data is curated from the Scopus database.

Usefulness: The ranking is weighted by the prestige of the journal, thereby ‘levelling the playing field’ among journals. It can normalise for differences in citation behaviour between subject fields.

Limitations: Although this indicator has more merit than a simple Impact Factor or other indicators, the ranking is affected by gaps in Scopus's coverage of journals and by errors in how published documents are assigned within Scopus.

Source Normalized Impact per Paper (SNIP)

Definition: The Source Normalized Impact per Paper (SNIP) measures contextual citation impact by weighting citations based on the total number of citations in a subject field. It aims to allow direct comparison of sources in different subject fields.

Usefulness: Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines, but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences.
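
As a rough illustration (figures hypothetical): if a clinical journal averages 6 citations per paper in a field whose citation potential is 3, and a mathematics journal averages 2 citations per paper in a field whose citation potential is 1, dividing each raw figure by its field's citation potential gives both journals a SNIP-style value of 2, making them directly comparable.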

Sources: Scopus (subscribed to by DMU)

Limitations: The SNIP is still a journal metric and comes with the same limitations as the Impact Factor and other journal bibliometrics, in that it describes the place of publication rather than the merits of the output itself. Data is based on the Scopus database, and SNIP scores are therefore only available for journals indexed in Scopus.

CiteScore

Definition: The CiteScore is calculated as the number of citations received over four years by documents published in a journal, divided by the number of those same document types indexed in Scopus and published in the same four years.
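
For illustration (figures hypothetical): a journal whose documents published in 2021-2024 received 2,000 citations in that same window, and which published 500 Scopus-indexed documents of the counted types in those years, would have a CiteScore of 2,000 / 500 = 4.0.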

Usefulness: The CiteScore can be useful in comparing the relative influence of journals within a discipline, as measured by citations.

Sources: Scopus Database (subscribed to by DMU)

Limitations: The CiteScore is calculated in a similar way to the Journal Impact Factor and therefore shares many of the same limitations. It principally reports the mean number of citations to documents in a journal, and is thus highly susceptible to outliers. It is produced by the Elsevier publishing group and therefore has a potential "conflict of interest". The CiteScore does not allow for comparison between different disciplines, as journals in fields with naturally low citation numbers would be penalised. Furthermore, CiteScore included front matter (editorials, news, letters, etc.) in its count of how many documents are in each journal; such material is generally not cited and can therefore skew the CiteScore.

Acceptance Rate

Definition: The percentage of manuscripts accepted for publication out of all manuscripts submitted.
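
For illustration: a journal that accepted 120 of the 800 manuscripts submitted to it in a year would have an acceptance rate of 120 / 800 = 15%.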

Sources: Journal editors and publisher websites

Usefulness: The acceptance rate for a journal depends upon the relative demand for publishing in that journal, the peer review processes in place, the mix of invited and unsolicited submissions, and time to publication, among other factors. As such, it may be a proxy for perceived prestige and demand as compared to availability.

Limitations: Many factors unrelated to quality can impact the acceptance rate for any particular journal. Therefore the acceptance rate should not be used as a measure of the quality of a particular manuscript. Lower acceptance rates should not be assumed to be the result of higher standards in peer review. Acceptance rate should not be used as a comparative metric across fields or disciplines.


Author Bibliometrics

H-index

Definition: Usually an author-level metric (although it can also be calculated for any aggregation of publications, e.g. journals, institutions, etc.), calculated from the count of citations to an author’s set of publications. For example, an author with an h-index of 6 has at least six publications that have each been cited at least six times.
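
A minimal sketch of the calculation, with hypothetical citation counts, might look like this:

    def h_index(citations):
        """Largest h such that at least h publications have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for position, cites in enumerate(ranked, start=1):
            if cites >= position:
                h = position
            else:
                break
        return h

    print(h_index([25, 19, 12, 8, 8, 6, 3, 1, 0]))   # 6: six papers cited at least 6 times each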

Sources: Google Scholar (freely available), Scopus (Subscribed to by DMU), Web of Science (not subscribed to by DMU), Dimensions (not subscribed to by DMU) or any other citation index that includes author- and article-level citation information.

Usefulness: The h-index has been used as evidence of the scholarly influence of an author’s, or group of authors’, body of work. 

Limitations: The h-index varies by discipline due to varying norms of publishing speed and quantity. Since it does not take into account the longevity of a scholar’s career, it benefits more experienced scholars over early-career individuals. Therefore, it should not be used as a sole metric of scholarly impact, nor should it be used as a direct measure of quality. In particular, the h-index should not be used to rank authors who are in different disciplines or those at different stages of their careers.


Publication Bibliometrics

Citation Counts

Definition: The number of times that a published piece of research has appeared in the reference lists of other articles and books.

Sources: Most major databases, as well as Google Scholar, collect citation data.

Usefulness: Citations can be a measure of influence amongst other scholars.

Limitations: Many factors can impact citation counts, including: database coverage; differences in publishing patterns across disciplines; citation accrual times; self-citation rates; the age of the publication; limited coverage of some discipline areas (particularly arts and humanities in the major databases). Furthermore, citation counts are not a direct measure of research quality. Negative citations can be common and counts alone are not a measure of positive reputation for individual researchers. 


Citation Percentiles

Definition: The position of a paper or group of papers with respect to other papers in a given discipline, country, and/or time period, based on the number of citations they have received. For example, the proportion of publications that belong to the top 10% most frequently cited in their field.
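
A rough sketch of one such indicator, the share of a group's papers in the top 10% of their field, is shown below; all citation figures are hypothetical, and real indicators handle ties, field definitions and document types more carefully:

    import numpy as np

    # Hypothetical citation counts for every paper in one field, year and document type,
    # and for one research group's papers within that same field
    field_citations = np.random.default_rng(42).poisson(lam=5, size=1000)
    group_citations = np.array([2, 7, 15, 30, 4])

    top10_threshold = np.percentile(field_citations, 90)     # citations needed to reach the top 10%
    share_in_top10 = np.mean(group_citations >= top10_threshold)

    print(f"Top-10% threshold for the field: {top10_threshold:.0f} citations")
    print(f"Share of the group's papers in the top 10%: {share_in_top10:.0%}")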

Sources: Essential Science Indicators; InCites; SciVal (not subscribed to by DMU)

Usefulness: Percentiles based jointly upon subject area and document type can be the most appropriate means of comparison between journal articles or groups of journal articles.

Limitations: Percentile-based indicators are based on citations, so they inherit the same limitations as all citation counts. As such, these percentiles should be interpreted with care.


Field Weighted Citation Impact (FWCI)

Definition: The Field Weighted Citation Impact (FWCI) is the ratio between the actual citations received by a publication and the average number of citations received by all other similar publications, i.e. publications in the same subject category, of the same type (i.e. article, review, book chapter, etc.), and of the same age (i.e. publication year).

It is primarily used with journal articles, but can also be applied to other kinds of research outputs, such as book chapters and conference proceedings, provided they are sufficiently covered by abstract and citation databases.
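
For illustration (figures hypothetical): an article that has received 12 citations, when comparable publications of the same type, subject area and year have received 8 citations on average, has an FWCI of 12 / 8 = 1.5. A value of 1.0 indicates citation performance broadly in line with the average for comparable publications.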

Sources: Scopus (subscribed to by DMU). Other bibliometric sources such as Web of Science (not subscribed to by DMU), Dimensions (not subscribed to by DMU) and Google Scholar (freely available) offer similarly calculated field-normalised citation-based metrics.

Usefulness: The FWCI was conceived to facilitate the benchmarking of citation performance across groups of different size, disciplinary scope, and age. It is meant to correct for the effects that disciplinary patterns of scholarly communication and publication age can have on non-normalised metrics, such as citation counts.

Limitations: The FWCI is typically presented as a mean value for an aggregation of papers, which can be strongly influenced by outliers because the distribution of citations across publications is often highly skewed. As with most citation analysis, the FWCI should not be interpreted as a direct measure of research quality.


Altmetrics

Definition: Altmetrics track the number of times an article is shared, downloaded or mentioned on social media, in blogs, newspapers, reports, etc.

Sources: Altmetric is a subscription-based database (not subscribed to by DMU) that provides altmetric data; it has a free bookmarklet that can collect some basic information. Some bibliographic databases, such as Scopus (subscribed to by DMU), include basic altmetrics provided by PlumX. Impactstory is an open-source website that provides basic altmetrics such as online mentions.

Usefulness: Altmetrics can offer complementary evidence of research impact beyond traditional methods such as peer review and citation counts. These metrics may appear before citations to a published article, providing earlier evidence of impact, and they can track impact outside of academia. They can also be informative when evaluating outputs other than journal articles.

Main limitations: Altmetrics are inappropriate for formal research evaluations and they cannot provide comparative data. There are no standards or regulations for altmetrics. As with citations, a high number of shares or social media mentions does not necessarily equate with quality: an article may be mentioned on social media because it contains something amusing, unusual or even controversial. Social media can be manipulated, and "likes" or "mentions" can be paid for or generated, so the numbers may not reflect the actual level of public interest in a piece of work. Altmetrics may also underestimate scores for older journal articles (pre-2011). The data should never be used alone, but in conjunction with other measures of research evaluation.