Using bibliometrics to measure usage, impact of publications

Published by Leo Magno on Friday, 12 February 2016

Display of different types of ADB publications.

Over coffee recently, my colleague Riel Tanyag and I discussed publication metrics and how to measure the impact of knowledge products, a topic only librarians, publishers, web professionals, and Klingons would get excited about. Nonetheless, the conversation generated valuable thoughts and reflections.

We discussed bibliometrics, the analysis of publication usage. That eventually led to a debate on webometrics and cybermetrics, which measure the usage of web content.

That, however, is usage data, not impact data.

Once a publication gets downloaded, we need to measure its contribution to the general body of knowledge. Was it read at all after being downloaded? If so, was the publication, a portion of it, or its derivative used to create new knowledge? Was that knowledge propagated? Would the sum of the answers to the questions above be an aggregated indicator of publication impact as opposed to publication usage?

And so Riel and I moved into the realm of library and information science, her area of expertise. From a publishing person’s viewpoint—and thinking beyond the usual bibliometrics for usage—I offered some impact indicators for knowledge publications. These are possible metrics beyond the “usual suspects,” although how to measure them is a different matter altogether:

  • Derivative work based on the publication. Shows that the publication spawned new material and is significant enough to serve as the basis of new knowledge.
  • Rights and permission requests to reuse or reprint. Shows whether the source material was significant enough, in full or in part, for others to want to repurpose and redistribute it.
  • Mentions or citations in government policy papers, legislative orders, or corporate memos and policies. Shows the publication contributed to the decision-making process.
  • Citations in academic papers. Shows how the publication contributed to the expansion of knowledge, with students using it in their theses, papers, and dissertations.
  • Citations in public presentations. Shows that others found the publication to be authoritative.
  • Instructional materials or teaching modules created based on the publication. Shows that the source material contains best practice examples worthy of being replicated and propagated.

Such impact indicators would show the extent to which the original publication or source material is being used by an interested audience (not necessarily the target audience) and then disseminated to a wider body, expanding knowledge on the subject matter. Knowledge is propagated, new knowledge is born, and change is effected. That is meaningful impact beyond what the usual usage metrics can capture.

To my mind, “usage” and “impact” are not the same. We can quantitatively measure downloads, page views, users, and alternative metrics or “altmetrics” for social media. That would give us usage data, but would it tell us whether our publications are making an impact? A common flaw among these metrics is that they can be “gamed”: Facebook likes and shares, tweets and retweets can be artificially increased through paid boosts; downloads and page views can be inflated manually or with bots; and even media coverage can be influenced. These tools certainly help disseminate our publications and increase their mindshare, but we still would not know whether anyone actually read or used them. Again, we would have derived usage data, but not impact data.

Citation counts can be tainted by self-citation, or a paper can be divided into several smaller articles to harvest extra citations. Despite these tricks, however, I consider citations a measurement of impact rather than usage, because a citation is proof that the original work was read, appreciated, and reused by another author. I would rather have my paper cited once than downloaded a hundred times without any proof of it having been read by anyone.
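As an aside, discounting self-citations is simple enough to sketch in code: treat a citation as independent only when the citing paper shares no authors with the cited one. The snippet below is a minimal illustration of that idea; the data structures and names are my own assumptions, not any real bibliographic API.

```python
# Minimal sketch: count citations while excluding author self-citations.
# The data model (sets of author names) is assumed for illustration.

def independent_citations(cited_authors, citing_author_lists):
    """Count citing papers that share no authors with the cited paper."""
    cited = set(cited_authors)
    return sum(1 for authors in citing_author_lists if cited.isdisjoint(authors))

original_paper = {"L. Magno", "R. Tanyag"}
citing_papers = [
    {"A. Reyes"},              # independent citation
    {"L. Magno", "B. Cruz"},   # self-citation: shares an author
    {"C. Santos", "D. Lim"},   # independent citation
]

print(independent_citations(original_paper, citing_papers))  # -> 2
```

Real bibliometric services handle this with author disambiguation rather than exact name matching, but the principle is the same.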

In the academic world, citation, more than downloads, is the currency and main indicator of publication impact. Two of the most popular metrics are “impact per publication” (IPP), used by the Centre for Science and Technology Studies (CWTS) at Leiden University, and the SCImago Journal Rank (SJR) indicator, which weighs not just the number of citations a journal receives but also the prestige of the citing journals to derive the “average prestige per article.” Online, one may also turn to Google Scholar, Microsoft Academic, Scopus, or Thomson Reuters’ Web of Science, all of which provide citation counts for academic publications, although each service counts only the publications it indexes and is thus subject to the whims of an algorithm.
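To make the IPP idea concrete: it is essentially a windowed average, the citations a journal’s papers from the previous three years receive in the current year, divided by the number of those papers. Here is a minimal sketch of that arithmetic; the journal figures are invented for illustration.

```python
# Minimal sketch of the "impact per publication" (IPP) calculation:
# citations received in a target year by papers published in the three
# preceding years, divided by the number of those papers.
# All figures below are hypothetical.

def impact_per_publication(papers_by_year, citations_by_cohort, year):
    """papers_by_year: {publication year: number of papers published}.
    citations_by_cohort: {publication year: citations received in `year`
    by the papers published in that year}."""
    window = (year - 1, year - 2, year - 3)
    cites = sum(citations_by_cohort.get(y, 0) for y in window)
    papers = sum(papers_by_year.get(y, 0) for y in window)
    return cites / papers if papers else 0.0

papers_by_year = {2013: 40, 2014: 35, 2015: 50}
citations_by_cohort = {2013: 120, 2014: 90, 2015: 60}  # citations counted in 2016

print(f"IPP(2016) = {impact_per_publication(papers_by_year, citations_by_cohort, 2016):.2f}")
# -> IPP(2016) = 2.16
```

SJR cannot be reduced to a simple ratio like this, because it weights each citation by the prestige of the citing journal, in the spirit of Google’s PageRank.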

Faced with different metrics for usage and impact, my conversation with Riel turned to possible research methods for deriving the two. What if we did a combined quantitative-qualitative study, mixing the metrics above with qualitative methods such as interviews, surveys, and focus group discussions? Would we be able to derive impact data beyond usage data? Let’s try and find out.
