Biju Dharmapalan
The productivity of scientific research is often evaluated through bibliometric analysis that depends largely on citation indices. Citation indices are databases or tools used to measure how influential academic works have been through the citations they receive. Researchers and institutions typically use these indices to estimate the importance of a given paper, author, or journal in the academic community. The evolution of citation databases can be traced back to the mid-20th century: the Science Citation Index (SCI), established by Dr. Eugene Garfield, was one of the first and remains one of the most prominent citation indices. Despite citation indices having existed for more than half a century, some intriguing questions remain unanswered. Do the papers with the highest number of citations represent the most significant ones? Do the most cited papers produce disruptive research and innovation?
Indeed, citations are universally acknowledged as a conventional gauge of academic impact, indicating that a piece of work has been read and found valuable for subsequent research. The metrics used to evaluate research, such as the impact factor, h-index, and i10-index, all depend on citation counts. Many may not be aware that the impact factor was originally designed as a tool to help libraries decide which journals to subscribe to, not as a way to judge individual researchers.
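To make the arithmetic behind these metrics concrete, here is a minimal illustrative sketch (not part of the article itself) of how the h-index and i10-index are derived from a list of per-paper citation counts; the function names and sample numbers are purely hypothetical.

def h_index(citations):
    # h-index: the largest h such that h papers have at least h citations each
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    # i10-index: the number of papers with at least 10 citations
    return sum(1 for count in citations if count >= 10)

# Hypothetical citation counts for one author's papers
citations = [52, 30, 12, 10, 8, 4, 0, 0]
print(h_index(citations))    # 5: five papers have at least 5 citations each
print(i10_index(citations))  # 4: four papers have 10 or more citations

Both numbers rise only when citations accumulate, which is precisely why an uncited but influential paper contributes nothing to them.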
Citation counts have even become an essential parameter in securing academic jobs. Scientific papers that attract a large number of citations play a crucial role in the network through which scientific information is shared and discussed. However, depending exclusively on citation counts fails to reveal why a study is considered important enough to be referred to repeatedly by other academics. Many researchers treat “uncitedness” as a mark of futile or unproductive research. Several myths and misconceptions, however, have grown around the notion of uncitedness in research.
The absence of citations does not necessarily mean that a work is unimportant. It might be ahead of its time, focused on a niche topic, or simply overlooked by the research community. Research impact can take time to materialize, and some groundbreaking ideas gain recognition only after a considerable period. A paper on measuring osmotic pressure, published by the geneticist and Nobel laureate Oliver Smithies in 1953, had “the dubious distinction of never being cited”.
While citations can be a measure of impact, they do not always reflect the quality of a work. Many factors influence whether a paper gets cited, including the visibility of the journal, the popularity of the topic, and the author’s network. A paper might be cited frequently for reasons other than its scientific rigour, such as the controversy surrounding the topic, the popularity of the author, or even inflation by the authors themselves through self-citations and reciprocal citations. Conversely, high-quality research may go unnoticed for reasons unrelated to its merit. Of late, many journal editors compel prospective authors to cite papers published in their journals in order to raise the impact factor. This practice is quietly followed by editors of many reputed high-impact journals, and a few even make it a prerequisite for publication. Because of their growing significance as a sort of currency in the academic world, citations have also become a prime target for various types of fraudulent activity, and several studies have shown how straightforward it is to inflate citation counts intentionally.
The number of citations does not determine the accuracy or truthfulness of research findings, nor does it mean that the paper has been read in full. The most cited work in history is a 1951 paper by the late US biochemist Oliver Lowry describing an assay to determine the amount of protein in a solution, which has gathered more than 305,000 citations; it would be hard to establish how many researchers have actually gone through the original paper. Scholars may also cite a work to critique it, replicate it, or build upon it, without necessarily endorsing its conclusions.

Nor are all impacts of research captured by citations. Real-world applications, influence on public policy, or the inspiration given to future generations may not be reflected in citation counts. Many researchers put forward out-of-the-box ideas that never earn a citation. The fact that a large number of papers receive no citations does not mean that nobody reads them; researchers may be reading them and even designing experiments based on them. Similarly, thousands of journals are not indexed by the Web of Science or Scopus. Much excellent, disruptive research is published in low-profile journals, which many eminent scientists hesitate to cite for fear of repercussions from evaluating authorities. Low-profile journals are often branded ‘predatory’, a term that to date has no clear-cut definition. Even if we go by the definition given by scholars in 2019, that “predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices”, many so-called reputed journals fall into this category.
Researchers and institutions are increasingly recognizing the limitations of relying solely on citation metrics to assess the value of research. Altmetrics, which take into account social media mentions, downloads, and other non-traditional indicators, are gaining prominence as complements to traditional citation metrics. Researchers should be aware of these myths and consider a broader range of indicators when evaluating the impact and significance of their work. A lack of citations cannot be read as a mark of useless articles or valueless research.
(The author is a science communicator and an adjunct faculty at the National Institute of Advanced Studies, Bangalore)