However, have you ever wondered who is actually citing you? Being cited by colleagues and junior researchers in your department is one thing, but what if your academic hero cites your work? Imagine discovering that your paper was read and acknowledged by the leading researchers in your field; would that not be a stronger indicator of the value of your ideas? And what if a Nobel Prize winner cited your work in his or her next paper? How many “lesser” citations would you exchange for that single endorsement?
Professor Ying Ding

Who’s citing whom?

Ying Ding, Assistant Professor in the School of Library and Information Science at Indiana University in the US, believes citations from recognized experts should count for more. In a recent paper co-authored with Blaise Cronin, “Popular and/or prestigious? Measures of scholarly esteem”, she explores whether taking the source of a citation into account can help identify groundbreaking contributions to a subject area, in this case the field of information retrieval. (1)
Ding makes a clear distinction between: “popularity, which is how many citations a paper receives, irrespective of who is making the citation, and prestige, which gives greater weight to citations coming from highly cited papers.”
She is concerned that raw citation counts might identify educational or other general-interest texts, especially review articles, as the most highly cited works in a field. It is possible to receive a large number of citations from non-experts, but Ding believes that experts in the field are more likely to be citing groundbreaking discoveries.
She explains: “I wanted to use citations to identify which papers were making real contributions to the field. I therefore decided to follow citations from recognized experts only. A real breakthrough is more likely to be recognized by thought leaders in a field, and so it is those citations I wanted to track.”
While there could be an element of circularity in using highly cited (popular) papers to determine prestige, Ding explains, “we could use peer review and other qualitative measures to pinpoint the leaders, but my objective was to find a quantitative measure of prestige. Based on the 80/20 rule of thumb [in which just 20 percent of all published papers attract 80 percent of citations], I only counted citations from this 20 percent.”
Essentially, Ding is using the most-cited papers in a field as a filter so she can use citations to distinguish between popularity and prestige, with prestige being a finer distinction.
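The distinction can be made concrete with a small sketch. The following Python snippet, using an entirely hypothetical citation graph and a simplified version of the 80/20 filter described above (not the actual method from Ding and Cronin's paper), counts popularity as all inbound citations and prestige as only those citations coming from the top 20 percent most-cited papers:

```python
from collections import Counter

# Hypothetical citation graph: citing paper -> papers it cites.
citations = {
    "A": ["C", "D"],
    "B": ["C"],
    "C": ["D", "E"],
    "D": ["E"],
    "E": [],
}

# Popularity: raw inbound citation counts, regardless of who cites.
popularity = Counter(cited for refs in citations.values() for cited in refs)

# Simplified 80/20 filter: keep only the top 20% most-cited papers as citers.
ranked = sorted(citations, key=lambda p: popularity[p], reverse=True)
top = set(ranked[: max(1, len(ranked) // 5)])

# Prestige: count only citations coming from that highly cited slice.
prestige = Counter(
    cited
    for citer, refs in citations.items()
    if citer in top
    for cited in refs
)

print("popularity:", popularity.most_common())
print("prestige:  ", prestige.most_common())
```

In this toy graph, papers C, D and E are equally popular (two citations each), but only citations from the single top-ranked paper count toward prestige, so the two rankings diverge; that gap between the counts is exactly the distinction Ding is drawing.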

Rising above the crowd

Separating out this 20 percent becomes even more useful when we remember how crowded academia is getting these days. The number of scientists, journals, papers and citations has been climbing exponentially. According to Ding, “now there are so many citations that we need to distinguish those that really indicate scientific impact.”
Many groups need to be able to identify prestige, either quickly or because they are not actually experts themselves. Journal editors need to efficiently find the best experts for peer review, while research institutes, governments and other sources of funding need to be able to identify the best targets. “With more competition for scarcer funding, it is becoming increasingly important for the people who make these decisions to identify where they will get the best return on investment: that is obviously by directing funding at the researchers most likely to create value and impact as a result,” says Ding.

Real quality lasts

Ding sorts authors into two tables showing the top 10 for prestige and the top 10 for popularity over a 50-year period. Tracking the names at the top of these tables, she finds that the authors identified as prestigious remain in the top 10 for far longer than those who are merely popular.
She explains: “Popularity doesn’t last because ideas and technologies change. This is why prestige is a better way to identify groundbreaking papers. For instance, a textbook might initially receive a lot of citations, but (depending on how fast the field moves) this will eventually become outdated. On the other hand, real contributions to a field will be cited for a long time. If a paper introduces concepts or terminologies that become building blocks in the field, then many people will cite them for a longer time.”
Some papers are identified as prestigious only, indicating that they receive citations almost exclusively from the most-cited papers. This suggests their content is so innovative that only the leaders in the field are capable of recognizing its importance. Ding points out: “If we didn’t weight citations, these papers would fall to the bottom of the list, as they don’t receive a high number of citations. However, if the experts are citing this work, it is important that we can see this.”

From popularity to prestige

According to Ding, prestige should be the ultimate aim of all scientists, since this means you have contributed something of real and lasting value to your field.
“Ultimately, ‘prestige’ measures whether you have made significant contributions, which first requires experience and deep understanding of your subject. Not everyone can become a thought leader, and measuring prestige helps us understand which researchers have achieved this level. It helps us understand which authors are being read by the best researchers,” she explains.
And how should researchers work towards this prestige? According to Ding, “you have to write better papers! My strategy starts with only reading the best papers. It’s not possible to read everything, so you should limit your reading to the very best journals and papers in your field. You also need to reserve time for critical thinking. Keep asking yourself ‘what is missing, what can I add?’ There’s no point following the crowd.”
And what about Ding herself; is she putting her theory into practice? “Prestige is obviously my ultimate ambition because that would mean I’ve managed to make a lasting contribution, but I first need to make myself highly cited, so this is what I’m currently working towards.”

*********************************************************************************************

Reference:

(1) Ding, Y. and Cronin, B. (2011) “Popular and/or Prestigious? Measures of Scholarly Esteem”, Information Processing and Management, Vol. 47, Issue 1, pp. 80–96.

Additional reading:

1. Bollen, J.; Rodriguez, M.A.; and Van de Sompel, H. (2006) “Journal Status”, Scientometrics, Vol. 69, pp. 669–687.
2. González-Pereira, B.; Guerrero-Bote, V.P.; and Moya-Anegón, F. (2009) “The SJR indicator: A new indicator of journals’ scientific prestige”, arxiv.org/pdf/0912.4141