Author-level metrics are citation metrics that measure the bibliometric impact of individual authors, researchers, academics, and scholars. Many metrics have been developed that take into account varying numbers of factors (from only considering the total number of citations, to looking at their distribution across papers or journals using statistical or graph-theoretic principles).
These quantitative comparisons between researchers are mostly done to distribute resources (such as money and academic positions). However, there is still debate in the academic world about how effectively author-level metrics accomplish this objective.[1][2][3]
Author-level metrics differ from journal-level metrics, which attempt to measure the bibliometric impact of academic journals rather than individuals, and from article-level metrics, which attempt to measure the impact of individual articles. However, metrics originally developed for academic journals can be reported at researcher level, such as the author-level eigenfactor[4] and the author impact factor.[5]
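As a concrete illustration of that spectrum, the following minimal Python sketch (hypothetical code, not taken from any particular bibliometric package) computes the two simplest author-level metrics from a list of per-paper citation counts: the total citation count, and the h-index, defined as the largest h such that the author has h papers with at least h citations each.

```python
def total_citations(cites):
    """Simplest author-level metric: the sum of citations over all papers."""
    return sum(cites)

def h_index(cites):
    """The h-index: the largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    h = 0
    for rank, c in enumerate(sorted(cites, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Example: an author with six papers and these citation counts.
papers = [25, 8, 5, 3, 3, 1]
print(total_citations(papers))  # 45
print(h_index(papers))          # 3 (three papers have >= 3 citations)
```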
The L-index is calculated automatically by the Exaly database.[35]
A number of models have been proposed to incorporate the relative contribution of each author to a paper, for instance by accounting for an author's rank in the sequence of authors.[41] A generalization of the h-index and some other indices has been proposed that gives additional information about the shape of the author's citation function (heavy-tailed, flat/peaked, etc.).[42] Although the h-index was never meant to measure future publication success, a group of researchers has investigated which features are most predictive of an author's future h-index, and such predictions can be tried with an online tool.[43] However, later work has shown that because the h-index is a cumulative measure, it contains intrinsic autocorrelation that led earlier studies to significantly overestimate its predictability; the true predictability of the future h-index is therefore much lower than previously claimed.[44] The h-index can also be computed over different time windows to analyze its evolution during a career.[45]
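One simple scheme of this kind is harmonic counting, in which the i-th of N coauthors receives a share of credit proportional to 1/i. The sketch below is a minimal illustration of that idea, not the specific model of any of the works cited above.

```python
def harmonic_credit(author_rank, n_authors):
    """Harmonic counting: the i-th of N authors gets a credit share
    proportional to 1/i, normalized so all shares sum to 1."""
    normalizer = sum(1 / i for i in range(1, n_authors + 1))
    return (1 / author_rank) / normalizer

# Credit split for a three-author paper: roughly 0.55, 0.27, 0.18.
for rank in range(1, 4):
    print(rank, round(harmonic_credit(rank, 3), 2))
```

Under a scheme like this, a paper's citations can be apportioned among its coauthors before any index is computed, rather than credited in full to each.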
Some academics, such as physicist Jorge E. Hirsch, have praised author-level metrics as a "useful yardstick with which to compare, in an unbiased way, different individuals competing for the same resource when an important evaluation criterion is scientific achievement."[1] However, other members of the scientific community, and even Hirsch himself,[46] have criticized them as particularly susceptible to gaming the system.[2][3][47]
Work in bibliometrics has demonstrated multiple techniques for manipulating popular author-level metrics. The most widely used metric, the h-index, can be manipulated through self-citations,[48][49][50] and even computer-generated nonsense documents, for example those produced with SCIgen, can be used for that purpose.[51] Metrics can also be manipulated through coercive citation, a practice in which a journal editor forces authors to add spurious citations to an article before the journal will agree to publish it.[52][53]
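The sensitivity of the h-index to self-citation can be shown numerically: because the index depends only on how many papers clear the rank threshold, a few citations aimed at papers just below the current h are enough to raise it. The following sketch (with invented citation counts) demonstrates the vulnerability in general terms; it is not a description of any cited study's method.

```python
def h_index(cites):
    """Largest h such that the author has h papers with >= h citations."""
    return sum(1 for rank, c in enumerate(sorted(cites, reverse=True), 1)
               if c >= rank)

papers = [25, 8, 5, 3, 3, 1]
print(h_index(papers))  # 3

# Two self-citations targeted at the papers sitting just below the
# threshold (3 -> 4 citations each) push the h-index from 3 to 4.
papers[3] += 1
papers[4] += 1
print(h_index(papers))  # 4
```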
Additionally, if research funding agencies adopt the h-index as a decision criterion, the game-theoretic solution to the resulting competition is for researchers to increase the average length of their coauthor lists.[54] A study analyzing more than 120 million papers in the field of biology showed that the validity of citation-based measures is being compromised and their usefulness is diminishing.[55] As Goodhart's law predicts, publication count is no longer a good metric, since papers have grown shorter and author lists longer.
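A toy calculation makes the incentive concrete: the h-index credits every coauthor in full, so researchers who pool their output each claim a longer publication list at no citation cost. The sketch below (hypothetical citation counts) compares two researchers publishing solo with the same pair coauthoring everything.

```python
def h_index(cites):
    """Largest h such that the author has h papers with >= h citations."""
    return sum(1 for rank, c in enumerate(sorted(cites, reverse=True), 1)
               if c >= rank)

# Two researchers, each with five solo papers.
a = [9, 7, 5, 3, 1]
b = [8, 6, 4, 2, 1]
print(h_index(a), h_index(b))  # 3 3

# If the same ten papers are all coauthored, each researcher is
# credited with the full combined list: both h-indices rise to 5
# without a single additional citation.
pooled = a + b
print(h_index(pooled))  # 5
```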
Leo Szilard, who conceived of the nuclear chain reaction, also criticized the decision-making system for scientific funding in his book "The Voice of the Dolphins and Other Stories".[56] Senator J. Lister Hill read excerpts of this criticism at a 1962 Senate hearing on the slowing of government-funded cancer research.[57] Szilard's critique focuses on how metrics slow scientific progress, rather than on specific methods of gaming:
"As a matter of fact, I think it would be quite easy. You could set up a foundation, with an annual endowment of thirty million dollars. Research workers in need of funds could apply for grants, if they could make out a convincing case. Have ten committees, each composed of twelve scientists, appointed to pass on these applications. Take the most active scientists out of the laboratory and make them members of these committees. And the very best men in the field should be appointed as chairmen at salaries of fifty thousand dollars each. Also have about twenty prizes of one hundred thousand dollars each for the best scientific papers of the year. This is just about all you would have to do. Your lawyers could easily prepare a charter for the foundation. As a matter of fact, any of the National Science Foundation bills which were introduced in the Seventy-ninth and Eightieth Congress could perfectly well serve as a model."
"First of all, the best scientists would be removed from their laboratories and kept busy on committees passing on applications for funds. Secondly, the scientific workers in need of funds would concentrate on problems which were considered promising and were pretty certain to lead to publishable results. For a few years there might be a great increase in scientific output; but by going after the obvious, pretty soon science would dry out. Science would become something like a parlor game. Some things would be considered interesting, others not. There would be fashions. Those who followed the fashions would get grants. Those who wouldn’t would not, and pretty soon they would learn to follow the fashion, too."[56]
Hirsch himself has acknowledged these shortcomings:[46]
"I proposed the h-index hoping it would be an objective measure of scientific achievement. By and large, I think this is believed to be the case. But I have now come to believe that it can also fail spectacularly and have severe unintended negative consequences. I can understand how the sorcerer's apprentice must have felt." (p. 5)