Citation counts: what they measure, and how they can be tweaked

The numbers are really more a tongue-in-cheek tribute to the man than a serious measure of achievement. And yet, they touch on a serious question: is there any way to measure how good or effective a scientist is? A low Erdos number means you’ve worked with some serious mathematicians, which suggests that you have some value as a mathematician yourself. But still, the number really belongs as a tribute to a man who touched so many people.

However, there is a more serious measure that is often used: how many times a paper you authored has been cited in other papers by other scientists. You can probably tell that this “citation index” carries some weight. If another scientist cites something you researched and wrote, it means that the scientist found your work relevant and useful in their own work. And if many scientists cite your paper, it means that you have built up some relatively broad relevance. If your paper is cited long after it is published, perhaps even after you are dead and gone, that speaks to the lasting impact of your findings.

Of course, this idea of an index can be tweaked. For example, where is your paper cited? Should a reference in Mint count the same as a reference in Nature or Scientific American? Does a mention in Mint mean that a wider audience than just academics is reading your paper? If so, surely that suggests wider understanding and appeal? Also, how well do you cite your own references; that is, how aware are you of important results in your field when you do your research?

Such considerations go into the calculation of various metrics, known as the h-index, g-index and more. The h-index, for example, is calculated like this: if “h” of a scientist’s published papers have h or more citations each, and the rest have h or fewer citations each, then her h-index is h. So let’s say researcher Sharvari has published 30 papers. Seven of them have been cited at least seven times each; the other 23 have each been cited seven or fewer times. Sharvari has an h-index of 7.
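For the curious, the h-index calculation above can be sketched in a few lines of Python. The citation counts here are made up to match Sharvari’s hypothetical example; the function name is mine.

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    counts = sorted(citations, reverse=True)
    h = 0
    # The h-index is the largest h such that the h-th most-cited
    # paper still has at least h citations.
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Sharvari's 30 papers: 7 papers cited at least 7 times each,
# and 23 papers cited fewer than 7 times each (counts invented).
sharvari = [12, 10, 9, 8, 8, 7, 7] + [3] * 23
print(h_index(sharvari))  # 7
```

Note that the metric rewards a balance of productivity and impact: one blockbuster paper with 10,000 citations still yields an h-index of only 1.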

To give you a quick idea, Erdos has an h-index of 76, Albert Einstein 92. Einstein’s E = mc² paper has been cited about 500 times; his special relativity paper over 2,000 times. Like all statistical measures, these are used and interpreted in different ways. Certainly, Einstein has to be at or near the top of any ranking of scientists. But given the vast number of scientific journals, some of which are of questionable merit, it should hardly be a surprise to find unknown or unexpected names on such rankings. They may be solid scientists who are not yet widely known; or they may not be so solid at all, but have found ways to inflate their citation indices.

And indeed, a recent paper addresses the uses and abuses of such citation metrics, apparently seeking a way to rationalize them (“September 2022 data-update for the updated science-wide author database of standardized citation indicators”, John P.A. Ioannidis, Elsevier, 10 October 2022, https://bit.ly/3UliNVw). It looks at citation data from approximately 200,000 scientists around the world and attempts to rank them according to a composite of various citation indices, such as the h-index; this composite is called the C-score. As the paper states, “the C-score focuses on impact (citations) rather than productivity (number of publications).”

It’s not clear to me how many scientists, whether on this list or not, pay attention to a list like this. I suspect that more serious people just go about their work, with no interest in this kind of ranking. But anyway, how did Ioannidis select those nearly 200,000 scientists?

Let me quote their paper: “Selection is based on the top 100,000 scientists by C-score … or 2% or above percentile rank … 200,409 scientists are included in the most recent single-year dataset.”

Never mind the “most recent single-year dataset”. The language seems calculated to confuse: specifically, how does 100,000 become 200,409?

Maybe it’s that “2% or above percentile rank”? That may mean the count includes everyone who ranks above the bottom 2% of the rankings (that is, at or above the 2nd percentile); in other words, these 200,409 scientists make up the top 98% of the overall list. To me, this seems the most plausible reading of the paper.

It’s all leading somewhere, I promise. I was prompted to dig into this by a recent big newspaper advertisement for Patanjali products which claimed: “Pujya Acharya Balakrishna ji and Patanjali (PRI) are among the top 2% scientists in the world.” Patanjali’s website pointed me to the Ioannidis paper above, and from there to two huge Excel files containing data about all these scientists.

The paper mentions 2% only in the phrase “2% or above percentile rank”. So the “top 2% of scientists” in the ad should really read the “top 98% of scientists”. Or, equivalently, “scientists who rank higher than the bottom 2%”.

You’ll agree, none of this is as catchy as the “top 2%”.

But wait, where is Acharya Balakrishna in the ranking? With a C-score “rank” of 367,268, he appears on line 192,004 of a total of 200,409.

That means 95.8% of these scientists rank higher than Acharya Balakrishna. That is, his percentile rank is 4.2; only 4.2% of the scientists rank below him. Consider that.
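The arithmetic is easy to check. A minimal sketch in Python, using the two numbers quoted above from the dataset (the variable names are mine):

```python
total = 200_409     # scientists in the Ioannidis dataset
position = 192_004  # the line on which Balakrishna appears (line 1 = top C-score)

# Percentage of scientists ranked above him.
share_above = (position - 1) / total * 100

# His percentile rank: the percentage at his line or below.
percentile_rank = 100 - share_above

print(round(share_above, 1))      # 95.8
print(round(percentile_rank, 1))  # 4.2
```

So a "percentile rank" of roughly 4.2 is a long way from the "top 2%" the advertisement suggests.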

Dilip D’Souza, once a computer scientist, now lives in Mumbai and writes for his dinner. His Twitter handle is @DeathEndsFun.
