Transcript: Beyond Impact, Latest Journal Citation Reports Certify Trust

Interview with Dr. Nandita Quaderi, Clarivate

For podcast release Monday, February 19, 2024

KENNEALLY: Impact factor is a global standard for measuring the influence and importance of scholarly journals. Published by Clarivate, this calculation is critical for authors when considering where to publish research as well as for librarians when deciding which publications to hold in collections. For nearly half a century, the Journal Citation Reports, or JCR, have been “must reading” in universities around the world.

Welcome to CCC’s podcast series. I’m Christopher Kenneally for Velocity of Content.

Dr. Nandita Quaderi is a senior vice president and the editor in chief for Web of Science at Clarivate. She has oversight of the Web of Science editorial division, the Institute for Scientific Information, and the Research Professional News team. Dr. Quaderi has overall responsibility for editorial strategy, selection of Web of Science content, and inclusion in Journal Citation Reports. She joins me now from London to share the latest developments in the JCR’s ongoing evolution. Welcome to Velocity of Content, Dr. Quaderi.

QUADERI: Thank you so much, Chris. It’s a pleasure to be here.

KENNEALLY: The journal impact factor may be the most recognized and the most debated metric in scholarly publishing. Remind us exactly what the JIF measures and describe the ways that different audiences interpret the numbers.

QUADERI: So the journal impact factor, the JIF, is a measure of the scholarly impact of a journal. It’s calculated by taking the number of citations a journal receives each year to items published in the previous two years, and then taking that number of citations and dividing it by the number of research and review articles that were published in those previous two years.

It probably helps to give an example. Let’s calculate what the JIF would be for a journal that received 500 citations in 2023 to the 100 research and review articles it published over 2021 and 2022. In this case, the JIF would be 500 divided by 100, which comes to 5.

The JIF was designed as a journal-level metric, and responsible, appropriate uses of the JIF include publishers using it for journal development or portfolio management, and by comparing the JIF of their journals to other journals in the same field. It’s important to remember that the average speed and volume of citations varies greatly between disciplines. In general, the citations in the arts and humanities are far lower and far slower than citations in the sciences. So we must be very careful when comparing journals across different subjects.

Now, unfortunately, the JIF is sometimes used irresponsibly to compare individual articles or to assess researchers, and the JIF isn’t a good or an appropriate measure for article impact or researcher performance. The JIF measures how many times articles in a journal were cited on average, but we can see that there’s a massive spread in how many times a given article is cited, and it makes no sense that an article cited twice in a journal with an impact factor of, say, 25 is considered to be more impactful than an article cited 100 times in a journal with an impact factor of 8. Therefore, it makes no sense that the author of an article cited twice is rewarded more than the author of the article cited 100 times just because the twice-cited article was published in a journal with a higher impact factor.

KENNEALLY: The Journal Citation Reports interface goes beyond a single number, Dr. Quaderi. What other information is available?

QUADERI: The JCR is a journal intelligence platform that provides a comprehensive view of a journal’s profile through a combination of metrics, visualizations, and descriptive data. Unlike the Web of Science, which is updated almost daily, the JCR is updated just once a year, in June, to provide an annual snapshot of journal performance.

So regarding the JIF itself, we don’t just provide the top-level metric. We provide a visualization of the citation distribution. As I mentioned before, there’s a lot of variation in the number of citations individual articles get. We also provide a list of every single article and every single citation that contributes to the JIF calculation.

In addition to the JIF, the JCR contains a whole variety of citation metrics, including something called the Journal Citation Indicator, the JCI. This is a great metric to use alongside the JIF. I’d say the most important difference between the JIF and the JCI is the JCI is field-normalized. This makes comparing journals across different categories so much easier than using the JIF. For example, regardless of subject area, a JCI of 1 tells us that the journal received the average number of citations for that subject. A JCI of 2 tells that the journal received twice the average number of citations. And a JCI of 0.5 means the journal received half the average number of citations. So very easy comparisons.

So it’s not just metrics we find in the JCR. We’ve also got a load of very valuable descriptive data, such as ways in which the JCR helps to map out where a journal is positioned within the scholarly network by showing the top journals that cite that journal’s content and the top journals the journal itself cites. It also shows the top contributors to a journal by organization and by country or region and a breakdown of what proportion of the content is open access there versus subscription.

KENNEALLY: Important changes to the JCR in recent years address growing concerns over research integrity. What issues have appeared over trust in the scholarly publishing record, and Dr. Quaderi, how has Clarivate responded?

QUADERI: So the last few years, sadly, have seen a dramatic increase in the quantity and the sophistication of fraudulent behaviors that are putting the integrity of the scholarly record at risk. There’s always been a degree of cheating from individuals, but in recent years, we’ve seen the emergence of fraudulent enterprises, such as paper mills and so-called predatory journals, that are really exploiting at scale the pressure to publish and to be cited. And we’re only just starting to see what impact generative AI will have in helping bad actors fabricate content, and on the flip side, how AI can also help the detection of the fabricated content. So basically, we find ourselves caught in this AI arms race.

At Clarivate, we are really committed to protecting the integrity of the scholarly record, and we put a tremendous amount of effort into ensuring that the Web of Science only contains content from trustworthy sources. We periodically reevaluate indexed journals to check that they still meet our criteria, and any journals that don’t are removed.

Last year, we developed a new AI tool to help us identify the characteristics that indicate a journal may no longer meet our quality criteria. This technology has really improved our ability to focus our reevaluation efforts on journals of concern. Last year, we delisted over 80 journals for failing our quality criteria.

So it’s not just changes to the Web of Science. Over the last few years, we’ve introduced a series of policy changes that also affect the JCR. These all contribute to our efforts to promote transparency, to provide more data, and help level the playing field. In doing so, we help protect the integrity of the scholarly record.

Going back to 2021, in that year, we added JCR profile pages for journals in the Arts and Humanities Citation Index and the Emerging Sources Citation Index for the very first time. By adding AHCI and ESCI journals to the JCR, this meant that all journals indexed in the Web of Science were now also included in the JCR. Previously, it was just the subset of the science and the social science journals. In 2021, we also introduced the JCI indicator, which I talked about a little bit earlier.

Then last year, in 2023, we extended the JIF to all journals in the Web of Science by including ACHI and ESCI journals to the JIF calculations. This resulted in 9,000 more journals and 3,000 more publishers receiving a JIF and benefiting from the additional data and greater transparency we added to their JCR profiles. This list of journals includes many recently launched journals, open access journals, journals with a niche or a regionally focused scope, and journals from the Global South.

In 2023, we also changed how the JIF is displayed, going from three decimal places to one decimal place. This is an important change, because what it does is create many more ties in JIF ranking. What we hope this will do is to encourage the community to adopt a much more holistic approach to comparing journals by having to consider other indicators and descriptive factors alongside the JIF.

KENNEALLY: The journal impact factor was developed in the 1970s, Dr. Quaderi. A half-century later, why is the JIF still relevant?

QUADERI: That’s a very good question. I think there are a couple of key factors that contribute to the enduring popularity of the JIF. Firstly, the JIF is underpinned by data from the Web of Science, and there’s an appreciation that for a metric to have value, it needs to be derived from a high-quality data source. The Web of Science is known for having a rigorous selection process, implemented by expert in-house editors that have no links to journals, publishing houses, or institutes. That means they are free to make objective, data-based decisions with no potential conflicts of interest. Secondly, the JIF has a long history, and the academic community values stability and values continuity. In this way, the JIF has become deeply ingrained into the scholarly publishing landscape.

The JIF was introduced in the pre-digital age, when the rapidly growing number of journals created a sense of information overload, and there was a need for an indicator of scholarly impact to help identify the must-read journals. But in that era, we weren’t seeing the industrial-scale levels of fraudulent behavior we see today, so there wasn’t much need for an indicator of trust. Nowadays, we see a great need for indicators of trustworthiness at the journal level. And by extending the JIF to all the journals in the Web of Science, we’ve provided such an indicator. So the numerical value of the JIF continues to provide a measure of scholarly impact, but now the very fact of having a JIF, regardless of the number, shows that a journal meets our quality criteria and can be trusted. I think this evolution from an indicator of impact to an indicator of both impact and of trust will keep the JIF relevant for many years to come.

KENNEALLY: What further revisions to the JCR are you preparing for 2024 and beyond, Dr. Quaderi? In what ways, too, are these changes in response to community requests?

QUADERI: So we’re making two changes to JIF category rankings in this year’s JCR, and both are driven both by what we see in the data and changes in the publishing landscape and requests we get from our users and the research community at large.

Currently, we provide separate rankings for the nine subject areas that are indexed in multiple editions. Let’s take psychiatry as an example. Psychiatry is indexed both in SCIE, the science index, and in SSCI, the social science index, and we provide separate psychiatry rankings for these two indices. What we’re changing this year is that we’ll no longer have these separate category rankings, and instead, we’ll have a single ranking for each of our 230-odd science and social science categories.

The second change we’re introducing this year is the addition of ESCI journals to category rankings. If you take psychiatry as an example again, what we will do is display a single psychiatry ranking that includes journals from the science index, the social science index, and from ESCI. Moving to this single category ranking will provide a much simpler and a much more complete category view for our customers.

So what we will see is that in general, ESCI journals have a lower JIF than journals in the science or the social science indices in the same category. That’s because to enter SCIE or SSCI, a journal needs to pass our four impact criteria, which are designed to select journals with the highest scholarly impact, in addition to our 24 quality criteria.

However, what our customers will see is that some ESCI journals have a higher JIF than the science and social science journals in the same category, and there’s a couple of reasons for this. The differentiator between ESCI and SCIE and SSCI isn’t just a journal’s JIF at a given moment in time. It’s whether a journal passes these four impact criteria that we have. Furthermore, over the last couple of years, we’ve had to pause our impact evaluation of indexed journals, as we must put more and more effort into our quality evaluations to tackle the ever-increasing pollution of the scholarly record – the scholarly literature.

We had intended to make an additional change this year, and that was to introduce rankings for the 25 arts and humanities-specific categories. However, we took a deep dive into the data, and we saw that doing this would create massive ties in rank, which in turn would create very skewed quartile distributions that would be very hard to interpret and of very questionable value. So we shared our findings and our hesitation to introduce these rankings with a set of publishers and some other industry stakeholders, and there was a consensus that it would be better not to publish rankings for these categories. It wouldn’t be the responsible thing to do.

So this year, we won’t be introducing JIF rankings for our 25 and arts and humanities-specific categories. We probably don’t have time in this podcast to go into more detail, but if anyone is interested to know more, we will have a series of blogs on the clarivate.com site that will provide a bit more detail.

KENNEALLY: Dr. Nandita Quaderi, senior vice president and the editor in chief for Web of Science at Clarivate, thanks so much for speaking with me today.

QUADERI: Thank you, Chris. It’s been an absolute pleasure.

KENNEALLY: That’s all for now. Our producer is Jeremy Brieske of Burst Marketing. You can subscribe to the program wherever you go for podcasts. You can also find Velocity of Content on YouTube as part of the CCC channel. I’m Christopher Kenneally. Thanks for joining me.

To stay connected to CCC, please subscribe to our Velocity of Content blog

X
Share This