Recently, Clarivate, the organization behind prominent scholarly metrics such as the Journal Impact Factor, announced the suspension of the journal eLife from its indexes. According to Clarivate, the action was prompted by eLife's decision to move away from traditional journal-level gatekeeping in favor of a more transparent, article-level evaluation system. This move aligns with eLife's vision of prioritizing the intrinsic quality of individual research outputs over aggregate metrics.
In response to this suspension, the San Francisco Declaration on Research Assessment (DORA), a global initiative advocating for the responsible use of research metrics, issued a strong statement condemning Clarivate's decision. DORA emphasized that eLife's approach aligns with its principles, which call for de-emphasizing journal-level metrics in research assessment. It argued that suspending eLife sends the wrong message to journals seeking to adopt innovative and more equitable evaluation practices, and that such actions may hinder the global push for fairer and more transparent research evaluation standards.
On a broader scale, this situation has sparked intense debate within the academic and publishing communities. Critics of Clarivate's action, including contributors to the Scholarly Kitchen, raised concerns about the potential chilling effect on publishers who might consider breaking away from traditional metrics. They questioned whether Clarivate's actions are stifling progress and innovation in research assessment. These critics highlighted the broader risks of prioritizing metrics over substance, warning that such decisions could discourage diversity in evaluation practices across the scholarly ecosystem.
However, supporters of Clarivate argue that standardizing metrics like the Impact Factor ensures comparability and accountability across journals. They believe that abandoning these metrics risks creating opacity rather than transparency in research evaluation. Proponents suggest that while the current system may have its flaws, it provides a foundation for evaluating journals that has been widely recognized and relied upon for decades.
The clash between eLife's innovative approach and Clarivate's traditional stance raises key questions about the future of research assessment. While eLife prioritizes individual research quality through transparent, article-level evaluations, Clarivate underscores the enduring influence of established journal-level metrics. The outcome will test whether research evaluation evolves to embrace diverse methodologies or continues to rely on traditional metrics. As more journals explore innovative practices, the challenge lies in balancing legacy systems with approaches that promote fairness and inclusivity.
A Call for Your Thoughts
As this debate unfolds, we invite ACSE members to share their perspectives. How do you view Clarivate's decision and DORA's response? Does this situation reflect a broader tension between traditional and innovative research assessment practices? Let us know your thoughts on how this might impact the future of research evaluation and publishing.
Yoseph Leonardo Samodra
04 December, 2024
Academic papers should be assessed on their quality, not just citations or publisher prestige, and the academic community needs to resist the control of large publishing companies.