Recent partnerships that major academic publishers Wiley and Taylor & Francis have struck with AI companies have sparked outrage among scholars. These deals, which allow tech companies to use academic content to train AI models, have raised serious ethical concerns within the academic community.
The controversy erupted after Ruth Clemens, a literary studies lecturer at Leiden University, shared the news on social media. Her tweet, highlighting the lack of transparency around these agreements, quickly went viral, drawing widespread attention. Many academics are alarmed by the idea that their work could be repurposed without their knowledge or proper compensation.
According to the report, Microsoft is one of the tech companies involved, paying Informa, the parent company of Taylor & Francis, $10 million to access academic content for its AI assistant, Copilot. Wiley has also confirmed that it has sold content rights to an undisclosed tech firm, with more deals on the horizon. Both publishers have given assurances that authors will benefit from these agreements through compensation and rights protection, but those promises have done little to ease scholars' concerns.
In response to the mounting outrage, Taylor & Francis told The Bookseller that it is dedicated to "protecting the integrity of our authors' work" and to ensuring that authors receive royalties in line with their contracts. Many academics remain skeptical nonetheless, fearing that their intellectual property will be misappropriated, inadequately credited, or used in ways that conflict with their values. Intellectual-property rights sit at the forefront of the debate, with many questioning how their work will be safeguarded in this new AI-driven environment.
Oxford University Press (OUP) and Cambridge University Press (CUP) have also been drawn into the debate. OUP is actively exploring collaborations with companies developing large language models (LLMs) and emphasizes the importance of responsible AI development. CUP, on the other hand, has taken a more cautious approach, offering authors the choice to opt in to future licensing agreements with AI providers and promising "fair remuneration."
Nathan Kalman-Lamb, a sociology professor at the University of New Brunswick, views these deals as a stark example of how capitalism exploits academic labor. He points out that while scholars and peer reviewers do much of the work, they receive little to no compensation, with profits instead going to publishers.
Farhana Sultana, a professor at Syracuse University, shares similar concerns, worried about her work being repurposed without fair compensation. She sees this as part of a broader trend where academics' intellectual property is inadequately protected while publishers profit.
The Authors Guild has also criticized these AI agreements, emphasizing that they were made without the explicit consent of the authors. They advocate for stronger protections, including clauses in contracts that would prevent the use of academic content for AI training without the author's express permission.
As discontent grows, some scholars are considering boycotts of these publishers. Kalman-Lamb suggests that labor activism, such as withholding scholarly contributions, might be the most effective way to push back against these practices.
This controversy highlights the growing tensions between the commercialization of academic work and the rights of those who produce it, as AI increasingly reshapes the landscape of scholarly publishing.