Recorded Thursday, October 20, 2023
Subscribe: Apple Podcasts | Spotify | Google Podcasts | Email | Download
Language interfaces are going to be a big deal. That’s how Sam Altman, CEO of OpenAI, put it when the company launched ChatGPT last November.
Going to be a big deal? Definitely a big deal.
At the 2023 Frankfurt Book Fair last week, the halls were alive with the sounds of AI. In the Frankfurt Studio, CCC’s Chris Kenneally moderated a panel discussion, “Trained With Your Content,” considering what limits should be placed on training Large Language Models (LLMs) and how to address concerns over equity, transparency, and authenticity.
“Right now, the situation is that AI governance is far behind AI capabilities, which is dangerous,” noted Dr. Hong Zhou, Director of Intelligent Services & Head of AI R&D, Wiley. “This has impacted research and publishing, because it’s very hard for people to manage all these AI capabilities.
“That’s why we need to create a legal framework that catches up to these technologies,” he explained. “I have several concerns about this. The first concern, as everyone knows, is copyright infringement: generative AI today can generate content that infringes copyright without permission. That is one problem. Another concern is that AI can generate content that is similar to the original but not similar enough to be considered copyright infringement. That is one scenario. Another scenario is that it generates content that does infringe copyright, but the infringement is hard to detect. In both cases, it’s very difficult for copyright holders to enforce their rights.”
According to Dr. Namrata Singh, Founder and Director, Turacoz Group, the International Committee of Medical Journal Editors (ICMJE) has developed guidelines on the responsibility of scientific authors when using AI in their work.
“If you have used an AI tool, then you mention that in your methods section. You mention the name of the tool, the version if there is one, or the technology behind it. This is where, I guess, the transparency works. Ultimately, the responsibility is on the author, but guidelines and recommendations do help us know what is right and what is wrong, what we can do and what we cannot do.”
The demand for AI tools in research and scholarly publishing raises copyright questions about the use of the published materials that feed those tools. Carlo Scollo Lavizzari described how licensing solutions might meet that demand.
“These licenses can either come from segments of publishing, perhaps, that have large bodies of content they can license, or they could be voluntary collective licenses linking many-to-many situations. For example, you have many writers and many publishers on one side, and you have many pieces of content used by different AI tools on the other. So that is one such mechanism: voluntary collective licensing.”