“AI may not replace you, but a person who uses AI could,” says Martin Delahunty.

Long before ChatGPT arrived on the hype cycle, Martin Delahunty was considering what AI technology could mean for scholarly publishing: how it might change processes developed over centuries, and how publishers should react.

In 2019 – a long time ago in robot years – he asked in the European Medical Writers Association journal, “Will medical writers be replaced by robots?”

“After some deliberation, the answer was no. Four years on, I think my opinion still hasn’t changed. However, the challenges regarding AI-powered technologies are now much more apparent. I think what we’re seeing is a growing consensus amongst those creating and using AI that it remains a tool. It still requires human expertise and skilled use. However, the warning I would give is that AI may not replace you, but a person who uses and is skilled in using AI could.”

Even with a $10 billion investment from Microsoft and 45 gigabytes of training data, the engineers at OpenAI, who developed ChatGPT, must still rely on human curation to ensure confidence in the responses their remarkable machine gives to all comers, on any question.

In his work with universities, science research organizations, and open science publishers, Delahunty has identified issues of immediate concern with AI, and he has called for swift responses from human curators.

“Nature, Science, and other publishers moved very quickly to update their authorship guidelines to state clearly that ChatGPT cannot be an author, and also to tighten up their ethical policies,” he tells CCC’s Chris Kenneally. “To any publishers that have not moved on those fronts, I would advise updating your authorship guidelines and ethical policies to address ChatGPT and related AI tools.”
