Interviews with Dr. Christopher Tyler & Rachel Martin
Last spring, not long after OpenAI launched ChatGPT, an AI working group in the US House of Representatives obtained 40 licenses for the generative AI tool. ChatGPT and other available AI tools now conduct systematic reviews of scientific literature for government officials by searching millions of information sources. And the machines are expected to do much more in the years ahead.
Government officials and public policymakers around the world rely on science and science publishing when shaping regulation and legislation. Staying current on research is a formidable challenge for the public sector, especially as the volume of science publishing grows. Ethical concerns, of course, temper the enthusiasm over AI: Congressional staff, for example, must limit their ChatGPT use to research and evaluation, and they may input only non-sensitive data.
Early in his career, Dr. Christopher Tyler, now in the Department of Science, Technology, Engineering and Public Policy at University College London, was a science advisor in the House of Commons. Looking back from the perspective of 2024, he wishes ChatGPT had been there to help with his work.
“Oh, a thousand times yes. It would have been fantastic. I can’t tell you how long I used to spend doing things like scoping new inquiries for select committees, where I would have been able to just throw into ChatGPT a question,” says Dr. Tyler, who has written for Nature about the powerful potential of AI in developing science policy.
“We’ll probably find that these kinds of tools will speed up a lot of the donkey work component of science advice to enable people like me back in the day to spend more time face to face, more time crafting bespoke briefs for individuals, more time making sure that the evidence synthesis met the exact need of the policy questions that we’re being asked, rather than just scrambling for information the entire time.”
Rachel Martin, Elsevier’s global director of sustainability, served on a team that developed a proof-of-concept project testing the suitability of generative AI narratives for advisors and their clients in government. She tells CCC’s Christopher Kenneally what readers think of machine-written policy documents.
“One of the biggest things was that everybody said it reads well. Nobody thought, ‘Oh my God, a machine has written this.’ Not at all,” Martin says.
“People said they wanted data. They wanted a clear number. And they wanted that to be citable. They wanted to be able to go to that document and to say, ‘OK, this study says that it’s this number.’ All these elements come into it, and you suddenly realize that this is a lot more complicated. It isn’t just a simple question, ‘Hey, ChatGPT, please write my Christmas menu.’ This is far more detailed and far more nuanced if it’s going to work and work at scale.”