Transcript: Recipes for a Healthy Information Diet
2022 Year-In-Review
For podcast release Monday, December 12, 2022
with
• Santiago Lyon, Content Authenticity Initiative
• Anita Makri
• Gordon Crovitz, NewsGuard
• Chalani Ranwala
• Tracy Brower
KENNEALLY: Welcome to Copyright Clearance Center’s podcast series. I’m Christopher Kenneally.
In the final weeks of 2022, Velocity of Content is looking back at the past twelve months of programs.
More and more, people trust information less and less. Traditional journalism organizations and digital-native social media networks alike face a formidable challenge – breaking through the cloud of misinformation and overcoming doubt and suspicion.
In 2019, Adobe and its media partners launched the Content Authenticity Initiative to provide consumers with more information about the provenance of content they see and read. The CAI now collaborates with hundreds of representatives from software, publishing, and social media companies, human rights organizations, photojournalism, and academic researchers to develop content attribution standards and tools.
Santiago Lyon is head of advocacy and education for the Adobe-led Content Authenticity Initiative. As a photographer for Reuters and the Associated Press, Santiago Lyon won multiple photojournalism awards for his coverage of conflicts around the world.
KENNEALLY: Photography is the place you’re starting, especially because we recognize it’s so easy to manipulate images, but you are more ambitious than just photography. Tell us about the provenance problem. How did we get here? How big is the problem? And why does it matter?
LYON: It’s becoming an increasingly confusing digital media landscape out there, and consumers often have difficulty understanding where content comes from, whether it’s been manipulated – if it has, to what degree? Traditionally, the approach to these problems has been in the area of detection, which is to say uploading suspect digital files to programs that look for telltale signs of manipulation, whether that’s sloppy Photoshopping or inconsistent pixel structures or an impossible combination of lighting sources.
While detection software is useful, we believe that it has a couple of fundamental problems. Number one, at least in its current form, it’s not scalable, in the sense that it takes too long to run images and other file types through detection software. And secondly, it’s invariably an interminable arms race, with bad actors trying to stay one step ahead of the latest detection software.
So instead of trying to detect what’s false, we decided to look at it from the other end of the argument, which is proving what’s real.
To that end, we began work on the Content Authenticity Initiative, working to establish the provenance of digital file types – that is to say, the basic trustworthy facts about the origins of a piece of digital content and what might have happened to it along its journey from creation or capture to publishing – and then exposing some or all of that information to the consumer, to give them insight into that provenance and help them make better-informed decisions about the veracity of what they're looking at.
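Lyon describes the provenance model only in broad strokes. As a rough illustration of the underlying idea – using hypothetical names, and a shared-key HMAC signature where the CAI's actual C2PA standard uses certificate-based signatures and a far richer manifest format – the basic facts about a file can be bound to a hash of its exact bytes, so that verification fails the moment the content is altered after signing:

```python
# Minimal sketch of the provenance idea: bind facts about a file's
# origin to its exact bytes, so any later edit is detectable.
# Illustrative only -- the real CAI/C2PA specification uses
# certificate-based signatures, not a shared secret key.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical publisher key

def make_manifest(content: bytes, creator: str, captured: str) -> dict:
    """Record the basic trustworthy facts and sign them along with the content hash."""
    facts = {
        "creator": creator,
        "captured": captured,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(facts, sort_keys=True).encode()
    facts["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return facts

def verify(content: bytes, manifest: dict) -> bool:
    """Re-derive the hash and signature; any edit to the bytes breaks both."""
    facts = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != facts["content_sha256"]:
        return False
    payload = json.dumps(facts, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"...image bytes..."
manifest = make_manifest(photo, "Staff Photographer", "2022-03-01T09:30Z")
print(verify(photo, manifest))            # True: untouched since signing
print(verify(photo + b"edit", manifest))  # False: bytes changed after signing
```

The design choice is the one Lyon makes explicit: rather than detecting what is false after the fact, the signed manifest travels with the file and attests to what is real.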
KENNEALLY: A skeptic might ask, Santiago Lyon, who put you in charge of telling people what’s right and what’s wrong when it comes to information? Is that what this is about?
LYON: No, we’re building a tool. Adobe is a technology company. We’re not forcing our vision of anything onto anybody. We’re providing a tool that publishers and others can use to determine how much information they want to share about the provenance of the content that they’re publishing. Really, what it does is bolster and buttress existing trust models. For example, if you happen to trust the Associated Press or The New York Times or The Wall Street Journal or Fox News or whatever it might be, this additional information, together with that existing trust model, will serve to establish the veracity and the integrity of the content that you’re looking at.
KENNEALLY: When you became a photographer in the 1980s, Santiago, you shot with film, and you worked in darkrooms. Certainly, digital technology has transformed media entirely, from creation to distribution. Are you concerned about where the future may be taking us? Should all of us be concerned about our digital futures?
LYON: I think we should all be concerned about the provenance of what it is we’re reading. I think we should be very careful to always check the source. Who is reporting this? Who is saying this? What might their agenda be? What relationship might they have with a particular aspect of a news story or things of that nature? I think it behooves us to be discerning consumers of content. And I think that the technology that we are developing will help us do that.
I’m optimistic for the future. Technology has played a major role in storytelling since its inception and will likely continue to. And the fact that it’s getting more sophisticated and more complicated, while it does create some challenges, also creates some significant opportunities in terms of engaging more effectively and efficiently with viewers who might be suffering from digital fatigue or story overload or things of that nature. So I’m very optimistic and excited about the future.
KENNEALLY: The COVID-19 pandemic has highlighted an ongoing and escalating attack on science that is much more than a philosophical debate. It is an attack on scientists, too. Physicians for Human Rights surveyed over 900 clinicians in early 2021 and documented a climate of fear in labs and hospitals. Over 60% said they feared reprisal if they spoke out publicly about safety concerns related to COVID-19.
Verbal abuse, violent threats, and even physical attacks on medical staff and scientists reflect a tense relationship between science, media, politics, and the public, says UK-based journalist Anita Makri. When that relationship turns toxic, she notes, public health messages and scientific evidence must battle to be heard.
What role does social media play in inciting people to attack scientists, and which targets of abuse are most vulnerable?
MAKRI: I think it’s pretty well established that attacks via social media are common, and not just for scientists. We also know that women and minorities are especially targeted for abuse. I guess what’s less cited is the role the platforms themselves play in this just by a lack of action. Some of my interviewees said that their complaints to Twitter, for example, vanished into a black hole, and others simply saw no point in even reporting the abuse. So I think that’s another dimension of the role of social media which isn’t as widely discussed.
KENNEALLY: Yet you do balance that, I think, rather negative view with the really important point that social media isn’t entirely a negative force. Twitter and Facebook are also a kind of double-edged sword, because they’re vital for sharing crucial information, especially during crises like the pandemic.
MAKRI: Yeah, that’s right. I made a point of making that point, if you will, because we know this from experience, but it also came through from the reporting once again for the story. Some of the interviewees had no interaction with social media. Others used it to communicate science-based information, even after receiving threats, and still saw great power in doing that.
KENNEALLY: I think your reporting found that a simplistic view of science is what leads people to look for someone to blame in a crisis like COVID.
MAKRI: Yes, I think that really emerged through the accounts of the people I spoke to. But I really think it’s also what happens when people need answers and clarity that science isn’t yet ready to provide – urgently, in this case, and faster than the usual pace of scientific work. In the early stages of the pandemic, that mismatch left a real vacuum, and there’s plenty to fill it. So there’s speculation, beliefs, miracle treatments, politics, opinions on social media. I think for me, that’s a key part of why those things tend to happen.
KENNEALLY: Anita Makri, several scientists you spoke to said they wished they had talked more to the press rather than less, even though they faced attacks as a result. So is more public discussion about science and research, not less, the secret weapon in this war?
MAKRI: Well, I’m not sure there’s a secret weapon, really. I would say that it’s more about the kind of discussion we have about science, rather than how much of it we’re having. I’ve written before about the habit of viewing it as an authority, expecting science to give definitive answers and give them fast. I think that has created false expectations. The good news is that I think this message is starting to take shape in public discourse, and perhaps a greater willingness of scientists to speak up is part of that. But we’re still in a kind of transition stage, I think, when it comes to the role of science in society.
And in terms of attacks specifically, I think it’s part of a wider pattern. We’re seeing intimidation also growing for journalists, for defenders of human rights and the environment, and there is definitely data there to support that.
So to link it back to the role of science, I think that once you take voices of moderation out of the picture, you’re left with extreme views.
KENNEALLY: In growing numbers, people take questions about healthcare, politics, or finding the best restaurants not to Google, but to TikTok, the short-form video platform. NewsGuard is a journalism and technology tool launched in 2018, the same year as TikTok, though the two could hardly be more different.
NewsGuard rates the credibility of news and information websites and tracks online misinformation for search engines, social media apps, and advertisers. About 40% of all the news and information sites assessed by NewsGuard receive a red rating, categorizing them as untrustworthy. In September, a NewsGuard investigation revealed that such TikTok searches consistently feed false and misleading claims to users, most of whom are teens and young adults.
Gordon Crovitz, a former publisher of The Wall Street Journal, started NewsGuard with Steve Brill, who founded The American Lawyer in 1979 and started Court TV in 1989. Crovitz explained that NewsGuard fights misinformation across many fronts.
CROVITZ: We fight misinformation on behalf of readers, on behalf of brands, and on behalf of democracies. So we rate all the news and information sources that account for 95% of engagement. News consumers can find those ratings and detailed nutrition labels through services like Microsoft. We similarly help brands keep their ads off of misinformation sites. With the nature of programmatic advertising and algorithmically driven advertising, the ads of every blue-chip company are going to end up on the worst websites unless they take some step against it. And then we also help democracies around the world, particularly against Russian and Chinese disinformation being aimed at populations in the US and in Europe.
KENNEALLY: And, Gordon Crovitz, TikTok is a problem, but it’s not the only problem. In fact, NewsGuard research says that misinformation is a $2.6 billion problem and one that’s especially dangerous for brands. How are companies unwittingly contributing to the problem?
CROVITZ: This is the biggest shock that we’ve had in our time at NewsGuard – to be able to quantify the amount of online advertising unintentionally going to support misinformation sites, from Russian propaganda sites to healthcare hoax sites that’ll sell you a subscription to peach pits to cure cancer – chock-full of ads from every brand you could think of. It’s $2.6 billion a year. And the reason it happens is that with programmatic advertising – which is advertising selected by a computer, the biggest category of advertising – an ad will end up on all kinds of sites unless the brand itself or the ad agency or somebody in the ad tech world takes some step to advertise responsibly. That means not advertising on Vladimir Putin’s websites or crazy healthcare hoax sites and instead steering ads to high-quality sites. Or in local communities, we have inclusion lists that include sites serving the Black community, Hispanic, Asian, LGBTQ+ communities – sites that get relatively little advertising, because they’re not that well known to advertisers.
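The mechanics Crovitz describes reduce to a pre-bid check: before an ad is placed, the destination site is looked up against exclusion and inclusion lists. A minimal sketch, with entirely hypothetical ratings data standing in for the licensed ratings NewsGuard supplies to ad platforms:

```python
# Minimal sketch of a pre-bid brand-safety filter: before bidding on an
# ad slot, check the site against a ratings list. All domains and
# ratings here are hypothetical examples.
RATINGS = {
    "reliable-news.example": "green",
    "hoax-health.example": "red",
    "local-community-paper.example": "green",
}

# Inclusion list: sites an advertiser deliberately steers spend toward.
INCLUSION_LIST = {"local-community-paper.example"}

def should_bid(domain: str) -> bool:
    """Decline unrated or red-rated sites; always allow inclusion-list sites."""
    if domain in INCLUSION_LIST:
        return True
    return RATINGS.get(domain) == "green"

for site in ["reliable-news.example", "hoax-health.example", "unknown.example"]:
    print(site, "->", "bid" if should_bid(site) else "skip")
```

In practice a check like this runs inside ad-tech platforms at bid time; the sketch only shows why an unrated or red-rated domain never receives the ad spend unless someone opts out of the default.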
KENNEALLY: Gordon Crovitz, when a video of a UFO turns up on TikTok or anywhere else online, what steps should a child or should I take to see if it was just the Goodyear blimp?
CROVITZ: (laughter) Well, I think we always encourage people to look at the source and to be as familiar as possible with the source. Does this come from an established news operation? And if it does, hopefully you’re looking at it on a platform that includes an explanation of the nature of that news source from us or from somebody else. That’s the easiest way. We actually have agreements with a number of entities serving schools and universities so that when they do a search for UFOs or any other topic, they’ll instantly see whether it comes from a generally reliable source or not.
I think in the earlier era – even the internet, but before social media – I think it was still possible for people to teach themselves how to become news-literate. I frankly no longer believe that. I think it’s become impossible. The nature of these platforms, the algorithmic engines that promote falsehoods and send people down rabbit holes, the utter absence of disclosure about the nature of the sources on so many platforms really makes it hard for people to know what to believe or not to believe.
KENNEALLY: According to the Congressional Research Service, global research and development expenditures were $2.4 trillion US in 2020. Since 2000, total global R&D expenditures have more than tripled. Yet the public’s appreciation of the value of research has declined in recent years, and especially during the COVID pandemic.
If research is serious business, then why would anyone want to make fun of it?
Chalani Ranwala, a research communications specialist based in Colombo, Sri Lanka, suggests an antidote to cynicism and suspicion could be humor. In her contribution to the London School of Economics’ Impact Blog earlier this year, Ranwala proposed that repackaging information into humorous content creates an informal access point to audiences.
KENNEALLY: Chalani Ranwala, humor has long played an important role in critiquing society. What effect do comic devices have on how we receive and process information?
RANWALA: Well, Chris, I grew up watching a lot of standup comedy and a lot of talk show hosts in their talk show monologues, and I always knew that satire and humor go a lot beyond just entertainment purposes. Because my background is in research communications, I really wanted to see whether there was some kind of way to bring those two worlds together. I started reading up and digging a little deeper into what it is about humor that makes it such an effective medium, and what I found is that satire relies on the ability of the audience to recognize the irony in what they hear. I think it’s that irony that really makes humor an effective medium.
What I also found out was that humor also sort of – it reduces our counterarguments against information that’s received in an entertaining format.
KENNEALLY: Chalani Ranwala, it seems that humor, then, breaks down our resistance to new ideas, and that makes it an ideal vehicle to transmit knowledge.
RANWALA: It does, Chris. I really looked into this and what academics have said about humor, and there are a lot of studies, a lot of research being done on comedians, particularly comedians who talk about issues like racism and feminism and social issues faced in different countries. These are very serious topics, right? But what is it about humor that makes it OK to make jokes about them, and what is it about humor that makes us want to understand these issues in a different way?
Just earlier today, I was listening to a South African comedian talk about his experience with racism, and there was a lot of humor in it. He was sort of engaging his audience. But really, he was talking about an issue – a very personal issue that was very serious. But there is something about the way this information is conveyed that makes you look at it differently. So it’s almost that a-ha moment – you know, when you hear a joke and you laugh, but then you’re also like, wait a minute. Why is it funny, though? What’s really going on here?
Finally, and this is where I feel like it’s most relevant to working with research, is that satire creates a very comfortable, accessible, friendly access point to an audience. I worked in research communications for many years. And as you know, Chris, it’s not always an interesting topic. Depending on what you’re researching, it’s not always the most exciting content to put out there, especially to a general audience. It can be quite heavy with jargon. It can be quite technical. But humor is a friendlier voice, and it’s not as intimidating as, let’s say, a research brief or a policy-related report. So what I find is that it kind of gives researchers an access point to reach an audience at a level in which they are more comfortable and also more willing to engage.
KENNEALLY: Too much information. If that’s driving you insane, then you should know there are ways to focus and filter for information success.
The information deluge is real. At CCC, we partnered with analyst firm Outsell, Inc., on an information seeking and consumption study to learn how copyrighted content is reused and shared in the workplace. From 2016 to 2020, the amount of work-related content sharing tripled. Among employees working at home, the frequency of content sharing rose by more than one-third.
Dr. Tracy Brower, a sociologist and author of two books exploring happiness, fulfillment, and work life, says we can all be more selective about the choices we make over how we consume information. More careful information consumption helps build resistance to misinformation and greater personal resilience in an ever-shifting news cycle.
BROWER: Resilience is about really being in tune with what’s happening, so staying aware. Resilience is about making sense of that information. What does it mean to me? What is my takeaway? What are my ways of perceiving against that information? And then finally, resilience is about improvisation, right? Pulling ourselves up by our bootstraps, being creative, solving problems, figuring out how to respond.
KENNEALLY: And it seems to me it’s about modulating our response. It’s about the way we respond. It’s not the information itself.
BROWER: Yes, so well said. Absolutely. Information is so much about how we interpret it. This may not be new news, but I think it’s important news that how we’re feeling affects how we interpret what’s coming at us. In other research, you find that information can be relatively objective or relatively vanilla, but if we’re feeling at odds or anxious or upset, we can view it much more negatively than we would if we were feeling more optimistic and positive. So our own feelings about things absolutely shape our perceptions of our incoming information.
KENNEALLY: OK, so we have to think about the way we feel about the information. We also have to reckon with how useful this information is.
BROWER: Yeah, absolutely. You know, this is really interesting and related to learning. One of the classic tenets of adult learning is that we tend to learn better when we’re ready for that information, when that information has usefulness to us. And that is true about information as well.
KENNEALLY: There’s an issue here around something that we hear a good deal about when it comes to misinformation, and that’s confirmation bias.
BROWER: Yes, absolutely. This one’s such a big deal. Confirmation bias is when we look for information that tends to agree with what we already believe. And these are really interesting times, because we’re managing information flow, but also, there are lots of algorithms that are managing information flow. And we can find ourselves, even unknowingly, in echo chambers. Algorithms can work too well. So we’re exposed to information that we already agree with, and we’re not exposed to enough diversity of information.
So I think we want to be intentional here as well to expose ourselves to ideas that are new, expose ourselves to things that we may not agree with, in order to stretch our thinking, in order to infuse diversity, new ideas, in order to learn more, in order to challenge ourselves so that we can either shift our point of view or recommit to our point of view.
KENNEALLY: When making a meal, we choose foods and flavors according to our appetites and our tastes. Whatever ends up on the menu, there are always fresh, wholesome ingredients from trusted sources. Our families and our guests expect as much – they rely on our judgment never to serve anything toxic or contaminated.
For all our information diets, the same care is advised – for our own good health, and for the good health of our communities.
Our co-producer and recording engineer is Jeremy Brieske of Burst Marketing. You can listen to Velocity of Content on demand on YouTube as part of the Copyright Clearance Center channel and subscribe wherever you go for podcasts.
I’m Christopher Kenneally. Thanks for joining me throughout the year on Velocity of Content from CCC. Best wishes for 2023!