"Suddenly you can't trust anything you see online" | #mediadev | DW | 04.09.2023
  1. Inhalt
  2. Navigation
  3. Weitere Inhalte
  4. Metanavigation
  5. Suche
  6. Choose from 30 Languages

INTERVIEW

"Suddenly you can't trust anything you see online"

AI-driven, text-producing machines are altering our digital media environment. Researcher Swapneel Mehta discusses possible threats to information integrity – and the opportunities generative AI holds for media outlets.

[Image: a newsroom in Hamburg, Germany. Caption: "Provide as much accurate information as possible"]

DW Akademie: Mr. Mehta, at New York University you are looking at disinformation trends on social media. What have you been observing lately?

Swapneel Mehta: Producing and deploying fake content is becoming much cheaper, and people are more susceptible to it, so it scales much faster. Nowadays, it takes only the click of a button in an AI tool to generate a video.

I also see more organized forms of disinformation across the political spectrum. It is now much easier to create a cohesive campaign against someone. All you have to do is ask an AI to pretend to be this person, or use their public information to train an AI to generate vitriol against them.

Do you see new types of actors spreading disinformation?

Running scams and disinformation campaigns at scale has become easier. With unfettered access to AI, even less technically adept actors can now deploy these kinds of manipulation tactics.

So while I'm in favor of, and very appreciative of, the democratization of such technologies, the harms tend to arrive faster than the benefits, especially when it comes to vulnerable populations.

What are the effects on political discourse?

We have seen political actors exploit discourse online to promote divisiveness. New research has shown that generative AI models can produce speech that is more politically persuasive than human speech, albeit in limited settings. 

Anyone who wants to influence political discourse now has an arsenal of new tools at their disposal. It is really scary that we now have very good machines that can make arguments online. In principle, all you might have to do is click a button and deploy them to fight with real online users. 

What are the implications for information ecosystems?

Trust is falling massively because suddenly you can't trust anything you see online or take it at face value. This could also backfire on journalists: the average person who has been exposed to misinformation and actually harmed by it will then start to question journalistic institutions as well.

How can media regain trust? 

Media need to understand where a lot of the criticism against them is coming from. Often, that criticism is not completely invalid. We have seen major problems arise when media appoint themselves as the arbiter of truth, and then it turns out that the coverage was inaccurate, misrepresented the facts, or simply missed some.

My sense is that the media's role should shift towards providing as much accurate information as possible and allowing the audience to draw their own conclusions. At the very least, they should provide a space for evidence-based counternarratives. The media can add value to more conversations, as long as they don't present themselves as the only arbiter of truth in the world.

How can AI help media establish that approach?

Using AI to speed up the writing process is low-hanging fruit that media organizations could invest in right now. That might involve structuring their data in a way that AI is able to make use of it.

It might mean initially partnering with large language model providers like OpenAI, with the eventual goal of creating their own AI so that they have end-to-end control over the process and can train it to do specialized tasks.

It could also involve setting up a team of prompt engineers to build in-house expertise on which prompts and models best reflect the organization's writing style and communication.
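
As an illustration of the workflow Mehta sketches here, the fragment below shows how structured newsroom data might be passed through a reusable house-style prompt to a provider API (OpenAI is simply the example he names). Everything in it, the model name, the prompt wording and the data fields, is an illustrative assumption rather than anything prescribed in the interview.

    # A minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) and an
    # OPENAI_API_KEY set in the environment. Model name, prompt text and
    # data fields are placeholders for illustration.
    import json
    from openai import OpenAI

    client = OpenAI()

    HOUSE_STYLE = (
        "You are a drafting assistant for a newsroom. Write short, factual "
        "sentences, attribute every claim to its source, and never add "
        "information that is not in the provided data."
    )

    def draft_summary(article_data: dict) -> str:
        """Draft a summary from structured, pre-verified reporting data."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat-capable model works
            messages=[
                {"role": "system", "content": HOUSE_STYLE},
                {"role": "user", "content": "Summarize this reporting data:\n"
                 + json.dumps(article_data)},
            ],
        )
        return response.choices[0].message.content

    print(draft_summary({
        "topic": "City council budget vote",
        "facts": ["Budget passed 7-2", "3% increase for public transit"],
        "sources": ["Council meeting minutes"],
    }))

Iterating on a prompt like HOUSE_STYLE against real articles is exactly the kind of in-house prompt-engineering expertise Mehta refers to.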

What other challenges and opportunities do you see?

It is very important for media to be aware of the limitations of AI, to be clear about what the technology can and can't do. One thing we know is that large language models generate false information on a massive scale. It is very smart marketing that this has been named "hallucination." In fact, it is a model failure: the model is creating misinformation and putting it out there.

On the other hand, there are useful innovations. In low-resource settings where people cannot use the Internet to access websites, they at least have chat and local messaging apps.

Media could create a local database of fact-checked information so that users can query it with a chatbot. It is an easy way to interface with the audience and get them to converse with an outlet on a slightly more personal level than through an article. In fact, I am working on this right now.
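
As a rough sketch of that idea (not SimPPL's actual system), a fact-check chatbot backend could start as little more than a keyword search over a local database; the schema, sample claims and matching logic below are all hypothetical.

    # A minimal sketch using SQLite's FTS5 full-text index, which ships with
    # Python's standard sqlite3 module on most builds. All data is made up.
    import sqlite3

    conn = sqlite3.connect(":memory:")  # a file path would persist the database
    conn.execute("CREATE VIRTUAL TABLE facts USING fts5(claim, verdict, source)")
    conn.executemany(
        "INSERT INTO facts VALUES (?, ?, ?)",
        [
            ("Drinking hot water cures the flu", "False", "WHO fact sheet"),
            ("The city election is on October 12", "True", "Electoral commission"),
        ],
    )

    def answer(question: str) -> str:
        """Return the best-matching fact check for a chat message."""
        words = [w for w in question.split() if w.isalnum()]
        if not words:
            return "Please describe the claim you want checked."
        row = conn.execute(
            "SELECT claim, verdict, source FROM facts "
            "WHERE facts MATCH ? ORDER BY rank LIMIT 1",
            (" OR ".join(words),),  # naive keyword matching
        ).fetchone()
        if row is None:
            return "No fact check found for that claim yet."
        claim, verdict, source = row
        return f"Claim: {claim}. Verdict: {verdict} (source: {source})"

    print(answer("does hot water cure the flu"))

Hooked up to a messaging-app webhook, a lookup like this would give audiences in low-connectivity settings the conversational interface Mehta describes.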


Swapneel Mehta is a postdoctoral researcher at NYU Data Science. He is the founder of SimPPL, a collective that aims to foster civic information integrity. SimPPL has worked with several media outlets, including DW. Mehta's research deals with limiting disinformation on social networks.

Interview: Julius Endert (em, am)

The interview has been edited for brevity and clarity.
