PODCAST: “Fake news is a psychological problem rather than a technological one”

My interview with VRT News Journalist & Visiting Research Fellow at Stanford University Tom Van de Weghe about Artificial Intelligence and deep fakes.

November 14, 2019
Podcast

I recently had the pleasure of interviewing Tom Van de Weghe, VRT News Journalist & Visiting Research Fellow at Stanford University, about Artificial Intelligence and deep fakes for our nexxworks Innovation Talks podcast. We talked about many things – the disappearance of certain jobs, the use of consumer data and innovation at the VRT, among others – but I wanted to share some of the highlights of our conversation here, on our nexxworks blog, too.

The red thread running through our conversation was trust, rather than technology. As Tom put it: “fake news is much more a psychological problem than it is a technological problem”.

Listen to the entire podcast conversation here:

"Distrust should become the basis of the way we look at online content, because everything could be fake."

Distrust has become the basis

Tom was one of the first Belgians in three decades to be a Visiting Research Fellow at Stanford University. Because it seemed to him that a lot of people were minimizing the issue, he decided to use his time and research there to raise awareness about deep fakes and find ways to fight them. “At the beginning, even the academic staff at Stanford University accused me of being a typical fearmongering journalist. And they were not alone. A lot of people tell me that this is not a new problem, that we have been able to create fake visuals for a long time now, with Photoshop for instance. And that’s completely true, but the way I see it, two major changes have happened since then. First, the technology has become a lot more sophisticated, powerful and harder to detect. So basically, distrust should become the basis of the way we look at online content, because everything could be fake. The second change is ubiquity: the tools have become so easy to purchase and so simple to use – like the wildly popular Chinese app Zao – that basically anyone has access to this type of technology. Which means that anyone could be a victim, not just movie stars or politicians.”

We then discussed how these deep fakes could be addressed. One way, Tom explained, is to build trust and verification into the technology itself, for instance by using blockchain to create a video provenance tool in which every segment of a video is assigned a sort of smart contract. Each time you distrust a certain clip, you could look under the hood to see where the separate pieces came from. Another is to use cryptography to create a digital watermark signature. Tom also talked about Sherlock AI, an automatic deep fake detection tool built by some of his fellow Stanford students, which looks for anomalies in a video using the same deep learning techniques that were used to create the deep fakes in the first place.
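To make the provenance idea a bit more tangible, here is a minimal sketch of how a hash-chained segment ledger could work. To be clear, this is my own illustration, not the actual tool Tom describes or Sherlock AI: all names and structures are hypothetical, and a real blockchain-based system would add distributed consensus and digital signatures on top of this basic tamper-evidence mechanism.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class SegmentRecord:
    """One entry in the provenance chain for a single video segment."""
    index: int
    segment_hash: str      # SHA-256 of the raw segment bytes
    source: str            # who registered this segment
    prev_record_hash: str  # link to the previous record in the chain

    def record_hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def build_chain(segments, source):
    """Register each segment; altering any one of them later breaks the chain."""
    chain, prev = [], "0" * 64
    for i, seg in enumerate(segments):
        rec = SegmentRecord(i, hashlib.sha256(seg).hexdigest(), source, prev)
        prev = rec.record_hash()
        chain.append(rec)
    return chain

def verify_chain(segments, chain):
    """Re-hash every segment and re-link every record; False means tampering."""
    prev = "0" * 64
    for seg, rec in zip(segments, chain):
        if hashlib.sha256(seg).hexdigest() != rec.segment_hash:
            return False  # segment bytes were altered
        if rec.prev_record_hash != prev:
            return False  # records were re-ordered or spliced in
        prev = rec.record_hash()
    return True

# Toy usage: three "segments" of a clip, then one gets doctored.
clip = [b"segment-0", b"segment-1", b"segment-2"]
ledger = build_chain(clip, source="vrt-news")
print(verify_chain(clip, ledger))                                   # True
print(verify_chain([b"segment-0", b"FAKE", b"segment-2"], ledger))  # False
```

The point of the design is that you cannot quietly swap out one piece of a clip: changing any segment changes its hash, which invalidates every record that follows it.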

“Just because people can look up the origin of a clip doesn’t mean they will do so”

Humans are the weakest link

But as technology is not the root problem of the fake news phenomenon, it’s quite logical that technology alone will not be the solution. “Just because people can look up the origin of a clip doesn’t mean they will do so”, Tom explained. “Humans are the weakest link here, and that’s because of confirmation bias: the tendency to favour information that affirms our beliefs. If a clip confirms what we thought about person X, we won’t be inclined to question it, let alone investigate it, even if we have the tools to do so at hand.” He agreed that teaching critical thinking in schools, for instance, could be a big help. In fact, he believes that teaching students about AI in general – the good and the bad – is crucial. It’s one of the reasons that the VRT has developed an AI education box for distribution in high schools, with which students can, for instance, play with neural networks and facial recognition.

Why blindly trusting algorithms is a bad idea

Going beyond fake news and talking about AI in general, we discussed how many people seem unaware that algorithms should not be blindly trusted. They aren’t neutral at all: they reflect the people who make them, most often white males, and this can result in a lot of bias. And yet, Tom told me, most people put more trust in an article written by an AI than in one written by a human journalist. That’s telling.

"Journalists should become some kind of algorithm detectives."

“Newsrooms need algorithmic accountability reporting”, said Tom. “That’s what The Markup – which focuses on data-driven journalism, covering the ethics and impact of technology on society – and ProPublica – a nonprofit newsroom that aims to produce investigative journalism in the public interest – have been doing for the past few years. I remember one project that uncovered that the algorithms deciding the amount of bail a defendant needed to pay were racially biased, asking higher sums from people of colour. Just think about that, and about how many parts of our lives are ruled by algorithms, which are by no means honest and objective by nature. It should be the mission of the media to uncover these types of injustice. Journalists should become some kind of algorithm detectives. But at the same time, newsrooms can also use algorithms for faster fact-checking or for digging up video material from the newsroom archives. So it’s not a black and white story, obviously. AI can certainly be very beneficial for our profession. But it’s always important to stay aware of potential bias.”
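As an aside: the core disparity check behind this kind of investigation can be surprisingly simple to express. Here is a minimal, hypothetical sketch in Python of comparing false positive rates across groups in a scored dataset. The column names and the toy data are my own illustration, not ProPublica’s actual methodology or data.

```python
import pandas as pd

def false_positive_rate(df: pd.DataFrame, group: str) -> float:
    """Share of people in a group who did NOT reoffend
    but were still labelled high risk by the algorithm."""
    subset = df[(df["group"] == group) & (df["reoffended"] == 0)]
    return float((subset["predicted_high_risk"] == 1).mean())

# Hypothetical scored dataset: one row per defendant.
scores = pd.DataFrame({
    "group":               ["A", "A", "A", "B", "B", "B"],
    "reoffended":          [0,   0,   1,   0,   0,   1],
    "predicted_high_risk": [1,   0,   1,   0,   0,   1],
})

# A large gap between groups is the red flag reporters look for.
for g in sorted(scores["group"].unique()):
    print(g, false_positive_rate(scores, g))   # A: 0.5, B: 0.0
```

In this toy example, group A is wrongly flagged as high risk half the time while group B never is: exactly the kind of asymmetry that algorithmic accountability reporting tries to surface at scale.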

Using synthetic media for good

In fact, even deep fakes can be used for good, though Tom pointed out that they are called “synthetic media” in that case. “Just think of the potential for the entertainment industry: localizing movies into other languages or making series at a much lower cost. Or people could use it for education and training. There’s, for instance, a museum in Florida that has ‘revived’ Salvador Dalí so he can talk to the visitors.”

A Salvador Dalí museum in St Petersburg, Florida has used an artificial intelligence technique to "bring the master of surrealism back to life".

“Or companies could use it to train their salespeople with lifelike scenarios, or have a monthly company communications update presented by the type of AI news anchor that already exists in China.”

“And let’s not forget usage in the newsroom. We could easily correct mistakes made by journalists in a recording – like a faulty statistic or a name – without heavy editing or needing to re-record. Though, as with every aspect of AI and synthetic media, we’d have to watch out that this doesn’t take too dark a turn.”

It’s safe to conclude that the fake information phenomenon is definitely not new, but that it has grown in power, realism and reach. And it is important that people are deeply aware that anything they see or read online could be fake, especially if it reflects exactly what they have been thinking all along. Because, even more so than the technology, it’s the human part of the equation that needs improving and adapting to a changed media environment.

You can download the VRT EDUbox Artificial Intelligence Tom mentions in the interview here.

WRITTEN BY
Laurence Van Elegem
Laurence has more than 10 years of experience in marketing, communications and disruptive innovation. Passionately curious, she is fascinated by the impact of technology and science on the way we work, consume and live our lives.