How you can help transform the internet into a place of trust | Claire Wardle

No matter who you are or where you live, I’m guessing that you have
at least one relative who likes to forward those emails. You know the ones I’m talking about — the ones with dubious claims
or conspiracy videos. And you’ve probably
already muted them on Facebook for sharing social posts like this one. It’s an image of a banana with a strange red cross
running through the center. And the text around it is warning people not to eat fruits that look like this, suggesting they’ve been
injected with HIV-contaminated blood. And the social share message
above it simply says, “Please forward to save lives.” Now, fact-checkers have been debunking
this one for years, but it’s one of those rumors
that just won’t die. A zombie rumor. And, of course, it’s entirely false. It might be tempting to laugh
at an example like this, to say, “Well, who would believe this, anyway?” But the reason it’s a zombie rumor is because it taps into people’s
deepest fears about their own safety and that of the people they love. And if you spend as much time
as I have looking at misinformation, you know that this is just
one example of many that taps into people’s deepest
fears and vulnerabilities. Every day, across the world,
we see scores of new memes on Instagram encouraging parents
not to vaccinate their children. We see new videos on YouTube
explaining that climate change is a hoax. And across all platforms, we see
endless posts designed to demonize others on the basis of their race,
religion or sexuality. Welcome to one of the central
challenges of our time. How can we maintain an internet
with freedom of expression at the core, while also ensuring that the content
that’s being disseminated doesn’t cause irreparable harms
to our democracies, our communities and to our physical and mental well-being? Because we live in the information age, yet the central currency
upon which we all depend — information — is no longer deemed entirely trustworthy and, at times, can appear
downright dangerous. This is thanks in part to the runaway
growth of social sharing platforms that allow us to scroll through, where lies and facts sit side by side, but with none of the traditional
signals of trustworthiness. And goodness — our language around this
is horribly muddled. People are still obsessed
with the phrase “fake news,” despite the fact that
it’s extraordinarily unhelpful and used to describe a number of things
that are actually very different: lies, rumors, hoaxes,
conspiracies, propaganda. And I really wish
we could stop using a phrase that’s been co-opted by politicians
right around the world, from the left and the right, used as a weapon to attack
a free and independent press. (Applause) Because we need our professional
news media now more than ever. And besides, most of this content
doesn’t even masquerade as news. It’s memes, videos, social posts. And most of it is not fake;
it’s misleading. We tend to fixate on what’s true or false. But the biggest concern is actually
the weaponization of context. Because the most effective disinformation has always been that
which has a kernel of truth to it. Let’s take this example
from London, from March 2017, a tweet that circulated widely in the aftermath of a terrorist incident
on Westminster Bridge. This is a genuine image, not fake. The woman who appears in the photograph
was interviewed afterwards, and she explained that
she was utterly traumatized. She was on the phone to a loved one, and she wasn’t looking
at the victim out of respect. But it still was circulated widely
with this Islamophobic framing, with multiple hashtags,
including: #BanIslam. Now, if you worked at Twitter,
what would you do? Would you take that down,
or would you leave it up? My gut reaction, my emotional reaction,
is to take this down. I hate the framing of this image. But freedom of expression
is a human right, and if we start taking down speech
that makes us feel uncomfortable, we’re in trouble. And this might look like a clear-cut case, but, actually, most speech isn’t. These lines are incredibly
difficult to draw. What’s a well-meaning
decision by one person is outright censorship to the next. What we now know is that
this account, Texas Lone Star, was part of a wider Russian
disinformation campaign, one that has since been taken down. Would that change your view? It would mine, because now it’s a case
of a coordinated campaign to sow discord. And for those of you who’d like to think that artificial intelligence
will solve all of our problems, I think we can agree
that we’re a long way away from AI that’s able to make sense
of posts like this. So I’d like to explain
three interlocking issues that make this so complex and then think about some ways
we can consider these challenges. First, we just don’t have
a rational relationship to information, we have an emotional one. It’s just not true that more facts
will make everything OK, because the algorithms that determine
what content we see, well, they’re designed to reward
our emotional responses. And when we’re fearful, oversimplified narratives,
conspiratorial explanations and language that demonizes others
is far more effective. And besides, many of these companies, their business model
is attached to attention, which means these algorithms
will always be skewed towards emotion. Second, most of the speech
I’m talking about here is legal. It would be a different matter if I was talking about
child sexual abuse imagery or content that incites violence. It can be perfectly legal
to post an outright lie. But people keep talking about taking down
“problematic” or “harmful” content without any clear definition
of what they mean by that, including Mark Zuckerberg, who recently called for global
regulation to moderate speech. And my concern is that
we’re seeing governments right around the world rolling out hasty policy decisions that might actually trigger
much more serious consequences when it comes to our speech. And even if we could decide
which speech to take up or take down, we’ve never had so much speech. Every second, millions
of pieces of content are uploaded by people
right around the world in different languages, drawing on thousands
of different cultural contexts. We’ve simply never had
effective mechanisms to moderate speech at this scale, whether powered by humans
or by technology. And third, these companies —
Google, Twitter, Facebook, WhatsApp — they’re part of a wider
information ecosystem. We like to lay all the blame
at their feet, but the truth is, the mass media and elected officials
can also play an equal role in amplifying rumors and conspiracies
when they want to. As can we, when we mindlessly forward
divisive or misleading content without checking. We’re adding to the pollution. I know we’re all looking for an easy fix. But there just isn’t one. Any solution will have to be rolled out
at a massive scale, internet scale, and yes, the platforms,
they’re used to operating at that level. But can and should we allow them
to fix these problems? They’re certainly trying. But most of us would agree that, actually,
we don’t want global corporations to be the guardians of truth
and fairness online. And I also think the platforms
would agree with that. And at the moment,
they’re marking their own homework. They like to tell us that the interventions
they’re rolling out are working, but because they write
their own transparency reports, there’s no way for us to independently
verify what’s actually happening. (Applause) And let’s also be clear
that most of the changes we see only happen after journalists
undertake an investigation and find evidence of bias or content that breaks
their community guidelines. So yes, these companies have to play
a really important role in this process, but they can’t control it. So what about governments? Many people believe
that global regulation is our last hope in terms of cleaning up
our information ecosystem. But what I see are lawmakers
who are struggling to keep up to date with the rapid changes in technology. And worse, they’re working in the dark, because they don’t have access to data to understand what’s happening
on these platforms. And anyway, which governments
would we trust to do this? We need a global response,
not a national one. So the missing link is us. It’s those people who use
these technologies every day. Can we design a new infrastructure
to support quality information? Well, I believe we can, and I’ve got a few ideas about
what we might be able to actually do. So firstly, if we’re serious
about bringing the public into this, can we take some inspiration
from Wikipedia? They’ve shown us what’s possible. Yes, it’s not perfect, but they’ve demonstrated
that with the right structures, with a global outlook
and lots and lots of transparency, you can build something
that will earn the trust of most people. Because we have to find a way
to tap into the collective wisdom and experience of all users. This is particularly the case
for women, people of color and underrepresented groups. Because guess what? They are experts when it comes
to hate and disinformation, because they have been the targets
of these campaigns for so long. And over the years,
they’ve been raising flags, and they haven’t been listened to. This has got to change. So could we build a Wikipedia for trust? Could we find a way that users
can actually provide insights? They could offer insights around
difficult content-moderation decisions. They could provide feedback when platforms decide
they want to roll out new changes. Second, people’s experiences
with the information is personalized. My Facebook news feed
is very different to yours. Your YouTube recommendations
are very different to mine. That makes it impossible for us
to actually examine what information people are seeing. So could we imagine developing some kind of centralized
open repository for anonymized data, with privacy and ethical
concerns built in? Because imagine what we would learn if we built out a global network
of concerned citizens who wanted to donate
their social data to science. Because we actually know very little about the long-term consequences
of hate and disinformation on people’s attitudes and behaviors. And what we do know, most of that has been
carried out in the US, despite the fact that
this is a global problem. We need to work on that, too. And third, can we find a way to connect the dots? No one sector, let alone nonprofit,
start-up or government, is going to solve this. But there are very smart people
right around the world working on these challenges, from newsrooms, civil society,
academia, activist groups. And you can see some of them here. Some are building out indicators
of content credibility. Others are fact-checking, so that false claims, videos and images
can be down-ranked by the platforms. A nonprofit I helped
to found, First Draft, is working with normally competitive
newsrooms around the world to help them build out investigative,
collaborative programs. And Danny Hillis, a software architect, is designing a new system
called The Underlay, which will be a record
of all public statements of fact connected to their sources, so that people and algorithms
can better judge what is credible. And educators around the world
are testing different techniques for finding ways to make people
critical of the content they consume. All of these efforts are wonderful,
but they’re working in silos, and many of them are woefully underfunded. There are also hundreds
of very smart people working inside these companies, but again, these efforts
can feel disjointed, because they’re actually developing
different solutions to the same problems. How can we find a way
to bring people together in one physical location
for days or weeks at a time, so they can actually tackle
these problems together but from their different perspectives? So can we do this? Can we build out a coordinated,
ambitious response, one that matches the scale
and the complexity of the problem? I really think we can. Together, let’s rebuild
our information commons. Thank you. (Applause)


  6. Post

    I agree with your premise, I reject your conclusion.
    Also the title is nonsense. The internet is not, was not, and never should be, a place of 'trust'.
    It just wouldn't be The Internet anymore if it was.

  7. Post
    Ryan Pickering

    'The weaponization of context'. Wow, now that's a powerful image. Let me add that I would much rather give my data freely to responsible, just companies than have it stolen from me without my consent. My two cents. Thank you for this.

  9. Post
    Thomas Schön

    We need our professional news media more than ever now? 😂
    You got to be kidding me. They're not even reporters anymore, they're activists.

    You can fool some people sometimes.

    But you can't fool all the people all the time..

  16. Post
    Cerberus x47

    Lol "it's not fake… It's MISLEADING"
    As if it's different. Pretty sure both fake news or misleading – either one is worthy of redaction when caught.

  18. Post
    Paul Marek

    "to attack a free and independent press" – LOL! More like a partisan, corpocratic, propaganda-filled press that's losing the trust of critical thinkers daily.

  37. Post
    Success Waiting

    We always work for better tomorrow
    But , when tomorrow comes

    Instead of enjoying, the tomorrow
    We again think of better , tomorrow

  38. Post

    Oh please. Satanists are awesome ethical people. And their numbers are tiny. Way to try to smear them with bullshit memes.

  40. Post
    Lyssa’s Letters

    I love how she used Wikipedia as an example! Wikipedia seems to have an amazing system of fact-checking. My take is that this woman is simply trying to help us all brainstorm ways to create a World Wide Web that is based on events that have actually taken place and verifiable information, as opposed to people’s opinions. Somehow Wikipedia has managed to find ways to troubleshoot when someone provides erroneous information on their site (eg. “grass is purple” would be taken down super quickly). It seems like a reasonable hypothesis that other sites could have similar troubleshooting methods.

  41. Post

    Yet another talk that belongs on TedX. Second one in a row.
    Ted talks are for tangible and real world problems. Whether they're biological, astrological, or just a lack of something needed. Talks like "how to transform X social thing into Y" are like trying to form a tangible pragmatic CONTROLLED realism from something intangible and irrational. You're grasping at ghosts.

  42. Post
    L.A. Jameson

    Oh please, lady! The whole point of free speech is to allow the opinions of others, even if you don't agree. The largest amount of demonizing of others' differences comes from people and organizations like YOU.

    The crowd clapping should be ashamed of themselves for going along with this planted farce.

  43. Post
    Kevin Rushing

    So the term “fake news” is the problem, and not the fact that the media is covering up a story to protect a powerful pedophile ring? Cool.

    Also, the best way to fight bad ideas is to shine a light on them, not censor and “protect” people from them. You say you care about free speech, but this whole talk is about how to filter out “problematic” internet content.
