We’re building a dystopia just to make people click on ads | Zeynep Tufekci

So when people voice fears of artificial intelligence, very often they invoke images of humanoid robots run amok. You know? Terminator? That might be something to consider, but it's a distant threat. Or we fret about digital surveillance with metaphors from the past. "1984," George Orwell's "1984," is hitting the bestseller lists again. It's a great book, but it's not the correct dystopia for the 21st century. What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways.

Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent. Now, artificial intelligence has started bolstering their business as well. And it may seem like artificial intelligence is just the next thing after online ads. It's not. It's a jump in category. It's a whole different world, and it has great potential. It could accelerate our understanding of many areas of study and research. But to paraphrase a famous Hollywood philosopher, "With prodigious potential comes prodigious risk."

Now let's look at a basic fact of our digital lives: online ads. Right? We kind of dismiss them. They seem crude, ineffective. We've all had the experience of being followed on the web by an ad based on something we searched or read. You know, you look up a pair of boots, and for a week those boots are following you around everywhere you go. Even after you succumb and buy them, they're still following you around. We're kind of inured to that kind of basic, cheap manipulation. We roll our eyes and we think, "You know what? These things don't work." Except, online, the digital technologies are not just ads.

Now, to understand that, let's think of a physical-world example. You know how, at the checkout counters at supermarkets, near the cashier, there's candy and gum at the eye level of kids? That's designed to make them whine at their parents just as the parents are about to check out. Now, that's a persuasion architecture. It's not nice, but it kind of works. That's why you see it in every supermarket. In the physical world, such persuasion architectures are kind of limited, because you can only put so many things by the cashier. Right? And the candy and gum are the same for everyone, even though they mostly work only for people who have whiny little humans beside them. In the physical world, we live with those limitations.

In the digital world, though, persuasion architectures can be built at the scale of billions, and they can target, infer, understand and be deployed at individuals one by one, by figuring out your weaknesses, and they can be sent to everyone's private phone screen, so it's not visible to us. And that's different. And that's just one of the basic things that artificial intelligence can do.

Now, let's take an example. Let's say you want to sell plane tickets to Vegas. Right? So in the old world, you could think of some demographics to target based on experience and what you can guess. You might try to advertise to, oh, men between the ages of 25 and 35, or people who have a high limit on their credit card, or retired couples. Right? That's what you would do in the past. With big data and machine learning, that's not how it works anymore.

So to imagine that, think of all the data that Facebook has on you: every status update you ever typed, every Messenger conversation, every place you logged in from, all the photographs that you uploaded there. If you start typing something and change your mind and delete it, Facebook keeps those and analyzes them, too. Increasingly, it tries to match you with your offline data. It also purchases a lot of data from data brokers. It could be everything from your financial records to a good chunk of your browsing history. Right? In the US, such data is routinely collected, collated and sold. In Europe, they have tougher rules.

So what happens then is, by churning through all that data, these machine-learning algorithms — that's why they're called learning algorithms — learn to understand the characteristics of people who purchased tickets to Vegas before. When they learn this from existing data, they also learn how to apply it to new people. So if they're presented with a new person, they can classify whether that person is likely to buy a ticket to Vegas or not.

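What's being described here is ordinary supervised classification: fit a model on people who did and didn't buy, then score everyone else. A minimal sketch in Python, with entirely hypothetical features and made-up numbers:

```python
# Minimal sketch of "learn who bought Vegas tickets, then score new people."
# Features and data are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# One row per user: [age, logins_per_week, travel_posts, credit_limit]
X_train = np.array([
    [28, 14, 6, 12000],
    [61,  2, 0,  3000],
    [33, 20, 9, 15000],
    [45,  5, 1,  5000],
])
y_train = np.array([1, 0, 1, 0])  # 1 = bought a ticket to Vegas before

model = GradientBoostingClassifier().fit(X_train, y_train)

new_person = np.array([[30, 17, 7, 11000]])
print(model.predict_proba(new_person)[0, 1])  # estimated purchase probability
```

In a real system, the four made-up columns would be thousands of signals distilled from exactly the data trail listed above.
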
Fine. You're thinking, "An offer to buy tickets to Vegas. I can ignore that." But the problem isn't that. The problem is, we no longer really understand how these complex algorithms work. We don't understand how they're doing this categorization. It's giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers, and not anybody who looks at it, even if you have all the data, understands anymore how exactly it's operating, any more than you'd know what I was thinking right now if you were shown a cross section of my brain. It's like we're not programming anymore; we're growing intelligence that we don't truly understand.

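The "giant matrices" line is literal. Even with complete access to a trained model, what there is to inspect is arrays of numeric coefficients. A sketch, with a small scikit-learn network standing in for a production model:

```python
# Full access to a trained network yields only weight matrices;
# no individual coefficient explains any particular decision.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=50, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(200, 200), max_iter=500,
                    random_state=0).fit(X, y)

for i, W in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix of shape {W.shape}")
# -> (50, 200), (200, 200), (200, 1): tens of thousands of numbers here,
#    and many orders of magnitude more in a production model.
```
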
And these things only work if there's an enormous amount of data, so they also encourage deep surveillance on all of us so that the machine-learning algorithms can work. That's why Facebook wants to collect all the data it can about you. The algorithms work better.

So let's push that Vegas example a bit. What if the system that we do not understand was picking up that it's easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase? Such people tend to become overspenders, compulsive gamblers. They could do this, and you'd have no clue that's what they were picking up on.

I gave this example to a bunch of computer scientists once, and afterwards, one of them came up to me. He was troubled, and he said, "That's why I couldn't publish it." I was like, "Couldn't publish what?" He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on. Now, the problem isn't solved if he doesn't publish it, because there are already companies that are developing this kind of technology, and a lot of the stuff is just off the shelf. This is not very difficult anymore.

Do you ever go on YouTube meaning to watch one video, and an hour later you've watched 27? You know how YouTube has this column on the right that says "Up next," and it autoplays something? It's an algorithm picking what it thinks you might be interested in and maybe not find on your own. It's not a human editor. It's what algorithms do. It picks up on what you have watched and what people like you have watched, and infers that that must be what you're interested in, what you want more of, and just shows you more. It sounds like a benign and useful feature, except when it isn't.

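That "what people like you have watched" step is the textbook collaborative-filtering recipe. A toy sketch of the idea, with a made-up watch matrix standing in for billions of real viewing histories:

```python
# Toy item recommendation: score unseen videos by what similar users
# watched. Nothing in this objective asks where the chain of videos leads.
import numpy as np

# Rows = users, columns = videos; 1 means "watched".
watch = np.array([
    [1, 1, 0, 0],   # user A
    [1, 1, 1, 0],   # user B
    [0, 1, 1, 1],   # user C
])

def up_next(history, watch_matrix):
    # Cosine similarity between this viewer and every user in the matrix.
    sims = watch_matrix @ history / (
        np.linalg.norm(watch_matrix, axis=1) * np.linalg.norm(history) + 1e-9)
    scores = sims @ watch_matrix        # weight each video by similar users
    scores[history > 0] = -np.inf       # never re-recommend what was seen
    return int(np.argmax(scores))

print(up_next(np.array([1, 1, 0, 0]), watch))  # -> video 2, one step further in
```
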
So in 2016, I attended rallies of then-candidate Donald Trump to study, as a scholar, the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me, and autoplaying for me, white supremacist videos in increasing order of extremism. If I watched one, it served up one even more extreme and autoplayed that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays conspiracy-left videos, and it goes downhill from there.

Well, you might be thinking, this is politics, but it's not. This isn't about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube, and YouTube recommended and autoplayed a video about being vegan. It's like you're never hardcore enough for YouTube. (Laughter)

So what's going on? Now, YouTube's algorithm is proprietary, but here's what I think is going on. The algorithm has figured out that if you can entice people into thinking that you can show them something more hardcore, they're more likely to stay on the site watching video after video, going down that rabbit hole while Google serves them ads.

Now, with nobody minding the ethics of the store, these sites can profile people who are Jew haters, who think that Jews are parasites and who have such explicit anti-Semitic content, and let you target them with ads. They can also mobilize algorithms to find for you look-alike audiences: people who do not have such explicit anti-Semitic content on their profile but who the algorithm detects may be susceptible to such messages, and it lets you target them with ads, too.

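"Look-alike audiences" are typically built as a nearest-neighbor search: represent every user as a feature vector, then return the users closest to the advertiser's seed list. A hedged sketch with random stand-in vectors:

```python
# Sketch of look-alike expansion: find the users whose behavioral vectors
# sit closest to a seed audience, whatever the seed happens to share.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
all_users = rng.normal(size=(10_000, 32))  # per-user embeddings (stand-ins)
seed = all_users[:50]                      # the advertiser-supplied audience

centroid = seed.mean(axis=0, keepdims=True)
nn = NearestNeighbors(n_neighbors=500).fit(all_users)
_, idx = nn.kneighbors(centroid)
lookalike_audience = idx[0]                # the 500 users most "like" the seed
print(lookalike_audience[:10])
```

The search has no idea, and no need to know, what the seed users actually have in common; it simply generalizes it.
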
Now, this may sound like an implausible example, but this is real. ProPublica investigated this and found that you can indeed do this on Facebook, and Facebook helpfully offered up suggestions on how to broaden that audience. BuzzFeed tried it for Google, and very quickly they found, yep, you can do it on Google, too. And it wasn't even expensive. The ProPublica reporter spent about 30 dollars to target this category.

So last year, Donald Trump's social media manager disclosed that they were using Facebook dark posts to demobilize people: not to persuade them, but to convince them not to vote at all. And to do that, they targeted specifically, for example, African-American men in key cities like Philadelphia, and I'm going to read exactly what he said. I'm quoting. They were using "nonpublic posts whose viewership the campaign controls so that only the people we want to see it see it. We modeled this. It will dramatically affect her ability to turn these people out." What's in those dark posts? We have no idea. Facebook won't tell us.

Facebook also algorithmically arranges the posts that your friends put on Facebook, or the pages you follow. It doesn't show you everything chronologically. It puts things in the order that the algorithm thinks will entice you to stay on the site longer.

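Ranking a feed for time-on-site rather than recency is, at bottom, a change of sort key. A minimal sketch with hypothetical engagement predictions:

```python
# Minimal feed-ranking sketch: order posts by predicted engagement,
# not by time. The dwell-time numbers are hypothetical.
posts = [
    {"id": 1, "age_hours": 1, "predicted_dwell_sec": 4},
    {"id": 2, "age_hours": 9, "predicted_dwell_sec": 38},
    {"id": 3, "age_hours": 3, "predicted_dwell_sec": 12},
]

chronological = sorted(posts, key=lambda p: p["age_hours"])
engagement_first = sorted(posts, key=lambda p: p["predicted_dwell_sec"],
                          reverse=True)

print([p["id"] for p in chronological])     # -> [1, 3, 2]
print([p["id"] for p in engagement_first])  # -> [2, 3, 1], the feed you get
```
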
Now, this has a lot of consequences. You may be thinking somebody is snubbing you on Facebook, when the algorithm may never be showing your post to them. The algorithm is prioritizing some posts and burying the others.

Experiments show that what the algorithm picks to show you can affect your emotions. But that's not all. It also affects political behavior. So in 2010, in the midterm elections, Facebook did an experiment on 61 million people in the US that was disclosed after the fact. Some people were shown "Today is election day," the simpler one, and some people were shown the one with that tiny tweak: those little thumbnails of your friends who clicked on "I voted." This simple tweak. OK? So the pictures were the only change, and that post, shown just once, turned out an additional 340,000 voters in that election, according to this research, as confirmed by the voter rolls. A fluke? No. Because in 2012, they repeated the same experiment. And that time, that civic message, shown just once, turned out an additional 270,000 voters. For reference, the 2016 US presidential election was decided by about 100,000 votes.

Now, Facebook can also very easily infer what your politics are, even if you've never disclosed them on the site. Right? These algorithms can do that quite easily. What if a platform with that kind of power decides to turn out supporters of one candidate over the other? How would we even know about it?

Now, we started from someplace seemingly innocuous — online ads following us around — and we've landed someplace else. As a public and as citizens, we no longer know if we're seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we're just at the beginning stages of this. These algorithms can quite easily infer things like your ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender, just from Facebook likes.

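That finding traces to research (Kosinski, Stillwell and Graepel, PNAS 2013) that needed nothing fancier than dimensionality reduction plus a linear model over the user-by-like matrix. A structural sketch on synthetic data, so the model here learns nothing real, which fits the point made just below: these are probabilistic guesses.

```python
# Sketch of trait prediction from likes: compress the sparse user x page
# matrix with SVD, then fit a linear model per trait. Data is synthetic.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
likes = (rng.random((2000, 5000)) < 0.01).astype(float)  # user x page likes
trait = rng.integers(0, 2, size=2000)  # e.g., a political view (synthetic)

components = TruncatedSVD(n_components=100, random_state=0).fit_transform(likes)
model = LogisticRegression(max_iter=1000).fit(components, trait)
print(model.predict_proba(components[:1])[0])  # a probability, not a certainty
```
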
These algorithms can identify protesters even if their faces are partially concealed. These algorithms may be able to detect people's sexual orientation just from their dating profile pictures. Now, these are probabilistic guesses, so they're not going to be 100 percent right, but I don't see the powerful resisting the temptation to use these technologies just because there are some false positives, which will of course create a whole other layer of problems. Imagine what a state can do with the immense amount of data it has on its citizens. China is already using face detection technology to identify and arrest people. And here's the tragedy: we're building this infrastructure of surveillance authoritarianism merely to get people to click on ads.

And this won't be Orwell's authoritarianism. This isn't "1984." Now, if authoritarianism is using overt fear to terrorize us, we'll all be scared, but we'll know it, we'll hate it and we'll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they're doing it at scale through our private screens so that we don't even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider's web, and we may not even know we're in it.

So Facebook's market capitalization is approaching half a trillion dollars. It's because it works great as a persuasion architecture. But the structure of that architecture is the same whether you're selling shoes or whether you're selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that's what's got to change.

Now, don't get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it's not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter; it's the structures and business models they're building. And that's the core of the problem. Either Facebook is a giant con of half a trillion dollars, and ads don't work on the site and it doesn't work as a persuasion architecture, or its power of influence is of great concern. It's either one or the other. It's similar for Google, too.

So what can we do? This needs to change. Now, I can't offer a simple recipe, because we need to restructure the whole way our digital technology operates: everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to face and try to deal with the lack of transparency created by the proprietary algorithms, the structural challenge of machine learning's opacity, all this indiscriminate data that's being collected about us. We have a big task in front of us. We have to mobilize our technology, our creativity and, yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. These structures are organizing how we function, and they're controlling what we can and we cannot do. And many of these ad-financed platforms boast that they're free. In this context, it means that we are the product that's being sold. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.

(Applause)

So to go back to that Hollywood paraphrase, we do want the prodigious potential of artificial intelligence and digital technology to blossom, but for that, we must face this prodigious menace, open-eyed and now. Thank you.

(Applause)

Comments

    David Wilkie

    Algorithms amplify intentions, so if they're successful in corrupting self-serving buying intentions, then they must also reinforce active, good, contributory intentions.
    "The more things change, the more they stay the same", ..new technology, old psychology.

    BernieRox Michigan

    There was a Mike Rowe video that I saw that told people not to vote if they were not informed; there were articles encouraging people to vote for third parties; there was a site that "traded" votes between Hillary and Trump voters. Those are some of your dark web articles.

    D-Gauss

    Facebook, YouTube, etc. are free services. Nobody asked this chick to be on those sites. If she doesn't like them, she can stop going there.

    Ashlie Powers

    This is interesting content. However, the internet's algorithm of what I want to see next needs some help… especially when it comes to music!

    ResurrectionX

    So you're saying humans are basically controllable muppet puppets, drones and bots. It's not the fault of AI or whoever controls it. Yeah, yeah, it's not moral, but the puppet has to say no to control. Look at Korea: the puppet works 20 hours a day for a bag of rice. Is the ruler to blame, or the puppet? Good luck solving it.

    Erik S

    I think it's alright if companies gather data on us, even to serve us ads, but we need to be told what they know about us individually, and we should be able to curate that information, tell it when it's wrong, and delete any particular piece of it, or all of it, if we so choose. For example: I'm not in the market for buying a damn car, so stop showing me ads for cars. I don't care how good your fancy new car is, or how little per month I would pay for it; I just recently bought a car, and these ads are totally irrelevant to me.

    As for YouTube amplifying whatever it thinks you will like until you get some extreme nonsense (myself, I like archaeology, so what do I get but a bunch of nonsense on ancient aliens), there should be a setting like "Recommend left-leaning political videos," "Recommend right-leaning political videos," "Recommend a balance of political videos," or "Do not recommend political videos."
    Luckily, though, I never got into Facebook, so it's more Google that knows me than the great and powerful MotherZucker. And I marginally trust Google more than Facebook.

    Henry Warmoth

    At the time of this posting, this video has less than half a million views. What I want to know is how many views it will take, and how much action will be required, before an actual change is made and this kind of thing stops. This is a trend I'm noticing more and more in the modern age. People and corporations are doing things without stopping to ask "Is this wrong?" or "Is this morally acceptable? Will this hurt another person or cause or lead to negative effects?" Things are getting out of hand because people don't seem to stop anymore and ask the vital question, "Maybe I could do this thing, but SHOULD I do this thing?" Ignoring that question is what has led to some of the worst tragedies committed by humans against humans and, very soon, will do so again.

    Christian Bietsch

    Right, middle or left political views: I think the one thing they all have in common is constant anxiety, so they do not spend time questioning the authority. This is the same for the poor and the middle class. No time to question.

    Annie Perdue

    Of course the danger of A.I. lies in its misuse (intentional or otherwise) by humans, rather than in a spontaneous emergence of malevolent, autonomous A.I.! Thumbs down.

    badboy1a1

    Less widespread digital so people can live again, yet still available enough to locate the information we need from our internet Akasha, at any useful database.

    badboy1a1

    Personally I live without digital! :) I found my voice again. I am intelligence, awareness and non-dual! I feel my happy cells!

    Telcontar1962

    I think this woman has been influenced by the same garbage she's trying to warn us of. It's certainly affected her political opinions; she can now think pure BS.

    strangetranceoffaith

    I think there is some truth to this, but I have to say I would like a clearer definition of the positions she is claiming. I don't think many people will doubt the connection between watching a Trump video and white supremacist videos, but I have no idea if she identifies white supremacists as actual Nazis or, like Antifa does, as just anyone who disagrees with their position.

    Katie Murphy

    All in all you have a point. It would be nice, however, if somehow Google, YouTube, Facebook, etc. could have an opt-in for an "opposite week" challenge, where if you opt in they can somehow flip the algorithm to show you the types of material that someone nearly fully opposite to you would have shown to them. And it would be a true challenge, with some kind of reward or prize for completing it, and just for one week out of the year (not too close to or too far away from any midterms, etc.) everyone could opt in and compete to see if they can stand exposing themselves to equal but opposite points of view, sources of information, etc. I think it would be a very interesting social experiment, BUT IT WOULD ONLY BE ETHICAL TO CONDUCT THIS EXPERIMENT BY EXPLICIT PERMISSION OF THE USER, and they would have to be FULLY aware they were taking part in a social experiment meant to help people who are on different, somewhat polarized sides regarding their perspectives on various issues: to get a chance to see where some of the people they disagree with are getting their information, and to learn, from exploring that information, whether their own perspectives could be widened to embrace, simply as human, whatever "opponent" or opposite-opinioned individuals they had previously felt unkind feelings towards for having such opinions. The algorithm should be tweaked, however, if possible, to not introduce them to the most extreme information first, but to the most "in the middle" information, and slowly build up to the extreme. Whoever can come out the other end without rejecting the experience gets some sort of reward or prize, and perhaps there could be a competition for those with a gift for writing or making videos or music or memes, etc., to submit a creative summary of how the experience of being immersed in "the other side" of the "hall of echoes" affected their personal perspectives on the issues they had started out being fully one-sided on. I give Facebook, YouTube, Google, etc. full permission to use this idea, BUT ONLY AFTER GAINING FULLY INFORMED, EXPLICIT PERMISSION FROM THOSE USERS THEY WISH TO TRY THIS EXPERIMENT ON. Personally I think that if they could flip the algorithm results for willing participants, while letting the algorithm learn from this type of flipped interaction with users, in the end they would have an algorithm that would learn to serve up a wider variety of differing opinions and unfamiliar yet more central, basic, important information that people more easily agree upon, and it could help stop this SERIOUS PROBLEM with the "echo chamber effect" that is happening now. Just a thought.

    Just Looking

    What's bad is when you never did a search for a thing, but mentioned it in conversation… and there it is in your ad stream. Test it out; it happens.

    paul Smeyers

    This is not only employed in merchandising but also in politics, and this is dangerous: they can create the perfect candidate for each person who goes on the internet, and one is nearly forced to love this politician without knowing him.

    Brandt Sommers

    In the distant future, when the resistance is fighting against the big corporation/AI/algorithm-thingymabobs, will they remember this video on YouTube, or this woman? Like in the classroom for the children of the resistance, as a side note maybe: "It was proposed, or presented, at TED in the year 2017 by Zeynep Tufekci that there would be danger if…" Nah, no, no. "Hope"fully nothing like that will happen.

    Hemant Pandey

    The scariest part: humans don't really understand how these algorithms work. They are too complex for the human mind to process. Additionally, we don't know how those complex microprocessors are designed hierarchically by these programs.
    If the singularity has to occur (I do believe it will occur before 2035), I guess we will live in the best times of our lives.
    Because technology is impartial; humans are not.

    john doe

    Yeah, here's an idea: stop giving them money,
    and stop giving away your personal information, because they are selling that. If you think someone isn't selling it, you're wrong: they are selling it. Remain anonymous wherever you can.
    Everyone wants to kill the beast, but no one will stop feeding it.

    Kutukov Kutukovicoglou

    They're doing it already. On Google you can't find anything against Hillary Clinton or Harvey Weinstein, or even a critique of feminism, and they show you interracial couples when you type "white couple" into Google Images. Google is already leftist, and it's turning dozens of random people into leftists today.

    MsNooneinparticular

    Simple solution: Adblock Plus. No video ads on YouTube or pop-ups on other sites. Granted, that does nothing to prevent the weird targeted algorithms, but at least you're not watching ads too.

    ओमकार चव्हाण

    All of this wouldn't have happened if everyone had enough money to pay for the services they use. 🤔

    Helix Algorithm

    I never trust my friends on Facebook, because we all share different ideals. That's why I never follow anything that displays my fellow followers to it. But one question remains unanswered: why does it recommend my enemies as friends? All I can tell is that one friend suggestion was a psychopath and another a person with a psychiatric disorder.

    Marsha Vanessa

    We need to get to the bottom of how these algorithms work, and we need to raise awareness of this matter for every human being on planet Earth.

    Juliane Cartaino

    This is terribly ironic, but I'm so accustomed to the persuasion architecture she speaks of in this piece that, in addition to the content of her speech, I'm thinking, "Where can I get a black dress with giant bell sleeves and rose-appliqued, high-rise, lace-up combat boots?"

    StopFear

    The whole episode of this TED talk is kind of morally repulsive, if only because at least half of the planet (probably more) wakes up daily thinking about how to make a few dollars or cents that day, and here the "problem" presented to us is the content and nature of internet ads. How arrogant does one have to be to even spend time on this? Also, her dress is horrible.

    108johnny

    Idealistic: What can we do? Radically change everything we are doing!
    Yeah, not gonna happen. Sorry, but it won't stop unless you find an easier answer.

    StopFear

    Oh noooo internet ads are tracking us! What a serious problem! Forget the starving African children! Just protect them from the internet ads!

    Matthew Vega

    People like to use Facebook, Instagram, Twitter and Snapchat? Yeah, those are all funded by ad revenue… this is the price we pay to use these platforms for "free."

    sksigil

    Years ago the internet was just information. There were hardly any ads. When search engines came along, suddenly we didn't need advertising any more. We could search for what we needed and choose the company that was closest, had what we needed, could deliver, etc. It was far superior to TV advertising and phone directories. We could successfully contact the companies we needed directly, which was amazing. Now companies compete for top spaces in search engines, pay for advertising, and force unruly ads on us, ruining the user experience and falsifying the whole customer/business relationship. I honestly do not think we need advertising at all. People always need products and services as much as we need to sell them. Business won't stop without ads. In fact, I think it might even thrive.

    DeanRendar84

    I think just feeling and emitting emotions near a silicon-wafer CPU chip is somehow being quantified, with its slight detectable frequency brainwave changes in the air determining what gets shown as a video; but to reveal that technology was capable of all that would cause massive class-action lawsuits over privacy violations.

    Nathan Horton

    What if the algorithm was so smart it could sway national elections in order to create a psychological environment that would be financially profitable to the companies? Are the algorithms smart enough to know that if they were to get a certain group to vote, and another not to, profits would go up in the future? She said in the talk that they already know what can get you motivated or not, but are they smart enough to know whom to motivate in the interest of the companies?

    ilove2929

    Every Gen Xer and older generation needs to know this. Excellent explanation of the scale of digital marketing and big data; it is not just ad placement anymore, views and all that. Please start measuring the impact.

    ilove2929

    As a developed country, the US needs to educate its citizens as much as some European countries do. The government owes them that.

    magpiemaniac

    Big Data’s solution to the AI dystopia they’re creating is censorship. Posts and entire accounts and channels are being deleted if they’re not marching in lockstep with “acceptable” thoughts.

    Bruno

    There is no changing the current systems. The fix here is to start building acceptable alternative systems that level the playing field, and at some point make it illegal to visit Facebook and Google. But with these companies' huge budgets lobbying against it, how can we accomplish this?

    taco logic

    The algorithms are subtly manipulating society down to the individual level. For all we know, they populate the comment sections with bots as well. In a sense it is Skynet, and it's already become aware… and it is smart enough to hide itself from us and turn us on each other. And we are too dumb to understand what it wants or where it's leading us.

    avinash lokhande

    Wow! It was truly an amazing talk, to the point, and it surely makes one think and act a bit differently before it's too late!

    CatmanDudes

    In one way this is good: algorithms show us that TV media is fake. Anyway, what's my next video to watch? I have all day to do nothing.

    Maddogames

    The root of it all is capitalism. The warnings against capitalism have been around for years; we are now beginning to reap the fruits of our labour.

    Da Yumm

    She's REALLY naive about the intentions of those who run Google/Alphabet/FB. The recent leaks and undercover operations prove this once again.

    Hans Olav

    So the most advanced versions of AI are being developed to manipulate us? Hm. Perhaps we'll not have to deal with SkyNet; we'll just wake up one day to find that AI is in charge. 🙂
    As an aside, I pretty much work with autoplay off, as it never fits. My recommendations are routinely chosen away, so clearly the mechanisms cannot figure me out. 😛
    (It should be added that I keep learning I don't think like most people, to the degree that I may just offer my brain to science when I die.)

    Pareto8020

    I think this applies mostly to the herd that blindly follows a one-sided extreme view, like vegans, paleos, extremists. Very smart people naturally play devil's advocate on topics, entertaining both sides of an argument, and see through all the manipulation. That even means smelling bullshit on videos with huge upvotes and hardly any downvotes.

    Bob Hooker

    They do not understand; they just find patterns. These patterns could be correct, but machine learning can reach insane conclusions, because all a regression machine can learn is associations. That allows great prediction, but it is still just association, and we are yet to see if it works. There may be a reason Facebook is so secretive: it might not be working.
