Facial Recognition: Dystopian Nightmare or Necessary Evil? | Digital Society with Vontobel


There’s one technology, already used by millions of people around the world, that has the potential to define the next decade of technological progress. Facial recognition systems work by matching an image of a person’s face to a previously held image of them. The technology is powered by artificial intelligence, and it can be used in a number of different ways – from unlocking mobile phones to identifying people at real-world events. It’s been used at protests, sports matches and concerts. Police say it can be used in public spaces to improve security by helping to locate dangerous people and suspects. The rapid increase in the use of facial recognition has proved hugely divisive. There has been a lack of transparency around how the technology is being used, and critics claim the systems are unreliable and erode people’s privacy.

There are broadly two types of facial recognition system in use. The first doesn’t work in real time: it’s static and uses individual images. If you have a photo of someone, it’s possible to compare it against a database of tens of thousands or even millions of images to find a potential match. This type of facial recognition technology could run on a single computer or be powered through the cloud. Police in the UK have used this type of system with images captured from CCTV cameras.
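To make the static case concrete, here is a minimal sketch using the open-source face_recognition Python library. The file names and the three-photo ‘database’ are placeholders for illustration, and the sketch assumes each photo contains one clear face; it is not how any particular police system works.

```python
import face_recognition

# The "database": placeholder paths standing in for previously enrolled photos.
gallery_paths = ["person_a.jpg", "person_b.jpg", "person_c.jpg"]
gallery_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in gallery_paths
]

# The new photo to check, e.g. a still taken from a CCTV feed.
probe_image = face_recognition.load_image_file("cctv_still.jpg")
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Lower distance means more similar; compare_faces applies a default
# threshold of 0.6 to turn each distance into a yes/no match.
distances = face_recognition.face_distance(gallery_encodings, probe_encoding)
matches = face_recognition.compare_faces(gallery_encodings, probe_encoding)

for path, distance, is_match in zip(gallery_paths, distances, matches):
    print(f"{path}: distance={distance:.2f} match={is_match}")
```

In practice the ‘database’ would hold thousands or millions of pre-computed encodings, and the comparison would run on a server or in the cloud rather than on a single machine.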
The second type of facial recognition system is more dynamic, and therefore more controversial: it works in real time. It’s called automated facial recognition. It all starts with cameras that can be positioned in public places, such as outside shops or on high streets. The images captured by these cameras are then processed by artificial intelligence software, which is able to pick out humans and their faces from other objects around them, such as cars, lampposts and street signs. The faces are analyzed in a matter of seconds and compared to images that are already held on a database. The systems currently being trialled can identify tens of faces in one image, and they can scan huge crowds with very little effort. When a match is made, police can receive a notification about a potential suspect on the ground; they’re then able to go and locate and identify them.
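As a rough illustration of that live pipeline, the hedged sketch below reads frames from a camera, picks the faces out of each frame and checks them against a small watchlist. It assumes the opencv-python and face_recognition packages; the watchlist names and image files are made up for illustration and nothing here reflects a real deployment.

```python
import cv2
import face_recognition

# Watchlist: placeholder image files, encoded once up front
# (in the same way as the previous example).
watchlist = {"suspect_1": "suspect_1.jpg", "suspect_2": "suspect_2.jpg"}
watchlist_names = list(watchlist)
watchlist_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in watchlist.values()
]

capture = cv2.VideoCapture(0)              # 0 = the default camera
while True:
    ok, frame_bgr = capture.read()
    if not ok:
        break
    # OpenCV delivers BGR frames; face_recognition expects RGB.
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)

    # Pick every face out of the frame, then encode each one.
    locations = face_recognition.face_locations(frame_rgb)
    encodings = face_recognition.face_encodings(frame_rgb, locations)

    for encoding in encodings:
        matches = face_recognition.compare_faces(watchlist_encodings, encoding)
        if any(matches):
            name = watchlist_names[matches.index(True)]
            print(f"Possible match for {name} in this frame")

capture.release()
```

Even then, a match is only a prompt: as noted above, an officer still has to go and confirm the person’s identity on the ground.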
The facial recognition systems used to unlock mobile phones work in a similar way: they identify our likeness in real time. But there are some differences. The systems on our phones are only trained to recognize one face, they work at short distances and, crucially, the phone’s owner has to opt in.

Facial recognition systems aren’t looking at your face in the same way that a human would. Instead, they process biometric markers on your face, for instance the gap between your nose and lips or the width of your eyes. An algorithm can essentially create a map of your face by combining different measurements and personal traits. Much like your fingerprints, your face is unique to you, and it can’t easily be replicated.
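To give a feel for what such a ‘map’ of measurements might look like, here is a toy sketch that turns detected facial landmarks into a short vector of numbers. It uses the face_recognition library’s landmark detector; the image name is a placeholder, and modern systems learn much richer numerical representations (embeddings) rather than a few hand-picked distances.

```python
import math
import face_recognition

# Placeholder image; assumes it contains one clear, front-on face.
image = face_recognition.load_image_file("portrait.jpg")
landmarks = face_recognition.face_landmarks(image)[0]   # landmarks of the first face found

def spread(points):
    """Largest distance between any two landmark points: a rough 'width'."""
    return max(math.dist(p, q) for p in points for q in points)

# A tiny "face map": a few measurements combined into one vector of numbers.
face_vector = [
    spread(landmarks["left_eye"]),                       # width of the left eye
    spread(landmarks["right_eye"]),                      # width of the right eye
    math.dist(landmarks["nose_tip"][2],                  # roughly the base of the nose
              landmarks["top_lip"][3]),                  # roughly the centre of the top lip
]
print(face_vector)
```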
Around the world, facial recognition has been used in lots of different ways. Airports are using the technology to verify people’s identities. Shopping malls have experimented with emotion tracking. On Taylor Swift’s Reputation tour it was used to spot stalkers. China has tracked Uighur Muslims throughout the country. And police in the UK and US have used the software to arrest alleged criminals.

So what’s the danger with facial recognition? This is where it gets complicated. Questions have been raised about bias in data sets, accuracy, and the ethics of facial recognition deployments. Essentially, a facial recognition system is only as good as the data that has been used to train it.

Researchers in scientific labs can get the technology to be highly accurate. They use high-quality images of people showing their full faces in well-lit conditions, and when they do this they can get matches nearly 100% of the time. But when the tech is being used in the real world, things can get messy. Systems don’t work as well when CCTV cameras are low quality and people are moving around, and cameras can’t capture people’s faces as well in poor lighting conditions.

One of the first uses of facial recognition technology in the UK was at the UEFA Champions League Final in 2017. South Wales Police received 2,400 alerts of possible matches; fewer than 200 of these were right. The risk is that people who aren’t criminals get identified as potential suspects and then receive unwanted police attention.
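A quick back-of-the-envelope calculation, treating the quoted figures as exact, shows why those numbers alarmed critics:

```python
# South Wales Police figures quoted above, treated as exact for illustration:
# 2,400 alerts, of which fewer than 200 were correct matches.
alerts = 2400
correct = 200          # upper bound: "fewer than 200 of these were right"

false_alerts = alerts - correct
print(f"{false_alerts} of {alerts} alerts were wrong ({false_alerts / alerts:.0%})")
# Roughly 92% of the alerts pointed at the wrong person.
```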
And then there’s the issue of bias. A study by the MIT Media Lab found problems with Amazon’s facial recognition technology. The system could reliably identify the gender of lighter-skinned men, but it mistook women for men 19% of the time, and it mistook darker-skinned women for men 31% of the time. The danger here is that AI systems won’t treat people equally or fairly.

In May 2019, lawmakers in San Francisco banned city agencies, including the police, from using facial recognition technology. This is massively significant, as Silicon Valley is the home of big tech.
Both Amazon and Microsoft have said there need to be rules put in place to control facial recognition.

The main argument in favour of the technology is that it helps improve law enforcement and increase security. Police say that facial recognition can be used to help find suspects in large crowds, and they argue that this is cheaper, more efficient and helps to reduce crime in a given area.

Is this going to end with us living in a dystopian society? Well, it’s too early to say – but one thing’s for sure: facial recognition is going to be debated for years to come. Police, governments and law enforcement are going to use the technology in live deployments more and more. The key issue is that the technology is still being developed. Without controls, there’s a danger we will come to rely on facial recognition when we don’t know whether it’s safe or accurate. And at that point, it will be too late.
