FASTCOMPANY.COM WINTER 2019/2020
2019 was a great year for seeing what AI
could do. Waymo deployed self-driving taxis
to actual paying customers in Arizona. Bots
from OpenAI and DeepMind beat the top
professionals in two major esports games. A
deep-learning algorithm performed as well
as doctors—and sometimes better—at spotting lung cancer tumors in medical imaging.
But as for what AI should do, 2019 was terrible. Amazon’s facial recognition software?
Racist, according to MIT researchers, who
reported that the tech giant’s algorithms misidentify nearly a third of dark-skinned women’s faces (while demonstrating near-perfect
accuracy for light-skinned men’s). Emotion
detection, used by companies such as WeSee
and HireVue to perform threat assessments
and screen job applicants? Hogwash, says
the Association for Psychological Science.
Even the wonky field of natural language
processing took a hit, when a state-of-the-art
system called GPT-2—capable of generating hundreds of words of convincing text
after only a few phrases of prompting—was
deemed too risky to release by its own creators, OpenAI, which feared it could be used
“maliciously” to propagate fake news, hate
speech, or worse.
2019, in other words, was the year that
two things became unavoidably clear about
the rocket ship of innovation called artificial
intelligence. One: It’s accelerating faster than
most of us expected. Two: It’s got some serious screws loose.
That’s a scary realization to have, given
that we’re collectively strapped into this
rocket instead of watching it from a safe
distance. But AI’s anxiety-inducing progress
has an upside: For perhaps the first time, the unintended
consequences of a disruptive technology are visible in
the moment, instead of years or even decades later. And
that means that while we may be moving too quickly
for comfort, we can actually grab the throttle—and steer.
It’s easy to forget that before 2012, the technology we
now call AI—deep learning with artificial neural networks—for all practical purposes didn’t exist. The concept
of using layers of digital connections (organized in a
crude approximation of biological brain tissue) to learn
pattern-recognition tasks was decades old, but largely
stuck in an academic rut. Then, in September 2012, a
neural network designed by students of University of
Toronto professor and future “godfather of deep learning” Geoffrey Hinton unexpectedly smashed records
on a highly regarded computer-vision challenge called
ImageNet. The test asks software to correctly identify the
content of millions of images: say, a picture of a parrot,
or a guitar. The students’ neural net made half as many
errors as the runner-up.
Suddenly, deep learning “worked.” Within five
years, Google and Microsoft had hired scores of deep-learning experts and were dubbing themselves “AI first”