It’s impossible to overstate the significance of opening TensorFlow to developers
outside of Google. “People couldn’t wait to
get their hands on it,” says Ian Bratt, director of machine learning at Arm, one of the
world’s largest designers of computer chips.
Today, Twitter is using it to build bots to
monitor conversations, rank tweets, and entice people to spend more time in their feed.
Airbus is training satellites to examine nearly any part of the earth's surface at a resolution of a few feet. Students in New Delhi
have transformed mobile devices into air-quality monitors. This past spring, Google
released early versions of TensorFlow 2.0,
which makes its AI even more accessible
to inexperienced developers. The ultimate
goal is to make creating AI apps as easy as
building a website.
TensorFlow has now been downloaded
approximately 41 million times. Millions
of devices—cars, drones, satellites, laptops,
phones—use it to learn, think, reason, and
create. An internal company document
shows a chart tracking the usage of TensorFlow inside Google (which, by extension,
tracks machine learning projects): It’s up by
5,000% since 2015.
Tech insiders, though, point out that if
TensorFlow is a gift to developers, it may
also be a Trojan horse. “I am worried that
they are trying to be the gatekeepers of AI,”
says an ex-Google engineer, who asked not
to be named because his current work is
dependent on access to Google’s platform.
At present, TensorFlow has just one main
competitor, Facebook’s PyTorch, which is
popular among academics. That gives Google
a lot of control over the foundational layer
of AI, and could tie its availability to other
Google imperatives. “Look at what [Google’s]
done with Android,” this person continues.
Last year, European Union regulators levied
a $5 billion fine on the company for requiring electronics manufacturers to pre-install
Google apps on devices running its mobile
operating system. Google is appealing, but
it faces further investigations into its competitive practices in both Europe and India.
By helping AI proliferate, Google has created demand for new tools and products that
it can sell. One example is Tensor Processing
Units (TPUs), which are integrated circuits
designed to accelerate applications using
TensorFlow. If developers need more power
has stated its intention to take the lead? President Xi Jinping has committed
more than $150 billion toward the goal of becoming the world’s AI leader by 2030.
Inside Google, dueling factions are competing over the future of AI. Thousands
of employees are in revolt against their leaders, trying to stop the tech they’re
building from being used to help governments spy or wage war. How Google decides to develop and deploy its AI may very well determine whether the technology
will ultimately help or harm humanity. “Once you build these [AI] systems, they
can be deployed across the whole world,” explains Reid Hoffman, the LinkedIn
cofounder and VC who’s on the board of the Institute for Human-Centered Artificial Intelligence at Stanford University. “That means anything [their creators] get
right or wrong will have a correspondingly massive-scale impact.”
“IN THE BEGINNING, THE NEURAL NETWORK IS
untrained,” says Jeff Dean one glorious spring evening in Mountain View, California. He is standing under a palm tree just outside
the Shoreline Amphitheatre, where Google is hosting a party to
celebrate the opening day of I/O, its annual technology showcase.
This event is where Google reveals to developers—and the rest
of the world—where it is heading next. Dean, in a mauve-gray polo,
jeans, sneakers, and a backpack double-strapped to his shoulders,
is one of the headliners. “It’s like meeting Bono,” gushes one Korean
software programmer who rushed over to take a selfie with Dean
after he spoke at an event earlier in the day. “Jeff is God,” another
tells me solemnly, almost surprised that I don’t already know this.
Around Google, Dean is often compared to Chuck Norris, the action star known for his martial-arts moves and for taking on multiple
assailants at once.
“Oh, that looks good! I’ll have one of those,” Dean says with
a grin as a waiter stops by with a tray of vegan tapioca pudding
cups. Leaning against a tree, he speaks about neural networks the
way Laird Hamilton might describe surfing the Teahupo’o break.
His eyes light up and his hands move in sweeping gestures. “Okay,
so here are the layers of the network,” he says, grabbing the tree and using the
grizzled trunk to explain how the neurons of a computer brain interconnect. He
looks intently at the tree, as though he sees something hidden inside it.
Last year, Pichai named Dean head of Google AI, meaning that he’s responsible
for what the company will invest in and build—a role he earned in part by scaling
the YouTube neural net experiment into a new
framework for training its machines to think on
a massive scale. That system started as an internal project called DistBelief, which many teams,
including Android, Maps, and YouTube, began
using to make their products smarter.
But by the summer of 2014, as DistBelief
grew inside Google, Dean started to see that it
had flaws. It had not been designed to adapt to
technological shifts such as the rise of GPUs (the
computer chips that process graphics) or the
emergence of speech as a highly complex data
set. Also, DistBelief was not initially designed to
be open source, which limited its growth. So he
made a bold decision: Build a new version that
would be open to all. In November 2015, Pichai
introduced TensorFlow, DistBelief’s successor,
one of his first big announcements as CEO.