TensorFlow World 2019 Keynote



JEFF DEAN: I’m really
excited to be here. I think it was almost
four years ago to the day that we were about 20 people
sitting in a small conference room in one of the
Google buildings. We’d woken up early because
we wanted to kind of time this for an early East
Coast launch where we were turning on the
TensorFlow.org website and releasing the first
version of TensorFlow as an open source project. And I’m really, really excited
to see what it’s become. It’s just remarkable to
see the growth and all the different kinds of ways
in which people have used this system for all kinds
of interesting things around the world. So one thing that’s
interesting is the growth in the use of
TensorFlow also kind of mirrors the growth in
interest in machine learning and machine learning research
generally around the world. So this is a graph showing
the number of machine learning arXiv papers
that have been posted over the last 10 years or so. And you can see it’s growing
quite, quite rapidly, much more quickly than you might expect. And that lower red line is
kind of the nice doubling every couple of years growth
rate, exponential growth rate we got used to in computing
power, due to Moore’s law for so many years. That’s now kind of slowed down. But you can see that the machine
learning research community is generating research ideas
at a rate faster than that, which is pretty remarkable. We’ve replaced computational growth with growth of ideas, and we’ll see that both of those together will be important. And really, the excitement
about machine learning is because we can now do things
we couldn’t do before, right? As little as five or six
years ago, computers really couldn’t see that well. And starting in
about 2012, 2013, we started to have people use
deep neural networks to try to tackle computer
vision problems, image classification, object
detection, things like that. And so now, using deep learning
and deep neural networks, you can feed in the
raw pixels of an image and fairly reliably get
a prediction of what kind of object is in that image. Feed in the pixels there, the red, green, and blue values at a bunch of different coordinates, and you get out the prediction: leopard.
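To make that concrete, here is a minimal sketch of that pixels-in, label-out flow using an off-the-shelf pretrained Keras model. The model choice and the image file name are illustrative assumptions, not the exact setup shown on the slide:

```python
import numpy as np
import tensorflow as tf

# Off-the-shelf ImageNet classifier; any pretrained vision model would do.
model = tf.keras.applications.ResNet50(weights="imagenet")

# Read an image and turn it into the raw red/green/blue pixel values the model expects.
img = tf.keras.preprocessing.image.load_img("leopard.jpg", target_size=(224, 224))  # hypothetical file
pixels = tf.keras.preprocessing.image.img_to_array(img)
pixels = tf.keras.applications.resnet50.preprocess_input(pixels[np.newaxis, ...])

# Raw pixels in, predicted label out.
preds = model.predict(pixels)
print(tf.keras.applications.resnet50.decode_predictions(preds, top=1)[0])
```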
This works for speech as well. You can feed in audio waveforms, and by training on lots of audio waveforms and transcripts of what’s being said in those waveforms, we can actually take a completely new recording and produce a transcript of what is being said. Bonjour, comment allez-vous? You can even combine
these ideas and have models that take in pixels,
and instead of just predicting classifications of
what are in the object, it can actually write a short
sentence, a short caption, that a human might
write about the image– a cheetah lying on top of a car. That’s one of my vacation
photos, which was kind of cool. And so just to show the progress
in computer vision: Stanford hosts an ImageNet
contest every year to see how well computer
vision systems can predict one of 1,000 categories
in a full color image. And you get about a
million images to train on, and then you get a
bunch of test images your model has
never seen before. And you need to
make a prediction. In 2011, the winning entrant
got 26% error, right? So you can kind of
make out what that is. But it’s pretty hard to tell. We know from human experiments that a well-trained
human, someone who’s practiced at this
particular task and really understands
1,000 categories, gets about 5% error. So this is not a trivial task. And in 2016, the winning
entrant got 3% error. So just look at that
tremendous progress in the ability of computers
to resolve and understand computer imagery and
have computer vision that actually works. This is remarkably
important in the world, because now we have
systems that can perceive the world around us and we
can do all kinds of really interesting things with that. We’ve seen similar progress in
speech recognition and language translation and
things like that. So for the rest of the talk,
I’d like to kind of structure it around this nice
list of 14 challenges that the US National
Academy of Engineering put out and felt like
these were important things for the science and
engineering communities to work on for the
next 100 years. They put this out
in 2008 and came up with this list of 14 things
after some deliberation. And I think you’ll
agree that these are sort of pretty good
large, challenging problems, and if we actually make progress on them, we’ll actually have a lot of progress in the world. We’ll be healthier. We’ll be able to
learn things better. We’ll be able to develop
better medicines. We’ll have all kinds of
interesting energy solutions. So I’m going to talk
about a few of these. And the first one
I’ll talk about is restoring and improving
urban infrastructure. So we’re on the cusp of the sort
of widespread commercialization of a really interesting
new technology that’s going to really change how we
think about transportation. And that is autonomous vehicles. And this is a problem
that has been worked on for quite a while,
but it’s now starting to look like it’s
actually completely possible and commercially
viable to produce these things. And a lot of the
reason is that we now have computer vision and
machine learning techniques that can take in sort
of raw forms of data that the sensors on
these cars collect. So they have the spinning
LIDARs on the top that give them 3D point cloud data. They have cameras in lots
of different directions. They have radar in the front
bumper and the rear bumper. And they can really take
all this raw information in, and with a deep
neural network, fuse it all together to build a high
level understanding of what is going on around the car. Oh, there’s another car to my side; there’s a pedestrian up here to the left; there’s
a light post over there. I don’t really need to
worry about that moving. And really help to understand
the environment in which they’re operating and
then what actions they can take in the world
that are both legal, safe, obey all the traffic laws,
and get them from A to B. And this is not some
distant far-off dream. Alphabet’s Waymo
subsidiary has actually been running tests
in Phoenix, Arizona. Normally when they
run tests, they have a safety driver
in the front seat, ready to take over
if the car does something kind of unexpected. But for the last
year or so, they’ve been running tests in
Phoenix with real passengers in the backseat and no safety
drivers in the front seat, running around suburban Phoenix. So suburban Phoenix is a
slightly easier training ground than, say, downtown
Manhattan or San Francisco. But it’s still something that
is not really far off. It’s something that’s
actually happening. And this is really
possible because of things like machine
learning and the use of TensorFlow in these systems. Another one that I’m
really, really excited about is advance
health informatics. This is a really
broad area, and I think there’s lots
and lots of ways that machine learning and
the use of health data can be used to make
better health care decisions for people. So I’ll talk about one of them. And really, I think
the potential here is that we can use
machine learning to bring the wisdom of experts
through a machine learning model anywhere in the world. And that’s really a
huge, huge opportunity. So let’s look at this
through one problem we’ve been working
on for a while, which is diabetic retinopathy. So diabetic retinopathy is
the fastest growing cause of preventable
blindness in the world. And screening matters: if you’re at risk for this, if you have diabetes or early symptoms that make it likely you might develop diabetes, you should really get screened every year. So there’s 400 million
people around the world that should be screened every year. But the screening is
really specialized. Most doctors can’t do it. You really need ophthalmologist-level training in order to do this effectively. And the impact of the
shortage is significant. So in India, for
example, there’s a shortage of
127,000 eye doctors to do this sort of screening. And as a result,
45% of patients with this disease suffer either full or partial vision loss before they’re actually diagnosed and then treated. And this is completely
tragic because this disease, if you catch it in time,
is completely treatable. There’s a very simple
99% effective treatment; we just need to make sure that the right people get treated at the right time. So what can you do? So, it turns out diabetic
retinopathy screening is also a computer vision
problem, and the progress we’ve made on general
computer vision problems where you want to take a
picture and tell if that’s a leopard or an aircraft
carrier or a car actually also works for
diabetic retinopathy. So you can take a
retinal image, sort of the raw data that comes off the screening camera, and try to feed that into a model that
predicts 1, 2, 3, 4, or 5. That’s how these
things are graded, 1 being no diabetic retinopathy,
5 being proliferative, and the other numbers
being in between. So it turns out you can
get a collection of retinal images and have
ophthalmologists label them. Turns out if you ask
two ophthalmologists to label the same
image, they agree with each other 60% of the time
on the number 1, 2, 3, 4, or 5. But perhaps slightly
scarier if you ask the same ophthalmologist
to grade the same image a few hours apart, they
agree with themselves 65% of the time. But you can fix this by actually
getting each image labeled by a lot of
ophthalmologists, so you’ll get it labeled by
seven ophthalmologists. If five of them say it’s a 2,
and two of them say it’s a 3, it’s probably more
like a 2 than a 3. Eventually, you have
a nice, high quality data set you can train on. Like many machine
learning problems, high quality data is the
right raw ingredient. But then you can
apply, basically, an off-the-shelf computer vision
model trained on this data set. And now you can
get a model that is on par or perhaps slightly
better than the average board-certified ophthalmologist in the US, which is pretty amazing.
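As a rough sketch of what such an off-the-shelf model might look like in code: a standard pretrained image backbone with a new five-class head for the retinopathy grades. The backbone choice and input size are assumptions for illustration, not the published setup:

```python
import tensorflow as tf

# Pretrained ImageNet backbone, reused as a feature extractor for retinal images.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    # Five outputs: grades 1 (no retinopathy) through 5 (proliferative).
    tf.keras.layers.Dense(5, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(labeled_retinal_images, grades, epochs=...)   # labeled dataset assumed
```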
It turns out you can actually do better than that. And if you get the data labeled
by retinal specialists, people who have more training
in retinal disease and change the protocol
by which you label things, you get three
retinal specialists to look at an image, discuss
it amongst themselves, and come up with what’s called
a sort of adjudicated assessment and one number. Then you can train
a model and now be on par with retinal
specialists, which is kind of the gold standard
of care in this area. And that’s something you can
now take and distribute widely around the world. So one issue particularly with
health care kinds of problems is you want explainable models. You want to be able to
explain to a clinician why we think this person has moderate diabetic retinopathy. So you can take a
retinal image like this, and one of the things
that really helps is if you can show in
the model’s assessment why this is a 2 and not a 3. And by highlighting
parts of the input data, you can actually make this more
understandable for clinicians and enable them to really sort
of get behind the assessment that the model is making.
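One common way to do that kind of highlighting is a gradient-based saliency map: take the gradient of the predicted grade’s score with respect to the input pixels and see which pixels move the prediction most. A minimal sketch, assuming `model` is a trained classifier and `image` is a preprocessed retinal image batch:

```python
import tensorflow as tf

def saliency_map(model, image):
    """Highlight which input pixels most influence the predicted class."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        probs = model(image)                       # shape (1, num_classes)
        top_class = tf.argmax(probs[0])
        score = probs[0, top_class]
    grads = tape.gradient(score, image)            # d(score) / d(pixel)
    return tf.reduce_max(tf.abs(grads), axis=-1)   # per-pixel importance
```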
And we’ve seen this in other areas as well. There’s been a lot of
work on explainability, so I think the notion that
deep neural networks are sort of complete black
boxes is a bit overdone. There’s actually a
bunch of good techniques that are being
developed and more all the time that
will improve this. So a bunch of advances depend on
being able to understand text. And we’ve had a lot of
really good improvements in the last few years on
language understanding. So this is a bit of
a story of research and how research builds
on other research. So in 2017, a collection of
Google researchers and interns came up with a new kind of model
for text called the Transformer model. So unlike recurrent
models where you have kind of a
sequential process where you absorb one word
or one token at a time and update some internal
state and then go on to the next token,
the Transformer model enables you to process a whole
bunch of text, all at once in parallel, making it much
more computationally efficient, and then to use attention
on the previous text to really focus on: if I’m
trying to predict what the next word is, what are
other parts of the context to the left that are
relevant to predicting that?
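The core of that parallel, attention-based processing is scaled dot-product attention. A simplified sketch of the computation (the real Transformer adds multiple heads, masking, positional information, and feed-forward layers):

```python
import tensorflow as tf

def scaled_dot_product_attention(queries, keys, values):
    # Every position scores every other position at once -- no sequential loop.
    scores = tf.matmul(queries, keys, transpose_b=True)
    scores /= tf.math.sqrt(tf.cast(tf.shape(keys)[-1], tf.float32))
    weights = tf.nn.softmax(scores, axis=-1)   # how much each position attends to the others
    return tf.matmul(weights, values)          # weighted combination of the values
```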
So that paper was quite successful and showed really good results
on language translation tasks with a lot less compute. So the BLEU score there, in the first two columns for English to German and English to French, higher is better. And then the compute
cost of these models shows that this is getting
sort of state of the art results at that time, with
10 to 100x less compute than other approaches. Then in 2018, another
team of Google researchers built on the idea
of Transformers. So everything you see
there in a blue oval is a Transformer
module, and they came up with this approach
called Bidirectional Encoder Representations from
Transformers, or BERT. It’s a little bit
shorter and more catchy. So BERT has this
really nice property that, in addition to
using context to the left, it uses context all
around the language, sort of the surrounding
text, in order to make predictions about text. And the way it
works is you start with a self-supervised
objective. So the one really
nice thing about this is there’s lots and lots
of text in the world. So if you can figure out
a way to use that text to train a model to be able
to understand text better, that would be great. So we’re going to
take this text, and in the BERT
training objective, to make it self-supervised,
we’re going to drop about 15% of the words. And this is actually
pretty hard, but the model is then going
to try to fill in the blanks, essentially. Try to predict what
are the missing words that were dropped. And because we actually
have the original words, we now know if the model
is correct in its guesses about what goes in the box. And by processing trillions
of words of text like this, you actually get a
very good understanding of contextual cues
in language and how to actually fill in the blanks
in a really intelligent way. And so that’s essentially the
training objective for BERT. You take text, you drop 15% of it, and then you try to predict those missing words.
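A small sketch of that fill-in-the-blank objective, just to make it concrete: randomly hide about 15% of the token IDs and keep the originals as the prediction targets. (The actual BERT recipe has a few more wrinkles, such as sometimes substituting random tokens.)

```python
import tensorflow as tf

def mask_tokens(token_ids, mask_id, mask_rate=0.15):
    """Hide ~15% of tokens; the originals become the labels to predict."""
    mask = tf.random.uniform(tf.shape(token_ids)) < mask_rate
    inputs = tf.where(mask, tf.fill(tf.shape(token_ids), mask_id), token_ids)
    # Positions that were not masked get a sentinel label and are ignored by the loss.
    labels = tf.where(mask, token_ids, tf.fill(tf.shape(token_ids), -100))
    return inputs, labels
```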
And one key thing that works really well is this two-step recipe. Step one: you can pre-train a model
on lots and lots of text, using this fill-in-the-blank
self-supervised objective function. And then step two, you can
then take a language task you really care about. Like maybe you want to predict,
is this a five-star review or a one-star review
for some hotel, but you don’t have
very much labeled text for that actual task. You might have 10,000
reviews and know the star count of each review. But you can then
fine-tune the model, starting with the model
trained in step one on trillions of words
of text and now use your paltry 10,000
examples for the text task you really care about. And that works extremely well. So in particular, BERT gave
state-of-the-art results across a broad range of different
text understanding benchmarks in this GLUE benchmark
suite, which was pretty cool. And people have
been using BERT now in this way to improve all
kinds of different things all across the language
understanding and NLP space. So one of the grand challenges
was to engineer the tools of scientific discovery. And I think it’s pretty clear
machine learning is actually going to be an important
component of making advances in a lot of these other
grand challenge areas, things like autonomous vehicles
or other kinds of things. And it’s been really satisfying
to see what we’d hoped would happen when we released
TensorFlow as an open source project has actually
kind of come to pass, as we were hoping, in
that lots of people would sort of pick
up TensorFlow, use it for all kinds of things. People would improve
the core system. They would use it for tasks
we would never imagine. And that’s been
quite satisfying. So people have done
all kinds of things. Some of these are
uses inside of Google. Some are outside in
academic institutions. Some are scientists working
on conserving whales or understanding
ancient scripts, many kinds of things,
which is pretty neat. The breadth of uses
is really amazing. These are the 20 winners of the
Google.org AI Impact Challenge, where people could
submit proposals for how they might
use machine learning and AI to really tackle
a local challenge they saw in their communities. And they have all
kinds of things, ranging from trying to predict
better ambulance dispatching to identifying sort of
illegal logging using speech recognition
or audio processing. Pretty neat. And many of them are
using TensorFlow. So one of the things
we’re pretty excited about is AutoML, which is
this idea of automating some of the process
by which machine learning experts sit down and
sort of make decisions to solve machine learning problems. So currently, you have a machine
learning expert sit down, they take data, they
have computation. They run a bunch of experiments. They kind of stir
it all together. And eventually,
you get a solution to a problem you
actually care about. One of the things we’d
like to be able to do, though, is see if we could
eliminate a lot of the need for the human machine learning
expert to run these experiments and instead, automate the
experimental process by which a machine learning expert
comes by a high quality solution for a problem
you care about. So lots and lots
of organizations around the world have
machine learning problems, but many, many of
them don’t even realize they have a
machine learning problem, let alone have people
in their organization that can tackle the problem. So one of the earliest
pieces of work our researchers did in the
space was something called neural architecture search. So when you sit down and
design a neural network to tackle a particular
task, you make a lot of decisions about
shapes of this or that, and should you use 3 by
3 filters at layer 17 or 5 by 5, all kinds of
things like this. It turns out you can
automate this process by having a model-generating model and training the model-generating model based on feedback about how well the
models that it generates work on the problem
you care about. So the way this will
work, we’re going to generate a bunch of models. Those are just descriptions
of different neural network architectures. We’re going to train each
of those for a few hours, and then we’re going to
see how well they work. And then use the
accuracy of those models as a reinforcement learning
signal for the model-generating model, to steer it
away from models that didn’t work very
well and towards models that worked better. And we’re going to
repeat many, many times. And over time, we’re going to get better and better by steering the search to the parts of the space of models that worked well.
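As a toy illustration of that search loop, here is a runnable sketch that samples small architectures at random, trains each one briefly on a synthetic task, and keeps the best; the real system replaces the random sampling with a learned, reinforcement-trained model-generating model:

```python
import random
import numpy as np
import tensorflow as tf

# Synthetic stand-in task; the real search trains on the problem you care about.
x = np.random.randn(256, 20).astype("float32")
y = (x.sum(axis=1) > 0).astype("int32")

best_acc, best_arch = 0.0, None
for _ in range(5):
    # Sample a candidate architecture: a few dense layers of random widths.
    arch = [random.choice([16, 32, 64]) for _ in range(random.randint(1, 3))]
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(units, activation="relu") for units in arch]
        + [tf.keras.layers.Dense(2, activation="softmax")])
    model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
    history = model.fit(x, y, epochs=3, validation_split=0.25, verbose=0)
    accuracy = history.history["val_accuracy"][-1]    # the feedback signal
    if accuracy > best_acc:
        best_acc, best_arch = accuracy, arch
print("best architecture:", best_arch, "validation accuracy:", best_acc)
```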
And so it comes up with models that look a little strange, admittedly. A human probably would
not sit down and wire up a sort of machine learning,
computer vision model exactly that way. But they’re pretty effective. So if you look at
this graph, this shows kind of the best human
machine learning experts, computer vision experts,
machine learning researchers in the world, producing a
whole bunch of different kinds of models in the last
four or five years, things like ResNet-50, DenseNet-201, Inception-ResNet,
all kinds of things. That black dotted line
is kind of the frontier of human machine
learning expert model quality on the y-axis
and computational cost on the x-axis. So what you see is as
you go out the x-axis, you tend to get more accuracy
because you’re applying more computational cost. But what you see is
the blue dotted line is AutoML-based solutions,
systems where we’ve done this automated
experimentation instead of pre-designing any
particular architecture. And you see that it’s better
both at the high end, where you care about the most
accurate model you can get, regardless of
computational cost, but it’s also accurate
at the low end, where you care about a really
lightweight model that might run on a phone
or something like that. And in 2019, we’ve actually
been able to improve that significantly. This is a set of models
called EfficientNet, and it has kind of a slider where you can trade off computational cost and accuracy. But they’re all way
better than human sort of guided experimentation on
the black dotted line there. And this is true for image
recognition, for [INAUDIBLE]. It’s true for object detection. So the red line there is AutoML. The other things are not. It’s true for
language translation. So the black line there is
various kinds of Transformers. The red line is we gave
the basic components of Transformers to
an AutoML system and allowed it to fiddle
with it and come up with something better. It’s true for
computer vision models used in autonomous vehicles. So this was a collaboration
between Waymo and Google Research. We were able to come up with
models that were significantly lower latency for
the same quality, or they could trade it off and
get significantly lower error rate at the same latency. It actually works
for tabular data. So if you have lots
of customer records, and you want to
predict which customers are going to be spending $1,000
with your business next month, you can use AutoML to come
up with a high quality model for that kind of problem. OK. So what do we want? I think we want the
following properties in a machine learning model. So one is we tend to
train separate models for each different
problem we care about. And I think this
is a bit misguided. Like, really, we want one
model that does a lot of things so that it can build on the
knowledge in how it does thousands or millions
of different things, so that when the million
and first thing comes along, it can actually use
its expertise from all the other things it knows
how to do to know how to get into a good state
for the new problem with relatively little
data and relatively little computational cost. So these are some
nice properties. I have kind of a cartoon
diagram of something I think might make sense. So imagine we have a model like
this where it’s very sparsely activated, so different
pieces of the model have different
kinds of expertise. And they’re called upon
when it makes sense, but they’re mostly idle, so
it’s relatively computationally [INAUDIBLE] power efficient. But it can do many things. And now, each component
here is some piece of machine learning model
with different kinds of state, parameters in the model,
and different operations. And a new task comes along. Now you can imagine something
like neural architecture search becoming– if you squint at it just right, it turns into
neural pathway search. We’re going to
look for components that are really good for
this new task we care about, and maybe we’ll search
and find that this path through the model
actually gets us into a pretty good
state for this new task. Because maybe it goes
through components that are trained on
related tasks already. And now maybe we
want that model to be more accurate for
the purple task, so we can add a bit more
computational capacity, add a new component,
start to use that component for this new
task, continue training it, and now, that new
component can also be used for solving
other related tasks. And each component
itself might be running some sort of interesting
architectural search inside it. So I think something like that
is the direction we should be exploring as a community. It’s not what we’re
doing today, but I think it could be a pretty
interesting direction. OK, and finally, I’d like to
touch on thoughtful use of AI in society. As we’ve seen more and more
uses of machine learning in our products and
around the world, it’s really, really
important to be thinking carefully
about how we want to apply these technologies. Like any technology,
these systems can be used for amazing
things or things we might find a little
detrimental in various ways. And so we’ve come up with a set
of principles by which we think about applying sort of
machine learning and AI to our products. We made these public
about a year and a half ago as a way of sort of
sharing our thought process with the rest of the world. And I particularly like these. I’ll point out many of these
are sort of areas of research that are not fully
understood yet, but we aim to apply the best state-of-the-art methods, for example, for reducing bias
in machine learning models, but also continue to do
research and advance the state of the art in these areas. And so this is just kind of
a taste of different kinds of work we’re doing
in this area– how do we do machine
learning with more privacy, using things like
federated learning? How do we make models
more interpretable so that a clinician
can understand the predictions it’s making
on diabetic retinopathy sort of examples? How do we make machine
learning more fair? OK, and with that, I
hope I’ve convinced you that deep neural nets
and machine learning– you’re already here,
so maybe you’re already convinced
of this– are helping make sort of
significant advances in a lot of hard computer
science problems, computer vision, speech recognition,
language understanding. General use of machine
learning is going to push the world forward. So thank you very much, and I
appreciate you all being here. [APPLAUSE] MEGAN KACHOLIA: Hey, everyone. Good morning. Just want to say,
first of all, welcome. Today, I want to
talk a little bit about TensorFlow 2.0 and
some of the new updates that we have that are going
to make your experience with TensorFlow even better. But before I dive into
a lot of those details, I want to start off by thanking
you, everyone here, everyone on the livestream,
everyone who’s been contributing to
TensorFlow, all of you who make up the community. TensorFlow was open source
to help accelerate the AI field for everyone. You’ve used it in
your experiments. You’ve deployed in
your businesses. You’ve made some amazing
different applications that we’re so excited to
showcase and talk about, some that we get to see
a bit here today, which is one of my favorite
parts about conferences like this. And you’ve done so much more. And all of this has helped make
TensorFlow what it is today. It’s the most popular ML
ecosystem in the world. And honestly, that
would not happen without the community being
excited and embracing and using this and giving back. So on behalf of the
entire TensorFlow team, I really just first
want to say thank you because it’s so amazing to
see how TensorFlow is used. That’s one of the
greatest things I get to see about my
job, is the applications and the way folks
are using TensorFlow. I want to take a step back and
talk a little bit about some of the different user
groups and how we see them making use of TensorFlow. TensorFlow is being used across
a wide range of experiments and applications. So here, calling out
researchers, data scientists and developers, and there’s
other groups kind of in-between as well. Researchers use it
because it’s flexible. It’s flexible enough to
experiment with and push the state-of-the-art
deep learning. You heard this even
just a few minutes ago, with folks from Twitter
talking about how they’re able to use TensorFlow
and expand on top of it in order to do some
of the amazing things that they want to make use
of on their own platform. And at Google, we see examples
of this when researchers are creating advanced
models like XLNet and some of the other
things that Jeff referenced in his talk earlier. Taking a step forward,
looking at data scientists, data scientists and
enterprise engineers have said they
rely on TensorFlow for performance and scale
in training and production environments. That’s one of the big
things about TensorFlow that we’ve always emphasized and
looked at from the beginning. How can we make sure this can
scale to large production use cases? For example, Quantify
and BlackRock use TensorFlow to
test and deploy BERT in real world
NLP instances, such as text tokenization,
as well as classification. Hopping one step forward,
looking a bit at application developers,
application developers use TensorFlow because it’s easy
to learn ML on the platforms that they care about. Arduino wants to make ML
simple on microcontrollers, so they rely on TensorFlow
pre-trained models and TensorFlow Lite
Micro for deployment. Each of these groups
is a critical part of the TensorFlow ecosystem. And this is why we really wanted
to make sure that TensorFlow 2.0 works for everyone. We announced the alpha at our
Dev Summit earlier this year. And over the past
few months, the team has been working very hard to
incorporate early feedback. Again, thank you
to the community for giving us that
early feedback, so we can make sure we’re
developing something that works well for you. And we’ve been working to
resolve bugs and issues and things like that. And just last
month in September, we were excited to announce
the final general release for TensorFlow 2.0. You might be familiar with
TensorFlow’s architecture, which has always supported
the ML lifecycle from training through deployment. Again, one of the
things we’ve emphasized since the beginning when
TensorFlow was initially open sourced a few years ago. But I want to emphasize
how TensorFlow 2.0 makes this workflow even easier
and more intuitive. First, we invested in Keras,
an easy-to-use package in TensorFlow, making it
the default high level API. Many developers love
Keras because it’s easy to use and understand. Again, you heard this already
mentioned a little bit earlier, and hopefully, we’ll
hear more about it throughout the next few days. By tightly integrating
Keras into 2.0, we can make Keras work
even better with primitives like TF data. We can do performance
optimizations behind the scenes and run distributed training. Again, we really wanted
2.0 to focus on usability. How can we make it
easier for developers? How can we make it easier for
users to get what they need out of TensorFlow? For instance, Lose It, a
customized weight loss app, said they use tf.keras for
designing their network. By leveraging [INAUDIBLE] distribution strategy in 2.0, they were able to utilize the full power of their GPUs.
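The [INAUDIBLE] there presumably refers to one of the tf.distribute strategies. A minimal sketch of the pattern with MirroredStrategy, which mirrors a tf.keras model across the local GPUs (the model itself is just a placeholder):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()   # one replica per visible GPU
print("replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():                        # variables created here are mirrored
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(train_dataset, epochs=...)        # training is distributed automatically
```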
It’s feedback like this that we love to hear, and again, it’s very
important for us to know how the community
is making use of things, how the community is using 2.0,
the things they want to see, so that we can make sure we’re
developing the right framework and also make sure you
can contribute back. When you need a bit more control
to create advanced algorithms, 2.0 comes fully loaded
with eager execution, making it familiar
for Python developers. This is especially useful when
you’re stepping through, doing debugging, making sure you
can really understand step by step what’s happening. This also means
there’s less coding required when
training your model, all without having
to use session.run. Again, usability is a focus.
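For example, this tiny snippet just runs in 2.0, with no graph construction or session.run:

```python
import tensorflow as tf

# Ops execute immediately and return concrete values you can inspect while debugging.
x = tf.constant([[1.0, 2.0]])
w = tf.Variable([[3.0], [4.0]])
print(tf.matmul(x, w))   # tf.Tensor([[11.]], shape=(1, 1), dtype=float32)
```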
To demonstrate the power of training models with 2.0, I’ll show you how you can train
a state-of-the-art NLP model in 10 lines of code, using
the Transformers NLP library by Hugging Face– again,
a community contribution. This popular package hosts
some of the most advanced NLP models available today, like
BERT, GPT, Transformer-XL, XLNet, and now supports
TensorFlow 2.0. So let’s take a look. Here, kind of just
looking through the code, you can see how you
can use 2.0 to train Hugging Face’s DistilBERT
model for text classification. You can see just simply
load the tokenizer, model, and the data set. Then prepare the data set and
use tf.keras compile and fit APIs. And with a few lines of code,
I can now train my model. And with just a few more lines, we can use the trained model for tasks such as text classification, using eager execution.
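The slide’s exact code isn’t reproduced here, but the pattern looks roughly like this; exact function names and arguments vary across transformers versions, so treat it as a hedged sketch rather than the code shown on stage (the example texts and labels are made up):

```python
import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification

# Load the tokenizer and the pretrained model.
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")

# Prepare a (tiny, illustrative) labeled dataset.
texts = ["great movie", "terrible movie"]
labels = [1, 0]
encodings = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")

# Standard tf.keras compile-and-fit.
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(dict(encodings), tf.constant(labels), epochs=2, batch_size=2)
```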
Again, it’s examples like this where we can see how the community
takes something and is able to do something very
exciting and amazing by making use of the platform
and the ecosystem that TensorFlow is providing. But building and
training a model is only one part
of TensorFlow 2.0. You need the
performance to match. That’s why we worked hard to
continue to improve performance with TensorFlow 2.0. It delivers up to 3x
faster training performance using mixed precision on
NVIDIA Volta and Turing GPUs in a few lines of code with
models like ResNet-50 and BERT.
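Those few lines look roughly like this in recent TensorFlow releases (around 2.0/2.1 the same feature lived under an experimental namespace):

```python
import tensorflow as tf

# Ask Keras to do compute in float16 while keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(10),
    # Keep the final activation in float32 for numerical stability.
    tf.keras.layers.Activation("softmax", dtype="float32"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```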
As we continue to double down on 2.0 in the future, performance will remain
a focus with more models and with hardware accelerators. For example, in 2.1, so the next
upcoming TensorFlow release, you can expect TPU and
TPU pod support, along with mixed precision for GPUs. So performance is
something that we’re keeping a focus on as
well, while also making sure usability really
stands to the forefront. But there’s a lot
more to the ecosystem. So beyond model building
and performance, there are many other
pieces that help round out the TensorFlow ecosystem. Add-ons and extensions are
a very important piece here, which is why we wanted to
make sure that they’re also compatible with TensorFlow 2.0. So you can use
popular libraries, like some other ones
called out here, whether it’s TensorFlow
Probability, TF Agents, or TF Text. We’ve also introduced
a host of new libraries to help researchers
and ML practitioners in more useful ways. So for example, neural
structure learning helps to train neural networks
with structured signals. And the new Fairness
Indicators add-on enables regular computation
and visualization of fairness metrics. And these are just
the types of things that you can see kind of
as part of the TensorFlow ecosystem, these add-ons
that, again, can help you make sure you’re able to
do the things you need to do not just with your models,
but kind of beyond just that. Another valuable aspect of
the TensorFlow ecosystem is being able to analyze your
ML experiments in detail. So this is showing TensorBoard. TensorBoard is TensorFlow’s
visualization toolkit, which is what helps
you accomplish this. It’s a popular tool
among researchers and ML practitioners
for tracking metrics, visualizing model graphs and
parameters, and much more. It’s very interesting that we’ve
seen users enjoy TensorBoard so much, they’ll
even take screenshots of their experiments and
then use those screenshots to be able to share
with others what they’re doing with TensorFlow. This type of sharing and
collaboration in the ML community is something
we really want to encourage with TensorFlow. Again, there’s so
much that can happen by enabling the community
to do good things. That’s why I’m excited to share
the preview of TensorBoard.dev, a new, free, managed TensorBoard
experience that lets you upload and share your ML experiment
results with anyone. You’ll now be able to host
and track your ML experiments and share them publicly. No setup required. Simply upload your logs,
and then share the URL, so that others can see the experiments and the things that you are doing with TensorFlow.
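The flow, as a rough sketch: write logs with the usual TensorBoard callback during training, then upload the log directory with the tensorboard command-line tool and share the URL it prints. The paths and the tiny model here are placeholders:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

# Write event files that TensorBoard (and TensorBoard.dev) can read.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="./logs")
model.fit(np.random.rand(32, 4), np.random.rand(32, 1),
          epochs=5, callbacks=[tensorboard_cb], verbose=0)

# Then, from a shell, upload the logs and get a shareable URL:
#   tensorboard dev upload --logdir ./logs
```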
As a preview, we’re starting off with the [INAUDIBLE] dashboard, but over time, we’ll
be adding a lot more functionality to make the
sharing experience even better. But if you’re not looking
to build models from scratch and want to reduce some
computational cost, TensorFlow has always
made pre-trained models available through
TensorFlow Hub. And today, we’re excited to
share an improved experience of TensorFlow Hub that’s
much more intuitive, where you can find a
comprehensive repository of pre-trained models in
the TensorFlow ecosystem. This means you can
find models like BERT and others related to
image, text, video, and more that are ready to
use with TensorFlow Lite and TensorFlow.js.
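Using one of those pre-trained models typically takes a couple of lines with tensorflow_hub. A small sketch with a text-embedding module (this particular handle is just one example; tfhub.dev lists many others):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pull a pre-trained text embedding from TensorFlow Hub and use it as a layer.
embed = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
                       input_shape=[], dtype=tf.string, trainable=False)

model = tf.keras.Sequential([
    embed,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g., a binary text classifier
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```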
Again, we wanted to make sure the experience here was vastly improved
to make it easier for you to find what you need
in order to more quickly get to the task at hand. And since TensorFlow is
driven by all of you, TensorFlow Hub is hosting
more pre-trained models from the community. You’ll be able to find curated
models by DeepMind, Google, Microsoft’s AI for Earth,
and NVIDIA ready to use today with many more to come. We want to make sure that
TensorFlow Hub is a great place to find some of these
excellent pre-trained models. And again, there’s so much
the community is doing. We want to be able to
showcase those models as well. TensorFlow 2.0 also highlights
TensorFlow’s core strengths and areas of focus,
which is being able to go from model
building, experimentation, through to production, no matter
what platform you work on. You can deploy
end-to-end ML pipelines with TensorFlow Extended or TFX. You can use your models on
mobile and embedded devices with TensorFlow Lite
for on-device inference, and you can train and run
models in the browser or Node.js with TensorFlow.js. You’ll learn more about
what’s new in TensorFlow in production during the
keynote sessions tomorrow. You can learn more
about these updates by going to tensorflow.org
where you’ll also find the latest
documentation, examples, and tutorials for 2.0. Again, we want to make sure
it’s easy for the community to see what’s
happening, what’s new, and enable you to just do what
you need to do with TensorFlow. We’ve been thrilled to see
the positive response to 2.0, and we hope you continue
to share your feedback. Thank you, and I hope you
enjoy the rest of TF World. [APPLAUSE] FREDERICK REISS:
Hello, everyone. I’m Fred Reiss. I work for IBM. I’ve been working
for IBM since 2006. And I’ve been contributing to
TensorFlow Core since 2017. But my primary job at IBM is to
serve as tech lead for CODAIT. That’s the Center for
Open Source Data and AI Technologies. We are an open source lab
located in downtown San Francisco, and we work on
open source technologies that are foundational to AI. And we have on staff
44 full-time developers who work only on
open source software. And that’s a lot of developers,
a lot of open source developers. Or is it? Well, if you look across
IBM at all of the IBM-ers who are active contributors to
open source, in that they have committed code to GitHub
in the last 30 days, you’ll find that there
are almost 1,200 IBM-ers in that category. So our 44 developers are
actually a very small slice of a very large pie. Oh, and those numbers,
they don’t include Red Hat. When we closed that
acquisition earlier this year, we more than doubled our
number of active contributors to open source. So you can see that IBM is
really big in open source. And more and more, the bulk of
our contributions in the open are going towards the
foundations of AI. And when I say AI, I
mean AI in production. I mean AI at scale. AI at scale is not an algorithm. It’s not a tool. It’s a process. It’s a process that
starts with data, and then that data
turns into features. And those features train
models, and those models get deployed in applications,
and those applications produce more data. And the whole thing
starts all over again. And at the core of this
process is an ecosystem of open source software. And at the core
of this ecosystem is TensorFlow, which
is why I’m here, on behalf of IBM open
source, to welcome you to TensorFlow World. Now throughout this
conference, you’re going to see talks that speak
to all of the different stages of this AI lifecycle. But I think you’re going
to see a special emphasis on this part– moving models into production. And one of the most important
aspects of moving models into production is that when
your model gets deployed in a real-world
application, it’s going to start having
effects on the real world. And it becomes
important to ensure that those effects are
positive and that they’re fair to your clients,
to your users. Now, at IBM, here’s a
hypothetical example that our researchers put
together about a little over a year ago. They took some real
medical records data, and they produced a model
that predicts which patients are more likely to get
sick and therefore should get additional screening. And they showed that if you
naively trained this model, you end up with a model that
has significant racial bias, but that by deploying
state-of-the-art techniques to adjust the data set and the
process of making the model, they could substantially reduce
this bias to produce a model that is much more fair. You can see a Jupyter Notebook
with the entire scenario from end to end, including
code and equations and results, at the URL down here. Again, I need to emphasize this
was a hypothetical example. We built a flawed
model deliberately, so we could show how
to make it better. But no patients were
harmed in this exercise. However, last Friday, I sat
down with my morning coffee, and I opened up the
“Wall Street Journal.” And I saw this article at
the bottom of page three, describing a scenario eerily
similar to our hypothetical. When your hypothetical
starts showing up as newspaper headlines,
that’s kind of scary. And I think it is incumbent
upon us as an industry to move forward the process, the
technology of trust in AI, trust and transparency in
AI, which is why IBM and IBM Research have released our
toolkits of state-of-the-art algorithms in this space as open
source under AI Fairness 360, AI Explainability 360, and
Adversarial Robustness 360. It is also why IBM is working
with other members of the Linux Foundation AI, in its Trusted AI committee, to move forward open
standards in this area so that we can all move
more quickly to trusted AI. Now if you’d like to
hear more on this topic, my colleague,
Animesh Singh, will be giving a talk this
afternoon at 1:40 on trusted AI for the
full 40 minute session. Also I’d like to give
a quick shout out to my other
co-workers from CODAIT who have come down here to
show you cool open source demos at the IBM booth. That’s booth 201. Also check out our
websites, developer.ibm.com and codait.org. On behalf of IBM, I’d
like to welcome you all to TensorFlow World. Enjoy the conference. Thank you. [APPLAUSE] THEODORE SUMME: Hi, I’m
Ted Summe from Twitter. Before I get started with
my conversation today, I want to do a quick
plug for Twitter. What’s great about
events like this is you get to hear people
like Jeff Dean talk. And you also get to hear
from colleagues and people in the industry that are facing
similar challenges as you and have conversations around
developments in data science and machine learning. But what’s great is
that’s actually available every day on Twitter. Twitter’s phenomenal for
conversation on data science and machine learning. People like Jeff Dean
and other thought leaders are constantly
sharing their thoughts and their developments. And you can follow that
conversation and engage in it. And not only that, but you can
bring that conversation back to your workplace and come
off looking like a hero– just something to consider. So without that shameless
plug, my name’s Ted Summe. I lead product for Cortex. Cortex is Twitter’s central
machine learning organization. If you have any questions
for me or the team, feel free to connect
with me on Twitter, and we can follow up later. So before we get into how we’re
accelerating ML at Twitter, let’s talk a little
bit about how we’re even using ML at Twitter. Twitter is largely organized
against three customer needs, the first of which is
our health initiative. That might be a little
bit confusing to you. You might think of
it as user safety. But we think about it
as improving the health of conversations on Twitter. And machine learning
is already at use here. We use it to detect spam. We can algorithmically
and at scale detect spam and protect
our users from it. Similarly, in the
abuse space, we can proactively flag content
as potentially abuse, toss it up for human
reviews, and act on it before our users even
get impacted by it. A third space where we’re
using machine learning here is something called
NSFW, Not Safe For Work. I think you’re all
familiar with the acronym. So how can we, at scale,
identify this content and handle it accordingly? Another use of machine
learning in this space. There’s more that
we want to do here, and there’s more that
we’re already doing. Similarly, the consumer
organization– this is largely what you think of,
the big blue app of Twitter. And here, the customer
job that we’re serving is helping connect our
customers with the conversations on Twitter that
they’re interested in. And one of the primary
veins in which we do this is our timeline. Our timeline today is ranked. So if you’re not familiar,
users follow accounts. Content and tweets associated
with those accounts get funneled into
a central feed. And we rank that based on your
past engagement and interest to make sure we bring forth
the most relevant conversations for you. Now, there’s lots of
conversations on Twitter, and you’re not
following everyone. And so there’s
also a job that we have to serve about bringing
forth all the conversations that you’re not
proactively following, but are still relevant to you. This has surfaced in our
Recommendations product, which uses machine learning to
scan the corpus of content on Twitter, and identify
what conversations would be most interesting
to you, and push it to you in a notification. The inverse of that
is when you know what the topics you
want to explore are, but you’re looking for the
conversations around that. That’s where we
use Twitter Search. This is another surface
area in the big blue app that we’re using
machine learning. The third job to be
done for our customers is helping connect brands
with their customers. You might think of this
as the ads product. And this is actually
the OG of machine learning at Twitter, the first
team that implemented it. And here, we use it for what
you might expect, ads ranking. That’s kind of like the timeline
ranking, but instead of tweets, it’s ads and identifying
the most relevant ads for our users. And as signals to go
into that, we also do user targeting to understand
your past engagement with ads, understand which ads are
in your interest space. And the third– oh. Yeah, we’re still good. And the third is brand safety. You might not think about this
when you think about machine learning and advertising. But if you’re a
company like United and you want to
advertise on Twitter, you want to make sure
that your ad never shows up next to a tweet
about a plane crash. So how do we, at scale,
protect our brands from those off-brand
conversations? We use machine learning
for this as well. So as you can tell,
machine learning is a big part of all of
these organizations today. And where we have shared
interests and shared investment, we want to
make sure we have a shared organization that serves that. And that’s the need for Cortex. Cortex is Twitter’s central
machine learning team, and our purpose is
really quite simple– to enable Twitter with
ethical and advanced AI. And to serve that purpose,
we’ve organized in three ways. The first is our
applied research group. This group applies the
most advanced ML techniques from industry and research
to our most important surface areas, whether they be new
initiatives or existing places. This team you can kind of think
of as like an internal task force or consultancy that we can
redeploy against the company’s top initiatives. The second is signals. When using machine
learning, having shared data assets
that are broadly useful can provide us more leverage. Examples of this would be our
language understanding team that looks at tweets
and identifies named entities inside them. Those can then be offered up
as features for other teams to consume in their own
applications of machine learning. Similarly, our media
understanding team looks at images and can create
a fingerprint of any image. And therefore, we can identify
every use of that image across the platform. These are examples of
shared signals that we’re producing that can be
used for machine learning at scale inside the company. And the third organization
is our platform team. And this is really
the origins of Cortex. Here, we provide tools
and infrastructure to accelerate ML
development at Twitter, increase the velocity
of our ML practitioners. And this is really the focus
of the conversation today. When we set out to
build this ML platform, we decided we wanted a shared ML
platform across all of Twitter. And why is that
important that it be shared across all of Twitter? Well, we want transferability. We want the great work
being done in the ads team to be, where
possible, transferable to benefit the health initiative
where that’s relevant. And similarly, if we have great
talent in the consumer team that’s interested in
moving to the ads team, if they’re on the
same platform, they can transfer without friction
and be able to ramp up quickly. So we set out with this
goal of having a shared ML platform across all of Twitter. And when we did that, we
looked at a couple product requirements. First, it needs to be scalable. It needs to be able to
operate at Twitter scale. The second, it needs
to be adaptable. This space is
developing quickly so we need a platform that can evolve
with data science and machine learning developments. Third is the talent pool. We want to make sure that we
have a development environment at Twitter that appeals to the
ML researchers and engineers that we’re hiring
and developing. Fourth is the ecosystem. We want to be able to
lean on the partners that are developing
industry leading tools so that we can focus
on technologies that are Twitter specific. Fifth is documentation. You all understand that. We want to be able
to quickly unblock our practitioners as
they hit issues, which is inevitable in any platform. And finally, usability. We want to remove
friction and frustration from the lives of our
team, so that they can focus on delivering
value for our end customers. So considering these
product requirements, let’s see how TensorFlow
is done against them. First is scalability. We validated this by
putting TensorFlow by way of our implementation
we call DeepBird against timeline ranking. So every tweet that’s
ranked in the timeline today runs through TensorFlow. So we can consider
that test validated. Second is adaptability. The novel architectures
that TensorFlow can support, as well as the custom
loss functions, allow us to react to the
latest research and employ that
inside the company. An example that we
published on this publicly is our use of a
SplitNet architecture in ads ranking. So TensorFlow has been
very adaptable for us. Third is the talent pool,
and we think about the talent pool in kind of two types. There’s the ML engineer
and the ML researcher. And as a proxy of
these audiences, we looked at the
GitHub data on these. And clearly, TensorFlow
is widely adopted amongst ML engineers. And similarly, the
arXiv community shows strong evidence
of wide adoption in the academic community. On top of this
proxy data, we also have anecdotal
evidence of the speed of ramp-up for ML
researchers and ML engineers inside the company. The fourth is the ecosystem. Whether it’s TensorBoard,
TF Data Validation, TF Model Analysis, TF Metastore,
TF Hub, TFX Pipelines, there’s a slew of these
products out there, and they’re phenomenal. They allow us to focus
on developing tools and infrastructure that is
specific to Twitter’s needs and lean on the
great work of others. So we’re really
grateful for this, and TensorFlow does great here. Fifth being documentation. Now, this is what you would go
to when you go to TensorFlow, and you see that phenomenal
documentation, as well as great education resources. But what you might
not appreciate and what we’ve come
to really appreciate is the value of the
user generated content. What Stack Overflow
and other platforms can provide in terms
of user generated content is almost as
valuable as anything TensorFlow itself can create. And so TensorFlow, given
its widespread adoption, its great TensorFlow
website, has provided phenomenal
documentation for ML practitioners. Finally, usability. And this is why we’re really
excited about TensorFlow 2.0. The orientation around
the carrier’s API makes it more user friendly. It also still continues
to allow for flexibility for more advanced users. The eager execution enables more
rapid and intuitive debugging, and it closes the gap between
ML engineers and modelers. So clearly from this
checklist, we’re pretty happy with our
engagement with TensorFlow. And we’re excited
about continuing to develop the
platform with them and push the limits
on what it can do, with gratitude
to the community for their participation and
involvement in the product, and we appreciate their
conversation on Twitter, as we advance it. So if you have any questions
for me, as I said before, you can connect with me, but
I’m not alone here today. A bunch of my colleagues
are here as well. So if you see them
roaming the halls, feel free to engage with them. Or as I shared before, you
can continue the conversation on Twitter. Here are their handles. Thank you for your time. Cheers. [APPLAUSE] CRAIG WILEY: I
just want to begin by saying I’ve been dabbling
in Cloud AI and Cloud Machine Learning for a while. And during that time,
it never occurred to me that we’d be able to come
out with something like we did today because this is only
possible because Google Cloud and TensorFlow can
collaborate unbelievably closely together within Google. So to begin, let’s talk a
little bit about TensorFlow– 46 million downloads. TensorFlow has seen massive growth over the last few years. It’s expanded from the forefront
of research, which we’ve seen earlier this morning,
to businesses taking it on as a dependency for their
business to operate on a day in, day out basis. It’s a super exciting piece. As someone who spends most
all of their time thinking about how we can bring
AI and machine learning into businesses, seeing
TensorFlow’s commitment and focus on deploying
actual ML in production is super exciting to me. With this growth, though,
comes growing pains. And part of that is things
like support, right? When my model doesn’t
do what I expected it to or my training job fails,
what options do I have? And how well does your
boss respond when you say, hey, yes, I don’t know why
my model’s not training, but not to worry, I’ve
put a question on Slack. And hopefully, someone
will get back to me. We understand that
businesses who are taking a bet on
TensorFlow as a critical piece of their hardware
architecture or their stack need more than this. Second, it can be a
challenge to unlock the scale and performance of cloud. For those of you who, like me,
have gone through this journey over the last couple
of years, for me, it started on my laptop. Right? And then eventually,
I outgrew my laptop, and so I had a gaming
rig under my desk, right? With the GPU and
eventually, there were eight gaming
rigs under my desk. And when you opened
the door to my office, the whole floor knew because
it sounded like [INAUDIBLE]. Right? But now with
today’s cloud, that doesn’t have to be the case. You can go from
that single instance all the way up to a
massive scale seamlessly. So with that, today, we bring
you TensorFlow Enterprise. TensorFlow Enterprise is
designed to do three things– one, give you Enterprise grade
support; two, cloud scale performance; and
three, managed services when and where you want them,
at the abstraction level you want them. Enterprise grade support,
what does that mean? Fundamentally what that means
is that as these businesses take a bet on TensorFlow,
many of these businesses have IT policies or requirements
that the software have a certain longevity before
they’re willing to commit to it in production. And so today, for certain
versions of TensorFlow, when used on Google Cloud,
we will extend that one year of support to a full three years. That means that if you’re
building models on 1.15 today, you can know that for
the next three years, you’ll get bug fixes and
security patches when and where you need them. Simple and scalable. Scaling from an idea
on a single node to production at massive
scale can be daunting, right? Saying to my boss, hey, I
took a sample of the data was something that previously
seemed totally reasonable, but now we’re asked to train
on the entire corpus of data. And that can take days, weeks. We can help with all of
that by deploying TensorFlow on Google Cloud, a network
that’s been running TensorFlow successfully for years
and has been highly optimized for this purpose. So, scalable across our world class architecture: the products are compatibility tested with the cloud, and their performance is optimized for the cloud and for Google’s world class infrastructure. What does this mean? So if any of you have
ever had the opportunity to use BigQuery, BigQuery
is Google Cloud’s kind of massively parallel cloud
hosted data warehouse. And by the way, if you
haven’t tried using BigQuery, I highly recommend
going out and trying it. It returns results faster
than can be imagined. We wanted to make sure we were taking full advantage of that speed in BigQuery. And so recent changes and recent pieces included in TensorFlow Enterprise have increased the speed of the connection between the data warehouse and TensorFlow by three times. Right? Now, all of a sudden, those jobs that were taking days take hours.
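To give a feel for what that connection looks like from the TensorFlow side, here is a rough sketch of streaming a BigQuery table into a tf.data pipeline with the tensorflow-io BigQuery reader. The project, dataset, table, and column names are placeholders, and the exact read_session arguments can vary between tensorflow-io releases, so treat this as an illustration rather than the definitive API.

    import tensorflow as tf
    from tensorflow_io.bigquery import BigQueryClient

    # Placeholders: substitute your own GCP project and BigQuery table.
    PROJECT_ID = "my-gcp-project"
    DATASET_ID = "my_dataset"
    TABLE_ID = "my_table"

    client = BigQueryClient()
    read_session = client.read_session(
        "projects/" + PROJECT_ID,              # parent resource
        PROJECT_ID, TABLE_ID, DATASET_ID,
        ["label", "feature_a", "feature_b"],   # columns to read
        [tf.int64, tf.float64, tf.float64],    # matching dtypes
        requested_streams=2)                   # parallel read streams

    # Each element is a dict mapping column name to tensor, ready for tf.data ops.
    dataset = read_session.parallel_read_rows()
    dataset = dataset.shuffle(10_000).batch(128)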
Unity, the gaming company, is a wonderful customer and partner of ours. You can see the quote here. Unity leverages these aspects
of TensorFlow Enterprise in their business. Their monetization products
reach more than three billion devices– three billion devices worldwide. Game developers rely on a
mix of scale and products to drive installs and revenue
and player engagement. And Unity needs to be able to
quickly test, build, scale, deploy models all
at massive scale. This allows them to
serve up the best results for their developers
and their advertisers. Managed services. As I said, TensorFlow
Enterprise will be available on
Google Cloud and will be available as part of
Google Cloud’s AI platform. It will also be available
in VMs if you’d prefer that, or in containers if you want to run them on Google Kubernetes Engine, or using Kubeflow on Kubernetes Engine.
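As one hypothetical illustration of the managed-service path, kicking off a training job on the AI Platform training service looks roughly like the command below. The job name, Cloud Storage bucket, and trainer package layout are made up for the example; the runtime version is what selects the TensorFlow release the service runs on your behalf.

    # Illustrative only: job name, bucket, and package layout are placeholders.
    gcloud ai-platform jobs submit training my_tf_job \
        --region=us-central1 \
        --runtime-version=1.15 \
        --python-version=3.7 \
        --module-name=trainer.task \
        --package-path=./trainer \
        --job-dir=gs://my-bucket/jobs/my_tf_job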
In summary, TensorFlow Enterprise offers Enterprise
grade support– that continuation, that
full three years of support that IT departments
are accustomed to– cloud scale performance so that
you can run at massive scale, and works seamlessly with
our managed services. And all of this is
free and fully included for all Google Cloud users. Google Cloud becomes the
best place to run TensorFlow. But there’s one
last piece, which is for companies for whom
AI is their business– not companies for
whom AI might help with this part of their
business or that or might help optimize this campaign
or this backend system, but for companies where AI
is their business, right? Where they’re running hundreds of thousands of hours of training a year on petabytes of data, right? Using cutting edge models to
meet their unique requirements, we are introducing
TensorFlow Enterprise with white-glove support. This is really for
cutting edge AI, right? Engineer-to-engineer
assistance when needed. Close collaboration
across Google allows us to fix bugs
faster if needed. One of the great opportunities
of working in cloud, if you ask my kids,
they’ll tell you that the reason I
work in cloud AI and in kind of machine learning
is in an effort to keep them from ever learning to drive. They’re eight and 10 years
old, so I need people to kind of hurry along
this route, if you will. But one of the customers
and partners we have is Cruise Automation. And you can see here,
they’re a shining example of the work we’re doing. On their quest towards
self-driving cars, they’ve also experienced
hiccups and challenges and scaling problems. And we’ve been a
critical partner for them in helping ensure that they can
achieve the results they need to, to solve this kind
of generation-defining problem of autonomous vehicles. You can see not
only did we improve the accuracy of their models,
but also reduce training times from four days down to one day. This allows them to iterate at
speeds previously unthinkable. So none of this, as I said,
would have been possible without the close collaboration
between Google Cloud and TensorFlow. I look back on Megan’s
recent announcement of TensorBoard.dev. We will be looking at bringing
that type of functionality into an enterprise environment
as well in the coming months. But we’re really, really excited
to get TensorFlow Enterprise into your hands today. To learn more and
get started, you can go to the link, as well as attend the sessions later today. And if you are on the
cutting edge of AI, we are accepting applications
for the white-glove service as well. We’re excited to bring
this offering to teams. We’re excited to bring
this offering to businesses that want to move into a place
where machine learning is increasingly a part of
how they create value. Thank you very much
for your time today. KEMAL EL MOUJAHID:
Hi, my name is Kemal. I’m the product
director for TensorFlow. So earlier, you heard
from Jeff and Megan about the product direction. Now, what I’d like to talk
about is the most important part of what we’re building,
and that’s the community. That’s you. Sorry. Where’s the slide? Thank you. So as you’ve seen
in the video, we’ve got a great roadshow, 11
events spanning five continents to connect the community
with the TensorFlow team. I, personally, was
very lucky this summer, because I got to travel to
Morocco and Ghana and Shanghai, amongst other places, just
to meet the community, and to listen to your feedback. And we heard a lot
of great things. So as we’re thinking about, how
can we best help the community? It really came down
to three things. First, we would like to help
you to connect with the larger community, and to share the
latest and greatest of what you’ve been building. Then, we also would like you– we want to help you
learn, learn about ML, learn about TensorFlow. And then, we want to help
you contribute and give back to the community. So let’s start with Connect. So why connect? Well, first the community–
the TensorFlow community has really grown a lot. It’s huge– 46 million
downloads, 2,100 committers, and– again, I know that we’ve
been saying that all along, but I really want to
say a huge thank you on behalf of the TensorFlow team
for making the community what it is today. Another aspect of the community
that we’re very proud of is that it’s truly global. This is a revised map
of our GitHub stars. And, as you can see, we’re
covering all time zones and we keep growing. So the community is huge. It’s truly global. And we really want
to think about, how can we bring the
community closer together? And this is really
what initiated the idea of TensorFlow World. We wanted to create
an event for you. We wanted an event where you
could come up and connect with the rest of the
community, and share what you’ve been working on. And this has actually
started organically. Seven months ago, the
TensorFlow User Groups started, and I think now we
have close to 50. The largest one is in Korea. It has 46,000 members. We have 50 in China. So if you’re in the audience
or in the livestream, and you’re looking to this
map, and you’re thinking, wait, I don’t see a dot where I live– and you have a TensorFlow member
that you’re connecting with, and you want to start a
TensorFlow User Group– well, we’d like to help you. So please go to
tensorflow.org/community, and we’ll help you
get it started. So that next year, when
we look at this map, we have dots all over the place. So what about businesses? We’ve talked about developers. What about businesses? One thing we heard
from businesses is they have this
business problem. They think ML can help them,
but they’re not sure how. And that’s a huge
missed opportunity when we look at the
staggering $13 trillion that AI will bring to the global
economy over the next decade. So you have those
businesses on one side, and then you have partners
on the other side, who know about ML, they know
how to use TensorFlow, so how do we connect those two? Well, this was the inspiration
for launching our Trusted Partner Pilot Program, which
helps you, as a business, connect to a partner who will
help you solve your ML problem. So if you go on
tensorflow.org, you’ll find more about our
Trusted Partner program. Just a couple of
examples of cool things that they’ve been working on. One partner helped a
car insurance company shorten the insurance
claim processing time using image processing techniques. Another partner helped a
global med tech company by automating the shipping
labeling process using object recognition techniques. And you’ll hear more from
these partners later today. I encourage you to go
check out their talks. Another aspect is that
if you’re a partner, and you’re interested in
getting into this program, we also would like
to hear from you. So let’s talk about Learn. We’ve invested a lot in
producing quality material to help you learn about
ML and about TensorFlow. One thing that we did over the
summer, which was very exciting, is that for the first time, we were
a part of the Google Summer of Code. We had a lot of interest. We were able to select 20
very talented students, and they got to work
the whole summer with amazing mentors on the
TensorFlow engineering team. And they worked on
very inspiring projects going from 2.0 to Swift
to JS to TF-Agents. So we were so excited with
the success of this program that we decided to participate,
for the first time, in the Google Code-in program. So this is the same program,
but for pre-university students from 13 to 17. It’s a global online contest. And it introduces teenagers
to the world of contributing to open source development. So as I mentioned, we’ve
invested a lot this year in ML education material, but one thing we heard is that there’s a lot of different material out there. And what you want
is to be guided through pathways of learning. So we’ve worked hard
on that, and we’ve decided to announce the new
Learn ML page tensorflow.org. And what this is a
learning path curated for you by the TensorFlow
team, and organized by level. So you have from
beginners to advanced. You can explore books,
courses, and videos to help you improve your
knowledge of machine learning, and use that knowledge and
use TensorFlow, to solve your real-world problem. And for more exciting
news that will be available on the
website, I’d like to play a brief video
by our friend Andrew Ng. [VIDEO PLAYBACK] – Hi, everyone. I’m in New York
right now, and wish I could be there to
enjoy the conference. But I want to share with
you some exciting updates. Deeplearning.ai
started a partnership with the TensorFlow
team with a goal of making world-class education
available for developers on the Coursera platform. Since releasing the Deep
Learning Specialization, I’ve seen so many of you,
hundreds of thousands, learn the fundamental
skills of deep learning. I’m delighted we’ve been
able to complement that with the TensorFlow in
Practice Specialization to help developers
learn how to build ML applications for
computer vision, NLP, sequence models, and more. Today, I want to share with
you an exciting new project that the deeplearning.ai
and TensorFlow teams have been working on together. Being able to use your models
in a real-world scenario is when machine learning
gets particularly exciting. So we’re producing a new
four-course specialization called TensorFlow Data
and Deployment that will let you take your ML
skills to the real world, deploying models to the web,
mobile devices, and more. It will be available on
Coursera in early December. I’m excited to see what you
do with these new resources. Keep learning. [END PLAYBACK] KEMAL EL MOUJAHID: All right. This is really cool. Since we started working
on these programs, it’s been pretty amazing to see
hundreds of thousands of people take those courses. And the goal of these
educational resources is to let everyone participate
in the ML revolution, regardless of what your
experience with machine learning is. And now, Contribute. So a great way to get involved
is to connect with your GDE (Google Developer Expert). We now have 126 machine
learning GDEs globally. We love our GDEs. They’re amazing. They do amazing things
for the community. This year alone, they gave over
400 tech talks, 250 workshops. They wrote 221 articles
reaching tens of thousands of developers. And one thing that
was new this year is that they helped
with doc sprints. So docs are really important. They’re critical, right? You really need
good quality docs to work on machine learning,
and often the documentation is not available in
people’s native languages. And so this is why when we
partnered with our GDEs, we launched the doc sprints. Over 9,000 API docs were updated
by members of the TensorFlow community in over 15 countries. We heard amazing
stories of power outages, and power running out, and
people coming back later to finish a doc sprint,
and actually writing docs on their phones. So if you’ve been
helping with docs, thank you, if
you’re in the room, if you’re over livestream,
thank you so much. If you’re interested in
helping translate documentation in your native language,
please reach out, and we’ll help you
organize a doc sprint. Another thing that
the GDEs help with is experimenting with
the latest features. So I want to call out
Sam Witteveen, an ML GDE from Singapore, who’s
already experimenting with TF 2.x on TPUs, and you can catch his talk later today to hear about his experience.
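For anyone curious what that experimentation involves, the general pattern for running TF 2.x on a Cloud TPU at the time looks roughly like the sketch below. The TPU name is a placeholder and the model is a toy; the point is just that once a TPUStrategy is set up, the usual Keras workflow carries over.

    import tensorflow as tf

    # Placeholder TPU name/address; on Cloud this resolves your TPU worker.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.experimental.TPUStrategy(resolver)

    # Variables created inside the scope are placed and replicated on the TPU cores.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])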
So if you want to get involved, please reach out to your GDE and start working on TensorFlow. Another really great way
to help is to join a SIG. A SIG is a Special
Interest Group, and it helps you
work on the things that you’re the most
excited about on TensorFlow. We have, now, 11 SIGs available. Addons, IO, and
Networking, in particular, really supported the
transition to 2.0 by embracing the
parts of contrib and putting them into 2.0. And SIG Build ensures that
TF runs well everywhere on any OS, any
architecture, and plays well with the Python library. And we have many other really
exciting SIGs, so I really encourage you to join one. Another really great
way to contribute is through competition. And for those of
you who were there at the Dev Summit back in March,
we launched our 2.0 challenge on DevPost. And the grand prize
was an invitation to this event, TensorFlow World. And so we would like to honor
our 2.0 Challenge winners, and I think we are lucky to
have two of them in the room– Victor and Kyle, if you’re here. [APPLAUSE] So Victor worked on
Handtrack.js, a library for prototyping hand
gesture detection in the browser. And then Kyle worked on a Python 3 package to generate N-body simulations. So one thing we heard,
too, during our travels is, oh, that hackathon was
great, but I totally missed it. Can we have another one? Well, yes. Let’s do another one. So if you go on
tfworld.devpost.com, we’re launching a new challenge. You can apply your
2.0 skills and share the latest and greatest,
and win cool prizes. So we’re really excited to see
what you’re going to build. Another great community that
we’re very excited to partner with is Kaggle. So we’ve launched
a contest on Kaggle to challenge you to build a question answering model based on Wikipedia articles. You can put your natural
language processing skills to the test and earn
$50,000 in prizes. It’s open for entry until
January 22, so best of luck. So we have a few
action items for you, and they’re listed
on this slide. But remember, we created
TensorFlow World for you, to help you connect and share
what you’ve been working on. So our main action item for
you in the next two days is really to get to know
the community better. And with that, I’d
like to thank you, and I hope you enjoy
the rest of TF World. Thank you. [APPLAUSE]
