Navigating Google Cloud Platform: A Guide for new GCP Users (Google I/O ’17)


[MUSIC PLAYING] TERRENCE RYAN: All right,
how you guys all doing? Good? All right, still a
little energy left. Great, great, great. So this is going to be
Navigating Google Cloud Platform. It’s meant to be an overview. If you are not familiar
with Google Cloud Platform, well, you’re in the right spot. So to that end, I kind of
want to get started, and ask you guys who you all are. So obviously, you’re all
I/O attendees, that’s great. But who here spends most
of their time developing? If you do that, are you mostly
front end developers, or client side, like Android developers? Or backend developers? OK, good mix. How many people
are systems people? How many do both? Because you love DevOps
or because your company won’t hire more people? Which is it? How many people here
tell developers or system people what to do, managers? We’re just at the cutoff for
where manager jokes still fly, because there’s more
of us than there are of you. Good. OK, so that gives me a good
idea of who you guys are and what I can talk about. My goal here is to
help you guys identify the pieces of Google
Cloud Platform that makes sense for you. And this is sometimes
difficult, because if you look– like this is when we show
off Google Cloud Platform, it ends up showing up
like something like this. And I sort of understand it. We actually have drills where
we test if you can identify each thing by the logo. But I think for
someone coming to it, it’s hard to know what
are all these things. My job is to tell you, from
a technical perspective, what all of this stuff does. And that’s where I’m
going to be coming from. I’m not going to be giving
you a competitive analysis. I’m not going to be talking
too much marketing stuff. I’m not going to
talk about pricing. I just want to tell
you what this stuff is. I’ll show you a couple of
demos and examples of it. And hopefully, that will
whet your appetite for it. Yes, because when
you see that list, you sometimes feel like this. So here’s what I’m
going to talk about. I’m going to talk about
computing, networking, storage, big data, administration,
development, and machine learning. I’m going to spend extra
time talking about computing and storage, because those
are places where you have to spend a little bit
more time figuring out what the right thing is. If you want a firewall, we
have a thing called a firewall. It’s pretty straightforward,
from a networking perspective. But which computing
platform you want to use is a little bit more
complicated story. So with that, I am going
to go right into computing, or doing things. So when people talk about
doing things on Google Cloud Platform, or in
computing in general, tend to approach it from
one of three directions. Virtual machines, so taking
what you have somewhere else and just lifting it over and
dropping onto Google Cloud Platform; containers,
which sort of everybody’s moving over to containers. This a big part of the zeitgeist
in the industry right now. So how do you run stuff on
containers on Google Cloud Platform? And serverless–
serverless is one of those like “marketing terms.” What it really means is, you
don’t worry about the hardware. You usually get some sort of
metered version of the service, where you pay for what you
use, and if you don’t use it, you don’t pay for it. If you guys are
familiar with the term platform as a service,
what I think happened was, platform as a service
hit the Gartner trough of disillusionment, and
people got afraid of it. So we all started calling it
serverless in the industry. Don’t tell anybody, you now
know that’s our dirty secret. But that’s serverless. So I’m going to
start with our VMs. Our VMs are– they’re VMs. I mean, they’re
virtual machines. They’re what you’re used
to if you’re dealing with other virtual machines. I think they’re
great, and I’ll show an example of why I think
they’re great in a second. But their capacity
is pretty great, from 1 to 64 processors, half
a gig of RAM all the way up to 416 gigs of RAM. We have various limits
for your persistent disk, whether it’s 65
terabytes for just plain persistent, 3 terabytes for
SSD, 200 gigs of RAM disk you can also do. We can also include GPUs in that
now, which is really awesome. Nice addition to the list. We have pre-configured
images so that if you’re running whatever flavor
of Linux you want to run, we can pretty much already
give you an image for it. We also have Windows images. If one of the things we have
is not the right fit for you, you can take one of our images
and customize it, save it as an image, and use it
in creating your VMs, so you can build your own. You can also build your own
from scratch and upload them. And we like to say they
spin up in tens of seconds. And what that means is when
you’re in your hotel room practicing before
speaking at I/O, to both you guys
here and the audience on the livestream,
when you’re practicing, it takes like 20 seconds. When you’re in front of people,
it takes like 80 to 100, and I don’t know how
it knows, but it knows. So we’re going to see how
fast it spins up today, because I’m going to do a demo. Actually, I’m going to
do a demo in a second, because I want to talk
about one more thing, which is Cloud Launcher. You may want images already
built. Software packages– let’s say you want Redis, or
you want a Barracuda firewall setup– we have pre-built images
of a whole list of services that you might want. You go to Cloud Launcher,
you spin them up, and you will take over
management of them. They’re not managed
services, but they can help speed you
along the way of getting your machine up and running. So now I’m going to
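The console flow in the demo that follows can also be scripted from the gcloud CLI. Here is a hedged sketch — the instance name, zone, machine type, and image family are illustrative, not the ones used on stage:

```shell
# Create a VM from a preconfigured image (names and zone are made up).
gcloud compute instances create demo-vm \
    --zone us-central1-b \
    --machine-type n1-standard-2 \
    --image-family debian-8 \
    --image-project debian-cloud

# SSH in without managing certs or keys yourself.
gcloud compute ssh demo-vm --zone us-central1-b
```

Deleting the instance again is a single `gcloud compute instances delete demo-vm` when you are done.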
switch over to a demo. And here we are, great. And I accidentally
locked my machine, because security first. So this is our VM console,
and there’s a couple of things I want to show off. But first I’m going to start
and create an instance. So here’s an instance. I can pick any
pre-configured image, but what’s really kind of cool
is I can go through and say, you know what, I
want a crazy image. I want 14 CPUs, because I have a
very specific workload in mind, I guess. I can go through, I
can pick an image, but I’m just going to go
through and create this. And in a moment, that’s going
to spin up and be available. But what I can show
you while we’re waiting for that is that one of
the great things about our VMs is if I hit this SSH
button, I will SSH into it through the browser. So no having to
download security certs and setting up my local
environment to hook up to it. Just wherever I
have a browser, I can hook up via SSH to my box. Now, under the covers
here, while this was blocking the
screen, our instance is already up and ready. So while we’re waiting
for the SSH to connect– there we go, there’s
command line– there’s one other thing
I want to show you– I’m just going to
leave SSH there– and that is we
look at your usage. There’s not a person
looking at your usage, but like the system
looks at your usage. And we’ll tell you, hey,
you’ve had really high memory utilization on this box. Maybe you should up it. And you can just apply
the recommendation. But it also works the other way. I could save $334 a month if
I take this recommendation and tune it down. So we want to make sure you’re
not wasting your money on us, so we give you these
alerts to tell you, hey, maybe we can run on
a different machine. So with that, I’m
going to switch back to presentation mode. That was Compute Engine. I’m going to move
on to containers. So how many people here are
fooling around with containers, Docker or otherwise? OK, how many people are
running in production? How many people said
Docker in a parking lot and a conference
sprung up around you? That’s actually how I/O came
to be, in a parking lot. I think, I don’t know. I wouldn’t trust me on that one. So containers,
containers at Google. What do you do with
containers at Google? So containers at Google, we’re
talking about Container Engine, or managed Kubernetes, at which
point you might be asking, Kuber-what-es, which is
the appropriate question. Kubernetes is a container
orchestration system. So you guys are
familiar with Docker. You take your
containerized systems, you already have them in
a small, single-instance container. But when you want
to start linking whole bunches of containers
together and building microservices based
on them, or building complex scheduled jobs,
it becomes a little bit difficult to manage them. So Kubernetes makes
that easier for you. So it is an open
source application that can run anywhere. You can run it on us,
which I recommend, but you can also
run it other places, on premises, Amazon, Azure, you
can run it all those places. And so the way
Kubernetes works is I say, Kubernetes, I have
a service that’s made up of three different containers. I would like to have
some redundancy, so I want four of
this container, three of this container,
one of this container. And I’d like you to
get me an IP address for public use for two layers of
it, the API and the front end. The backend I want
to keep private. And Kubernetes will make sure
that it starts all that up and runs for you. If you have a
public IP address– or if you want to have
a public IP address– Kubernetes will give it to you. If you don’t, it’ll
keep it private. So Kubernetes, we talked about
running on top of Kubernetes, but now we’re going to talk
about actually building a Kubernetes cluster
for yourself, if you’re running on
prem or somewhere else. So first you have to
build up all the machines. Once you build up the
machines, you then have to install
Kubernetes on it. After you install
Kubernetes on it, you have to set up a network
and get the network running. You then have to set up services
like DNS for service discovery, so that when you say, this
guy I want to be public and this one a private,
that all gets sorted out. Then, because people
like to know what’s going on with their systems,
you probably want to do logging, and you want to do monitoring. Then to go a step further,
containers are ephemeral. So you don’t want to
store data on containers. So what you want
to do is you want to build disks that live
outside the containers that can live even if the containers
themselves go up and down. If you need it, this is great,
but this is a lot of work. So the other thing you could
do is on Google Cloud Platform, we have this thing
called Container Engine. And there’s this button,
Create a Container Cluster, and that is exactly the
same thing I did without all of the steps involved. It’ll just do all
of that for you and keep maintaining the OS
and the Kubernetes version. So you don’t have to
manage any of the system, you just run on
top of Kubernetes. So I’m going to show a quick
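Both halves of that story — the one-button cluster and the desired-state description — can be sketched on the CLI. Cluster, image, and deployment names here are made up, and the flags reflect the 2017-era tooling, so treat them as assumptions:

```shell
# One command stands in for the manual build-out described above:
# machines, Kubernetes install, networking, DNS, logging, monitoring.
gcloud container clusters create demo-cluster --num-nodes 3

# Then describe desired state and let Kubernetes reconcile it.
kubectl run api --image=gcr.io/my-project/api:v1 --replicas=4
kubectl expose deployment api --type=LoadBalancer --port=80   # gets a public IP
kubectl run backend --image=gcr.io/my-project/backend:v1      # stays private
```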
demo of Kubernetes in action. So I’m going to switch
over to my laptop here. So I have a visualizer
up with nothing on it, because nothing is running yet. I’m going to switch
over to my ID here. And wrong terminal
window, switch over here. And what I’m going
to do is I’m going to deploy seven applications
onto this Kubernetes cluster that I’m running. So I’ll hit Run. The command isn’t that– it’s just a make file. We got an error. I know why. All right. So I logged in earlier, thinking
that it would save me, but no. gcloud auth login. There you go. Yup, I’m allowed to
do this, trust me. All right, I’ll
switch back here. Make [? M, ?] make
creds, and make deploy. Also helps if you
spell it right. There we go. All right, so I’m going to
deploy seven applications. There we go, now it’s working. Just had to log in. So now when I switch,
we should see– oh, there we go, now
they’re starting up. So in that time I launched
seven applications. It happened so fast that while I was
muddling around with logging in and making sure all of our
credentials are correct, it did it. So it launched seven– no, six applications, yes. So two Drupal instances, a
WordPress instance, an HD, like just a simple CRUD
application, a Node.js app. All of these
launched immediately. I already have a public IP
address for one of them. And so in the past, I would
build multiple systems for all of these. And because they’re
all different versions of the same software,
I wouldn’t want to run them on the same box. Now I can throw all of my
boxes into a Kubernetes cluster and run all of
these applications in one space, saving a
whole lot of resources and making my deploy times
faster and everything working. So that is a quick demo
of Kubernetes in action. And with that, I’m going to
switch back to presenting. And I’m going to go through. So that is Container Engine. It’s managed Kubernetes. All of the system, all of the
hardware, all of the network, and all of that’s
handled for you. You get smart defaults
for monitoring and logging and all that. And you also get the
capability of autoscaling. The other way to run containers
on Google Cloud Platform is something we call App
Engine Flexible Environment. Now, that’s going
to come back up when we talk about serverless. But under the covers,
App Engine Flexible is running just Docker. And so our runtimes are all
Docker images that you can use. Now, what you can do is
use a custom runtime, which is your own Dockerfile,
with your own application that you upload to App Engine. What you get out of
that is autoscaling. So we can scale it from
one to as much load as you can throw at it. And you also get
an endpoint for it. So you don’t have to wire up all
of the networking and the certs and all of that to get SSL,
you can just use our endpoint, and you’re good to go. So we also have a couple of
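A minimal sketch of the custom-runtime flow just described — the `app.yaml` contents are recalled from the Flexible Environment docs of the time, so verify them before relying on this:

```shell
# app.yaml for a custom runtime: your own Dockerfile sits next to it.
cat > app.yaml <<'EOF'
runtime: custom
env: flex
EOF

# Deploy; App Engine builds the image, autoscales it,
# and puts it behind an HTTPS endpoint for you.
gcloud app deploy
```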
helper apps for containers. Container Registry
is a place where you can build your
own containers, store them, and then
store the images there, and then you’re locked behind
your Google Cloud credentials. So if you want to share an
image with a group of people, but not make it public,
you can do that. You can also make
them public there. That also works, too. You can use them from
any Docker environment, including Docker
on your desktop, or Kubernetes in production. Container Builder will take your
Dockerfiles and application and just build an image for
you, and host it in registry. So instead of having
to build all that stuff on your local laptop
and then upload it, you can just upload the smaller
files and the applications, and it’ll be fine. All right, so now we’re going
to switch to serverless. So we’ll start with App
Engine Flexible, which I just mentioned. So it does run Docker, and
you can build your own. But under the covers,
we have these runtimes that are all ready for you. So Java, Python,
Node, Ruby, PHP, Go. And what that gives
you is the ability to kind of just
upload your code. You don’t have to worry
about the Dockerfile at all, as long as it’s one
of those runtimes. And we’ll just run it for you. And so that same
kind of advantage you got before of just having
a scale and one endpoint is already there for you. Now, we also have
this thing called App Engine Standard, which is,
if you are familiar with App Engine from before, like you’ve
been to a couple of these and seen us talk about
it, is the App Engine that we’ve had out for a while. It’s restricted
in its languages. And the restrictions
go a little deeper. You can’t write to
the file system, you can’t use extensions. You have to write in only
the language we provide you. But in return for that,
you get very, very quickly scaling applications, so if
you have very variable load, and sometimes it
rests at zero, you can use App Engine
to manage that, and only pay for it when you need it. And I’m going to show an example
of this, of just how rapidly App Engine can scale. The last piece of this
is Cloud Functions. App Engine, the
unit of abstraction for your serverless
app, is an application. So all the wiring,
all the routing, all of the pieces that make
your code eventually surface up to an endpoint. With Functions, we get rid
of all the application, and all you’re doing
is writing a function that you’re making serverless. So the canonical example of
this is someone uploads a file and you want to send an email. You don’t want to build
a whole application to make that happen. What you can do is
just, basically, listen for the event of a
file being uploaded into Cloud Storage, fire that off
to a Cloud Function that takes the info about the
file, and sends the email that you need to run. So that is Cloud Functions. So I’m going to
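That canonical example can be wired up roughly like this. The function and bucket names are made up, and the trigger flag is the 2017-era beta one, so double-check it against current docs:

```shell
# Deploy a function that fires whenever a file lands in the bucket.
# The function's code (in the current directory) gets the file's
# metadata as its event payload and can send the email from there.
gcloud beta functions deploy sendUploadEmail \
    --trigger-bucket my-uploads-bucket \
    --entry-point sendUploadEmail
```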
switch now, and I said I was going to show a
quick example of App Engine’s capacity for load. So I’m going to do that. Here I have App Engine
running on a real URL. The reason why I
have faked up the URL is because people will ping
it, and the whole point of this is that, it’s a cold system. So if you ping it while
the URL is up there, it will warm up the system and
defeat the purpose of the demo. I have 20 VMs already
built and ready. And they have ApacheBench. And they’re going to send a
whole lot of load at App Engine. And we’ll be able to see
just how fast App Engine can respond to load through this. So I’m going to
type a command here. I am going to, basically– I know it’s hard to see at
the bottom– but basically, I’m going to send– using ApacheBench,
each node is going to send 500 hits at
that URL, and it’s going to do it with 100
concurrent hits at a time. So it should be a pretty
quick load test here. Somebody hit the
URL, no, no, no. There we go, refreshed. It’ll still work, it’ll be fine. So you see the guys
bouncing up at the top? That’s when they’re
starting to send load. And you see App
Engine is spinning up copies of the code
that handles that URL. So as the load increases,
as all these machines are hammering that
URL, App Engine says, fine, we can handle it. We’ll just spin up more
instances to handle it. So this should go to
precisely 10,000 hits. And now that I’ve
said that, I’ve guaranteed that it won’t,
because demo gods don’t like me. But we’re spinning
down, and boom, there was that one extra one that
we had at the beginning. So we went to 10,000
hits, plus one. But that’s how quickly
App Engine standard can respond to your needs. With that, I’m going to
switch back to the app. Yep, there we go. And I’m going to move forward. All right, so that is
everything about computing. And now I’m going to go through
just a quick list of networking to clear our palates
before we go to storage. Networking, what
do people ask for? I am hitting the wrong button. There we go, all right. So you want to hook up a
network on your own system, in your own Google
Cloud project. Cloud Virtual Network
is the way to do that. If you want to peer
directly with Google so you can get faster throughput
between your site and Google, you can use Cloud
Interconnect to do that. If you need a VPN, we
do have a VPN available. It is gateway to
gateway only, so not hooking up your road warriors. You’re basically just securing a
connection to another location. So your premises to Google,
or say, AWS to Google instead. We also have a firewall, that’s
another thing that you want. It’s hidden in the
network settings, but it is there for you. Other things that people
ask for, do we have a CDN? Yes, we have a CDN. You activate it through
our load balancers, and it uses HTTP headers
to determine whether or not something should be cached. We have DNS servers. Little trivia– the only product
I’ve ever seen that we have, that has a 100% uptime SLA
is our DNS authoritative name servers. So you can run your
DNS on us and take advantage of our very
available name servers. We also have load balancers
and other things people ask for on the networking side. We do HTTP load balancing, SSL
load balancing, internal load balancing, so just for
resources in your project to talk to one another. And then finally,
network load balancing, which can do protocols
other than just HTTP. What if I need something
that isn’t on this list? So first off, that was
not a comprehensive list of everything on
the network side. But don’t be alarmed, you can– with Compute Engine–
if you can build it, you can host it on
us, and it’ll work. And we actually have a
couple of pre-built solutions for Barracuda,
Brocade, CloudFlare, and a couple of other networking
solutions already available for you. All right, going through this quickly. And now I’m going to talk about
storage, or keeping things. People think about storage
in kind of these three ways– files, databases, and big data. When you’re talking
about files at Google, you’re talking
about Cloud Storage. If you guys are familiar with
products from other cloud providers, the API
matches pretty well. We have one set of
interfaces, so one API for dealing
with all of storage, even though there are a
couple of different types that you can run. And I’ll go through
those types in a second. But we kind of divide
this up into three spaces. One is public content,
content that you want to share with your
users outside of you. And so you want this to be
globally available and highly available. Process data, instead,
is data that you’re going to run crunching on, right? You’re going to
do a big data job. You’re going to analyze a whole
bunch of photos or something. You don’t need that to
be globally available, you just need it to be close
to the processing power that you’re throwing at it. And then long term
storage is stuff that we hope we never have
to use, like compliance data, or locks, or legal stuff. Like stuff that we never
really want to use, but we have to hold onto. And so we’ll talk about those in
terms of Google Cloud Storage. So starting off with stuff
that’s publicly available. We have a set of servers that’s
publicly available at 99.95% availability. It’s geo redundant. And this is for
frequently accessed stuff, like world audience,
websites, video streaming, gaming, mobile. You want this data to be as
close to people as possible, so you put it in
this type of storage and we’ll make sure it’s
replicated around the globe. Storage cost is $0.026
per gigabyte per month, and we call it multi-region. Now, for process data, it’s
slightly less available. And it’s only stored in a very
specific geographic region. So this is for, you want
to process the data there, and you want it close to
your processing power, so you just leave it in one
area, and just leave it there. So it is cheaper
for that reason. It’s only $0.02 per gigabyte per
month, and we call it regional. Now, long term storage,
we have two options. We have Nearline, which is,
again, slightly less available, and has a minimum amount of time
you have to store it, 30 days. Now, that doesn’t mean
you can’t delete it. If you put something up there
and you delete it, cool, we’ll get rid of it. But it means we’re going to
charge you for 30 days for it, even if you delete it. This is for things like
backups and long tail media. You’ve got videos that are only
accessed every once in a while. You don’t want to store
on the more expensive one. It’s OK if it takes a little
bit longer for someone to serve up the first byte and
get a little charge for running it, because it’s
cheaper in the long run. You get charged for the
storage and also for retrieval. So there’s a little
bit of incentive to not put stuff there
that you’re going to be accessing frequently. Finally, there’s
Coldline, which is our cheapest storage at
$0.007 per gigabyte per month. It’s a little bit more
expensive to retrieve and has a longer minimum store. So you’re incentivized. You don’t want to
pull this stuff. This is for, like, I
need compliance data, I need regulatory data, or
disaster recovery information. So that’s Coldline. All right, now
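The four classes just described map to a flag at bucket-creation time. A hedged sketch, with made-up bucket names:

```shell
# Public, frequently accessed content: multi-regional.
gsutil mb -c multi_regional -l us gs://my-public-assets
# Data you crunch near your compute: regional, in one location.
gsutil mb -c regional -l us-central1 gs://my-batch-input
# Backups and long-tail media: Nearline (30-day minimum store).
gsutil mb -c nearline -l us gs://my-backups
# Compliance and disaster recovery: Coldline (cheapest to store).
gsutil mb -c coldline -l us gs://my-archive
```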
we’re going to talk about two sets of application
data, or database data. SQL or NoSQL? I’m not going to get into this. This is a giant
fight, and we can have a whole conference just
on which one should you choose. I’m not going to get into that. Choose whatever you want. I’ll talk about how we can
help you in both those places. So with SQL, we have Cloud
SQL, which is traditional SQL that you’re used to that
is vertically scalable. So you want more
power, you have to make your machine bigger and bigger. And then eventually you run
out of head space on that. They’re based on our
virtual machines, so as powerful as our
virtual machines are, Cloud SQL can usually
operate at that level. We manage your backups
to make it really easy to set up replicas,
and you don’t ever have to manage the system. We will manage the system,
you just run SQL on top of it. We have MySQL, which we’ve
had for quite some time, and we have Postgres,
which is available in beta. We released that in March. It will, at some
point, move to GA. I don’t know when that is
off the top of my head. We also have this thing
called Cloud Spanner. Now, Cloud SQL is
vertically scalable, Cloud Spanner is
horizontally scalable. And the idea behind it
is it’s structured data that you query using SQL,
that’s both highly available and strongly consistent. Now, for people who know the
cap theorem, that’s impossible. So we hired the guy that wrote
the cap theorem, made him a VP, and he’s like, it’s OK. Which is a very abbreviated
version of the truth with that. Basically, our networking
is reliable enough that you don’t really have
to worry about partitions, is sort of the way we
hem and haw over that. But the point here
is that if you want to make it faster, or
bigger, or have more resources, you horizontally
scale it instead of vertically scaling it. Which means it could
be globally available, and you do really, really
cool things with it. It’s important to note
though, that it is– how did I say this before– it’s a BMW, not a Volkswagen.
If you’re a hobbyist, be very careful trying
Cloud Spanner out, because it can get
expensive quickly if you don’t have the load. I mean, if you’re
running a big company and you’re having
tremendous amounts of load, Cloud Spanner’s awesome
for you, but watch yourself if you’re a hobbyist. Now we’ll switch over to NoSQL. NoSQL, we have Cloud
Datastore, which is a document-based, indexable,
giant, giant NoSQL store. It’s proprietary. It is not based on any
other open sourced product. And it works very
well with App Engine and is very, very easily
accessible from Compute Engine. Bigtable, not a lot
of people use it for application
development, but some people do because it’s very,
very low latency. It is columnar, but you
don’t query it with SQL. If you guys are familiar with
HBase, it is HBase compatible. And this is another tool
where you need scale to make it cost effective. So if you have less
than a terabyte of data, we recommend not using it. We recommend using probably
Datastore or another solution. So that’s storage, that’s
files, that’s databases. Let’s talk about big data. Where do you store big data
on Google Cloud Platform? Any of these,
because big data is just data that to get
insight out of, you need to run additional processes
on that are not supported by– MySQL can be big data
if it’s big enough, and you can’t get insight
out of it using just SQL. So any of these
can store big data, with the caveat that Cloud SQL
is only vertically scalable. So at some point you’re going
to hit that 65 terabytes of disk space per VM, and you’re
going to have to figure out some other way of scaling that. Now, a really quick
thing on big data. We also have this
tool called BigQuery, which is awesome for running
ad hoc data analysis. Basically, you pipe in
semi-structured data to it. We can analyze it quickly. And you can use SQL to
analyze the data, which is really awesome. Instead of having to
spin up a Hadoop job, and worry about how to
structure your query, you just use SQL that
everyone’s, at this point, familiar with and can roll with. How do we do it? Basically, when you do big
data analysis, what you do is you take as many
machines as you can, you throw it at the job and see
how fast you can get it done. Well, we’re Google. We have some spare
machines lying around. So what we do is we
back them into BigQuery, and BigQuery can scale
out as wide as it needs to in order to do your query. I’m going to just give you
a quick example of this in action. I’m going to switch
over here to this. All right, this is BigQuery. I have a query here. I have it in SQL. I’m going to try to
explain it before it gets done running the query. So let me see if I can do this. I’m going to have to
talk fast to do it. So I have a record of
every Wikipedia article that was searched
between 2013 and 2014. I am searching
through all of them to count up how many requests were run against
Wikipedia during that time. So it is searching
through, and finding out, that we searched, as
a whole, as a people, we searched 1.1 trillion queries
on Wikipedia in that time. So how BigQuery
came about finding that answer is it searched through
130 billion rows of Wikipedia data, reduced it down
to 65,000 records, and then further
reduced that down to a total count
of the whole thing. And we did all that,
like we processed close to a terabyte of data,
and we did all of that query in 13 seconds. So that is a quick
BigQuery demo. I’m going to switch
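A query of that shape can be run from the command line with the `bq` tool. The public benchmark table named here is illustrative — it is not necessarily the exact table used on stage:

```shell
# Sum request counts across a large public Wikipedia benchmark table.
bq query --use_legacy_sql=false \
  'SELECT SUM(requests) AS total
   FROM `bigquery-samples.wikipedia_benchmark.Wiki100B`'
```

BigQuery fans out the scan across as many workers as the query needs, which is why a table this size can come back in seconds.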
back and present. And we’re going
to wrap up things. So we have a couple
other big data tools. We have Pub/Sub, which is a
publisher/subscriber messaging application. So if you want to connect many
publishers and many subscribers to the same data
channels, you can do that. Dataflow, which is
Apache Beam, if you’re into that sort of thing. Dataproc, which is managed Spark
and Hadoop, with autoscaling. So if you’re running
Hadoop and Spark, Dataproc is a great way of running them and getting help with additional processing. Here’s where I point out that
Cloud Pub/Sub and Cloud Storage sort of form a central
hub through which you can kind of have any
piece of Google Cloud talk to any other piece
of Google Cloud. So like if you want
to hook up a storage bucket to run a machine
learning process on images that are uploaded,
you can wire up Pub/Sub to listen to Cloud Storage. When Cloud Storage
gets the file, Pub/Sub will send it
to Cloud Functions, run the process on it,
and do something with it. So between Cloud Pub/Sub
and Cloud Storage, you can connect any two
services that we have. What if we need
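The Cloud Storage to Pub/Sub wiring just described looks roughly like this. Bucket and topic names are made up, and `gsutil notification` is the mechanism as I understand the era's tooling, so treat it as an assumption:

```shell
# Create a topic, then have the bucket publish object events to it.
gcloud beta pubsub topics create uploads-topic
gsutil notification create -t uploads-topic -f json gs://my-uploads-bucket
```

From there, anything subscribed to the topic — a Cloud Function, a Dataflow job, a VM — sees every upload as it happens.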
something we don’t offer? That’s fine. You can build it
in Compute Engine. And we have Cloud
Launcher images for a lot of the common use
cases that we get asked for– Cassandra, Redis,
Mongo, CouchDB, more. So now we’re going to
switch to administration. Administration, talk about
reporting and logging. So we have this product called
Stackdriver Logging, which does logging and monitoring. And what’s cool about
it is it’s cross-cloud. So you can monitor– like we’re very big
into hybrid cloud. You want to run on multiple
places, that’s great. With Stackdriver,
you can connect all of your environments
using the Stackdriver agent and monitor them in one place. So we’ll have Stackdriver
Logging and Monitoring do that. Security, how do I
lock things down? We have a product
called Cloud IAM. It allows you to build
groups of resources, make role-based permissions,
finely tune those permissions. And these will vary
from product to product. Cloud Storage is going to have
a different set of permissions than, say, Compute Engine. Interacting with it from an
administrative standpoint, we’ve got the command
line interface, gcloud, and the tool for accessing
storage, which is gsutil. The API is what
those apps will call. So you can take our API and
extend it in most languages to do whatever you want. And we also have
Deployment Manager. If you have a very
complex environment that you want to
recreate, like maybe you want to build an
environment, and then take it and make a staging
environment and a production environment from
it, basically, you can create one
giant YAML file that can recreate your entire
Google Cloud Platform project setup somewhere else on
Google Cloud Platform. I’ll talk about developing on
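(As a sketch of what one of those Deployment Manager definitions can look like: templates can be written in Python as well as YAML, with an entry point that returns the resources to create. The VM name and zone below are made up for illustration, and a real instance would also need disks and a network interface declared.)

```python
# Sketch of a Deployment Manager Python template. Deployment Manager
# calls GenerateConfig(context) and deploys the resources it returns.
def GenerateConfig(context):
    """Describe a single f1-micro VM as a deployable resource."""
    return {
        "resources": [{
            "name": "demo-vm",  # hypothetical resource name
            "type": "compute.v1.instance",
            "properties": {
                "zone": "us-central1-a",
                "machineType": "zones/us-central1-a/machineTypes/f1-micro",
                # A real template would also declare disks and
                # networkInterfaces here.
            },
        }],
    }

config = GenerateConfig(None)
print(config["resources"][0]["name"])  # demo-vm
```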
top of Google Cloud Platform. There’s an SDK available
through gcloud and gsutil. We have plugins for IntelliJ,
Android Studio, PowerShell, Visual Studio,
Eclipse, and more. I ran out of icons, so that’s
why we have the shruggie there. Cloud Source
Repositories is managed Git– it’s
not necessarily meant to be a competitor to GitHub. You can hook up your
GitHub repo to this. This is so that, one, we
have a copy of your code so when we do intelligence
on, like say, debugging, and we want to go
to a line number, we can just show
you the line number. That’s like the big
use case for it. But I mean, it’s managed git,
so if you just want to use it, it’s fine. Reporting, tracing. So tracing comes automatically
with App Engine and our load balancer. In other environments, it’s
enabled through an SDK. This is what our tracing
interface looks like. So this is a call that I
was trying to troubleshoot. You’ll see that this particular
call was pretty quick, 92 milliseconds– the
purple bar down on the lower left. You can see above it
that, most of the time, it’s happening in the amount
of time that I want it to. But there are a couple of times
on the graph that it wasn’t, so I could select one of those,
then show the logs for it and see what was
going on at that time. Really helps me
debug my applications running on App Engine. Reporting, error reporting. Any errors that you kind
of miss or you’re not trapping through other means,
go to this interface here. And I’ve actually
found it both great and a little
embarrassing, like crap, I missed trapping those errors
and handling those errors, but they’re here,
and now I can use that to help troubleshoot my
application going forward. Debugging. We do Stackdriver Debugging
on App Engine, Java, Python, Node.js, and Ruby. Also available via an SDK
on Java, Python, and Go. API manager, so we have
a lot of APIs at Google– Maps, and YouTube, and
all our various services. With Google Cloud Platform, you
can use default credentials, so you don’t have to
set up all the certs. You’re going to say, use my
project’s default credentials to use this API. And it makes hooking up a lot
of this stuff a lot easier. Machine learning, all
right, finishing up. People that are doing
machine learning, first off, there are
a lot more talks today so they might address
this in much more detail. But typically, you
want to use a model, you want to create a model,
or you want to extend a model. If you’re using a model, we have
a number of models available. Vision API, Natural Language
API, Speech API, all of them use our machine learning
models without you having to train them yourself. We already have them trained, and we
have features and functions that are available. If you want to train
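(To make "using a model" concrete: these APIs are plain REST calls under the hood. This sketch only builds the JSON body you would POST to the Vision API's images:annotate endpoint for label detection; the gs:// path is a made-up example, and nothing is actually sent.)

```python
import json

# Build (but don't send) a label-detection request for the Vision API.
# The gs:// path is a made-up example; a real call would POST this body
# to https://vision.googleapis.com/v1/images:annotate with an OAuth token.
request = {
    "requests": [{
        "image": {"source": {"imageUri": "gs://my-bucket/uploads/cat.jpg"}},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

body = json.dumps(request)  # the string you would send as the POST body
```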
your own models, we have this thing
called Cloud ML Engine, which is managed TensorFlow. Which you might say, what flow? TensorFlow. TensorFlow is an open
source ML library that helps you build models. You can also take your models
and extend them in TensorFlow. And it allows you to compute
with various levels of CPU, GPU, or now, in
the near future, TPU resources. So this is a tool that you
can use to build your model, whether it’s on your own laptop,
or on prem, or in our cloud. You can then take your
models and run them on Cloud ML Engine, which
is a managed TensorFlow. And it has some smart defaults
and can do some self-tuning to help you run better,
and it’s portable. Anything you do there, you’ll
be able to move onto your laptop and go further with. All right, with that,
hopefully, we’ve re-righted the table that we
knocked over in the beginning when we saw that giant
list of all of our stuff. But make note about it,
there is a lot of stuff here. I talked about over 40 products
and I had 40 minutes to do it. So that’s why I was sort of
machine-gunning through it. There’s a lot there. My hope is that
you now could look at this list of all of
these crazy hexagons and find solutions to
your problems there. If you need more info and you
want to know where to go next, first off, I’d say,
we have a free trial: cloud.google.com/free. Sign up, $300 for
12 months, which you can run a lot of stuff on. We also have a free
tier where, if you only spin up the
smallest VM we have, it’s free. If you spin up two, you have
to pay for the second one. And then if you have
questions about anything I said here, because I
won’t have time for Q&A, I want to direct
you to, on the map, there’s this yellow line
from where you are now on stage five, all the way
up to Dome H or Sandbox H. That is where a lot of the
Cloud people are hanging out. You can see a couple of demos of
a couple of things we’ve done, and you can ask questions. And that’s where
I’ll be if you have any questions about anything
I sped through here. So with that, I will say
thank you very much and hope you enjoy the rest of I/O. [MUSIC PLAYING]
