I’ve been playing around with the cloud for a while now (specifically AWS and Google Cloud) and figured it would be useful to publish a thing or two about the gcloud CLI that I may use as a cheatsheet in the future.
I installed gcloud by cloning the google-cloud-sdk project and running the install script.
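A minimal sketch of that install step, assuming the downloadable SDK tarball rather than a clone (the download URL is an assumption and may have changed since):

```shell
# Fetch and unpack the SDK, then run the bundled installer.
curl -O https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz
tar -xzf google-cloud-sdk.tar.gz
./google-cloud-sdk/install.sh
```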
After the installation I completed my setup by sourcing the path and completion scripts that ship with the SDK (the `*.zsh.inc` files, or the `*.bash` variants if you’re still on that :wink:).
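Concretely, that boils down to two lines in your shell rc file; the paths assume the SDK lives in your home directory:

```shell
# In ~/.zshrc (use the *.bash.inc variants in ~/.bashrc)
source "$HOME/google-cloud-sdk/path.zsh.inc"        # puts gcloud on your $PATH
source "$HOME/google-cloud-sdk/completion.zsh.inc"  # enables tab completion
```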
Keeping the gcloud CLI tool up to date is as simple as running
`gcloud components update`.
Google made authentication almost too easy. By simply running `gcloud auth login` you can get your box authenticated by following a link, and from that moment on I pretty much forgot what happened – that is how easy it felt.
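For reference, the flow looks roughly like this; `gcloud auth list` is just a sanity check afterwards:

```shell
gcloud auth login   # prints a link; open it in a browser, approve, done
gcloud auth list    # shows which credentialed accounts are active
```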
Google allows you to access multiple projects from the same account. The only requirement is that you configure the name of the project, which sets the context for your commands. This makes collaboration simple: I can easily add members to a project and they will be able to access it without any additional effort. I could eventually even transfer an entire infrastructure to a client, if necessary, while it stays operational.
Accomplishing something similar on the collaboration front with AWS would require setting up IAM credentials for each such user, which makes it a bit less straightforward for a simple guy like myself. You have to keep track of multiple credential sets when working on multiple projects owned by multiple parties. I understand the advantage from a security perspective, but on the usability end it is quite a hassle (nothing that a few scripts can’t solve, though).
Anyways… You can choose to set the project for all subsequent gcloud commands by configuring your gcloud setup with `gcloud config set project`, or choose to add the `--project` option to every gcloud command you execute. In case you decide to script a lot of your gcloud work, you may want to consider using the explicit `--project` option. Obviously, while experimenting, the `config set` method will save me some keystrokes every time I execute a command.
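Both approaches side by side; `my-project-id` is a placeholder for your own project ID:

```shell
# Option 1: persist the project for every subsequent command
gcloud config set project my-project-id

# Option 2: pass it explicitly each time (handy in scripts)
gcloud compute instances list --project my-project-id
```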
In order to get going you can simply launch your boxes through the `instances create` CLI command. The name of my instance is `box1`, the image is the CoreOS image and the zone is somewhere in Western Europe :wink:. The `gcloud compute images list` command displays all image flavors available, while `gcloud compute zones list` displays all zones available.
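Put together, the creation step could look something like this; the image and zone names are illustrative and should be picked from the output of the list commands:

```shell
# List what is available, then create the instance.
gcloud compute images list
gcloud compute zones list
gcloud compute instances create box1 \
    --image coreos \
    --zone europe-west1-b   # zone is an assumption; pick one from the list
```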
Zones and regions, you ask?
Well, zones exist within regions. On the 10th of December 2014, the
`europe-west1` region had two zones available to host instances, since
`europe-west1-a` happened to be
deprecated at that time. I don’t know how the zones relate to the
individual datacenters; I just haven’t bothered figuring it out yet,
but I do know they are building a new center in our Dutch front garden.
Point is… latency between Amsterdam and my European units will be less
than the latency to Asian or American units.
Google has taken the liberty of setting up a baseline system that happens to be quite secure. When firing up a service on one of your instances (let’s say a webservice listening on port 80) you must explicitly allow access to this port from the outside world. The default firewall rules can be listed with `gcloud compute firewall-rules list`.
Through the `default-allow-icmp` rule we allow the internet to communicate with the instances within your project using ICMP messages, which means the internet can ping your boxes.
Through the `default-allow-internal` rule we allow all TCP, UDP and ICMP traffic within `A.B.0.0/16`, which basically means the entire range of addresses that share the first 16 bits while the last 16 bits vary. Here `A` and `B` represent the first two octets of the internal IP addresses within your private address space, where `A` will probably be `10` and `B` could be anything.
The `default-allow-rdp` and `default-allow-ssh` rules allow RDP and SSH access
respectively by allowing traffic on the ports used for these services. I
usually run tux boxes and have no need for RDP, so I could disable the RDP
rule.
I do want to open up port 80, which is used by HTTP services serving webpages to whoever requests them.
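Dropping RDP and opening HTTP could then look like this; `allow-http` is just a rule name I picked, not one of the defaults:

```shell
# Remove the default RDP rule (I run tux boxes only)
gcloud compute firewall-rules delete default-allow-rdp

# Allow HTTP traffic from anywhere on port 80
gcloud compute firewall-rules create allow-http \
    --allow tcp:80 \
    --source-ranges 0.0.0.0/0
```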
Now my Google Cloud compute instance is ready to serve the world.