Kubernetes Workshop

This workshop will walk you through deploying a Node.js microservices stack with Kubernetes.

Optional: Set up local environment

This tutorial launches a Kubernetes cluster on Google Kubernetes Engine.

If you are running this tutorial at home, you will need a Google Cloud Platform account. If you don't have one, sign up for the free trial.

To complete this tutorial, you will need the gcloud command-line tool (part of the Google Cloud SDK) and kubectl installed.

We will also use a set of Google Cloud APIs that you can enable all together here.
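If you prefer the command line, you can enable them with gcloud instead; the service names below are my best guess at the APIs this workshop touches (Kubernetes Engine, Container Builder, and Container Registry):

gcloud services enable container.googleapis.com cloudbuild.googleapis.com containerregistry.googleapis.com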

You can also use Google Cloud Shell, a free VM that has all these tools pre-installed.

For this workshop, I will assume you are using Cloud Shell.

Step 1: Create Cluster and Deploy Hello World

  1. Create a cluster:
ZONE=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" \
      -H "Metadata-Flavor: Google" | sed 's:.*/::')

gcloud container clusters create my-cluster --zone=$ZONE

If you get an error, make sure you enable the Kubernetes Engine API here.
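Cloud Shell normally configures kubectl for the new cluster automatically. If kubectl cannot see the cluster, fetch the credentials and check that the nodes are up:

gcloud container clusters get-credentials my-cluster --zone=$ZONE

kubectl get nodes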

  2. Run the hello world deployment:

kubectl apply -f ./hello-node/deployment.yaml

Expose the container with a service:

kubectl apply -f ./hello-node/service.yaml

At this stage, you have created a Deployment with one Pod, and a Service with an external load balancer that will send traffic to that pod.

You can see the external IP address for the service with this command. It might take a few minutes for the external IP address to appear:

kubectl get svc
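If the EXTERNAL-IP column still shows <pending>, the cloud load balancer is still being provisioned; you can keep watching until an address appears (press Ctrl+C to stop):

watch kubectl get svc

Once the address shows up, open it in a browser (on the port listed in the PORT(S) column) to see the hello world response.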

Step 2: Scale up deployment

One pod is not enough. Let's get 5 of them!

kubectl scale deployment hello-node-green --replicas=5

You can see all the pods with this command:

kubectl get pods
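To check the Deployment itself rather than the individual Pods, these optional commands show how many replicas are ready and wait for the scale-up to complete:

kubectl get deployment hello-node-green

kubectl rollout status deployment/hello-node-green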

Step 3: Hello world is boring, let's update the app

The new app will take a picture, flip it around, and return it.

You can see the source code here.

The Dockerfile for this container can be found here.

Build the Docker Container using Google Container Builder:

gcloud container builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/imageflipper:1.0 ./rolling-update/

This will automatically build and push this Docker image to Google Container Registry.
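If you want to confirm the image was pushed, you can list the registry and its tags (optional):

gcloud container images list --repository=gcr.io/$DEVSHELL_PROJECT_ID

gcloud container images list-tags gcr.io/$DEVSHELL_PROJECT_ID/imageflipper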

Now, we are going to update the deployment created in the first step. You can see the new YAML file here.

Replace the <PROJECT_ID> placeholder with your Project ID. Use this command to do it automatically:

sed -i "s~<PROJECT_ID>~$DEVSHELL_PROJECT_ID~g" ./rolling-update/deployment.yaml

Now use the apply command to update the deployment. The only change to this file from the first deployment.yaml is the new container image.

kubectl apply -f ./rolling-update/deployment.yaml

This will replace all the old containers with the new ones. Kubernetes will perform a rolling update; it will delete one old container at a time and replace it with a new one.
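For reference, the same image swap could be made imperatively with kubectl set image instead of editing the YAML; <CONTAINER_NAME> below is a placeholder for whatever name the container has under containers: in deployment.yaml. Keeping the change in the file, as we do here, means it stays in version control:

kubectl set image deployment/hello-node-green <CONTAINER_NAME>=gcr.io/$DEVSHELL_PROJECT_ID/imageflipper:1.0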

You can watch the containers being updated with this command:

watch kubectl get pods

Once it is done, press ctrl + c to quit.
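If you want Kubernetes to report when the rollout has finished, or to roll it back if something looks wrong, the rollout subcommands work on this deployment:

kubectl rollout status deployment/hello-node-green

kubectl rollout history deployment/hello-node-green

kubectl rollout undo deployment/hello-node-green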

If you visit the website now, you will see the updated app!

Step 4: Backend Service

The web frontend is created, but let's split the monolith into microservices. The backend service will do the image manipulation and will expose a REST API that the frontend service will communicate with.

You can see the source code for the service here.

Build the Docker Container using Google Container Builder:

gcloud container builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/annotate:1.0 ./second-service/

The service.yaml file for the backend service is very similar to the frontend service, but it does not specify type: LoadBalancer. This prevents Kubernetes from spinning up a Cloud Load Balancer; instead, the service will only be accessible from inside the cluster.

Run the backend deployment:

sed -i "s~<PROJECT_ID>~$DEVSHELL_PROJECT_ID~g" ./second-service/deployment.yaml

kubectl apply -f ./second-service/deployment.yaml

Expose the container with a service:

kubectl apply -f ./second-service/service.yaml
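To confirm the backend is internal-only, check that its entry in kubectl get svc has TYPE ClusterIP and no external IP. If you want to hit it, you can do so from a temporary pod inside the cluster; <BACKEND_SERVICE> below is a placeholder for the name declared in second-service/service.yaml (append :PORT if the service port is not 80):

kubectl get svc

kubectl run curl-test --rm -it --image=busybox --restart=Never -- wget -qO- http://<BACKEND_SERVICE>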

Step 5: Update Frontend Service to use the Backend with a Blue-Green deployment

Now that the backend service is running, you need to update the frontend to use the new backend.

The new code is here.

Instead of doing a rolling update like we did before, we are going to use a Blue-Green strategy.

This means we will spin up a new deployment of the frontend, wait until all of its containers are created, then configure the service to send traffic to the new deployment, and finally spin down the old deployment. This lets us make sure users never see two different versions of the app at the same time, lets us smoke test the new deployment at scale, and has a few other benefits. You can read more about Blue-Green Deployments vs Rolling Updates here.
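In concrete terms, the rest of this step follows the flow sketched below; the deployment name hello-node-blue is an assumption, the real name is whatever blue-green/deployment.yaml declares:

# create the new ("blue") deployment alongside the running ("green") one
kubectl apply -f ./blue-green/deployment.yaml

# wait until every new pod is ready before sending traffic to it (name is an assumption)
kubectl rollout status deployment/hello-node-blue

# switch the service selector so traffic goes to the new pods
kubectl apply -f ./blue-green/service.yaml

# once you are happy with the new version, scale the old deployment down
kubectl scale deployment hello-node-green --replicas=0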

Build the Docker Container using Google Container Builder:

gcloud container builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/imageflipper:2.0 ./blue-green/

Spin up the new deployment with the following command:

sed -i "s~<PROJECT_ID>~$DEVSHELL_PROJECT_ID~g" ./blue-green/deployment.yaml

kubectl apply -f ./blue-green/deployment.yaml

You can see all the containers running with this command:

kubectl get pods

Now, we need to edit the service to point to this new deployment. The new service definition is here. Notice the only thing we changed is the selector.

kubectl apply -f ./blue-green/service.yaml
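To confirm the switch took effect, you can check which pod IPs the service now routes to; the endpoints should belong to the new deployment's pods. <SERVICE_NAME> is a placeholder for the name in blue-green/service.yaml:

kubectl get endpoints

kubectl describe svc <SERVICE_NAME>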

At this point, you can visit the website and the new code will be live. Once you are happy with the results, you can turn down the green deployment.

kubectl scale deployment hello-node-green --replicas=0
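If you spot a problem before scaling down, you can switch traffic back by re-applying the old service definition so the selector points at the green pods again (assuming ./hello-node/service.yaml still contains the original selector):

kubectl apply -f ./hello-node/service.yaml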
