Late-to-the-Table Kubernetes Part 5: Installing Applications on TKG Workload Clusters

Those of you paying attention may have noticed that I’ve changed the title from what I’d planned way back in Part 1. What I’ve chosen is catchier and more aligned with what we’re really doing here.

This post may have something for everyone. If you’re already adept at Kubernetes, you should be able to follow right along. And if you’ve been “stanning” Parts 1-4, those have been nothing but a prelude to this.

In Part 4, we got our Tanzu Kubernetes Grid up and running with our Supervisor cluster. Here, we are going to stand up at least one additional cluster, and by the end of this post we’ll have a load-balanced “production” application running on a Kubernetes workload cluster. As a bonus, you’ll also become more acquainted with the kubectl command and Helm along the way. You’re welcome. Here’s what we’ll be doing:

  1. Create an “apps” (aka “workload”) cluster. Running production applications on the Supervisor cluster we previously created is not recommended, so this apps cluster will host a very simple application for the sake of learning.
  2. Implement MetalLB for load-balanced connections to the application(s) we will build. Out of the box, our standalone TKG does not provide load balancing, so we have to build it ourselves. Keep in mind that neither load balancing nor Ingress is required for application access in Kubernetes. You could just set up your app on a NodePort and be done with it, but I wanted to make this a bit more realistic.
  3. Implement Ingress so that we can access our web application properly over ports 80/443.
  4. And finally, we’ll roll out a simple nginx web-based application.

A Disclaimer: No More Training Wheels

I can’t hold back on shotgunning new concepts in your direction anymore. In the first four posts I really did my best to ease you into Kubernetes concepts, but this time there will be some new things we simply have not addressed yet, and it can’t be helped. Otherwise, this becomes a book, not a blog post.

Here, I have done my best to guide you with links to topics, but some of this is just going to be throwing you out of the nest so you can fly on your own, baby birds. You are going to have to put on your big-person pants and get acquainted with some of these concepts on your own. Perhaps the resources I mentioned in Part 2 have served (or will serve) you well?

I have faith in you, followers of my blog. You can do it!

There’s a Git Repo for This?

Yeah. Remember way back . . . like . . . last year . . . when Bryan used to include git repos as companions for his blog posts? Well. We have one for this!

It’s a very simple one with two files you’ll need later. Clone this repo, use it as your working directory, and you should be all good:

git clone https://github.com/bryansullins/toc-tkg-app-example.git
cd toc-tkg-app-example

Step 1: Create the Additional Cluster

Here we’ll create a new cluster for running our whizbang-super-cool app, in a new K8s namespace. Let’s create the namespace the easy-peasy way:

kubectl create namespace toc-apps

A Kubernetes namespace is a logical grouping of related objects. Think hierarchically: a Namespace sits very high in the Kubernetes hierarchy, and just as with any other hierarchy of objects, you can apply permissions, resource quotas, and so on at that level.
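For example, here’s a quick sketch of what namespace-scoped permissions look like in practice. (This is purely illustrative; the role name and the user “jane” are made up.)

# Hypothetical: a read-only role for pods, scoped to our new namespace only.
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods -n toc-apps
# Bind it to a (made-up) user; their access stops at the namespace boundary.
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=jane -n toc-apps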

Check it out:

kubectl get ns
NAME                                STATUS   AGE
capi-kubeadm-bootstrap-system       Active   11d
capi-kubeadm-control-plane-system   Active   11d
capi-system                         Active   11d
capi-webhook-system                 Active   11d
capv-system                         Active   11d
cert-manager                        Active   11d
default                             Active   11d
kube-node-lease                     Active   11d
kube-public                         Active   11d
kube-system                         Active   11d
tkg-system                          Active   11d
tkg-system-public                   Active   11d
tkg-system-telemetry                Active   11d
toc-apps                            Active   5m17s

Now we can create the additional cluster. Note: If you are doing this at home, you will need an additional static IP for the control plane node in this new cluster.

tkg create cluster toc-app-cluster --namespace toc-apps --plan dev --vsphere-controlplane-endpoint-ip [REDACTED]

You can see that the cluster was created by typing:

tkg get cluster toc-app-cluster --namespace toc-apps

NAME             NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
toc-app-cluster  toc-apps   running  1/1           1/1      v1.19.1+vmware.2

Note: Why are we not seeing the Supervisor cluster? Because we scoped this command to the toc-apps namespace. If you want to see all clusters, type kubectl get cluster -A

Next we need to get the credentials and then set the context for the new cluster; otherwise, the application will not be rolled out to the right cluster.

Getting the credentials will update your $KUBECONFIG with a new context. A context in Kubernetes is arguably a bigger construct than a namespace: it ties together a cluster, a user (credentials), and optionally a namespace, and it determines which cluster your kubectl commands target. In our example, switching contexts is what ensures the app workload ends up on the correct cluster.
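A couple of handy commands for checking where you are (plain kubectl, nothing TKG-specific):

kubectl config get-contexts     # lists every context in your $KUBECONFIG; the * marks the active one
kubectl config current-context  # prints just the active context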

tkg get credentials toc-app-cluster --namespace toc-apps

Credentials of workload cluster 'toc-app-cluster' have been saved
You can now access the cluster by running 'kubectl config use-context toc-app-cluster-admin@toc-app-cluster'

Make sure you copy that use-context command somewhere for later use.

Now we will switch to the new Cluster context:

kubectl config use-context toc-app-cluster-admin@toc-app-cluster
Switched to context "toc-app-cluster-admin@toc-app-cluster".

We are now ready to implement the apps cluster infrastructure and the application itself.

Step 2: Implement MetalLB in the Apps Cluster

What follows here is straight out of the MetalLB documentation.

Before we dive in with our own load balancer: we could actually roll out our application using what’s called a NodePort. The details can be found straight in the Kubernetes documentation. Using a NodePort will work just fine if you already have an external load balancer like an F5, or if you’re OK with connecting over a non-standard port. Contact your friendly Network Admin about those details.
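Just to make the NodePort option concrete, here’s a sketch (we won’t actually run this in this post; it assumes a Deployment named nginx already exists):

# Expose an existing Deployment on a high port (30000-32767 by default) on every node:
kubectl expose deployment nginx --type=NodePort --port=80
# Then connect to http://<any-node-ip>:<assigned-port>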

Here, however, we are going to set up our own load balancer so we can set up our applications independently of the physical network infrastructure, keeping in mind that whatever load balancing you do, it’s a good idea to make it redundant.

The first step is to set a parameter in the kube-proxy ConfigMap. When kube-proxy runs in IPVS mode, MetalLB requires that the strictARP setting be set to:

strictARP: true

There are multiple ways to change this, but we’ll edit the configmap on the fly, which is straight out of MetalLB documentation:

kubectl edit configmap -n kube-system kube-proxy

This will open the kube-proxy ConfigMap in your default editor. Find the strictARP line, set it to true (if it isn’t already), and save your changes.
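If you’d rather not edit interactively, the MetalLB docs also show a sed pipeline that does the same thing:

# Preview the change first (kubectl diff exits nonzero if there is a difference):
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system

# Then actually apply it:
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system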

Now we’ll simply run the installation manifests, straight from the MetalLB documentation:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

You should see a bunch of objects created by those last two commands: the metallb-system namespace, service accounts, RBAC roles and bindings, the speaker DaemonSet, the controller Deployment, and the memberlist secret.
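A quick sanity check (my addition, not part of the official steps) before moving on: make sure the MetalLB pods come up.

# The controller Deployment and the speaker DaemonSet pods should all reach Running:
kubectl get pods -n metallb-system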

And now, one more configuration file. This has been done for you in the file called metallb-config.yaml from the toc-tkg-app-example repo, and you will need to alter the addresses there (see the “Change these!” comment below) to IPs that are legitimate and usable on your network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - # A name for the address pool. Services can request allocation
      # from a specific address pool using this name, by listing this
      # name under the 'metallb.universe.tf/address-pool' annotation.
      name: tkg-space
      # Protocol can be used to select how the announcement is done.
      # Supported values are bgp and layer2.
      protocol: layer2
      # A list of IP address ranges over which MetalLB has
      # authority. You can list multiple ranges in a single pool, they
      # will all share the same settings. Each range can be either a
      # CIDR prefix, or an explicit start-end range of IPs.
      addresses:
      - 192.168.1.240-192.168.1.250 # Change these!
      # (optional) If true, MetalLB will not allocate any address that
      # ends in .0 or .255. Some old, buggy consumer devices
      # mistakenly block traffic to such addresses under the guise of
      # smurf protection. Such devices have become fairly rare, but
      # the option is here if you encounter serving issues.
      avoid-buggy-ips: true
      # (optional, default true) If false, MetalLB will not automatically
      # allocate any address in this pool. Addresses can still explicitly
      # be requested via loadBalancerIP or the address-pool annotation.
      auto-assign: true

Once you have the changes saved into the file, apply the configuration:

kubectl apply -f metallb-config.yaml
configmap/config created

Step 3: Setup Ingress

Next we’ll set up Ingress. The short explanation is that Ingress fronts multiple web-based applications residing on Kubernetes so that they can all be accessed externally over the well-known HTTP(S) ports (80 and 443). It uses HTTP headers (primarily the Host header) to differentiate each request and route it to the right backend.

This is not uncommon. Differentiating connections based on HTTP headers has been around for quite some time; name-based virtual hosting dates back to HTTP/1.1, so a couple of decades now. Many standalone web servers do this natively. We are going “next level” by running an ingress controller inside Kubernetes that will do this for us.
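For reference, here’s roughly what an Ingress resource looks like. This is a sketch only; we won’t actually create one in this post, since our app’s Service will be type LoadBalancer, and the hostname here is fictional:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whizbang-ingress
  namespace: whizbangapp
spec:
  rules:
  - host: whizbang.toc-app.com   # requests with this Host header...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx          # ...get routed to this Service
            port:
              number: 80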

The curve I am throwing you here, however, is that the ingress controller is installed using a Helm chart. So first, you will need to install Helm itself. It’s pretty straightforward. I’ll wait here. Remember, baby bird. You can fly!
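One common way to install it, straight from the Helm docs (on macOS, brew install helm works too):

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh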

Now type:

helm repo add haproxytech https://haproxytech.github.io/helm-charts
. . .
"haproxytech" has been added to your repositories

This will add the haproxytech helm repo so you can use it for installation in the next step. To install the ingress controller:

helm install haproxycontroller haproxytech/kubernetes-ingress
. . .
HAProxy Kubernetes Ingress Controller has been successfully installed.
. . .

You should not need to alter any settings; the chart defaults are good for the vast majority of rollouts.
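If you want to double-check the controller before moving on (exact resource names can vary a bit by chart version):

helm list           # the haproxycontroller release should show a status of "deployed"
kubectl get pods    # the kubernetes-ingress controller pods should be Running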

Step 4: This is it! Who’s Rolling Out Real-World Web Applications in Kubernetes? YOU ARE!

I know this may not seem like a big deal to you youngsters out there, but I remember the days of manually setting up an entire Linux OS and Apache web server, and then installing web applications on that Apache server that had to be configured to use a certain directory and a certain port, blah, blah, blah.

Now that we have all of our “API-defined” infrastructure up and running, we can stand up tens or hundreds of web applications . . . by typing a few words.

In other words, this is the moment you have been waiting four blog posts for.

Here we go. Ready?

In the toc-tkg-app-example repo, there is a file named nginx-lb-deployment.yaml. It has everything you need to deploy a default nginx web server that will be serviced by MetalLB and HAProxy Ingress.

This file has both the Deployment and the Service defined to run the application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: whizbangapp
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1
        ports:
        - name: http
          containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: whizbangapp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

Wait just a minute, Bryan. What about assigning an IP?

No need. Kubernetes and MetalLB have you covered. All you have to worry about is your application. That’s the whole point. Let’s put this into its own namespace:

kubectl create namespace whizbangapp
namespace/whizbangapp created

Now let’s roll this bad boy out:

kubectl create -f nginx-lb-deployment.yaml
deployment.apps/nginx created
service/nginx created
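You can watch the rollout finish with plain kubectl:

kubectl rollout status deployment/nginx -n whizbangapp   # blocks until the Deployment is fully rolled out
kubectl get pods -n whizbangapp                          # the nginx pod should be Running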

And Finally, Connecting to Your Load-Balanced Application

If this were a production environment, all you would need to do is set up a DNS entry and alias for the given IP of the nginx service, and off you go. We will use a hosts file here so we can have an FQDN.

How do we find out what IP address was assigned to the whizbangapp nginx Service? (I had to pull out the IP address, so you’ll have to trust me.) You are looking for the IP under the EXTERNAL-IP column:

kubectl get svc --namespace whizbangapp

NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   LoadBalancer   100.68.90.26   [REDACTED_IP]   80:31394/TCP   20s

Now simply add an entry to your /etc/hosts file and you can get to the nginx web server with an FQDN (I used the fictional whizbang.toc-app.com):

more /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
[REDACTED_IP] whizbang.toc-app.com

Open your browser and go to http://whizbang.toc-app.com and your default nginx web page should load!
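Or test it from the terminal; you should get the stock nginx welcome page markup back:

curl -s http://whizbang.toc-app.com | head   # expect the “Welcome to nginx!” HTML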

Credit Where Credit is Due: A Thank You

None of this would have been possible without the assistance of one Will Arroyo (warroyo@vmware.com) from VMware. He spent more than a few hours walking me through what we have been doing these past two posts, and I have nothing but thanks for him and his time. A tip of my hat to you, good sir:

“If I have seen further it is by standing on the shoulders of Giants.”

Sir Isaac Newton

Next time, we’ll take a look at containerization. But as for now, 5 parts is enough.

On to the next part of my journey.

Hit me up on twitter @RussianLitGuy or email me at bryansullins@thinkingoutcloud.org. I would love to hear from you.
