The Usual Disclaimers
We’re coming up on a new year and I am meeting my insecurities head-on. I am getting out of my comfort zone with this Kubernetes series.
Sounds like a good idea I guess? Every journey starts somewhere.
Therefore, I fear that some of these Kubernetes posts may strike you, dear reader, as rudimentary compared to my usual fare of content backed by experience. So this series is not for experienced K8s Admins. But if you’re a fan of my feeble attempts at being witty or humorous, I try to include a few nuggets along the way.
If you are beginning your K8s journey, then this might just be a good place for you to start, because I despise assumed knowledge; I just might include some things you need to know that other people don’t tell you. Some content I have planned for this series:
Late-to-the-Table Kubernetes Part 1: Yet Another Take on Kubernetes
Late-to-the-Table Kubernetes Part 2: Yet Another List of Learning Resources for Kubernetes
Late-to-the-Table Kubernetes Part 3: Yet Another Explanation of Kubernetes
Late-to-the-Table Kubernetes Part 4: Yet Another Tutorial on Installing Tanzu Kubernetes Grid
Late-to-the-Table Kubernetes Part 5: Yet Another Tutorial on Kubernetes
Oh! One more thing. I decided to use famous (or not-so-famous) quotes as headings for this one just for funsies. See if you can spot who said them. There are footnotes at the end. I have no idea if that will continue in the series, but this is what happens when I get bored on vacation.
History Doesn’t Repeat Itself, But It Does Rhyme1
The movie Contagion was in theaters: A far-fetched movie where a bat-borne virus causes a worldwide pandemic that wipes out like a bunch of people while the US government theorizes incorrectly that it’s a bioweapon. Meanwhile, conspiracy theorists tout ineffective homeopathic cures, adding to the chaos. What finally ends the pandemic? A fast-tracked vaccine.
Silly Hollywood. Such a dumb and far-fetched premise. How do they come up with this stuff?
Anyway, Bryan Sullins (that’s me) was attending his first VMworld way back there in 2011. And as we have previously discussed, there’s always “a thing” at these VMworld conferences:
In 2019 it was . . . **looks at notes** . . . Kubernetes.
In 2020 it was . . . **looks at notes** . . . Kubernetes.
In 2011, however, it was the “Software Defined Datacenter,” otherwise known as the SDDC, otherwise known as what I would describe as kind of how we do things now . . . sort of . . .
In my humble opinion, SDDC, the way VMware envisioned it at the time, was never fully realized except in the largest of enterprises.
Instead, as you will see in this series, Kubernetes has become the very epitome of a “Software Defined Datacenter”. But first . . .
Confession is Not Betrayal. What You Say or Do Doesn’t Matter; Only Feelings Matter2
I will admit, whole-heartedly, that I didn’t “get it” in my first pass at Kubernetes. It’s not rocket science, but if you’re the type who wants to know how things work under the hood, like me, you have to make an effort to figure that out. Kubernetes is the very definition of obfuscation. And there are a lot of moving parts, as you will see in ensuing posts here.
While learning Kubernetes, and let’s be clear, I am still learning it, I have sometimes completed wide-sweeping and epic actions in a single command, but then said aloud, “Awesome. But. Like. WTF did I just do?”
Kubernetes is what I would call an AIB technology. AIB stands for “Any Idiot But“:
Any Idiot can create a Kubernetes cluster, possibly in minutes with a little know-how and a Google search. For example, if you haven’t cashed in on your $300 credit from GCP (we’ll see how well that link ages), you can create a K8s cluster in a few short minutes.
But . . .
. . . knowing WTF it really means to . . . ** TAKING A DEEP BREATH IN ** . . . create a Kubernetes cluster, the scope of that cluster, the consequences of it, planning for it, changing it, knowing what runs on it and why, running applications and services on it, how to connect to those services running on it, what to do when a service or pod or node fails, and troubleshooting all of the above, is a totally different story.
And I am not even getting into the knowledge you need to have about how APIs work, containers, infrastructure as code, GitOps, registries, and a litany of other concepts surrounding this topic.
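To make the “Any Idiot can” half concrete, here’s a minimal sketch of spinning up a managed cluster on GCP. It assumes the gcloud CLI is installed and authenticated against a project with billing enabled; the cluster name, zone, and node count are placeholders I made up.

```shell
# Hypothetical example -- substitute your own cluster name and zone.
# Assumes gcloud is installed and authenticated.

# Spin up a three-node cluster (this is the "Any Idiot can" part):
gcloud container clusters create my-first-cluster \
  --zone us-central1-a \
  --num-nodes 3

# Fetch credentials so kubectl can talk to the new cluster:
gcloud container clusters get-credentials my-first-cluster \
  --zone us-central1-a

# Confirm the nodes registered:
kubectl get nodes
```

That’s the easy part. Everything in the “But . . .” paragraph above is what those three commands don’t teach you.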
Talent is Cheaper Than Table Salt. What Separates the Talented Individual From the Successful One is A Lot of Hard Work3
So, who are the best candidates to be Kubernetes Admins? Well, Linux knowledge is a must. That is a good place to start if you don’t have that.
But. Do you remember back around the turn of the decade when people would be like, “yeah, everyone should be a . . . .’full stack’ . . . engineer”?
Well . . . in the context of what became Cloud/Kubernetes, they were right. But WTF does that mean? People gatekeep that phrase a lot: “full stack” engineer.
I would say that “full stack” would be experienced datacenter engineers who have depth of knowledge in more than one IT discipline.
As we will see in Part 3, Kubernetes is nothing more than a “Datacenter in a box”. All of the “physical” hardware one would expect in a datacenter simply runs as a set of “software defined” containers accessible through the Kubernetes API. Therefore, anyone administering Kubernetes needs to know how those more “traditional” moving parts work. Without datacenter-level experience, Kubernetes is going to be a heavy lift. I am not saying it can’t be done without that experience, but if you don’t know how firewall rules work, or how load balancers work, or how to make things highly available in a datacenter, and then how to translate all of that into how Kubernetes does those things, that’s a lengthy journey.
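For a taste of that translation: the rough Kubernetes analog of a hardware load balancer is a Service of type LoadBalancer, declared in YAML and pushed through the API. This is a sketch, not a recipe — the app label and ports are hypothetical, and it assumes a cluster whose provider can actually provision a load balancer on your behalf.

```shell
# Hypothetical Service -- the "software defined" stand-in for the
# load balancer appliance you'd rack in a traditional datacenter.
# Assumes a working kubeconfig and a provider that supports
# LoadBalancer Services (cloud-managed clusters generally do).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web        # made-up label; would match your app's pods
  ports:
    - port: 80      # port the load balancer exposes
      targetPort: 8080  # port the containers listen on
EOF
```

Same concept as the appliance, but it lives in a few lines of YAML instead of a rack.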
Buy the Ticket, Take the Ride4
If you are at all concerned with moving up in your career into and beyond the higher levels (Senior Engineer on up), it is highly recommended to learn Kubernetes. Like, yesterday. In fact, one might argue that as of right now, it might still set you apart from your peers.
But the more “controversial” question, if we can call it that, is “Why would an organization run Kubernetes?” Well, let’s work through it and some considerations:
- Your apps need to be containerized. This is usually a given; I don’t see that spelled out all that often. You’re welcome. Even then, there is a critical mass of containers you’d need to run before one might consider running K8s. Is it really worth it to run K8s for 1 or 2 containers? Even a handful? You’d need to decide that for yourself. I am not saying one wouldn’t do it, but it could be a lot of unnecessary complexity for a few containers.
- Converting your existing apps to be containerized properly is time-consuming. As with pretty much anything “cloud,” lifting and shifting is not recommended. This takes time to figure out right-sizing, performance testing, and load-testing, to name a few.
- Not all applications can be containerized. That’s it. That’s the tweet.
- Kubernetes is best served with an Infrastructure as Code approach. In my experience so far, at bare minimum, creating Kubernetes objects through YAML files can be done out-of-the-box. There are other methods, of course, but without IaC, you would not be leveraging Kubernetes for its many strengths.
- Kubernetes lends itself very well to CI/CD. Why would you not do CI/CD if you were running Kubernetes? I am not saying it’s a requirement, and I am not saying it’s easy, but this would be an additional consideration.
- You will need to have the talent on-hand. Again, and at the risk of stating the obvious, K8s is not rocket science, but it does require a plethora of skills: Compute, Networking, Storage, various CLIs, YAML, coding, APIs, and so forth. That’s not even delving into any application stacks. It will require training and time for people to get up to speed in order to run K8s effectively.
- Related to the first point above, Applications-level engineers have to know how to run applications in containers. We infrastructure types have a grasp of how containers work, but applications people sometimes don’t. They’ll need to know how to package their application as a container image and upload it to a registry. This may be new territory for them and yet another challenge to overcome.
- Kubernetes is still maturing. There are a lot of moving parts with K8s, and as vendors figure out where they fit and vie for territory, it’s possible that you may have to switch up what you are doing down the road (I’m looking at you, IBM), and you need to be OK with that.
- Security is still not, like, built-in and stuff. It’s getting better, but I read somewhere along the way that Kubernetes (previously, Borg), was not built with security as its highest priority. It was built to be easily scaled. Therefore, getting your Kubernetes infrastructure to be secure will need to be planned.
- Managed KaaS is a Thing. You could opt to outsource your Kubernetes management to a Cloud provider. This is certainly up to you. The pros and cons (there are always pros and cons) are too numerous to go into here.
- If not KaaS, then Virtualized or Bare Metal? I would say that if you are running your own Kubernetes clusters, you’d need to evaluate those providers, like VMware TKG, or Rancher on either vSphere or Bare Metal. There are also too many pros and cons to list here.
- The current primary method for interacting with K8s is to use kubectl. From what I can tell, when I ask how K8s Admins interact with their K8s infrastructure, it’s always kubectl, once the Kubernetes Cluster(s) are created.
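To ground a couple of the bullets above — the YAML/IaC one and the kubectl one — here’s a minimal sketch of the declarative workflow. The file name, object names, and image are placeholders I picked for illustration, and it assumes kubectl plus a working kubeconfig.

```shell
# Hypothetical names throughout; assumes kubectl and a working kubeconfig.
# Step 1: declare the desired state in a YAML file (the "code" in IaC):
cat > nginx-demo.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
EOF

# Step 2: check that file into Git, review it, then apply it declaratively:
kubectl apply -f nginx-demo.yaml

# Day-to-day interaction really is mostly kubectl from here:
kubectl get pods                          # list pods in the current namespace
kubectl describe deployment nginx-demo   # detail on the Deployment
kubectl logs deployment/nginx-demo       # container logs from one of its pods
kubectl delete -f nginx-demo.yaml        # tear it all back down
```

The YAML file, not the commands, is the source of truth — which is exactly why the IaC bullet matters.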
Wow. That last part was just a mish-mosh of stuff wasn’t it? #SorryNotSorry
Tune in for Part 2, where I list the plethora of resources that got me to where I am with K8s, as it stands right now. I’ll talk about what got me to finally turn the corner, so if you are struggling with it, perhaps it will help you too.