How many readers do I need in order to call this post, “long awaited”? I guess we’ll find out. In Part 1, I disclosed the trials and tribulations of putting together this lab. Here, I will give you the lab itself in all its glory, and some tips and tricks.
Officially, “my” Vagrant Kubernetes home lab is here. The “my” is in quotes because, as I disclosed in Part 1, I didn’t code most of it; credit goes to Exxactcorp, whose blog post and repo put me on the right track to get here.
I did add some flexibility with what I am calling “Practice Mode”, discussed later in this post.
Building and Running the Lab
Assuming you have met the requirements (described in the repo README.md or later in this post), clone the repo to your local machine and run vagrant up from the root dir:

git clone https://github.com/bryansullins/vagrantk8scentos.git
cd vagrantk8scentos
vagrant up
This will create a hidden directory called .vagrant, which has an entry in .gitignore.
Once you have the cluster up and running (vagrant up takes about 7 minutes, 41 seconds on my machine!), get into the master node by typing:
vagrant ssh kmaster
From there, you are off and running with Kubernetes (YEAH!).
To start/stop the VMs and therefore the K8s Cluster nodes:
vagrant halt # Powers off the VMs.
vagrant up   # Powers on the VMs, assuming the .vagrant folder is still intact.
To tear down everything:
vagrant destroy     # Prompts for confirmation.
vagrant destroy -f  # No prompts.
Consult the Vagrant documentation for more details.
This section is for dumb people. If you are not dumb, skip on down to the next section. If you are dumb (or just want a good laugh), this section is for you:
I wish I didn’t need to say this, but, trust me, I do:
** clears throat and authoritatively puts on lawyerly reading glasses **
“This Vagrant Kubernetes Lab should not be used in any Production Environment, and no guarantees about performance or security are made by the author or any other related party HEREWITH.”
** takes off reading glasses **
In the interest of full disclosure, there is a plaintext password included in this repo, but it is the widely published default password “vagrant” for the vagrant user.
Additionally, all of these VMs are expected to run on ~0.56GHz of CPU and ~4.5GB of RAM. It’s a home lab. So, please don’t expect to join the program to find Mersenne Primes, or calculate meteor trajectories heading towards Earth from outer-freaking-space. . . .
Requirements and Considerations
Tested Software Requirements:
- macOS Catalina Version 10.15.4
- VirtualBox Version 6.0.18 r136238 (Qt5.6.3)
- Vagrant Version 2.2.6
Other platforms or versions outside of the above are untested, but it should be easy to alter this for use with other platforms. Feel free to contribute at https://github.com/bryansullins/vagrantk8scentos
Tested Hardware:
- 1.4GHz quad-core Intel Core i5 – Uses 5 vCPUs and about 0.56GHz idle.
- 16GB of 2133MHz LPDDR3 onboard memory – Uses about 4.5GB RAM.
- 128GB SSD – Requires 4GB Drive Space.
- 1Gb Network – Mac’s USB 10/100/1000 adapter.
Options Available for “Practice Mode”
I added some flexibility to the lab, in the form of what I am calling “Practice Mode”. By default, a vagrant up will build the lab with a fully initialized Kubernetes Cluster with one Master Node and three Worker Nodes.
But if you want to practice initializing your own K8s Cluster and joining worker nodes to it, you can change the ENV['K8SINIT'] variable to ‘false’ in the Vagrantfile:

ENV['K8SINIT'] = 'false'

With this setting, vagrant up will install all K8s components on the master and all workers, but will not initialize the cluster or join any worker nodes, leaving you ready to run kubeadm init yourself.
Please read the README.md file in the repo for more information.
How I Added “Practice Mode” Flexibility
Let’s level set a bit. To configure Vagrant to build K8s, or most things for that matter, you will need:
- A Vagrant Provider – I am using VirtualBox for this, with Host Only networking; see Part 1 for why Host Only networking was chosen.
- Vagrantfile – The main configuration file that builds the infrastructure and installs all components.
- *.sh files – The Vagrantfile calls run-of-the-mill *.sh scripts in each VM to install and configure anything you specify. These are called by the *.vm.provision lines in the Vagrantfile. If you take a look at the bootstrap.sh file in the repo, you’ll see it’s mostly yum installations. This is how each VM gets the Kubernetes components installed.
You will notice that the Vagrantfile has these lines:
## ENV['K8SINIT'] = 'true': Full Kubernetes component installation, initializes a Kubernetes cluster, and joins all 3 worker nodes into the Kubernetes Cluster
## ENV['K8SINIT'] = 'false': Full Kubernetes component installation, but will not initialize a cluster or add worker nodes; will be ready for "kubeadm init . . ."
ENV['K8SINIT'] = 'true'
This means that if you set ENV['K8SINIT'] = 'false', it will not run the kmaster.vm.provision "shell", path: "bootstrap_kmaster.sh" line:
if ENV['K8SINIT'] == "true"
  # Uncomment as desired to use the different Network Providers:
  kmaster.vm.provision "shell", path: "bootstrap_kmaster.sh"
  # kmaster.vm.provision "shell", path: "bootstrap_kmaster_calico.sh"
  # kmaster.vm.provision "shell", path: "bootstrap_kmaster_flannel.sh"
end
We have the same conditional for the worker nodes as well.
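For reference, the worker-node version of that gate might look something like this sketch; the script name bootstrap_kworker.sh is my assumption based on the master-node example, so check the repo's Vagrantfile for the actual names:

```ruby
# Hypothetical sketch of the worker-node gate inside the Vagrantfile.
# The script name "bootstrap_kworker.sh" is an assumption for illustration,
# not confirmed from the repo.
if ENV['K8SINIT'] == "true"
  kworker.vm.provision "shell", path: "bootstrap_kworker.sh"
end
```

The same ENV['K8SINIT'] check guards both the master and worker provisioners, so flipping one variable toggles the entire cluster-initialization step.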
You will also notice that you can choose different K8s Network Providers, as long as ENV['K8SINIT'] = 'true', but be aware you will be creating a new vagrant infrastructure with each; that’s how Vagrant works.
You can also add more worker nodes, but they will need to be added into the /etc/hosts section of the provisioning scripts as well.
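As a sketch of what that tweak involves, a small Ruby helper can generate the extra /etc/hosts lines for however many workers you want. The 172.16.16.x addresses and kworker hostnames below are my assumptions for illustration, not necessarily the values used in the repo:

```ruby
# Sketch: generate /etc/hosts entries for extra worker nodes.
# The IP range and "kworker" hostnames are assumptions for illustration;
# check the repo's provisioning scripts for the actual values.
def worker_hosts_entries(count, ip_prefix: "172.16.16.10")
  (1..count).map do |i|
    "#{ip_prefix}#{i} kworker#{i}.example.com kworker#{i}"
  end
end

puts worker_hosts_entries(3)
```

You could paste the generated lines into the hosts setup for each VM, or drive a similar loop directly in the Vagrantfile to define the worker VMs themselves.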
The lesson here is that with a little tweaking on your own, you can further automate these VMs any way you want.