Kubernetes (or Kube) belongs to the kind of system that can only be learned through exploration. It is an orchestration system for container-based services (microservices, in most cases) that can transparently manage many of these components.
In terms of reliability and actual performance gains, you will only see the advantages with medium to large-scale deployments. For small-scale setups, I recommend Docker Compose.
Exploratory approach with minikube
Clean the lab (lab work)
sudo iptables -F
sudo iptables -F -t nat
sudo minikube delete
sudo docker system prune -a
sudo reboot
The iptables NAT chain gets flushed separately here.
The following may be a good fit for the lab's ~/.bash_aliases:
alias minikube-kill='docker rm $(docker kill $(docker ps -a --filter="name=k8s_" --format="{{.ID}}"))'
alias minikube-stop='docker stop $(docker ps -a --filter="name=k8s_" --format="{{.ID}}")'
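With these in place, minikube-stop halts all k8s_ containers, and minikube-kill removes them entirely. Note that alias definitions must use single quotes (not backticks) so that the command substitution runs when the alias is invoked, not when it is defined.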
CoreDNS versus systemd
systemd likes to reinvent how Linux handles things, and DNS resolution is no exception:
sudo mv /etc/resolv.conf /etc/resolv.conf.old
sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
Without this, you may run into problems when starting the minikube environment. I had many issues with CoreDNS.
https://kubernetes.io/docs/tasks/administer-cluster/coredns/

Two quick sanity checks:

pidof systemd && echo "yes" || echo "no"
ping -c 1 $(hostname -f) && echo "yes" || echo "no"
Snap installs
sudo snap install helm --classic
sudo snap install kubectl --classic
export PATH=$PATH:/snap/bin
This is quick and dirty. In production: never use Snapd.
Start minikube
sudo minikube start --driver=none \
  --extra-config=apiserver.enable-admission-plugins="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook" \
  --memory 9000 \
  --cpus 4
sudo chown -R $USER ~/.kube
sudo chown -R $USER ~/.minikube
You may be tempted to think that the --memory and --cpus parameters can be omitted because the none driver installs the services natively. I found out that these values are used to calculate quotas and limits. If you do not specify them, you have to edit the ConfigMaps and Namespaces later. If you know where they are…
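If you ever have to hunt them down, a hedged starting point (object names differ between minikube versions) is to list the quota-related objects and the system ConfigMaps:

kubectl get limitrange,resourcequota --all-namespaces
kubectl -n kube-system get configmaps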
Now to the computing backbone: our simple single-node Kubernetes “cluster” is ready. Please be patient with this little 4-core system:
kubectl get nodes
kubectl get services -A --watch
watch -n 0.3 \
  kubectl get po,services,deployment,rc,rs,ds,no,job,cm,ing -A # or similar
Basics: what is all this stuff? Recap on Kubernetes
The idea behind placing the theory here is to allow a pragmatic, simplified reflection on the Kubernetes-related concepts. At this point, it can be combined with a systematic exploration.
If you are a seasoned Kubernetes administrator, you may find the illustration inaccurate. That is because this is just teaching material.
Pods are a core concept of Kubernetes: the smallest deployable unit, usually wrapping one or more containers.
Within a Deployment, you specify that you want a certain number of instances to be available (or let an Operator manage the scaling). In our case, we will deploy an application that gets managed within the cluster by a Serverless operator.
A Service is how the result of a Deployment becomes reachable: it exposes the Pods behind a stable endpoint. In our case, we will create an API server endpoint that clients can connect to.
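To make the two concepts concrete, here is a minimal sketch of a Deployment plus a matching Service. The name echo-api and the demo image are hypothetical placeholders chosen for illustration, not part of this lab:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-api                 # hypothetical example name
spec:
  replicas: 3                    # "a certain number of instances"
  selector:
    matchLabels:
      app: echo-api
  template:
    metadata:
      labels:
        app: echo-api
    spec:
      containers:
      - name: echo-api
        image: k8s.gcr.io/echoserver:1.10   # any small demo image works
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo-api
spec:
  selector:
    app: echo-api                # the Service finds the Pods via this label
  ports:
  - port: 80
    targetPort: 8080
EOF

Note that the Service does not point at the Deployment directly; it selects Pods by label, which is exactly what the Labels and Selectors below are for.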
If persistent storage is required, it’s usually better to use network file systems and to mount these external resources into the transient instances.
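What such a mount can look like, sketched with deliberately made-up values (the NFS server address and the export path are placeholders):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo                 # hypothetical example Pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data           # the transient instance sees persistent data here
  volumes:
  - name: shared-data
    nfs:
      server: 10.0.0.5           # placeholder NFS server
      path: /exports/data        # placeholder export
EOF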
Labels and Selectors are important once you have many Services; you may have different settings for different environments (test, prod, staging, … lab).
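For example, assuming you attach a hypothetical environment label to existing objects, Selectors let you slice the cluster by environment:

kubectl label pod nfs-demo environment=lab                  # nfs-demo is the sketch above
kubectl get pods -l environment=lab                         # equality-based selection
kubectl get pods -l 'environment in (test, staging, lab)'   # set-based selection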
On-premise Serverless stack - Istio and Knative
Install Istio with the observability tools
Istio is like an extension to Kubernetes. It adds a Service Mesh. In my opinion, this is the future of how we are going to run and think about services in Cloud-Native environments.
From a security perspective, I like having an mTLS (mutual Transport Layer Security) enabled Service Mesh network between Pods. If you build a threat model of your environment, you may find that the microservices exchange confidential data containing PII (Personally Identifiable Information).
Encryption can help the microservice architecture deal with trust boundaries that run through third-party networks, for example between AWS EC2 instances.
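With Istio (installed in the next section), enforcing mTLS mesh-wide takes a single resource. A minimal sketch, assuming the default istio-system root namespace:

kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system       # the root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT                # sidecars reject plain-text traffic
EOF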
If you are only interested in in-transit encryption for Kubernetes, you may also want to look at Cilium. I only used the default CNI, which is fine for a single-node minikube lab.
https://cilium.io/

Other use-cases of Istio are Blue-Green deployments, Canary deployments, or Dark deployments. These are handled by Istio's traffic-management layer (the Envoy proxies, programmed through Pilot); Mixer itself only covers policy and telemetry and is deprecated as of Istio 1.5.
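To make the Canary case concrete, here is a sketch of a weighted traffic split; the host myapp and the subsets v1/v2 are hypothetical and would need a matching DestinationRule:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp                   # hypothetical service
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1              # stable version keeps 90% of traffic
      weight: 90
    - destination:
        host: myapp
        subset: v2              # canary version gets 10%
      weight: 10
EOF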
cd ~
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.5.1
export PATH=$PWD/bin:$PATH
istioctl manifest apply --set profile=demo
kubectl label namespace default istio-injection=enabled
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=$(minikube ip)
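Before moving on, it is worth checking that the installation has settled:

kubectl -n istio-system get pods
istioctl version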
Keep these variables (or the commands); we are going to need them to test the routes and the reachability of the Service Mesh's Egress endpoints (with curl).
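A quick smoke test against the Ingress Gateway; expect a 404 from Envoy until a Gateway plus VirtualService actually binds a route:

echo "Gateway: $INGRESS_HOST:$INGRESS_PORT"
curl -s -o /dev/null -w '%{http_code}\n' "http://$INGRESS_HOST:$INGRESS_PORT/"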