
Kubernetes (or Kube) is the kind of system that can only be learned through exploration. It’s an orchestration system for container-based services (microservices in most cases) that transparently manages many of those components.

In terms of reliability and actual performance gains, you’ll only see the advantages with medium to large-scale deployments. For small-scale setups, I recommend Docker Compose.

Exploratory approach with minikube

Clean the lab (lab work)

sudo iptables -F
sudo iptables -F -t nat
sudo minikube delete
sudo docker system prune -a
sudo reboot

iptables -F only flushes the default filter table, which is why the nat table gets flushed separately here.

The following may be a good fit for the lab’s ~/.bash_aliases:

alias minikube-kill='docker rm $(docker kill $(docker ps -a --filter="name=k8s_" --format="{{.ID}}"))'
alias minikube-stop='docker stop $(docker ps -a --filter="name=k8s_" --format="{{.ID}}")'

coredns versus systemd

Systemd likes to reinvent how Linux handles things, and DNS resolution is no exception:

sudo mv /etc/resolv.conf /etc/resolv.conf.old
sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf

Without this change, you may run into problems when starting the minikube environment; I had many issues with coredns.

https://kubernetes.io/docs/tasks/administer-cluster/coredns/
pidof systemd && echo "yes" || echo "no"              # is systemd running?
ping -c 1 $(hostname -f) && echo "yes" || echo "no"   # does the host's FQDN resolve?
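
To verify the change took effect (a quick sanity check; 127.0.0.53 is systemd-resolved’s stub listener, which coredns cannot use from inside a container):

readlink -f /etc/resolv.conf      # should print /run/systemd/resolve/resolv.conf
grep nameserver /etc/resolv.conf  # should list real upstream resolvers, not 127.0.0.53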

Snap installs

sudo snap install helm --classic
sudo snap install kubectl --classic
export PATH=$PATH:/snap/bin

This is quick and dirty. In production: never use Snapd.

Start minikube

sudo minikube start --driver=none \
--extra-config=apiserver.enable-admission-plugins="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook" \
--memory 9000 \
--cpus 4

sudo chown -R $USER ~/.kube
sudo chown -R $USER ~/.minikube

You may be tempted to think that the --memory and --cpus parameters can be omitted because this installs the services natively. I found out that these values are used to calculate quotas and limits. If you do not specify them, you have to edit the ConfigMaps and Namespaces later. If you know where they are…
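
If you need to hunt them down later, these commands are a reasonable starting point (resource names vary by setup):

kubectl get resourcequota,limitrange -A   # where the memory/cpu figures end up
kubectl describe limitrange -n default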

Now to the computing backbone: our simple single-node Kubernetes “cluster” is ready. Please be patient with that little 4-core system:

kubectl get nodes
kubectl get services -A --watch

watch -n 0.3 \
kubectl get po,services,deployment,rc,rs,ds,no,job,cm,ing -A # or similar

Basics: what is all this stuff? Recap on Kubernetes

The idea of placing the theory here is to allow a pragmatic and simplified reflection of the Kubernetes-related concepts. At this point, it can be combined with a systematic exploration.

[Illustration: simplified overview of the Kubernetes concepts discussed below]

If you are a seasoned Kubernetes administrator, you may find the illustration to be inaccurate. That is because this is just teaching material.

  • Pods are a core concept of Kubernetes

https://kubernetes.io/docs/concepts/workloads/pods/

  • Within a Deployment, you specify that you want a certain number of instances to be available (or let an Operator manage the scaling). In our case, we will deploy an application that gets managed within the cluster by a Serverless operator.

  • A Service exposes the Pods behind a Deployment; this is how the application becomes reachable. In our case, we will create an API server endpoint that clients can connect to.

https://kubernetes.io/docs/concepts/services-networking/service/

  • If persistent storage is required, it’s usually better to use network file systems and to mount these external resources into the transient instances.

  • Labels and Selectors are important once you have many Services. You may have different settings for different environments (test, prod, staging, … lab); see the sketch below.
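
As a minimal sketch of how a Selector ties a Service to its Pods (all names here are illustrative, not part of this lab):

cat <<-EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-api
  labels:
    app: demo-api
    env: lab
spec:
  selector:
    app: demo-api        # traffic goes to every Pod carrying this label
  ports:
    - port: 80
      targetPort: 8080
EOF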

On-premise Serverless stack - Istio and Knative

Install Istio with the observability tools

Istio is like an extension to Kubernetes. It adds a Service Mesh. In my opinion, this is the future of how we are going to run and think about services in Cloud-Native environments.

From a security perspective, I like having an mTLS (mutual Transport Layer Security) enabled Service Mesh between Pods. If you build a Threat Model of your environment, you may find that the microservices exchange confidential data containing PII (Personally Identifiable Information).

Encryption lets the microservice architecture itself deal with the trust boundaries that run through third-party networks, for example between AWS EC2 instances.
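
With Istio 1.5, this can be enforced mesh-wide via a PeerAuthentication resource in the root namespace. A minimal sketch (apply it after the installation below; the demo profile defaults to permissive mode):

cat <<-EOF | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace = mesh-wide effect
spec:
  mtls:
    mode: STRICT             # reject plain-text traffic between sidecars
EOF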

If you are only interested in in-transit encryption for Kubernetes, you may also want to learn more about Cilium. I only used the default CNI, and for a single-node minikube lab that is fine.

https://cilium.io/

Other use cases of Istio are Blue-Green deployments, Canary deployments, or Dark deployments. These are implemented through Istio’s traffic-routing rules.
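
A Canary shift, for example, is just a weighted routing rule. A sketch with illustrative names, usable once Istio is installed (next step) and assuming a DestinationRule already defines the v1 and v2 subsets:

cat <<-EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-api
spec:
  hosts:
    - demo-api
  http:
    - route:
        - destination:
            host: demo-api
            subset: v1
          weight: 90    # the stable version keeps most of the traffic
        - destination:
            host: demo-api
            subset: v2
          weight: 10    # the Canary gets a small share
EOF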

cd ~
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.1 sh -   # pin the version the next steps expect
cd istio-1.5.1
export PATH=$PWD/bin:$PATH
istioctl manifest apply --set profile=demo
kubectl label namespace default istio-injection=enabled

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=$(minikube ip)

Keep these variables (or the commands). We are going to need them to test the routes and reachability of the Service Mesh’s Ingress endpoints (with curl).
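
Once a Knative service is running (we deploy one later), a route can be probed through the Ingress gateway like this. The service name and the default example.com domain are assumptions:

curl -H "Host: helloworld-go.default.example.com" "http://$INGRESS_HOST:$INGRESS_PORT/"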

Connect to Kiali with the browser from your Dev system without a tunnel

Kiali is a wonderful way to gain instant familiarity with a Micro Service Mesh Network. It uses Prometheus (a metrics collector) and Jaeger (a tracer).

With both systems combined, you can gain insights into the auto-scaling behavior of Knative services. This relates to Istio insofar as its Sidecar containers get injected into the Knative service Pods, and the metrics are provided this way.
But let’s focus on one thing at a time. Before we are ready to observe all of this, we need to continue with the setup. Keep in mind: you don’t just want the front-row seats for this. You want to be a player!

Although minikube has tunneling and kubectl has proxying features, it may be more comfortable to map Kiali to the VM’s IP. This could also be done with Ingress or MetalLB, but the simplest way to expose the service is this:

kubectl patch service kiali -n istio-system -p '{"spec": {"type": "LoadBalancer", "externalIPs":["'$(minikube ip)'"]}}'

Consider this specific step replaceable, but keep the one-liner in mind unless you prefer Ingress or MetalLB.
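
After the patch, Kiali should answer directly on the VM’s IP (20001 is its default service port; double-check with the first command):

kubectl -n istio-system get service kiali
xdg-open "http://$(minikube ip):20001/kiali"   # or paste the URL into your browser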

Add the gateway - only in a Dev env

Knative needs this because its routing (by default) assumes a real Kube cluster.

cd ~/istio-1.5.1 
helm template --namespace=istio-system \
  --set gateways.custom-gateway.autoscaleMin=1 \
  --set gateways.custom-gateway.autoscaleMax=2 \
  --set gateways.custom-gateway.cpu.targetAverageUtilization=60 \
  --set gateways.custom-gateway.labels.app='cluster-local-gateway' \
  --set gateways.custom-gateway.labels.istio='cluster-local-gateway' \
  --set gateways.custom-gateway.type='ClusterIP' \
  --set gateways.istio-ingressgateway.enabled=false \
  --set gateways.istio-egressgateway.enabled=false \
  --set gateways.istio-ilbgateway.enabled=false \
  --set global.mtls.auto=false \
  install/kubernetes/helm/istio \
  -f install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml \
  | sed -e "s/custom-gateway/cluster-local-gateway/g" -e "s/customgateway/clusterlocalgateway/g" \
  > ./istio-local-gateway.yaml

kubectl apply -f istio-local-gateway.yaml

Patch the Istio Ingress-gateway

Again, this is just for the lab. Naturally, if you were to use OpenShift this would be a little different.

kubectl patch service istio-ingressgateway -n istio-system -p '{"spec": {"type": "LoadBalancer", "externalIPs":["'$(minikube ip)'"]}}'
kubectl apply -f https://github.com/knative/serving-operator/releases/download/v0.13.0/serving-operator.yaml 

cat <<-EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: knative-serving
---
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
EOF

kubectl get deployment knative-serving-operator

kubectl apply -f https://github.com/knative/eventing-operator/releases/download/v0.13.0/eventing-operator.yaml

cat <<-EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: knative-eventing
---
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
EOF

kubectl get deployment -n knative-serving

Recap: what are Operators, ConfigMaps and Routes?

At this point (Q2 2020), there are different kinds of Operators that can extend the Kubernetes architecture. The topic is vast and involves Custom Resource Definitions (CRDs) that can extend the set of functions of the Kube API. Knative brings an operator that initializes an “Autoscaler”. Keep in mind that in production you may not want your API endpoints to scale down to 0 instances, and that you can configure all of that. For demonstration purposes, I refrained from doing so here. It’s super easy to do, via a ConfigMap object.

ConfigMaps simply hold configuration, much like Linux’s /etc/ directory. They can be edited, and the respective Operators watch them. So in case you want to change Knative’s domain or the autoscaling behavior, you will find documented variables in the ConfigMap.
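
For example, scale-to-zero and the serving domain live in ConfigMaps in the knative-serving namespace. A sketch of where to look (enable-scale-to-zero is a documented key; leave it at "true" for this demo):

kubectl -n knative-serving get configmaps
kubectl -n knative-serving edit configmap config-autoscaler   # e.g. enable-scale-to-zero: "false"
kubectl -n knative-serving edit configmap config-domain       # map a real domain instead of example.com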

Routes are how Services are bridged out, either via custom LoadBalancers, Ingress, or other Service objects. Knative changes the Routes to distribute requests during autoscaling. This is visualized in a video in this essay.
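
You can watch the Route objects while Knative shifts traffic (the service name is assumed from the sample below):

kubectl get routes.serving.knative.dev -A
kubectl describe route helloworld-go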

The good news is that you’ll pick this up in great detail over time. I find very little use for encyclopedic summaries of Kubernetes. This is an area of rapid progress, and in a couple of months everything here will be different again. If you are a Linux user, you will find many opportunities to harness your command-line skills to pick things up on the fly.

Time for Hello World - Knative Go

Here we build, tag and push the local image. You can use Docker or Podman to do so.

cd ~/Source/knative-docs/docs/serving/samples/hello-world/helloworld-go
docker build .
docker login
docker tag e...6 wishi/knative-go-hello   # e...6 stands for the image ID printed by the build

docker push wishi/knative-go-hello
The push refers to repository [docker.io/wishi/knative-go-hello]
...

Time for 1, 2, 3, many curls

Let’s make a few HTTP requests to the service in parallel:

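
A minimal sketch for this: 20 concurrent requests, reusing the Ingress variables from the Istio section and the assumed helloworld-go service:

for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -H "Host: helloworld-go.default.example.com" \
    "http://$INGRESS_HOST:$INGRESS_PORT/" &
done
wait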