My Wiki!

Kubernetes Begin

1. Installation

1.1 Postinstall

Bash completion. Add the completion script to the /etc/bash_completion.d directory:

kubectl completion bash >/etc/bash_completion.d/kubectl
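
If the shell is not running as root, the redirection above fails with permission denied. A minimal workaround, assuming sudo and tee are available:

# write the completion script with elevated rights
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl >/dev/null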

2. Concepts

3. Minikube

3.1 Quickstart

Assume:

  • kvm2 driver installed
  • libvirtd, qemu-kvm, docker

Start minikube (no root):

  minikube start --vm-driver kvm2
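
To verify that the VM and the cluster came up (standard commands; the node name will vary):

  minikube status
  kubectl get nodes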
  

Connect the local docker CLI to minikube's docker daemon:

  eval $(minikube docker-env)

Switch context to minikube (default):

  kubectl config use-context minikube
  

Stop cluster

  minikube stop
  

Delete cluster

  minikube delete
  

3.2 Driver

3.2.1 vm-driver none

Go even further by running the Kubernetes cluster directly on the local docker host (a VM or dedicated dev machine) by starting Minikube with the --vm-driver=none option.

If started as a regular user, the virtualbox driver would be used instead, so this must be run as root.

sudo minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost
After some time you should see the following output:

WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
 The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks
When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions.  An example of this is below:
 sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
 sudo chown -R $USER $HOME/.kube
 sudo chgrp -R $USER $HOME/.kube
 
 sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
 sudo chown -R $USER $HOME/.minikube
 sudo chgrp -R $USER $HOME/.minikube
This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.
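
As hinted in the output above, the chown/chgrp dance can be skipped by setting the env var before starting. A sketch, assuming your sudo policy lets -E preserve the variable:

# let minikube hand ownership of .kube/.minikube to the invoking user automatically
export CHANGE_MINIKUBE_NONE_USER=true
sudo -E minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost
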
3.2.1.1 Change owner and group of context files

Running as root results in the context being created under /root/{.kube, .minikube}, which prevents the non-root user's kubectl from using it. Hence the move and chown above.

I only changed the owner and group of .minikube/ to belong to $USER (me).

sudo chown -R $USER $HOME/.minikube
sudo chgrp -R $USER $HOME/.minikube

3.3 Use local images by re-using the Docker daemon

When using a single VM of Kubernetes, it's really handy to reuse Minikube's built-in Docker daemon, as this means you don't have to build a docker registry on your host machine and push the image into it; you can just build inside the same docker daemon as minikube, which speeds up local experiments. Just make sure you tag your Docker image with something other than 'latest' and use that tag when you pull the image. Otherwise, if you do not specify a version for your image, it is assumed to be :latest, with a pull image policy of Always, which may eventually result in ErrImagePull because you may not have any version of your Docker image in the default docker registry (usually DockerHub) yet.

To be able to work with the docker daemon on your mac/linux host, use the docker-env command in your shell:

eval $(minikube docker-env)

You should now be able to use docker on the command line on your host mac/linux machine talking to the docker daemon inside the minikube VM:

docker ps

On Centos 7, docker may report the following error:

Could not read CA certificate "/etc/docker/ca.pem": open /etc/docker/ca.pem: no such file or directory

The fix is to update /etc/sysconfig/docker to ensure that Minikube's environment changes are respected:

< DOCKER_CERT_PATH=/etc/docker
---
> if [ -z "${DOCKER_CERT_PATH}" ]; then
>   DOCKER_CERT_PATH=/etc/docker
> fi

Remember to turn off the imagePullPolicy:Always (or tag and use image version), otherwise Kubernetes won’t use images you built locally.
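
A minimal sketch of the workflow under the assumptions above (hypothetical image name my-app:dev; run after eval $(minikube docker-env)):

# build directly inside minikube's docker daemon
docker build -t my-app:dev .
# run it without pulling from any registry
kubectl run my-app --image=my-app:dev --image-pull-policy=IfNotPresent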

3.4 Clean up

Clean up and rerun minikube (see the sketch after this list):

  • stop minikube
  • delete the minikube cluster
  • sudo rm -rf /etc/kubernetes
  • start minikube
  • recreate all my namespaces and re-apply all my configurations
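
A rough script version of the steps above (the re-apply lines are placeholders for whatever you normally deploy):

  minikube stop
  minikube delete
  sudo rm -rf /etc/kubernetes
  minikube start --vm-driver kvm2
  # re-apply your own manifests, e.g.:
  # kubectl apply -f my-namespaces.yaml
  # kubectl apply -f my-app/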

3.5 Etc.

The specific commands I used to install k8s and minikube:

apt-get -y update
apt-get -y upgrade
apt-get install -y docker.io unzip
service docker start
systemctl enable docker
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && mv minikube /usr/local/bin/
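
A quick sanity check afterwards (both commands only print version info):

kubectl version --client
minikube version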

4. Remote control of a k8s cluster

4.1 config

The default kubectl configuration is stored in ~/.kube/config, and if you have Minikube installed, it added the context minikube to your config.

With kubectl you can specify a config to use with the command flag --kubeconfig.

E.g., pointing to the default config:

kubectl --kubeconfig=~/.kube/config config view

4.2 context

Config output:

dang@gt130:~> kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/dang/.minikube/ca.crt
    server: https://192.168.39.93:8443
  name: minikube
contexts:
- context:                   <------------ 
    cluster: minikube          
    user: minikube
  name: minikube
current-context: minikube     <--------------
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/dang/.minikube/client.crt
    client-key: /home/dang/.minikube/client.key

Check current context:

  kubectl config current-context
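
To list every context in the merged config (the asterisk marks the current one):

  kubectl config get-contexts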

4.3 Multiple clusters

4.3.1 Case 1: Using multiple config for different clusters

After setting up k8s, the cluster config should be available:

sudo ls /etc/kubernetes/admin.conf
~/.kube/config (should be a copy of the file above, chowned to your user)

The file contains ca, key, user, etc., to authn to cluster.

Pull it to workstation:

  scp user@cluster:~/.kube/config my-cluster.config
  

Show Merged kubeconfig settings.

  kubectl config view 

Use multiple kubeconfig files at the same time and view merged config

  KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view
  KUBECONFIG=~/.kube/config:~/data/src/81_chariot_ws/k8s.dai/config.k8s.dai kubectl config view

Or

  kubectl --kubeconfig=~/data/src/81_chariot_ws/k8s.dai/config.k8s.dai config view

Get the password for the e2e user

  kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'

Display the current-context

  kubectl --kubeconfig=~/data/src/81_chariot_ws/k8s.dai/config.k8s.dai config current-context              

Set the default context to my-cluster-name

  kubectl --kubeconfig=~/data/src/81_chariot_ws/k8s.dai/config.k8s.dai config use-context my-cluster-name  

The trick is to always specify which kubeconfig to use:

  export KUBECONFIG=~/data/src/81_chariot_ws/k8s.dai/config.k8s.dai
  

Etc.

Add a new cluster to your kubeconfig that supports basic auth:

kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword

Set a context utilizing a specific username and namespace:

kubectl config set-context gce --user=cluster-admin --namespace=foo \
  && kubectl config use-context gce

4.3.2 Case 2: Add a Cluster to current config

Get the public certificate from your cluster or use --insecure-skip-tls-verify:

kubectl config set-cluster example --server https://example.com:6443 --certificate-authority=example.ca
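
Or, skipping certificate verification as mentioned above (not recommended beyond local experiments):

kubectl config set-cluster example --server https://example.com:6443 --insecure-skip-tls-verify=true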

Add a User

Users in the configuration can use a path to a certificate (--client-certificate) or use the certificate data directly (--client-certificate-data).

kubectl config set-credentials example \
  --client-certificate=/some/path/example.crt \
  --client-key=/some/path/example.key

Add a Context

Add a context to tie a user and cluster together.

kubectl config set-context deasil --cluster=example \
  --namespace=default --user=example-admin

Change Current Context

At this point you can change your current context from minikube to example:

kubectl config use-context example

Of course, kubectl config use-context minikube will put you back to managing your local Minikube.

4.3.3 Case 3: my way

Merge the two config files: copy the cluster, context, and user entries from one into the other.
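
One way to do the merge non-manually, assuming the second file was pulled to my-cluster.config as in 4.3.1 (--flatten inlines the certificate data so the result is self-contained):

# merge both files and write the result via a temp file, then verify with the commands below
KUBECONFIG=~/.kube/config:my-cluster.config kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config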

kubectl config view
kubectl config use-context kubernetes-admin@kubernetes 
Switched to context "kubernetes-admin@kubernetes"
kubectl config current-context

4.4 More ...

Port Forwarding / Local Development: check out kubefwd (https://github.com/txn2/kubefwd), a simple command line utility that bulk forwards services of one or more namespaces to your local workstation.
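
For a single service, plain kubectl port-forward covers the same need (hypothetical service name and ports):

kubectl port-forward svc/my-service 8080:80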

