Creating a Kubernetes Cluster


How do you create a Kubernetes cluster?

Provision three EC2 instances from AWS, then work through the following steps.
 

  1. First, use SSH to log in to all three machines, then elevate privileges using sudo.
    sudo su  
  2. Disable SELinux.
    setenforce 0
    sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
  3. Enable the br_netfilter module for cluster communication.
    modprobe br_netfilter
    echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
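    These settings last only until the next reboot. To persist them across reboots, one standard approach (not part of the original steps) is to write them to config files:
    echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
    echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-kubernetes.conf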
  4. Ensure that the Docker dependencies are satisfied.
    yum install -y yum-utils device-mapper-persistent-data lvm2
  5. Add the Docker repo and install Docker.
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum install -y docker-ce
  6. Set the cgroup driver for Docker to systemd, then reload systemd and enable and start Docker.
    sed -i '/^ExecStart/ s/$/ --exec-opt native.cgroupdriver=systemd/' /usr/lib/systemd/system/docker.service
    systemctl daemon-reload
    systemctl enable docker --now
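    To confirm that Docker is running and picked up the systemd cgroup driver, you can check:
    docker info | grep -i cgroup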
  7. Add the repo for Kubernetes.
    cat << EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
    https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
  8. Install Kubernetes.
    yum install -y kubelet kubeadm kubectl
  9. Enable the kubelet service. The kubelet service will fail to start until the cluster is initialized; this is expected.
    systemctl enable kubelet
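    As a quick sanity check, confirm the installed tools respond:
    kubeadm version
    kubectl version --client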

Note: Complete the following section on the MASTER ONLY!

  1. Initialize the cluster using the IP range for Flannel.
    kubeadm init --pod-network-cidr=10.244.0.0/16
  2. Copy the kubeadm join command from the output. We will need it later.
  3. Exit the root shell, then copy admin.conf to your home directory and take ownership of it.
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  4. Deploy Flannel.
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  5. Check the cluster state.
    kubectl get pods --all-namespaces
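    All pods, including the coredns and kube-flannel pods, should reach the Running state after a short time. If anything looks stuck, watching the pods can help:
    kubectl get pods --all-namespaces -w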

    Note: Complete the following steps on the NODES ONLY!

  6. Run the join command that you copied earlier; this requires running the command as sudo on the nodes. Then, from the master, check your nodes.
    kubectl get nodes
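    For reference, the join command copied from kubeadm init has roughly this shape (the IP, token, and hash are placeholders; use the values from your own output):
    sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>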
Create and scale a deployment using kubectl.
  1. Create a simple deployment.
    kubectl create deployment nginx --image=nginx
  2. Inspect the pod.
    kubectl get pods
  3. Scale the deployment.
    kubectl scale deployment nginx --replicas=4
  4. Inspect the pods. You should now have 4.
    kubectl get pods
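  5. (Optional) Expose the deployment through a NodePort service to make it reachable from outside the cluster, then inspect the service:
    kubectl expose deployment nginx --port=80 --type=NodePort
    kubectl get services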

Building a Kubernetes Cluster with Kubeadm


To build a cluster, you need a master node and worker nodes, and Docker must be installed on all of them.

1. Install Docker on all three nodes.
  1. Do the following on all three nodes:
    sudo apt install -y gpg-agent
    
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
    sudo apt-get update
    sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
    sudo apt-mark hold docker-ce
  2. Verify that Docker is up and running with:
    sudo systemctl status docker
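  3. (Optional) Run a test container as an end-to-end check:
    sudo docker run hello-world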

2. Install Kubeadm, Kubelet, and Kubectl on all three nodes.

  1. Install the Kubernetes components by running this on all three nodes:
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    EOF
    sudo apt-get update
    sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
    sudo apt-mark hold kubelet kubeadm kubectl
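  2. (Optional) Confirm the pinned versions installed correctly; both should report v1.12.7:
    kubeadm version
    kubectl version --client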
3. Bootstrap the cluster on the Kube master node.
  1. On the Kube master node, do this:
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    That command may take a few minutes to complete.

  2. When it is done, set up the local kubeconfig:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Take note that the kubeadm init command printed a long kubeadm join command to the screen.

    You will need that kubeadm join command in the next step!

  3. Run the following command on the Kube master node to verify it is up and running:
    kubectl version

    This command should return both a Client Version and a Server Version.

  4. Join the two Kube worker nodes to the cluster.

    1. Copy the kubeadm join command that was printed by the kubeadm init command earlier, with the token and hash. Run this command on both worker nodes, but make sure you add sudo in front of it:
      sudo kubeadm join $some_ip:6443 --token $some_token --discovery-token-ca-cert-hash $some_hash
    2. Now, on the Kube master node, make sure your nodes joined the cluster successfully:
      kubectl get nodes

      Verify that all three of your nodes are listed. It will look something like this:

      NAME            STATUS     ROLES    AGE   VERSION
      ip-10-0-1-101   NotReady   master   30s   v1.12.2
      ip-10-0-1-102   NotReady   <none>   8s    v1.12.2
      ip-10-0-1-103   NotReady   <none>   5s    v1.12.2

      Note that the nodes are expected to be in the NotReady state for now.

  5. Set up cluster networking with flannel.

    1. Turn on iptables bridge calls on all three nodes:
      echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
      sudo sysctl -p
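      You can confirm the setting took effect with:
      sysctl net.bridge.bridge-nf-call-iptables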
    2. Next, run this only on the Kube master node:
      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

      Now flannel is installed! Make sure it is working by checking the node status again:

      kubectl get nodes

      After a short time, all three nodes should be in the Ready state. If they are not all Ready the first time you run kubectl get nodes, wait a few moments and try again. It should look something like this:

      NAME            STATUS   ROLES    AGE   VERSION
      ip-10-0-1-101   Ready    master   85s   v1.12.2
      ip-10-0-1-102   Ready    <none>   63s   v1.12.2
      ip-10-0-1-103   Ready    <none>   60s   v1.12.2

 

Monitoring in Kubernetes with Prometheus and Grafana


My team is building a new serverless web application. They currently have it running on a Kubernetes cluster, but they need to monitor the performance of the cluster and the applications running on it. My task is to install and set up Prometheus to aggregate data and Grafana to display it. Both can be installed on the Kubernetes cluster itself. To make sure everything is working, I will need to create two dashboards in Grafana:

  1. Import the Kubernetes All Nodes community dashboard to display basic metrics about the Kubernetes cluster.
  2. Create a new dashboard and add a graph showing requests per minute for the train-schedule app.

You need to set up two servers:

1. Kubernetes Master
2. Kubernetes Node
Steps to Follow:

1. Log in to the Kubernetes master server and verify the nodes are listed:

kubectl get nodes

2. Initialize helm with: helm init --wait

cloud_user@ip-10-0-1-101:~$ helm init --wait
Creating /home/cloud_user/.helm
Creating /home/cloud_user/.helm/repository
Creating /home/cloud_user/.helm/repository/cache
Creating /home/cloud_user/.helm/repository/local
Creating /home/cloud_user/.helm/plugins
Creating /home/cloud_user/.helm/starters
Creating /home/cloud_user/.helm/cache/archive
Creating /home/cloud_user/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/cloud_user/.helm.
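
If helm init fails with permission errors on an RBAC-enabled cluster, Tiller (Helm v2's server-side component) may need a service account with cluster access. A common setup, shown here as a sketch (the account name tiller is a convention, not part of the original steps):

kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --wait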

3. Now you can use Helm to install Prometheus and Grafana. I am going to use the charts provided by Kubernetes, which you can get from https://github.com/kubernetes/charts

git clone https://github.com/kubernetes/charts
cd charts
git checkout efdcffe0b6973111ec6e5e83136ea74cdbe6527d
cd ../

cloud_user@ip-10-0-1-101:~$ git clone https://github.com/kubernetes/charts
Cloning into 'charts'...
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 41509 (delta 0), reused 1 (delta 0), pack-reused 41502
Receiving objects: 100% (41509/41509), 13.04 MiB | 25.28 MiB/s, done.
Resolving deltas: 100% (27058/27058), done.
Checking connectivity... done.
cloud_user@ip-10-0-1-101:~$ cd charts
cloud_user@ip-10-0-1-101:~/charts$ git checkout efdcffe0b6973111ec6e5e83136ea74cdbe6527d
Note: checking out 'efdcffe0b6973111ec6e5e83136ea74cdbe6527d'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at efdcffe... [stable/prometheus] Ability to enable admin API (#5570)
cloud_user@ip-10-0-1-101:~/charts$ cd ../

4. Create a prometheus-values.yml for Prometheus to turn off persistent storage:

cloud_user@ip-10-0-1-101:~$ vi prometheus-values.yml
alertmanager:
  persistentVolume:
    enabled: false
server:
  persistentVolume:
    enabled: false
5. Use helm to install Prometheus in the prometheus namespace:

cloud_user@ip-10-0-1-101:~$ helm install -f ~/prometheus-values.yml ~/charts/stable/prometheus --name prometheus --namespace prometheus
NAME:   prometheus
LAST DEPLOYED: Mon Oct  8 19:49:37 2018
NAMESPACE: prometheus
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                           SECRETS  AGE
prometheus-alertmanager        1        2s
prometheus-kube-state-metrics  1        2s
prometheus-node-exporter       1        2s
prometheus-pushgateway         1        2s
prometheus-server              1        2s

==> v1/ConfigMap
NAME                     DATA  AGE
prometheus-alertmanager  1     2s
prometheus-server        3     2s

==> v1beta1/ClusterRole
NAME                           AGE
prometheus-kube-state-metrics  2s
prometheus-server              2s

==> v1beta1/ClusterRoleBinding
NAME                           AGE
prometheus-kube-state-metrics  2s
prometheus-server              2s

==> v1/Service
NAME                           TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
prometheus-alertmanager        ClusterIP  10.111.155.200  <none>       80/TCP    1s
prometheus-kube-state-metrics  ClusterIP  None            <none>       80/TCP    1s
prometheus-node-exporter       ClusterIP  None            <none>       9100/TCP  1s
prometheus-pushgateway         ClusterIP  10.101.32.144   <none>       9091/TCP  1s
prometheus-server              ClusterIP  10.106.116.125  <none>       80/TCP    1s

==> v1beta1/DaemonSet
NAME                      DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
prometheus-node-exporter  1        1        0      1           0          <none>         1s

==> v1beta1/Deployment
NAME                           DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
prometheus-alertmanager        1        1        1           0          1s
prometheus-kube-state-metrics  1        1        1           0          1s
prometheus-pushgateway         1        1        1           0          1s
prometheus-server              1        1        1           0          1s

==> v1/Pod(related)
NAME                                            READY  STATUS             RESTARTS  AGE
prometheus-node-exporter-rpdkx                  0/1    ContainerCreating  0         1s
prometheus-alertmanager-6df98765f4-vml9x        0/2    Pending            0         1s
prometheus-kube-state-metrics-6584885ccf-4zxrl  0/1    ContainerCreating  0         1s
prometheus-pushgateway-66c9fdb48f-2znbs         0/1    ContainerCreating  0         1s
prometheus-server-65d5cc8544-jwwvr              0/2    Init:0/1           0         1s

==> v1/PersistentVolumeClaim
NAME                     STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
prometheus-alertmanager  Pending                                                2s

NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.prometheus.svc.cluster.local

Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace prometheus port-forward $POD_NAME 9090

#################################################################################
######   WARNING: Persistence is disabled!!! You will lose your data when   #####
######            the Server pod is terminated.                             #####
#################################################################################

The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-alertmanager.prometheus.svc.cluster.local

Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace prometheus port-forward $POD_NAME 9093

The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-pushgateway.prometheus.svc.cluster.local

Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace prometheus port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
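
To verify the release, list the pods in the prometheus namespace and wait for them all to reach Running:

kubectl get pods --namespace prometheus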

6. Create a grafana-values.yml for Grafana to set an admin password:

cloud_user@ip-10-0-1-101:~$ vi grafana-values.yml
adminPassword: password

7. Use helm to install Grafana in the grafana namespace:

helm install -f ~/grafana-values.yml ~/charts/stable/grafana --name grafana --namespace grafana

8. Deploy a NodePort service to provide external access to Grafana. Make a file called grafana-ext.yml, as sketched below.

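A minimal sketch of what grafana-ext.yml could contain (the selector label and node port here are assumptions based on the release name grafana; check the actual pod labels with kubectl get pods -n grafana --show-labels and adjust):

kind: Service
apiVersion: v1
metadata:
  namespace: grafana
  name: grafana-ext
spec:
  type: NodePort
  selector:
    app: grafana              # assumed label from the stable/grafana chart
  ports:
  - protocol: TCP
    port: 3000                # Grafana's default listening port
    nodePort: 30080           # assumed value; must be within the NodePort range (default 30000-32767)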

And deploy the service:

kubectl apply -f ~/grafana-ext.yml

Apache Camel using Docker and Kubernetes with fabric8


A quick screencast demonstrates the camel-example-servlet example, a simple Camel Tomcat application that has been packaged as a Docker container and deployed on a Kubernetes platform. With the fabric8 web console, we can easily manage and scale the application up and down.

Apache Camel using docker and kubernetes with fabric8 from JBoss Developer on Vimeo.