Creating a Kubernetes Cluster


How do you create a Kubernetes cluster?

Provision three EC2 instances from AWS, then complete the following steps.
 

  1. Use SSH to log in to all three machines. Once logged in, elevate privileges using sudo.
    sudo su  
  2. Disable SELinux.
    setenforce 0
    sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
  3. Enable the br_netfilter module for cluster communication.
    modprobe br_netfilter
    echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
  4. Ensure that the Docker dependencies are satisfied.
    yum install -y yum-utils device-mapper-persistent-data lvm2
  5. Add the Docker repo and install Docker.
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum install -y docker-ce
  6. Set the cgroup driver for Docker to systemd, then reload systemd and enable and start Docker.
    sed -i '/^ExecStart/ s/$/ --exec-opt native.cgroupdriver=systemd/' /usr/lib/systemd/system/docker.service
    systemctl daemon-reload
    systemctl enable docker --now
  7. Add the repo for Kubernetes.
    cat << EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
    https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
  8. Install Kubernetes.
    yum install -y kubelet kubeadm kubectl
  9. Enable the kubelet service. The kubelet service will fail to start until the cluster is initialized; this is expected.
    systemctl enable kubelet
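The SELinux step above is just a text substitution in a config file. A minimal sketch of what the sed in step 2 rewrites, run here against sample text rather than the real /etc/sysconfig/selinux:

```shell
# Demonstrate the step-2 sed on sample config text (not the real file).
config='SELINUX=enforcing
SELINUXTYPE=targeted'
echo "$config" | sed 's/SELINUX=enforcing/SELINUX=disabled/g'
# prints:
# SELINUX=disabled
# SELINUXTYPE=targeted
```

The real command in step 2 also passes --follow-symlinks because /etc/sysconfig/selinux is a symlink to /etc/selinux/config.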

Note: Complete the following section on the MASTER ONLY!

  1. Initialize the cluster using the IP range for Flannel.
    kubeadm init --pod-network-cidr=10.244.0.0/16
  2. Copy the kubeadm join command from the output. We will need it later.
  3. Exit sudo and copy the admin.conf to your home directory and take ownership.
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  4. Deploy Flannel.
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  5. Check the cluster state.
    kubectl get pods --all-namespaces
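The join command from step 2 is easy to lose, so it is worth saving to a file. A sketch of grepping it out of the kubeadm init output, demonstrated against a hypothetical captured log (the token, hash, and IP below are made up; on a real master you would first run `kubeadm init ... | tee kubeadm-init.log`):

```shell
# Pull the 'kubeadm join' line out of saved kubeadm init output.
# init_log is a hypothetical sample of that output.
init_log='Your Kubernetes master has initialized successfully!

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.1.101:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1234abcd'

echo "$init_log" | grep 'kubeadm join' | sed 's/^ *//' > join-command.sh
cat join-command.sh
```

If the command is lost entirely, a fresh one can usually be generated on the master with `kubeadm token create --print-join-command`.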

    Note: Complete the following steps on the NODES ONLY!

     

  6. Run the join command that you copied earlier; it must be run with sudo on each node. Then check your nodes from the master.
    kubectl get nodes
Create and scale a deployment using kubectl.
  1. Create a simple deployment.
    kubectl create deployment nginx --image=nginx
  2. Inspect the pod.
    kubectl get pods
  3. Scale the deployment.
    kubectl scale deployment nginx --replicas=4
  4. Inspect the pods. You should now have 4.
    kubectl get pods
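The scale step can be verified mechanically by counting pods. A sketch of the counting logic, run here against hypothetical `kubectl get pods` output (the pod names are made up for illustration):

```shell
# Count the nginx pods in kubectl get pods output (sample shown here;
# on the master you would use: pods=$(kubectl get pods)).
pods='NAME                     READY   STATUS    RESTARTS   AGE
nginx-65899c769f-2xwr8   1/1     Running   0          10s
nginx-65899c769f-5q7mp   1/1     Running   0          10s
nginx-65899c769f-8k2ld   1/1     Running   0          10s
nginx-65899c769f-zv9tn   1/1     Running   0          10s'
count=$(echo "$pods" | tail -n +2 | grep -c '^nginx-')
echo "nginx pods: $count"   # prints: nginx pods: 4
```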

Building a Kubernetes Cluster with Kubeadm


To build a cluster, you need a master node and worker nodes, and Docker must be installed on all of them.

1. Install Docker on all three nodes.
  1. Do the following on all three nodes:
    sudo apt install -y gpg-agent
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
    sudo apt-get update
    sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
    sudo apt-mark hold docker-ce
  2. Verify that Docker is up and running with:
    sudo systemctl status docker

2. Install Kubeadm, Kubelet, and Kubectl on all three nodes.

  1. Install the Kubernetes components by running this on all three nodes:
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    EOF
    sudo apt-get update
    sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
    sudo apt-mark hold kubelet kubeadm kubectl
3. Bootstrap the cluster on the Kube master node.
  1. On the Kube master node, do this:
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    That command may take a few minutes to complete.

  2. When it is done, set up the local kubeconfig:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Take note that the kubeadm init command printed a long kubeadm join command to the screen.

    You will need that kubeadm join command in the next step!

  3. Run the following command on the Kube master node to verify it is up and running:
    kubectl version

    This command should return both a Client Version and a Server Version.

  4. Join the two Kube worker nodes to the cluster.

    1. Copy the kubeadm join command that was printed by the kubeadm init command earlier, with the token and hash. Run this command on both worker nodes, but make sure you add sudo in front of it:
      sudo kubeadm join $some_ip:6443 --token $some_token --discovery-token-ca-cert-hash $some_hash
    2. Now, on the Kube master node, make sure your nodes joined the cluster successfully:
      kubectl get nodes

      Verify that all three of your nodes are listed. It will look something like this:

      NAME            STATUS     ROLES    AGE   VERSION
      ip-10-0-1-101   NotReady   master   30s   v1.12.2
      ip-10-0-1-102   NotReady   <none>   8s    v1.12.2
      ip-10-0-1-103   NotReady   <none>   5s    v1.12.2

      Note that the nodes are expected to be in the NotReady state for now.

  5. Set up cluster networking with flannel.

    1. Turn on iptables bridge calls on all three nodes:
      echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
      sudo sysctl -p
    2. Next, run this only on the Kube master node:
      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

      Now flannel is installed! Make sure it is working by checking the node status again:

      kubectl get nodes

      After a short time, all three nodes should be in the Ready state. If they are not all Ready the first time you run kubectl get nodes, wait a few moments and try again. It should look something like this:

      NAME            STATUS   ROLES    AGE   VERSION
      ip-10-0-1-101   Ready    master   85s   v1.12.2
      ip-10-0-1-102   Ready    <none>   63s   v1.12.2
      ip-10-0-1-103   Ready    <none>   60s   v1.12.2
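The "wait until Ready" check can be scripted by counting the nodes whose STATUS column is not Ready. A sketch of the parsing, demonstrated against sample output like the listing above (on the master you would use `nodes=$(kubectl get nodes)` instead):

```shell
# Count nodes not yet Ready in kubectl get nodes output (sample shown).
nodes='NAME            STATUS   ROLES    AGE   VERSION
ip-10-0-1-101   Ready    master   85s   v1.12.2
ip-10-0-1-102   Ready    <none>   63s   v1.12.2
ip-10-0-1-103   Ready    <none>   60s   v1.12.2'
not_ready=$(echo "$nodes" | tail -n +2 | awk '$2 != "Ready"' | wc -l)
echo "nodes not ready: $not_ready"   # prints: nodes not ready: 0
```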

 

DevOps Roadmap – 2019


Road to becoming a DevOps Engineer

[Image: devops.png — DevOps roadmap diagram]

 

source: https://github.com/kamranahmedse/developer-roadmap#devops-roadmap

Install Docker from the Default CentOS 7 Repository


After logging into the server, install the latest version of Docker using yum.

$ sudo yum -y install docker

Once the installation completes, enable and start the service using systemd.

Create a new group named docker, then add the cloud_user user to the group.

 

# groupadd docker
# usermod -aG docker cloud_user
# systemctl enable --now docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd
/system/docker.service.
[root@ip-10-0-1-121 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES

Logout from Root

ssh cloud_user@<your_ip_address>
Enter the password when prompted.

[cloud_user@ip-10-0-1-121 ~]$ ssh cloud_user@10.0.1.121
The authenticity of host '10.0.1.121 (10.0.1.121)' can't be established.
ECDSA key fingerprint is SHA256:URC6viTN+8b87rv0V7t9lsCDiq5fwAlguWIYtQSbfCI.
ECDSA key fingerprint is MD5:ac:49:84:5e:c2:53:c7:d1:a2:d8:70:80:61:31:df:60.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
Warning: Permanently added '10.0.1.121' (ECDSA) to the list of known hosts.
Password:
[cloud_user@ip-10-0-1-121 ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
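The logout and re-login matter because group membership is only re-read at login. A sketch of the membership check, shown here against a hypothetical groups list (on the server you would use `groups_output=$(id -nG cloud_user)`):

```shell
# Check whether a groups list contains 'docker' (sample list shown).
groups_output='cloud_user adm wheel docker'
case " $groups_output " in
  *" docker "*) echo "cloud_user is in the docker group" ;;
  *)            echo "re-login (or usermod -aG docker) required" ;;
esac
```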

Docker is installed and available to test.

Build Automation Gradle Interview Questions


What does the command ./gradlew do when it is run from within a Gradle project’s directory?
– It runs the Gradle Wrapper.

What does a smoke test do?
– Tests large-scale, basic things like whether or not the application runs.

What can you use to take advantage of pre-built Gradle tasks created by the Gradle community?
– Gradle plugins

Which of the following are the kinds of tasks build automation tools usually handle? (Choose all that apply)

– Compiling code
– Packaging the app for deployment
– Dependency management

What does Gradle do?
– Build Automation

Which type of testing tests the smallest amount of code in a single test?
– Unit test

Which of the following does declaring a dependency between two tasks in Gradle do? (Choose all that apply)

– It ensures that when a task is called, the task that it depends on also runs.
– It ensures that if both tasks are run, the task which is a dependency will always run before the one that depends on it.

What software does Gradle require to be installed on a system before it can run?
– Java JDK 7 or higher

What does build automation do?
– It automates the process of processing source code in preparation for deployment.