When should you use Lambda over EC2?


Lambda is a good fit when your workload is event-driven, for example when it needs to respond to:

– Changes to Amazon S3 buckets
– Updates to Amazon DynamoDB tables
– Custom events generated by your application or devices
– Kinesis streams

Monitoring in Kubernetes with Prometheus and Grafana


My team is building a new serverless web application. They currently have it running on a Kubernetes cluster, but they need to monitor the performance of the cluster and the applications running on it. My task is to install and set up Prometheus to aggregate data and Grafana to display it. Both can be installed on the Kubernetes cluster itself. To make sure everything is working, you will need to create two dashboards in Grafana:

  1. Import the Kubernetes All Nodes community dashboard to display basic metrics about the Kubernetes cluster.
  2. Create a new Dashboard and add a graph showing requests per minute for the train-schedule app.
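
For the second dashboard, the graph panel just needs a PromQL query that turns a request counter into a per-minute rate. A minimal sketch, assuming the train-schedule app exposes a counter named http_requests_total (substitute whatever metric name the app actually exports):

# per-second request rate averaged over the last minute, scaled to requests per minute
sum(rate(http_requests_total[1m])) * 60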

You need to set up two servers:

1. Kubernetes Master
2. Kubernetes Node
Steps to Follow:

1. Log in to the Kubernetes master server and check that both nodes are ready:

kubectl get nodes
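
The exact names, ages, and versions will differ in your cluster (the node names below are purely illustrative), but both nodes should report a Ready status:

NAME            STATUS   ROLES    AGE   VERSION
<master-node>   Ready    master   ...   v1.x
<worker-node>   Ready    <none>   ...   v1.x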

2. Initialize Helm with: helm init --wait

cloud_user@ip-10-0-1-101:~$ helm init --wait
Creating /home/cloud_user/.helm
Creating /home/cloud_user/.helm/repository
Creating /home/cloud_user/.helm/repository/cache
Creating /home/cloud_user/.helm/repository/local
Creating /home/cloud_user/.helm/plugins
Creating /home/cloud_user/.helm/starters
Creating /home/cloud_user/.helm/cache/archive
Creating /home/cloud_user/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/cloud_user/.helm.
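
Optionally, you can confirm that the Tiller server pod came up before installing anything; with Helm 2, helm version prints both the client and the server version once Tiller is reachable:

helm version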

3. Now you can use Helm to install Prometheus and Grafana. I am going to use the charts provided by Kubernetes, which you can get from https://github.com/kubernetes/charts:

git clone https://github.com/kubernetes/charts
cd charts
git checkout efdcffe0b6973111ec6e5e83136ea74cdbe6527d
cd ../

cloud_user@ip-10-0-1-101:~$ git clone https://github.com/kubernetes/charts
Cloning into 'charts'...
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 41509 (delta 0), reused 1 (delta 0), pack-reused 41502
Receiving objects: 100% (41509/41509), 13.04 MiB | 25.28 MiB/s, done.
Resolving deltas: 100% (27058/27058), done.
Checking connectivity... done.
cloud_user@ip-10-0-1-101:~$ cd charts
cloud_user@ip-10-0-1-101:~/charts$ git checkout efdcffe0b6973111ec6e5e83136ea74cdbe6527d
Note: checking out 'efdcffe0b6973111ec6e5e83136ea74cdbe6527d'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b <new-branch-name>

HEAD is now at efdcffe... [stable/prometheus] Ability to enable admin API (#5570)
cloud_user@ip-10-0-1-101:~/charts$ cd ../

4. Create a prometheus-values.yml for Prometheus to turn off persistent storage:

cloud_user@ip-10-0-1-101:~$ vi prometheus-values.yml
alertmanager:
  persistentVolume:
    enabled: false
server:
  persistentVolume:
    enabled: false

5. Use helm to install Prometheus in the prometheus namespace:

cloud_user@ip-10-0-1-101:~$ helm install -f ~/prometheus-values.yml ~/charts/stable/prometheus --name prometheus --namespace prometheus
NAME: prometheus
LAST DEPLOYED: Mon Oct 8 19:49:37 2018
NAMESPACE: prometheus
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                           SECRETS  AGE
prometheus-alertmanager        1        2s
prometheus-kube-state-metrics  1        2s
prometheus-node-exporter       1        2s
prometheus-pushgateway         1        2s
prometheus-server              1        2s

==> v1/ConfigMap
NAME                     DATA  AGE
prometheus-alertmanager  1     2s
prometheus-server        3     2s

==> v1beta1/ClusterRole
NAME                           AGE
prometheus-kube-state-metrics  2s
prometheus-server              2s

==> v1beta1/ClusterRoleBinding
NAME                           AGE
prometheus-kube-state-metrics  2s
prometheus-server              2s

==> v1/Service
NAME                           TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
prometheus-alertmanager        ClusterIP  10.111.155.200  <none>       80/TCP    1s
prometheus-kube-state-metrics  ClusterIP  None            <none>       80/TCP    1s
prometheus-node-exporter       ClusterIP  None            <none>       9100/TCP  1s
prometheus-pushgateway         ClusterIP  10.101.32.144   <none>       9091/TCP  1s
prometheus-server              ClusterIP  10.106.116.125  <none>       80/TCP    1s

==> v1beta1/DaemonSet
NAME                      DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
prometheus-node-exporter  1        1        0      1           0          <none>         1s

==> v1beta1/Deployment
NAME                           DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
prometheus-alertmanager        1        1        1           0          1s
prometheus-kube-state-metrics  1        1        1           0          1s
prometheus-pushgateway         1        1        1           0          1s
prometheus-server              1        1        1           0          1s

==> v1/Pod(related)
NAME                                            READY  STATUS             RESTARTS  AGE
prometheus-node-exporter-rpdkx                  0/1    ContainerCreating  0         1s
prometheus-alertmanager-6df98765f4-vml9x        0/2    Pending            0         1s
prometheus-kube-state-metrics-6584885ccf-4zxrl  0/1    ContainerCreating  0         1s
prometheus-pushgateway-66c9fdb48f-2znbs         0/1    ContainerCreating  0         1s
prometheus-server-65d5cc8544-jwwvr              0/2    Init:0/1           0         1s

==> v1/PersistentVolumeClaim
NAME                     STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
prometheus-alertmanager  Pending                                                2s

NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.prometheus.svc.cluster.local

Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace prometheus port-forward $POD_NAME 9090

#################################################################################
######   WARNING: Persistence is disabled!!! You will lose your data when   #####
######            the Server pod is terminated.                             #####
#################################################################################

The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-alertmanager.prometheus.svc.cluster.local

Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace prometheus port-forward $POD_NAME 9093

The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-pushgateway.prometheus.svc.cluster.local

Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace prometheus port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
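
Before moving on, it is worth confirming that the Prometheus pods reach the Running state (the generated pod-name suffixes will differ in your cluster):

kubectl get pods -n prometheus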

6. Create a grafana-values.yml for Grafana to set an admin password:

cloud_user@ip-10-0-1-101:~$ vi grafana-values.yml
adminPassword: password

7. Use helm to install Grafana in the grafana namespace:

helm install -f ~/grafana-values.yml ~/charts/stable/grafana --name grafana --namespace grafana

8. Deploy a NodePort service to provide external access to Grafana. Make a file called grafana-ext.yml (a sample manifest is sketched below):

grafana-ext.yml
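
The lab text does not reproduce the file contents, so here is a minimal sketch of what grafana-ext.yml could look like. It assumes the stable/grafana chart labels its pods with app: grafana, that Grafana listens on its default port 3000, and it picks an arbitrary node port (30080) from the default 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: grafana-ext
  namespace: grafana
spec:
  type: NodePort
  selector:
    app: grafana        # assumed label from the stable/grafana chart
  ports:
    - port: 3000        # Grafana's default HTTP port
      targetPort: 3000
      nodePort: 30080   # arbitrary choice from the default NodePort range

Once the service is applied, Grafana should be reachable at http://<node-public-ip>:30080 with the admin password set in grafana-values.yml.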

And deploy the service:   kubectl apply -f ~/grafana-ext.yml

Endereum True Blockchain Ecosystem


Endereum, the true blockchain ecosystem: Aryan Nava, CTO of Endereum, speaking with WOW TV.


Alibaba Cloud Security


Alibaba Cloud provides multi-level security to make data safer.


Mapping From AWS to Alibaba Cloud


How do you map AWS services to the Alibaba Cloud platform?

Alibaba Cloud provides the following database services:

ApsaraDB for MySQL
ApsaraDB for SQL Server
ApsaraDB for PostgreSQL
PPAS and POLARDB