AWS Certified Solutions Architect IAM Questions and Answers – Sample

1. What best describes an IAM role?

A. A role is used when configuring multi-factor authentication.
B. A role is a policy that determines an IAM user’s access to AWS resources.
C. A role is something that a user, application or service can “assume” to receive temporary security credentials that provide access to a resource.
D. A role is a policy that is applied directly to an AWS resource, such as an EC2 instance.

Correct Answer : C
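The temporary-credentials flow described in option C can be sketched with the AWS CLI. The role ARN and session name below are hypothetical placeholders, not values from a real account:

```shell
# Assume a role and receive temporary security credentials (hypothetical ARN).
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/example-app-role \
  --role-session-name example-session

# The response contains temporary credentials (AccessKeyId, SecretAccessKey,
# SessionToken) plus an Expiration timestamp, which the caller then uses to
# access whatever the role's attached policies allow.
```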

2. You work for a large consulting firm that has just hired a junior consultant, named Jessica, who will be working on a large AWS project. She will be working remotely and, therefore, is not present in the office. You create a new IAM user for her named “Jessica” in your company’s AWS account. On Jessica’s first day, you ask her to make a change to a CloudWatch alarm in an Auto Scaling group. Jessica reports back that she does not have access to CloudWatch or Auto Scaling in the AWS console. What is a possible explanation for this?

A. Only IAM account admins can make changes to Auto Scaling groups.
B. Because she is working remotely, she would need to SSH into the instances in the Auto Scaling group via her terminal to make the changes
C. You have not added the appropriate IAM permissions and access policies to her IAM user.
D. When you created the new user, you forgot to assign access keys.

Correct Answer : C
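A sketch of the fix in option C: an illustrative IAM policy document granting the CloudWatch and Auto Scaling actions Jessica would need. The action list here is a simplified assumption for the example, not an official AWS managed policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:DescribeAlarms",
        "cloudwatch:PutMetricAlarm",
        "autoscaling:Describe*",
        "autoscaling:UpdateAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
```

Attaching a policy like this to her IAM user (or to a group she belongs to) would make the CloudWatch and Auto Scaling consoles usable for her.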

3. Which of the following are managed using IAM? (Choose 2)

A. Multi-Factor Authentication
B. Bucket Policies
C. Billing Reports
D. Roles

Correct Answer: A, D

4. When requested through an STS API call, credentials are returned with what components?

A. Signed URL, Security Token, Username
B. Security Token, Access Key ID, Secret Access Key, Expiration
C. Security Token, Secret Access Key, Personal Pin Code
D. Security Token, Access Key ID, Signed URL

Correct Answer: B
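For reference, the response to an STS call such as get-session-token contains exactly the four components named in option B. The values below are fabricated placeholders:

```json
{
  "Credentials": {
    "AccessKeyId": "ASIAEXAMPLEKEYID",
    "SecretAccessKey": "examplesecretaccesskey",
    "SessionToken": "exampletoken...",
    "Expiration": "2018-09-20T12:00:00Z"
  }
}
```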

5. API Access Keys are required in which scenarios below? (Choose 2)

A. Retrieving data from an ElastiCache cluster.
B. On-premises servers connecting to RDS databases.
C. Using the AWS CLI.
D. Windows PowerShell.
E. Managing AWS resources through the AWS console.

Correct Answer: C, D

6. You would like to use STS to allow end users to authenticate from third-party providers such as Facebook, Google, and Amazon. What is this type of authentication called?

A. Web Identity Federation
B. Cross-Account Access
C. Enterprise Identity Federation
D. Commercial Federation

Correct Answer: A
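Web Identity Federation exchanges a token issued by the third-party identity provider for temporary AWS credentials. A sketch with a hypothetical role ARN and token value:

```shell
# Exchange an OAuth/OIDC token from Facebook, Google, or Amazon for
# temporary AWS credentials (the ARN and token below are placeholders).
aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::123456789012:role/example-web-role \
  --role-session-name example-web-session \
  --web-identity-token "token-from-identity-provider"
```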

7. Which of the following is NOT required as part of AWS’s suggested “best practices” for new accounts?

A. Delete the root account
B. Create individual IAM users
C. Use user groups to assign permissions
D. Apply an IAM password policy

Correct Answer: A

8. You have hired an engineer, Kathy Johnson, and have created an IAM user for her in the company’s AWS account. She will be overseeing the company’s DynamoDB database, so you attached the “AmazonDynamoDBFullAccess” IAM policy to her IAM user. Six months later, Kathy was promoted to a manager and you added her to the “Managers” IAM group. The “Managers” group does not have the “AmazonDynamoDBFullAccess” policy attached to it. What will happen to Kathy’s DynamoDB access?
A. It is not possible for an IAM group to have IAM permission policies; they need to be placed at the user level.
B. Nothing, as an IAM user can have multiple IAM permission policies attached at the same time, either directly to the user or through an associated IAM group. The multiple policies are combined and evaluated together.
C. Only one IAM policy can be attached to a user at a time. You would need to create another IAM user for her to perform her DynamoDB activities.
D. You would need to remove the DynamoDB policy from her IAM user and add it to the “Managers” group policy.

Correct Answer: B
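An IAM user’s effective permissions combine policies attached directly to the user with those attached to the user’s groups. Both sets can be inspected with the AWS CLI; the user and group names below follow the scenario above and are illustrative:

```shell
# Policies attached directly to the user:
aws iam list-attached-user-policies --user-name kathy.johnson

# Groups the user belongs to, and each group's attached policies:
aws iam list-groups-for-user --user-name kathy.johnson
aws iam list-attached-group-policies --group-name Managers
```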




Play with Docker Classroom

The Play with Docker classroom brings you labs and tutorials that help you get hands-on experience using Docker. In this classroom you will find a mix of labs and tutorials that will help Docker users, including SysAdmins, IT Pros, and Developers. There is a mix of hands-on tutorials right in the browser, instructions on setting up and using Docker in your own environment, and resources about best practices for developing and deploying your own applications.

We recommend you start with one of our Getting Started Guides, and then explore the individual labs that cover many advanced features of Docker. For a comprehensive approach to understanding Docker, choose the journey that fits you best: IT Pros and System Administrators, or Developers.

Getting Started Walk-through for IT Pros and System Administrators

Learn more about Docker, how it works and how it can help you deploy secure, scalable applications and save money along the way.






How I Built an Elastic Stack for Docker Swarm Using Docker Application Packages (docker-app)

Let’s begin with the problem statement!

Docker Hub is a cloud-based registry service that lets you link to code repositories, build and test your images, and store manually pushed images so you can deploy them to your hosts. It provides a centralized resource for container image discovery, distribution, and change management, as well as workflow automation throughout the development pipeline. We share Docker images all the time, but let’s agree to the fact that we don’t have a good way of sharing the multi-service applications that use them.

Let us take the example of the Elastic Stack. Built on an open source foundation, the Elastic Stack lets you reliably and securely take data from any source, in any format, and search, analyze, and visualize it in real time, with the help of Elasticsearch, Logstash, Kibana, and multiple other tools and techniques. In order to run these tools as containers, one needs to build a Docker image for each of them. The recommended way is to write a Dockerfile for each tool; Docker Compose then uses these images to build the required services. Whenever the docker stack deploy CLI is used to deploy the application stack, these Docker images are pulled from Docker Hub the first time and then picked up locally once downloaded to your system. What if you could upload your whole application stack to Docker Hub? Yes, it’s possible today, and docker-app is the tool that makes Compose-based applications shareable on Docker Hub and DTR.

Docker-app v0.5.0 is now available!

Docker Application Package v0.5.0 is the latest offering from Docker, Inc. You can download it from this link. The binaries are available for the Linux, Windows, and macOS platforms. If you are looking for the source code, this is the direct link.


The docker-app v0.5.0 comes with notable features and improvements which are listed below:

  • The improved docker-app inspect command now shows a summary of services, networks, volumes, and secrets.

  • The docker-app push CLI now works on Windows and bypasses the local docker daemon by talking directly to the registry.
  • The docker-app save and docker-app ls commands have been deprecated.
  • All commands now accept an application package as a URL.
  • The docker-app push command now accepts a custom repository name.
  • The docker-app completion command can generate zsh completion in addition to bash.

In my last blog post, I talked about docker-app for the first time and showcased its usage soon after I returned from DockerCon. In this post, I will show how I built an Elastic Stack using docker-app on a 5-node Docker Swarm cluster.


  • Click on the icon next to the instance to choose 3 manager and 2 worker nodes

Deploy 5 Node Docker Swarm Cluster

$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
iy9mbeduxd4mmjxoikbn5ulds *   manager1   Ready    Active         Reachable        18.03.1-ce
mx916kgqg6gfgqdr2gn1eksxy     manager2   Ready    Active         Leader           18.03.1-ce
xaeq943o84g9spy6mebj64tw3     manager3   Ready    Active         Reachable        18.03.1-ce
8umdv6m82nrpevuris1e45wnq     worker1    Ready    Active                          18.03.1-ce
o3yobqgg7wjvjw2ec5ythszgw     worker2    Ready    Active                          18.03.1-ce

Cloning the Repository

$ git clone
Cloning into 'app'...
remote: Enumerating objects: 134, done.
remote: Counting objects: 100% (134/134), done.
remote: Compressing objects: 100% (134/134), done.
remote: Total 14511 (delta 95), reused 0 (delta 0), pack-reused 14377
Receiving objects: 100% (14511/14511), 17.37 MiB | 13.35 MiB/s, done.
Resolving deltas: 100% (5391/5391), done.

Install Docker-app

$ cd app/examples/elk/
[manager1] (local) root@ ~/app/examples/elk
$ ls
devel              elk.dockerapp      install-dockerapp  prod
[manager1] (local) root@ ~/app/examples/elk
$ chmod +x install-dockerapp
[manager1] (local) root@ ~/app/examples/elk
$ sh install-dockerapp
Connecting to ( 100% |*************************************************************|  8895k  0:00:00 ETA
[manager1] (local) root@ ~/app/examples/elk

Verifying Docker-app Version

$ docker-app version
Version:      v0.4.0
Git commit:   525d93bc
Built:        Tue Aug 21 13:02:46 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

I assume you have a Docker Compose file for the ELK stack application already available with you. If not, you can download a sample file from this link. Place this YAML file under the same directory (app/examples/elk/). Now, with docker-app installed, let’s create an Application Package based on this Compose file:

$ docker-app init elk

Once you run the above command, it creates a new directory, elk.dockerapp/, that contains three different YAML files:

$ ls
docker-compose.yml  elk.dockerapp
[manager1] (local) root@ ~/myelk
$ tree elk.dockerapp/
elk.dockerapp/
├── docker-compose.yml
├── metadata.yml
└── settings.yml

0 directories, 3 files

Edit each of these files so they look similar to the ones placed under this link.
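As an illustration of what the settings file can look like, here is a partial sketch of a settings.yml; the key names are assumptions based on the settings that docker-app inspect prints later in this post, so check the linked repository for the exact file:

```yaml
# Illustrative settings.yml (partial); the real file lives in the linked repository.
elasticsearch:
  deploy:
    mode: replicated
    replicas: 2
kibana:
  deploy:
    mode: replicated
    replicas: 2
  port: 5601
logstash:
  deploy:
    mode: replicated
    replicas: 2
```

Values defined here are substituted into docker-compose.yml when the application is rendered or deployed.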

Rendering Docker Compose file

$ docker-app render elk
version: "3.4"
services:
  elasticsearch:
    command:
    - elasticsearch
    deploy:
      mode: replicated
      replicas: 2
    environment:
      ES_JAVA_OPTS: -Xms2g -Xmx2g
    image: elasticsearch:5
    networks:
      elk: null
    volumes:
    - type: volume
      target: /usr/share/elasticsearch/data
  kibana:
    deploy:
      mode: replicated
      replicas: 2
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    healthcheck:
      test:
      - CMD-SHELL
      - wget -qO- http://localhost:5601 > /dev/null
      interval: 30s
      retries: 3
    image: kibana:latest
    networks:
      elk: null
    ports:
    - mode: ingress
      target: 5601
      published: 5601
      protocol: tcp
  logstash:
    command:
    - sh
    - -c
    - logstash -e 'input { syslog  { type => syslog port => 10514   } gelf { } } output
      { stdout { codec => rubydebug } elasticsearch { hosts => [ "elasticsearch" ] } }'
    deploy:
      mode: replicated
      replicas: 2
    hostname: logstash
    image: logstash:alpine
    networks:
      elk: null
    ports:
    - mode: ingress
      target: 10514
      published: 10514
      protocol: tcp
    - mode: ingress
      target: 10514
      published: 10514
      protocol: udp
    - mode: ingress
      target: 12201
      published: 12201
      protocol: udp
networks:
  elk: {}

Setting the kernel parameter for ELK stack

sysctl -w vm.max_map_count=262144
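Note that vm.max_map_count is a per-host kernel setting, so Elasticsearch needs it on every Swarm node where its tasks can be scheduled, and sysctl -w alone is lost on reboot. A sketch of applying it persistently (run on each node):

```shell
# Apply the setting now, and persist it across reboots on this node.
sudo sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```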

Deploying the Application Stack

[manager1] (local) root@ ~/app/examples/elk
$ docker-app deploy elk --settings-files elk.dockerapp/settings.yml
Creating network elk_elk
Creating service elk_kibana
Creating service elk_logstash
Creating service elk_elasticsearch
[manager1] (local) root@ ~/app/examples/elk

Inspecting ELK Stack

[manager1] (local) root@ ~/app/examples/elk
$ docker-app inspect elk
myelk 0.1.0
Maintained by: Ajeet_Raina <>

ELK using Dockerapp

Setting                        Default
-------                        -------
elasticsearch.deploy.mode      replicated
elasticsearch.deploy.replicas  2
elasticsearch.image            elasticsearch:5
kibana.deploy.mode             replicated
kibana.deploy.replicas         2
kibana.image                   kibana:latest
kibana.port                    5601
logstash.deploy.mode           replicated

Verifying Stack services are up & running

[manager1] (local) root@ ~/app/examples/elk/docker101/play-with-docker/visualizer
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
uk2whax6f3jq        elk_elasticsearch   replicated          2/2                 elasticsearch:5
nm4p3yswvh5y        elk_kibana          replicated          2/2                 kibana:latest       *:5601->5601/tcp
g5ubng6rhcyp        elk_logstash        replicated          2/2                 logstash:alpine     *:10514->10514/tcp, *:10514->10514/udp, *:12201->12201/udp
[manager1] (local) root@192

Logging in to Docker Hub

[manager1] (local) root@ ~/app/examples/elk
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to to create one.
Username: ajeetraina
Password:
Login Succeeded

Pushing the App Package to Docker Hub

[manager1] (local) root@ ~/app/examples/elk
$ docker-app push --namespace ajeetraina --tag 1.0.2
The push refers to repository []
15e73d68a400: Pushed
1.0.2: digest: sha256:c5a8e3b7e2c7a5566a3e4247f8171516033e7e9791dfdb6ebe622d3830884d9b size: 524
[manager1] (local) root@ ~/app/examples/elk

Important Note: If you are using docker-app v0.5.0, you might face an issue pulling the image from Docker Hub, as it reports an “unsupported OS” error message. Here’s a link to this open issue.

Testing the Application Package

Open up a new PWD window, install docker-app as shown above, and run the command below:

docker-app deploy ajeetraina/elk.dockerapp:1.0.2

This should bring up your complete Elastic Stack Platform.


Re-Post from:

How to install Docker on a Jenkins server and configure the Jenkins user

To deploy a containerized app, Jenkins needs to be able to interact with Docker. This means that Docker needs to be installed locally on the Jenkins server, and the Jenkins user needs to be provided with the permissions necessary to use that Docker installation.

To install Docker on the Jenkins server and give the Jenkins user access to it, follow these instructions:

sudo yum -y install docker
sudo systemctl start docker
sudo systemctl enable docker
sudo groupadd docker              # may already exist; the error is harmless
sudo usermod -aG docker jenkins   # give the jenkins user access to the Docker daemon
sudo systemctl restart jenkins
sudo systemctl restart docker
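To confirm the group change took effect, you can check the jenkins user’s groups and try a Docker command as that user (this assumes the jenkins user already exists on the host):

```shell
# The docker group should appear in this list:
id -nG jenkins

# Running a Docker command as the jenkins user should now succeed:
sudo -u jenkins docker ps
```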

Jenkins Continuous Delivery (CD) questions and answers

1. Which of the following are things that could be a Jenkins Pipeline stage? (Choose all that apply)

A. Building the code. (Correct)
B. Testing the code. (Correct)
C. Executing a command
D. Deploying to Production (Correct)

2. What are the basic building blocks of a Jenkins pipeline?

A. Stages and playbooks
B. Stages and commands
C. Stages and steps (Correct)
D. Declarative and scripted

3. Which of the following are styles of syntax allowed in a Jenkinsfile? (Choose all that apply)

A. Scripted (Correct)
B. Declarative (Correct)
C. Infrastructure as Code (IaC)
D. Bash

4. What is Jenkins Pipeline?

A. A code review process
B. A set of Jenkins plugins that support CD (Correct)
C. A quick way to install a Jenkins server
D. A way to send code back and forth between developers

5. What is a Jenkinsfile?

A. The configuration file for a Jenkins freestyle project
B. The file that contains the code defining a Jenkins pipeline (Correct)
C. The binary that is used to install Jenkins
D. A file that is downloaded from the Jenkins server

6. What can you do in a Jenkins pipeline to pause execution and wait for user feedback?

A. Implement an empty stage in the pipeline.
B. Set a trigger in Jenkins to pause the deployment.
C. Use an input step to pause the pipeline until a user clicks either abort or proceed. (Correct)
D. Cancel the build in Jenkins.

7. Which of the following are things that could be a Jenkins Pipeline step? (Choose all that apply)

A. Copy files to a server. (Correct)
B. Execute a command. (Correct)
C. Wait for human input. (Correct)
D. Deploy to production.

8. What does the Publish Over SSH Jenkins plugin do?

A. It notifies other servers of a build status using ssh.
B. It downloads files from other servers.
C. It allows you to copy files to another server using ssh. (Correct)
D. It allows you to execute Jenkins builds over ssh.
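The concepts covered in the questions above (stages, steps, and the input step) fit together in a declarative Jenkinsfile like the following sketch; the stage names and the build/test/deploy scripts are illustrative assumptions, not part of any real project:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'       // a step: execute a command
            }
        }
        stage('Test') {
            steps {
                sh './run-tests.sh'   // another step inside its own stage
            }
        }
        stage('Deploy to Production') {
            steps {
                // an input step pauses the pipeline until a user
                // clicks either abort or proceed
                input 'Deploy to production?'
                sh './deploy.sh'
            }
        }
    }
}
```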
