Access Management services:

IAM

Computing services:

Lambda, API Gateway

Storage Services:

S3

Data Services:

RDS, DynamoDB, Amazon Redshift

Analytics services:

Amazon Redshift, Amazon Elasticsearch Service, Amazon EMR, Amazon Kinesis Data Analytics

IoT services:

AWS IoT Greengrass, AWS IoT Core, AWS IoT Device Defender, AWS IoT Device Management, AWS IoT Things Graph, AWS IoT Analytics, AWS IoT Events, AWS IoT SiteWise

Scenarios:

Suppose you have an application that renders images and also does some general computing. Which of the following services would best fit your need?

  1. Classic Load Balancer
  2. Application Load Balancer
  3. Both of them
  4. None of these

A startup is running a three-month pilot deployment of around 100 sensors that measure street noise and air quality in urban areas. It was noted that every month around 4 GB of sensor data is generated. The company uses a load-balanced, auto-scaled layer of EC2 instances and an RDS database with 500 GB of standard storage. The pilot was a success, and now they want to deploy at least 100K sensors, which need to be supported by the backend. You need to store the data for at least 2 years to analyze it. Which of the following setups would you prefer?

  1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
  2. Ingest data into a DynamoDB table and move old data to a Redshift cluster
  3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
  4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

Your application retrieves data from each user’s mobile device every 5 minutes and stores it in DynamoDB. Every day at a particular time, the data is extracted into S3 on a per-user basis, and your application is later used to visualize the data for the user. You are asked to optimize the architecture of the backend system to lower cost. What would you recommend?

  1. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
  2. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
  3. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
  4. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3.

How can I load my data to Amazon Redshift from different data sources like Amazon RDS, Amazon DynamoDB and Amazon EC2?

You can load the data in the following two ways:

  • The COPY command loads data in parallel directly from Amazon S3, Amazon EMR, Amazon DynamoDB, or remote hosts over an SSH connection.
  • AWS Data Pipeline provides a high-performance, reliable, fault-tolerant solution to load data from a variety of AWS data sources. You can use AWS Data Pipeline to specify the data source and desired data transformations, and then execute a pre-written import script to load your data into Amazon Redshift.
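One common way to run such a load is Amazon Redshift’s COPY command, which pulls data in parallel from Amazon S3. As a rough sketch, the statement could be submitted through the Redshift Data API (e.g. boto3’s redshift-data client); the cluster, table, bucket, and role names below are placeholders:

```python
import json

# Hypothetical identifiers -- replace with your own cluster, table, bucket, and role.
CLUSTER_ID = "my-redshift-cluster"
DATABASE = "analytics"
TABLE = "sensor_readings"
S3_PATH = "s3://my-data-bucket/exports/sensor_readings/"
IAM_ROLE = "arn:aws:iam::123456789012:role/RedshiftCopyRole"

def build_copy_statement(table: str, s3_path: str, iam_role: str) -> str:
    """Build a Redshift COPY statement that loads CSV files from S3."""
    return (
        f"COPY {table} FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV IGNOREHEADER 1;"
    )

# Parameters you would pass to the Redshift Data API's ExecuteStatement call
# to run the COPY without opening a direct JDBC connection.
request = {
    "ClusterIdentifier": CLUSTER_ID,
    "Database": DATABASE,
    "Sql": build_copy_statement(TABLE, S3_PATH, IAM_ROLE),
}

print(json.dumps(request, indent=2))
```

The same COPY syntax works for DynamoDB sources by swapping the S3 path for a DynamoDB table reference and adding a READRATIO clause.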

Which of the following use cases are suitable for Amazon DynamoDB? Choose 2 answers

  1. Managing web sessions.
  2. Storing JSON documents.
  3. Storing metadata for Amazon S3 objects.
  4. Running relational joins and complex updates.

What happens to my backups and DB Snapshots if I delete my DB Instance?

When you delete a DB instance, you have the option of creating a final DB snapshot; if you do, you can later restore your database from that snapshot. RDS retains this user-created DB snapshot, along with all other manually created DB snapshots, after the instance is deleted. Automated backups, however, are deleted, and only the manually created DB snapshots are retained.
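This behavior maps onto the RDS DeleteDBInstance API. As a minimal sketch (instance and snapshot identifiers are hypothetical), the parameters you would pass, for example via boto3’s rds client, could look like:

```python
def delete_params(instance_id: str, final_snapshot_id: str = "") -> dict:
    """Return DeleteDBInstance parameters.

    If final_snapshot_id is given, RDS creates one last manual snapshot
    before deletion; otherwise SkipFinalSnapshot must be set explicitly.
    """
    if final_snapshot_id:
        return {
            "DBInstanceIdentifier": instance_id,
            "FinalDBSnapshotIdentifier": final_snapshot_id,
            "SkipFinalSnapshot": False,
        }
    return {"DBInstanceIdentifier": instance_id, "SkipFinalSnapshot": True}

# Keep a restorable snapshot of 'prod-db' on deletion:
print(delete_params("prod-db", "prod-db-final-snapshot"))
```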

A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?

  1. MySQL installed on two Amazon EC2 instances in a single Availability Zone
  2. Amazon RDS for MySQL with Multi-AZ
  3. Amazon ElastiCache
  4. Amazon DynamoDB

Can I retrieve only a specific element of the data, if I have a nested JSON data in DynamoDB?

Yes. When using the GetItem, BatchGetItem, Query or Scan APIs, you can define a Projection Expression to determine which attributes should be retrieved from the table. Those attributes can include scalars, sets, or elements of a JSON document.
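As a sketch, the GetItem request below uses a projection expression with document-path syntax to pull one nested element out of a stored JSON document; the table, key, and attribute names are hypothetical:

```python
def get_item_params(table: str, user_id: str) -> dict:
    """Build GetItem parameters that retrieve only selected nested attributes."""
    return {
        "TableName": table,
        "Key": {"UserId": {"S": user_id}},
        # Retrieve only the city inside the nested 'address' map and the
        # first entry of the 'phones' list, instead of the whole item.
        "ProjectionExpression": "address.city, phones[0]",
    }

# These parameters would be passed to a DynamoDB client's get_item call.
print(get_item_params("Users", "u-123"))
```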

Which AWS services will you use to collect and process e-commerce data for near real-time analysis?

  1. Amazon ElastiCache
  2. Amazon DynamoDB
  3. Amazon Redshift
  4. Amazon Elastic MapReduce

Your company’s branch offices are located all over the world. They use software with a multi-regional deployment on AWS, with MySQL 5.6 for data persistence.

The task is to run an hourly batch process that reads data from every region to compute cross-regional reports, which will be distributed to all the branches. This should be done in the shortest time possible. How would you build the DB architecture to meet these requirements?

  1. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
  2. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
  3. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region

If I am running my DB instance as a Multi-AZ deployment, can I use the standby DB instance for read or write operations along with the primary DB instance?

  1. Yes
  2. Only with MySQL based RDS
  3. Only for Oracle RDS instances
  4. No

A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third-party software to only the Amazon S3 bucket named “company-backup”?

  1. A custom bucket policy limited to the Amazon S3 API in the Amazon Glacier archive “company-backup”
  2. A custom bucket policy limited to the Amazon S3 API in “company-backup”
  3. A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier archive “company-backup”.
  4. A custom IAM user policy limited to the Amazon S3 API in “company-backup”.

You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which method will ensure that all objects uploaded to the bucket are set to public read?

  1. Set permissions on the object to public read during upload.
  2. Configure the bucket policy to set all objects to public read.
  3. Use AWS Identity and Access Management roles to set the bucket to public read.
  4. Amazon S3 objects default to public read, so no action is needed.
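If the bucket-policy approach were chosen, a minimal sketch of such a policy (the bucket name is a placeholder) could be generated like this:

```python
import json

def public_read_policy(bucket: str) -> str:
    """Build a bucket policy granting anonymous read on every object in the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",              # anyone, including unauthenticated users
            "Action": "s3:GetObject",      # read objects only; no list or write access
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    return json.dumps(policy, indent=2)

print(public_read_policy("my-static-assets"))
```

A policy like this applies to all objects at once, so individual uploads do not each need their own ACL.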

How do you choose an Availability Zone?

Let’s understand this through an example: consider a company that has a user base in India as well as in the US.
