Week 1 Assessment
1. A solutions architect must design a solution to help manage their
customer’s containerized applications. Currently, the customer workload runs
in Docker containers on top of Amazon Elastic Compute Cloud (Amazon EC2)
instances and on-premises servers that run a hybrid Kubernetes cluster. The
customer wants to migrate part of their hybrid Kubernetes deployment to the
cloud with a minimum amount of effort, and they want to keep all the native
features of Kubernetes. The customer also wants to reduce their operational
overhead for managing their Kubernetes cluster. Which managed AWS service
should the solutions architect suggest to best satisfy these
requirements?
- AWS Fargate with Amazon Elastic Container Service (Amazon ECS)
- AWS Fargate with Amazon Elastic Kubernetes Service (Amazon EKS)
- Amazon Elastic Container Service (Amazon ECS)
- Amazon Elastic Kubernetes Service (Amazon EKS)
- Amazon Simple Notification Service (Amazon SNS) with a fan-out strategy
- Amazon Simple Queue Service (Amazon SQS) with FIFO queues
- Amazon EventBridge with rules
- Amazon Elastic Compute Cloud (Amazon EC2) with Spot Instances
- True
- False
- DAX reduces operational and application complexity by providing a managed service that is compatible with the DynamoDB API.
- Although using DAX has a cost, it can reduce the consumption of DynamoDB table capacity. If the data is read intensive (that is, millions of requests per second), DAX can result in cost savings by caching the data while also providing better read latency, which is beneficial for scenarios that require repeated reads of individual keys.
- DAX does not support server-side encryption (SSE).
- DAX is not designed for applications that are write-intensive. It can also add cost to applications that do not perform much read activity.
- DAX does not support encrypting data in transit, which means that communication between an application and DAX cannot be encrypted.
- True
- False
Week 2 Assessment
1. A solutions architect is designing an architecture that can provide HTML
pages to customers. They want a serverless solution that can host content over
the internet and serve a static website with minimal effort. Which AWS service
should the solutions architect choose to meet these requirements?
2. A solutions architect is designing a solution that needs real-time data
ingestion. They are considering either Amazon Kinesis Data Firehose or Amazon
Kinesis Data Streams for this solution. Which service should the solutions
architect choose to meet the requirement for real-time data ingestion, and
why? (Remember that lower data latency means a shorter round trip from when
data is ingested to when it is available.)
3. True or False: When creating data lakes for analytics on AWS, Amazon Simple
Storage Service (Amazon S3) would be a preferred service. Users can use data
in an S3 bucket with an independent data-processing or visualization layer,
such as Amazon QuickSight, Amazon Athena, or Amazon EMR.
4. A solutions architect is designing a serverless solution that can do
Structured Query Language (SQL) queries over multiple objects that are stored
in Amazon Simple Storage Service (Amazon S3). All the objects share the same
data structure (schema) and are in JSON. Which service would make it easier to
query the data, in addition to providing serverless capabilities?
5. True or False: When architecting a solution that can handle high demand and
usage spikes, Amazon CloudFront should be used in front of an Amazon Simple
Storage Service (Amazon S3) bucket. CloudFront can cache data that gets
delivered to customers, and it lets customers use custom domain names. In
addition, CloudFront can serve custom SSL certificates that are issued by
AWS Certificate Manager (at no additional cost), and it can provide
distributed denial of service (DDoS) protection that is powered by AWS WAF and
AWS Shield.
- Amazon Simple Storage Service (Amazon S3)
- Amazon Elastic Compute Cloud (Amazon EC2)
- Amazon DynamoDB
- Amazon Kinesis
You can use Amazon S3 to host a static website. On a static website,
individual webpages include static content. They might also contain
client-side scripts. By contrast, a dynamic website relies on
server-side processing, which can include server-side scripts that are
written in PHP, JSP, or ASP.NET. Amazon S3 does not support server-side
scripting, but AWS has other resources for hosting dynamic websites. To
learn more about website hosting on AWS, see Web Hosting (https://aws.amazon.com/websites/).
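As a minimal sketch of the hosting setup described above, the dict below is the shape of the website configuration that Amazon S3 expects when static website hosting is enabled; the bucket name and document keys are hypothetical, and with boto3 (and valid credentials) the dict would be passed to `put_bucket_website`.

```python
# Sketch of the configuration that Amazon S3 expects for static website
# hosting. The bucket name and document keys are hypothetical examples.

BUCKET = "example-static-site-bucket"  # hypothetical bucket name

website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},  # served for directory-style requests
    "ErrorDocument": {"Key": "error.html"},     # served for 4xx errors
}

# With boto3 (requires AWS credentials), the call would look like:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_website(Bucket=BUCKET, WebsiteConfiguration=website_configuration)
```

Because the pages are static, no server-side processing is involved; S3 simply returns the configured objects over HTTP.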
- Amazon Kinesis Data Firehose, because it has lower latency when compared to Amazon Kinesis Data Streams
- Amazon Kinesis Data Firehose, because it has higher latency when compared to Amazon Kinesis Data Streams
- Amazon Kinesis Data Streams, because it has lower latency when compared to Amazon Kinesis Data Firehose
- Amazon Kinesis Data Streams, because it has higher latency when compared to Amazon Kinesis Data Firehose
- True
- False
Amazon S3 provides storage capabilities without charging for data
processing. You can use additional services to get data from Amazon S3
and process it. For example, you could use the services that were
mentioned in the question, or you could use any other AWS service that
can GET or PUT data in Amazon S3 through the S3 API operations.
- Amazon Athena
- AWS Database Migration Service (AWS DMS)
- Amazon S3 Select
- AWS Data Exchange
Amazon Athena is an interactive query service that makes it easier to
analyze data in Amazon S3 by using standard SQL. Athena is serverless.
There is no infrastructure to manage, and you pay only for the queries
that you run. Athena is straightforward to use: point to the data in
Amazon S3, define the schema, and query with standard SQL.
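The point-to-data, define-schema, query-with-SQL flow can be sketched locally. The example below is only a stand-in using Python's built-in sqlite3, with hypothetical field names; in Athena itself you would instead create an external table over the S3 prefix (the `CREATE EXTERNAL TABLE` statement in the comment is illustrative, not taken from the course) and query it with no database to manage.

```python
import json
import sqlite3

# Local stand-in for the Athena flow: JSON records that share a schema,
# loaded into a table, then queried with standard SQL. Field names and
# values are hypothetical. In Athena itself you would instead run
# something like:
#   CREATE EXTERNAL TABLE orders (order_id int, amount int)
#   LOCATION 's3://example-bucket/orders/'
# and query it directly, serverlessly.

records = [
    json.dumps({"order_id": 1, "amount": 40}),
    json.dumps({"order_id": 2, "amount": 75}),
    json.dumps({"order_id": 3, "amount": 120}),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount INTEGER)")
for line in records:
    row = json.loads(line)  # every object shares the same schema
    conn.execute("INSERT INTO orders VALUES (?, ?)", (row["order_id"], row["amount"]))

# Standard SQL over the shared-schema JSON data
total = conn.execute("SELECT SUM(amount) FROM orders WHERE amount > 50").fetchone()[0]
print(total)  # 195
```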
- True
- False
CloudFront is a web service that speeds up the distribution of static
and dynamic web content (such as .html, .css, .js, and image files) to
users. CloudFront delivers content through a worldwide network of data
centers that are called edge locations. CloudFront is designed to use
edge locations to deliver content with the best possible performance.
When a user requests content that is served through CloudFront, the
request is routed to the edge location that provides the lowest latency
(time delay).
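The benefit of serving from edge locations can be illustrated with a toy cache model (this is not CloudFront's actual implementation, just the general idea): repeated requests for the same object are answered from the cache, so only the first request reaches the origin.

```python
# Toy model of edge caching (not CloudFront's real implementation):
# repeated viewer requests for the same object are served from the cache
# instead of going back to the origin.

origin_fetches = 0

def origin_get(key):
    # Stand-in for a request to the origin (for example, an S3 bucket)
    global origin_fetches
    origin_fetches += 1
    return f"content-of-{key}"

edge_cache = {}

def edge_get(key):
    # Serve from the edge cache when possible; fetch from origin on a miss
    if key not in edge_cache:
        edge_cache[key] = origin_get(key)
    return edge_cache[key]

for _ in range(1000):
    edge_get("/index.html")  # 1,000 viewer requests for the same object

print(origin_fetches)  # 1 -- only the first request reached the origin
```

This is why CloudFront in front of an S3 bucket absorbs usage spikes: the origin sees a small fraction of the viewer traffic.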
Week 3 Assessment
1. Which of the following options includes true statements for both Amazon
Simple Storage Service (Amazon S3) cross-Region replication and AWS Key
Management Service (AWS KMS)?
2. A solutions architect is designing a hybrid solution. The solution uses
Amazon Virtual Private Cloud (Amazon VPC) resources, such as Amazon Relational
Database Service (Amazon RDS) and Amazon Elastic Compute Cloud (Amazon EC2).
It also uses services that are not in a VPC, such as Amazon Simple Storage
Service (Amazon S3) and AWS Systems Manager. Which statements about Amazon VPC
and the scope of AWS services are correct? (Choose THREE.)
3. Which statements about AWS Storage Gateway are correct? (Choose THREE.)
4. Which set of AWS services is the BEST fit for the “Object, file, and block
storage” category (which means that the services are dedicated to storing data
in a durable way)?
5. True or False: Amazon Simple Storage Service (Amazon S3) is better than
Amazon Elastic Block Store (Amazon EBS) because it is designed to provide a
higher level of data durability.
- To configure Amazon S3 cross-Region replication, both the source and destination buckets must belong to the same AWS account. Server-side encryption (SSE) is possible for replicated objects.
- To configure Amazon S3 cross-Region replication, both the source and destination buckets must belong to the same AWS account. Server-side encryption (SSE) is not possible for replicated objects.
- To configure Amazon S3 cross-Region replication, the source and destination buckets can belong to different AWS accounts. Server-side encryption (SSE) is possible for replicated objects.
- To configure Amazon S3 cross-Region replication, the source and destination buckets can belong to different AWS accounts. Server-side encryption is not possible for replicated objects.
Both statements are true. The source and destination buckets can belong
to different AWS accounts, and SSE (powered by AWS KMS) can be enabled
for the replicated objects. For more information, see Replicating
objects.
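Both points can be seen in the shape of an S3 replication configuration. The sketch below builds the dict that boto3's `put_bucket_replication` accepts; all ARNs, account IDs, and names are hypothetical.

```python
# Sketch of an S3 replication configuration that replicates SSE-KMS
# encrypted objects to a bucket owned by a different account. All ARNs,
# account IDs, and names are hypothetical; with boto3 this dict would be
# passed to s3.put_bucket_replication(Bucket=..., ReplicationConfiguration=...).

replication_configuration = {
    "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-encrypted-objects",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            # Replicate objects that are encrypted with SSE-KMS
            "SourceSelectionCriteria": {
                "SseKmsEncryptedObjects": {"Status": "Enabled"}
            },
            "Destination": {
                # The destination bucket can belong to a different account
                "Bucket": "arn:aws:s3:::example-destination-bucket",
                "Account": "222222222222",
                # Re-encrypt replicas with a KMS key in the destination Region
                "EncryptionConfiguration": {
                    "ReplicaKmsKeyID": "arn:aws:kms:us-west-2:222222222222:key/example-key-id"
                },
            },
            "DeleteMarkerReplication": {"Status": "Disabled"},
        }
    ],
}
```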
- Amazon VPC gives the user full control over their virtual networking environment. Therefore, the solutions architect can define firewall rules on the networking level for VPC-based resources.
- Because S3 buckets do not reside inside a VPC, the customer can rely on AWS to configure security mechanisms, such as permissions and bucket policies. Thus, security is automatically applied on the data level because this level of security is the responsibility of AWS.
- VPC-based services that reside in a private subnet require specific configurations to enable internet access, such as a NAT gateway and route tables.
- When possible, customers should avoid having services reside in VPCs because a networking misconfiguration can accidentally leave the infrastructure in an unsafe state.
- Using AWS resources like Amazon S3 is less secure because they are public resources by default.
- AWS VPN solutions can be configured to establish secure connections between on-premises networks, remote offices, client devices, and the AWS global network.
You can use Amazon VPC to launch AWS resources in a virtual network that
you have defined. This virtual network closely resembles a traditional
network that you'd operate in your own data center, with the benefits of
using the scalable infrastructure of AWS.
Route tables are configured to route packets to specific destinations,
and you can configure them in a VPC. If you want to allow internet
access from private subnets, you can create a NAT gateway (or a NAT
instance that is configured to forward packets) and change private route
tables to point to that resource for internet destinations. This
configuration would allow the private subnet to have outgoing access to
the internet, without exposing it to incoming requests from the
internet.
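The private-subnet routing described above comes down to one route. The sketch below shows the parameters that boto3's `ec2.create_route` would take; the resource IDs are hypothetical.

```python
# Sketch of the route that sends internet-bound traffic from a private
# subnet through a NAT gateway. IDs are hypothetical; with boto3 these
# parameters would be passed to ec2.create_route(...).

private_subnet_route = {
    "RouteTableId": "rtb-0example",       # route table of the private subnet
    "DestinationCidrBlock": "0.0.0.0/0",  # everything not local, i.e. the internet
    "NatGatewayId": "nat-0example",       # NAT gateway living in a public subnet
}

# Return traffic for established connections flows back in, but the
# internet cannot initiate connections to the private subnet, because the
# NAT gateway only performs outbound address translation.
```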
You can use AWS VPN to administratively access VPC resources from
on-premises networks. AWS VPN comprises two services, AWS Site-to-Site
VPN and AWS Client VPN, which means that a single computer can connect
with the help of a client, or two entire network scopes can be
connected. With the correct routing, you can even access VPC resources
that sit in private subnets.
- AWS Storage Gateway is a set of hybrid cloud storage services that provide on-premises access to virtually unlimited cloud storage.
- AWS Storage Gateway offers virtually unlimited cloud storage to users and applications, at the cost of new storage hardware.
- AWS Storage Gateway delivers data access to on-premises applications while taking advantage of the agility, economics, and security capabilities of the AWS Cloud.
- AWS Storage Gateway is limited to only on-premises applications, which means that it cannot be used from cloud to cloud.
- AWS Storage Gateway helps support compliance requirements through integration with AWS Backup to manage the backup and recovery of Volume Gateway volumes, which simplifies backup management.
- AWS Storage Gateway can only work as an Amazon S3 File Gateway.
You can use Storage Gateway in on-premises environments so that
on-premises networks can access cloud storage resources. Storage Gateway
works well in hybrid scenarios, such as the one that Morgan designed for
this week’s architecture.
You can use Storage Gateway during only the times that you need it,
which means that you can take advantage of the economics and security
capabilities of the cloud.
Storage Gateway also supports integration with AWS Backup.
- AWS DataSync, AWS Snow Family
- Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), Amazon Elastic Block Store (Amazon EBS), Amazon FSx
- AWS Storage Gateway, AWS Snow Family
- AWS Elastic Disaster Recovery, AWS Backup
According to the Cloud Storage on AWS page, Amazon S3, Amazon EFS,
Amazon EBS, and Amazon FSx belong to the “Object, file, and block
storage” category. Amazon S3 is designed to store virtually any amount
of data from virtually anywhere. Amazon EFS automatically grows and
shrinks as you add and remove files, and it reduces the need for
management or provisioning. Amazon EBS is an easy-to-use, scalable,
high-performance block-storage service that is designed for Amazon
Elastic Compute Cloud (Amazon EC2). Amazon FSx makes it easier to
provide broadly accessible and highly performant file storage for a wide
variety of use cases. For more information, see Cloud Storage on AWS.
- True
- False
This question statement is not true. Throughout this course, Raf and
Morgan have been reinforcing the message that there is no single better
service, only the most appropriate one for the customer's needs. If a
customer needs access to block-level storage, Amazon EBS is better
suited for the job. If the customer needs a place to store static
assets, Amazon S3 could be better. There is no way to affirm that one
service is better than another without looking at the requirements.
Week 4 Final Assessment
1. A solutions architect must design a solution to help manage their customer’s containerized applications. Currently, the customer workload runs in Docker containers on top of Amazon Elastic Compute Cloud (Amazon EC2) instances and on-premises servers that run a hybrid Kubernetes cluster. The customer wants to migrate part of their hybrid Kubernetes deployment to the cloud with a minimum amount of effort, and they want to keep all the native features of Kubernetes. The customer also wants to reduce their operational overhead for managing their Kubernetes cluster. Which managed AWS service should the solutions architect suggest to best satisfy these requirements?
- AWS Fargate with Amazon Elastic Container Service (Amazon ECS)
- AWS Fargate with Amazon Elastic Kubernetes Service (Amazon EKS)
- Amazon Elastic Container Service (Amazon ECS)
- Amazon Elastic Kubernetes Service (Amazon EKS)
- Amazon Simple Notification Service (Amazon SNS) with a fan-out strategy
- Amazon Simple Queue Service (Amazon SQS) with FIFO queues
- Amazon EventBridge with rules
- Amazon Elastic Compute Cloud (Amazon EC2) with Spot Instances
- True
- False
- DAX reduces operational and application complexity by providing a managed service that is compatible with the DynamoDB API.
- Although using DAX has a cost, it can reduce the consumption of DynamoDB table capacity. If the data is read intensive (that is, millions of requests per second), DAX can result in cost savings by caching the data while also providing better read latency, which is beneficial for scenarios that require repeated reads of individual keys.
- DAX does not support server-side encryption (SSE).
- DAX is not designed for applications that are write-intensive. It can also add cost to applications that do not perform much read activity.
- DAX does not support encrypting data in transit, which means that communication between an application and DAX cannot be encrypted.
- True
- False
- True
- False
- Amazon Athena
- AWS Database Migration Service (AWS DMS)
- Amazon S3 Select
- AWS Data Exchange
- True
- False
- AWS Storage Gateway is a set of hybrid cloud storage services that provide on-premises access to virtually unlimited cloud storage.
- AWS Storage Gateway offers virtually unlimited cloud storage to users and applications, at the cost of new storage hardware.
- AWS Storage Gateway delivers data access to on-premises applications while taking advantage of the agility, economics, and security capabilities of the AWS Cloud.
- AWS Storage Gateway is limited to only on-premises applications, which means that it cannot be used from cloud to cloud.
- AWS Storage Gateway helps support compliance requirements through integration with AWS Backup to manage the backup and recovery of Volume Gateway volumes, which simplifies backup management.
- AWS Storage Gateway can only work as an Amazon S3 File Gateway.
- Grouping workloads based on business purpose and ownership
- Using different payment methods per account
- Limiting the scope of impact from adverse events
- Distributing AWS service quotas and API request rate limits
- Having multiple account root users with unrestricted access on each account
- True
- False
- AWS Identity and Access Management (IAM) users
- Amazon CloudWatch
- AWS IAM Identity Center (successor to AWS Single Sign-On)
- AWS CloudTrail
- Enable Amazon CloudWatch billing alarms per account and configure tagging policies in AWS Organizations.
- Give AdministratorAccess policies to developers in their development AWS accounts.
- Prevent CloudTrail configuration from being disabled in the shared services account.
- Use multi-factor authentication (MFA) for users in centralized credentialing, such as using AWS IAM Identity Center (successor to AWS Single Sign-On).
- Reuse passwords for simplicity and ease of access.
- Provide powerful users and broad roles for Cloud Center of Excellence (CCoE) members, such as granting Administrator Access permissions to them.
- Enable AWS CloudTrail for all accounts in AWS Organizations. Use Organizations to centralize all logs into one Amazon Simple Storage Service (Amazon S3) bucket. As the circuit breaker, use service control policies (SCPs) that have an explicit deny for Amazon EC2 API activity. These SCPs can then be applied to the root organizational unit (OU) as needed.
- Enable AWS CloudTrail for all accounts in AWS Organizations. Use Organizations to centralize all logs into one Amazon Simple Storage Service (Amazon S3) bucket. Use multi-factor authentication (MFA) devices for every user in AWS IAM Identity Center (successor to AWS Single Sign-On).
- Enable AWS CloudTrail for only the production accounts in AWS Organizations. Use Organizations to centralize logs into one Amazon Simple Storage Service (Amazon S3) bucket. For single sign-on, use AWS IAM Identity Center (successor to AWS Single Sign-On).
- Enable AWS CloudTrail for all accounts in AWS Organizations. Use Organizations to centralize logs in one Amazon Simple Storage Service (Amazon S3) bucket. As the circuit breaker, use AWS Identity and Access Management (IAM) policies on each account that have an explicit deny for Amazon EC2 API activity. The IAM policies can then be applied to the root organizational unit (OU) as needed.
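The "circuit breaker" SCP that the options describe can be sketched as a standard IAM-style policy document with an explicit deny for all EC2 API actions; the `Sid` below is a hypothetical name, and in practice the policy would be attached to the root OU through AWS Organizations only when needed.

```python
import json

# Sketch of a "circuit breaker" service control policy (SCP): an explicit
# deny for all Amazon EC2 API activity. The Sid is a hypothetical name.
# Attached to the root OU, an explicit deny overrides any allow in the
# member accounts below it.

circuit_breaker_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllEc2ApiActivity",
            "Effect": "Deny",    # explicit deny always wins
            "Action": "ec2:*",   # every EC2 API action
            "Resource": "*",
        }
    ],
}

print(json.dumps(circuit_breaker_scp, indent=2))
```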