This post summarizes my preparation for the AWS SAA-C03 exam I'm taking on February 23: I took the
Ultimate AWS Certified Solutions Architect Associate SAA-C03
course and worked through the practice exams.
Since I'm already familiar with AWS's basic services and have applied them in projects, I set a tight preparation window of about 7 days. Contrary to my expectations, concepts like IAM, VPC, and Kinesis turned out to be so new and complex that passing looks like it will be a real struggle.
My bootcamp ended on February 20 and I wanted to rest a bit, but to keep myself from getting lazy I boldly registered for the exam, and now I'm studying, shedding tears of joy at the fact that I get to study all day. 😢
Below I organize the material briefly, keyword by keyword, for easy understanding.
Placement group as cluster
EC2 placement group options are cluster, spread, and partition
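As a quick refresher, here's a minimal boto3 sketch of creating a cluster placement group and launching instances into it (group name and AMI ID are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Strategy is one of "cluster" (low latency, same rack),
# "spread" (distinct hardware), or "partition" (isolated host groups)
ec2.create_placement_group(GroupName="hpc-group", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="c5.large",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-group"},  # launch into the cluster group
)
```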
Use CloudFront in front of the ALB, and use Aurora read replicas
Wrong answers
Neptune for Graph DB
AWS Neptune details
Wrong answers
AWS Config managed rule, Trigger SNS notification
Wrong answers
GuardDuty monitors activity on the data by watching streams of metadata generated by related activities
Macie for ML-supported scanning and protection of data in S3
AWS Database Migration Service (DMS) supports migrations between heterogeneous DBs
AWS Schema Conversion Tool can perform migrations with complex configurations
Wrong answers
Use permissions boundary to control the maximum permissions employees can grant to the IAM principals
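A minimal boto3 sketch of attaching a permissions boundary to a user; the user name and policy ARN are hypothetical:

```python
import boto3

iam = boto3.client("iam")

# The boundary caps the maximum permissions the user can ever have,
# regardless of what identity policies are attached later
iam.put_user_permissions_boundary(
    UserName="junior-dev",
    PermissionsBoundary="arn:aws:iam::123456789012:policy/DeveloperBoundary",
)
```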
Wrong Answers
ElastiCache for Redis/Memcached
VPN CloudHub
Wrong Answers
Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization and use AWS Compute Optimizer to look at instance type recommendations
- By default, user data runs only during the boot cycle when you first launch an instance
- By default, scripts entered as user data are executed with root user privileges
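A minimal boto3 sketch of passing a user data script at launch (AMI ID is hypothetical); boto3 base64-encodes UserData for you:

```python
import boto3

# Runs once, at first boot, with root privileges
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

boto3.client("ec2").run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```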
Wrong answers
VPC Flow Logs, DNS Logs, CloudTrail events
Create a Virtual Private Gateway on the AWS side of the VPN and a Customer Gateway on the on-premises side of the VPN
You must set up the appropriate gateway on both the on-premises and AWS sides
Encryption at rest, in flight, and for snapshots is available
Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier
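The lifecycle part of that answer might look like this in boto3 (bucket name is hypothetical):

```python
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="migrated-archive-data",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
        }]
    },
)
```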
Wrong Answers
Set up a read replica and modify the application to use the appropriate endpoint
Wrong Answers
Distribute the static content through Amazon S3
SNS, CloudWatch
You can use CloudWatch Alarms to send an email via SNS whenever any of the EC2 instances breaches an alarm threshold
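A minimal sketch of such an alarm, assuming an SNS topic that emails the team already exists (instance ID and topic ARN are hypothetical):

```python
import boto3

boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # email subscribers
)
```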
Wrong Answers
Use EC2 dedicated hosts
Wrong Answers:
Route traffic to instances using the primary private IP address specified in the primary network interface for the instance
Configure an AWS DataSync agent on the on-premises server that has access to the NFS file system. Transfer data over the Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Set up a DataSync scheduled task to send the video files to the EFS file system every 24 hours
You can get a direct connection between on-premises and EFS through a private VIF.
You can use AWS DataSync to migrate data located on-premises, at the edge, or in other clouds to Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP.
Wrong Answers
Leverage multi-AZ configuration of RDS Custom for Oracle that allows the database administrators to access and customize the database environment and the underlying operating system
You must choose RDS Custom for OS-level access to the RDS instance
- Copy data from the source bucket to the destination bucket using the aws s3 sync command
- Set up S3 batch replication to copy objects across S3 buckets in different Regions using S3 console
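`aws s3 sync` has no single boto3 equivalent; a simplified one-shot copy (not a true incremental sync) could look like this, with hypothetical bucket names:

```python
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="source-bucket"):
    for obj in page.get("Contents", []):
        # Server-side copy; objects over 5 GB need multipart copy instead
        s3.copy_object(
            Bucket="dest-bucket",
            Key=obj["Key"],
            CopySource={"Bucket": "source-bucket", "Key": obj["Key"]},
        )
```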
Wrong Answers:
Amazon API Gateway, Amazon SQS and Amazon Kinesis
API Gateway - API Gateway sets a limit on a steady-state rate and a burst of request submissions against all APIs in your account.
SQS - SQS offers buffer capabilities to smooth out temporary volume spikes without losing messages or increasing latency.
Kinesis - Kinesis is a fully managed, scalable service that can ingest, buffer, and process streaming data in real-time.
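Of the three, SQS's buffering role is the simplest to sketch: producers absorb a spike by enqueueing, and consumers drain at their own pace. The queue name is hypothetical:

```python
import boto3, json

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="ingest-buffer")["QueueUrl"]

# Producer: enqueue instead of calling the backend directly
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"order_id": 42}))

# Consumer: long-poll, then delete after successful processing
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```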
Wrong Answers:
Use a transit gateway to interconnect the VPCs
A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPC) and on-premises networks.
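A minimal boto3 sketch of creating the hub and attaching one VPC (IDs are hypothetical); each VPC needs exactly one attachment instead of N-1 peering connections:

```python
import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(Description="hub for all VPCs")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Repeat per VPC; routing between attachments goes through the hub
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",           # hypothetical VPC
    SubnetIds=["subnet-0123456789abcdef0"],  # one subnet per AZ to use
)
```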
Wrong Answers:
Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3
Easy peasy
ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests
Easy peasy 2
Use a VPC endpoint to access Amazon SQS
Wrong Answers:
- Enable versioning on the bucket
- Enable MFA delete on the bucket
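In boto3 this pair of settings is one call; note that MFA delete can only be enabled by the root user, passing the MFA device serial and a current code (all values hypothetical):

```python
import boto3

boto3.client("s3").put_bucket_versioning(
    Bucket="critical-docs",  # hypothetical bucket
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    # "<device serial> <current TOTP code>", root credentials required
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```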
- Delete the existing standard queue and recreate it as a FIFO queue
- Make sure that the name of the FIFO queue ends with the .fifo suffix
- Make sure that the throughput for the target FIFO queue does not exceed 3,000 messages per second
You can't convert an existing standard SQS queue to FIFO; you must recreate it
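Creating the replacement FIFO queue in boto3 (queue name hypothetical); FifoQueue can only be set at creation time:

```python
import boto3

boto3.client("sqs").create_queue(
    QueueName="orders.fifo",  # name must end with the .fifo suffix
    Attributes={
        "FifoQueue": "true",                  # immutable after creation
        "ContentBasedDeduplication": "true",  # dedupe by message body hash
    },
)
```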
Setup a CloudWatch alarm to monitor the health status of the instance. In case of an Instance Health Check failure, an EC2 Reboot CloudWatch Alarm Action can be used to reboot the instance
CloudWatch Alarm Action can reboot EC2 instance
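A sketch of the reboot alarm, using the documented arn:aws:automate:…:ec2:reboot alarm action (instance ID hypothetical):

```python
import boto3

boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="reboot-on-instance-check-failure",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_Instance",  # instance check, not system check
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],  # EC2 reboot action
)
```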
Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
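What assuming the role looks like from the user's side, with a hypothetical role ARN:

```python
import boto3

creds = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ProdAccess",  # hypothetical role
    RoleSessionName="dev-to-prod",
)["Credentials"]

# Temporary credentials scoped to the role's permissions
prod_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```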
Wrong Answers
Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment. This is a network-layer service with anycast IPs.
With AWS Global Accelerator, you can shift traffic gradually or all at once between the blue and green environments and vice versa without being subject to DNS caching on client devices and internet resolvers; changes to traffic dials and endpoint weights take effect within seconds.
DNS-based routing works for normal use cases, but DNS caching prevents traffic-routing changes from taking effect quickly.
- S3 Transfer Acceleration
- Multipart uploads
Wrong Answers
A process replaces an existing object and immediately tries to read it. Amazon S3 always returns the latest version of the object
Amazon S3 delivers strong read-after-write consistency automatically
- DynamoDB
- S3
There are two types of VPC endpoints: Interface Endpoints and Gateway Endpoints. An Interface Endpoint is an Elastic Network Interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service.
A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported: Amazon S3 and DynamoDB.
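The two endpoint types side by side in boto3 (all IDs hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint: adds a route-table entry (S3 and DynamoDB only)
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Interface endpoint: an ENI with a private IP in your subnet
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    VpcEndpointType="Interface",
    SubnetIds=["subnet-0123456789abcdef0"],
)
```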
Amazon Cognito User Pools
Cognito User Pools provide built-in user management or integrate with external identity providers, such as Facebook, Twitter, Google
Wrong Answer
- DynamoDB : key-value, managed
- Lambda : serverless, managed, auto scalable
Wrong Answer
Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations
VPC sharing targets must be subnets
Wrong Answer:
Create an IAM role for the Lambda function that grants access to the S3 bucket. Set the IAM role as the Lambda function's execution role. Make sure that the bucket policy also grants access to the Lambda function's execution role
Make sure permissions are set up on both ends (the execution role and the bucket policy)
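The bucket-policy half of that setup might look like this, with a hypothetical role ARN and bucket name:

```python
import boto3, json

role_arn = "arn:aws:iam::123456789012:role/my-lambda-exec-role"  # hypothetical

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": role_arn},  # grant to the execution role
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-data-bucket/*",
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="my-data-bucket", Policy=json.dumps(policy))
```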
FSx for Lustre
Wrong Answers:
Use the Enhanced Fan-Out feature of Kinesis Data Streams
You can use the enhanced fan-out feature to process data in parallel; each consumer gets dedicated throughput per shard
A Kinesis data stream retains data for 24 hours by default, configurable up to 365 days
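Registering an enhanced fan-out consumer in boto3 (stream ARN and consumer name hypothetical); each registered consumer gets a dedicated 2 MB/s per shard:

```python
import boto3

resp = boto3.client("kinesis").register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/clicks",  # hypothetical
    ConsumerName="analytics-app",
)
# The consumer ARN is then used with SubscribeToShard for push delivery
print(resp["Consumer"]["ConsumerARN"])
```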
Wrong Answers:
Use Redis AUTH for password-based authentication
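A sketch of enabling AUTH when creating a Redis replication group (ID and token hypothetical); AUTH requires in-transit encryption:

```python
import boto3

boto3.client("elasticache").create_replication_group(
    ReplicationGroupId="prod-cache",
    ReplicationGroupDescription="Redis with AUTH enabled",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    TransitEncryptionEnabled=True,        # required when AuthToken is set
    AuthToken="example-auth-token-1234",  # hypothetical; 16-128 printable chars
)
```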
S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed
For re-creatable data, One Zone-IA is a cost-efficient choice
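Choosing the storage class per object at upload time (bucket and key hypothetical):

```python
import boto3

boto3.client("s3").put_object(
    Bucket="thumbnail-cache",   # hypothetical: holds re-creatable derived data
    Key="thumbs/cat.jpg",
    Body=b"...image bytes...",
    StorageClass="ONEZONE_IA",  # single AZ, cheaper than Standard-IA
)
```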
Attach the appropriate IAM role to the EC2 instance profile so that the instance can access S3 and DynamoDB
Storing keys on the EC2 instance itself is not recommended, even if they are encrypted
Use Amazon Aurora Global Database to enable fast local reads with low latency in each region
For DynamoDB, Global Tables do the same job.
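Adding a replica Region to an existing DynamoDB table (table name hypothetical); this assumes the current global tables version, where replicas are managed through UpdateTable:

```python
import boto3

boto3.client("dynamodb", region_name="us-east-1").update_table(
    TableName="sessions",  # hypothetical table; streams must be enabled
    ReplicaUpdates=[{"Create": {"RegionName": "ap-northeast-2"}}],
)
```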
Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once
Wrong Answer:
Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift
Wrong Answer