A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization. The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software. Which solution meets these requirements?
Answer(s): C
C) Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos. This is the correct solution because it reduces operational overhead by relying on AWS managed services. Hosting the static web content in Amazon S3 removes the need for EC2 instances for the web application, and storing uploaded videos in S3 simplifies the storage solution. S3 event notifications trigger processing through the SQS queue, and AWS Lambda coupled with Amazon Rekognition provides a scalable, managed solution for video analysis, replacing the custom recognition software and removing the third-party dependency.
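As a sketch of the processing side of this architecture, the Lambda consumer below parses the S3 event notifications that arrive through the SQS queue and starts an asynchronous Rekognition label-detection job for each uploaded video. The event shapes follow the standard S3-to-SQS notification format; the handler itself is an illustrative assumption, not the company's actual code.

```python
import json

def extract_s3_objects(sqs_event):
    """Pull (bucket, key) pairs out of S3 event notifications delivered via SQS.

    Each SQS record body is the JSON S3 notification, which itself contains
    a "Records" list describing the uploaded objects.
    """
    objects = []
    for sqs_record in sqs_event.get("Records", []):
        s3_notification = json.loads(sqs_record["body"])
        for s3_record in s3_notification.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            objects.append((bucket, key))
    return objects

def lambda_handler(event, context):
    """Categorize each uploaded video with Amazon Rekognition.

    boto3 is imported lazily so the parsing logic above stays testable
    outside AWS.
    """
    import boto3
    rekognition = boto3.client("rekognition")
    for bucket, key in extract_s3_objects(event):
        # start_label_detection is Rekognition's asynchronous video API;
        # results are fetched later with get_label_detection.
        rekognition.start_label_detection(
            Video={"S3Object": {"Bucket": bucket, "Name": key}}
        )
```

Label detection is one plausible categorization call; a real implementation would pick whichever Rekognition Video API matches the company's categories.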
A company has a serverless application composed of Amazon CloudFront, Amazon API Gateway, and AWS Lambda functions. The current deployment process for the application code is to create a new version number of the Lambda function and run an AWS CLI script to update it. If the new function version has errors, another CLI script reverts by deploying the previous working version of the function. The company would like to decrease the time to deploy new versions of the application logic provided by the Lambda functions, and also reduce the time to detect and revert when errors are identified. How can this be accomplished?
Answer(s): B
B) Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify code. Roll back if Amazon CloudWatch alarms are triggered. This is the correct solution because AWS SAM (Serverless Application Model) integrates seamlessly with AWS CodeDeploy to allow safe, automated deployments of Lambda functions. The gradual traffic-shifting feature minimizes the risk of deployment failures by slowly routing traffic to the new version while monitoring for errors. If an issue arises, an automatic rollback can occur based on CloudWatch alarms, reducing deployment time and ensuring minimal disruption. This approach also provides better control over traffic shifting and error detection.
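In SAM, the gradual shift, test hooks, and alarm-based rollback are all declared in the function's `DeploymentPreference` section. The sketch below shows that section as a Python dict for readability (in a real template it is YAML under an `AWS::Serverless::Function` resource); the hook function and alarm names are hypothetical.

```python
# Deployment preference types CodeDeploy supports for Lambda traffic shifting.
DEPLOYMENT_TYPES = {
    "Canary10Percent5Minutes",
    "Canary10Percent10Minutes",
    "Canary10Percent15Minutes",
    "Canary10Percent30Minutes",
    "Linear10PercentEvery1Minute",
    "Linear10PercentEvery2Minutes",
    "Linear10PercentEvery3Minutes",
    "Linear10PercentEvery10Minutes",
    "AllAtOnce",
}

# Sketch of a SAM DeploymentPreference block; the hook and alarm names
# ("PreTrafficCheckFunction", etc.) are illustrative assumptions.
deployment_preference = {
    "Type": "Canary10Percent5Minutes",   # shift 10% of traffic, wait 5 minutes, then shift the rest
    "Alarms": ["LambdaErrorRateAlarm"],  # CloudWatch alarm that triggers an automatic rollback
    "Hooks": {
        "PreTraffic": "PreTrafficCheckFunction",    # test function run before any traffic shifts
        "PostTraffic": "PostTrafficCheckFunction",  # test function run after the shift completes
    },
}
```

A canary preference gives the fastest error detection for this scenario; a linear preference trades speed for a smoother ramp.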
A company is planning to store a large number of archived documents and make the documents available to employees through the corporate intranet. Employees will access the system by connecting through a client VPN service that is attached to a VPC. The data must not be accessible to the public. The documents that the company is storing are copies of data that is held on physical media elsewhere. The number of requests will be low. Availability and speed of retrieval are not concerns of the company. Which solution will meet these requirements at the LOWEST cost?
Answer(s): A
A) Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class as the default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint. This is the correct answer because it offers the lowest-cost option for infrequently accessed data while ensuring security and accessibility within the corporate network. Using S3 One Zone-IA significantly reduces cost by storing data in a single Availability Zone, which is acceptable because availability and redundancy are not concerns and the documents are copies of data held elsewhere. An S3 interface endpoint allows the data to be accessed securely from within the VPC without exposing it to the public internet, satisfying the security requirement. Website hosting on S3 is not strictly necessary for this use case, but it provides a simple interface for document access, adding convenience for employees. This approach balances cost efficiency with security, making it the best choice given the company's requirements.
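To enforce the endpoint-only requirement, the bucket policy can deny any request that does not arrive through the VPC endpoint. The sketch below builds such a policy document; the bucket name and endpoint ID are hypothetical placeholders.

```python
import json

# Hypothetical identifiers; substitute the real bucket name and endpoint ID.
BUCKET = "archived-documents-example"
VPC_ENDPOINT_ID = "vpce-0123456789abcdef0"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccessOnlyThroughVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # Deny every request whose source is not the company's endpoint,
            # which keeps the bucket unreachable from the public internet.
            "Condition": {"StringNotEquals": {"aws:SourceVpce": VPC_ENDPOINT_ID}},
        }
    ],
}

policy_document = json.dumps(bucket_policy)
```

The deny-unless pattern shown here is the usual way to express "only through this endpoint" in an S3 bucket policy, since it also blocks callers who would otherwise be allowed by their IAM permissions.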
A company is using an on-premises Active Directory service for user authentication. The company wants to use the same authentication service to sign in to the company’s AWS accounts, which are using AWS Organizations. AWS Site-to-Site VPN connectivity already exists between the on-premises environment and all the company’s AWS accounts. The company’s security policy requires conditional access to the accounts based on user groups and roles. User identities must be managed in a single location. Which solution will meet these requirements?
Answer(s): A
A) Configure AWS IAM Identity Center (AWS Single Sign-On) to connect to Active Directory by using SAML 2.0. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using attribute-based access control (ABAC). This is the correct solution because it allows seamless integration with the on-premises Active Directory for user authentication. AWS IAM Identity Center (AWS SSO) can connect to Active Directory through SAML 2.0, which provides the single sign-on functionality. SCIM v2.0 ensures automatic provisioning of users and groups, and ABAC allows for conditional access based on user attributes, aligning with the company’s security policy requirements while keeping identities managed in a single location.
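The conditional-access piece can be expressed as an ABAC permission policy: access is allowed only when a tag on the user's session (populated from the SCIM-provisioned attributes) matches a tag on the resource. The sketch below is a generic example; the `department` tag key and the S3 actions are illustrative assumptions.

```python
# ABAC-style policy sketch: the ${aws:PrincipalTag/department} variable is
# resolved at request time from the signed-in user's session tags, so one
# policy covers every department without per-group policy copies.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    # Grant access only when the resource tag matches the
                    # user's own attribute value.
                    "aws:ResourceTag/department": "${aws:PrincipalTag/department}"
                }
            },
        }
    ],
}
```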
A software company has deployed an application that consumes a REST API by using Amazon API Gateway, AWS Lambda functions, and an Amazon DynamoDB table. The application is showing an increase in the number of errors during PUT requests. Most of the PUT calls come from a small number of clients that are authenticated with specific API keys. A solutions architect has identified that a large number of the PUT requests originate from one client. The API is noncritical, and clients can tolerate retries of unsuccessful calls. However, the errors are displayed to customers and are causing damage to the API’s reputation. What should the solutions architect recommend to improve the customer experience?
Answer(s): B
B) Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without error. This is the correct answer. Throttling at the API Gateway level limits the number of PUT requests a client can make, which reduces the load on the system and minimizes errors. By ensuring the client handles HTTP 429 (Too Many Requests) responses properly, retries can be performed when needed, preventing errors from being displayed to customers and improving the overall experience. This solution manages clients that make excessive requests while allowing retries to maintain the application's integrity.
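On the client side, handling 429 replies usually means retrying with exponential backoff. A minimal sketch, assuming the client surfaces throttled replies as an exception:

```python
import random
import time

class ThrottledError(Exception):
    """Raised by the client when the API returns HTTP 429."""

def call_with_backoff(request_fn, max_attempts=5, base_delay=0.5):
    """Retry a throttled call with exponential backoff and jitter.

    request_fn is any callable that raises ThrottledError on a 429 reply.
    The delays are illustrative; tune them to the usage plan's rate limits.
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff (0.5s, 1s, 2s, ...) plus jitter so many
            # throttled clients do not retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

With this in place, a throttled PUT is retried transparently instead of surfacing an error to the customer.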
A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region. A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed data for the duration of the 72-hour run. Which solution will provide the LARGEST overall cost reduction while meeting these requirements?
Answer(s): A
A) Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete. This is the correct solution because it uses Amazon S3 with S3 Intelligent-Tiering for cost-effective long-term storage. Amazon FSx for Lustre provides high-performance access to the data during the 72-hour job, as Lustre is optimized for high-performance computing. Lazy loading ensures that only the needed subset of files is loaded as it is accessed, further reducing costs. Deleting the file system after the job completes eliminates file system charges during the inactive periods.
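The monthly setup step could be scripted around the FSx `create_file_system` API. The sketch below only builds the request parameters (no AWS call is made); the bucket, subnet, and 1.2 TiB capacity are illustrative assumptions, and the company's 200 TB dataset would need a correspondingly larger scratch file system.

```python
def lustre_create_request(bucket, subnet_id, capacity_gib=1200):
    """Build kwargs for boto3's fsx.create_file_system (sketch only).

    ImportPath links the file system to the S3 bucket so that file metadata
    is imported up front and file contents are lazily loaded on first read.
    """
    return {
        "FileSystemType": "LUSTRE",
        # Scratch capacity starts at 1200 GiB and grows in large increments;
        # size it to the subset of files the job actually reads.
        "StorageCapacity": capacity_gib,
        "SubnetIds": [subnet_id],
        "LustreConfiguration": {
            "DeploymentType": "SCRATCH_2",  # scratch storage: lowest cost, suits a 72-hour run
            "ImportPath": f"s3://{bucket}",
        },
    }
```

A companion script would delete the file system after the report is generated, so FSx charges accrue only during the 72-hour window.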
A company is developing a new service that will be accessed using TCP on a static port. A solutions architect must ensure that the service is highly available, has redundancy across Availability Zones, and is accessible using the DNS name my.service.com, which is publicly accessible. The service must use fixed address assignments so other companies can add the addresses to their allow lists. Assuming that resources are deployed in multiple Availability Zones in a single Region, which solution will meet these requirements?
Answer(s): C
C) Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named my.service.com, and assign the NLB DNS name to the record set. This is the correct solution because it ensures high availability and redundancy across Availability Zones with a Network Load Balancer, which supports static TCP ports. Using an Elastic IP in each Availability Zone gives the service fixed IP addresses that other companies can add to their allow lists. Associating the NLB DNS name with the A (alias) record satisfies the requirement for the publicly accessible DNS name my.service.com.
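The fixed-address requirement is met at NLB creation time through subnet mappings: each Availability Zone's subnet is paired with a pre-allocated Elastic IP allocation ID. A sketch that builds the request (the subnet and allocation IDs are hypothetical):

```python
def nlb_create_request(name, subnet_to_eip):
    """Build kwargs for an internet-facing NLB with one Elastic IP per AZ.

    subnet_to_eip maps each AZ's subnet ID to the Elastic IP allocation ID
    that partners will add to their allow lists.
    """
    return {
        "Name": name,
        "Type": "network",
        "Scheme": "internet-facing",
        "SubnetMappings": [
            {"SubnetId": subnet, "AllocationId": eip_allocation}
            for subnet, eip_allocation in sorted(subnet_to_eip.items())
        ],
    }
```

A listener on the service's static TCP port and a target group of the EC2 instances complete the setup, with the Route 53 alias record pointing at the NLB's DNS name.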
A company uses an on-premises data analytics platform. The system is highly available in a fully redundant configuration across 12 servers in the company’s data center. The system runs scheduled jobs, both hourly and daily, in addition to one-time requests from users. Scheduled jobs can take between 20 minutes and 2 hours to finish running and have tight SLAs. The scheduled jobs account for 65% of the system usage. User jobs typically finish running in less than 5 minutes and have no SLA. The user jobs account for 35% of system usage. During system failures, scheduled jobs must continue to meet SLAs. However, user jobs can be delayed. A solutions architect needs to move the system to Amazon EC2 instances and adopt a consumption-based model to reduce costs with no long-term commitments. The solution must maintain high availability and must not affect the SLAs. Which solution will meet these requirements MOST cost-effectively?
Answer(s): D
D) Split the 12 instances across three Availability Zones in the chosen AWS Region. Run three instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run one instance in each Availability Zone as a Spot Instance. This is the correct solution because it ensures high availability by distributing instances across three Availability Zones while using On-Demand Instances with Capacity Reservations for the critical scheduled jobs that must meet SLAs. Spot Instances reduce costs and are a good fit for the user jobs, which have no SLA and can tolerate delays. This approach strikes the right balance between cost-effectiveness and high availability, with no long-term commitment.
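A back-of-the-envelope comparison shows where the savings come from. The prices below are illustrative assumptions (Spot Instances commonly run 60-90% below On-Demand); the point is the ratio, not the absolute numbers.

```python
# Assumed prices for illustration only; real prices vary by instance type,
# Region, and current Spot market.
ON_DEMAND_HOURLY = 0.17   # assumed On-Demand price per instance-hour
SPOT_DISCOUNT = 0.70      # assumed 70% discount for Spot
HOURS_PER_MONTH = 730

def monthly_cost(on_demand_count, spot_count):
    spot_hourly = ON_DEMAND_HOURLY * (1 - SPOT_DISCOUNT)
    return HOURS_PER_MONTH * (
        on_demand_count * ON_DEMAND_HOURLY + spot_count * spot_hourly
    )

# Option D: 3 On-Demand + 1 Spot per AZ across three AZs (9 + 3 instances).
cost_d = monthly_cost(on_demand_count=9, spot_count=3)
# Baseline: all 12 instances On-Demand.
cost_all_od = monthly_cost(on_demand_count=12, spot_count=0)
```

Under these assumptions the mixed fleet saves roughly the cost of two On-Demand instances per month while the nine reserved-capacity instances still cover the SLA-bound scheduled workload.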