Free AWS-SOLUTIONS-ARCHITECT-PROFESSIONAL Exam Braindumps (page: 37)


A financial services company receives a regular data feed from its credit card servicing partner. Approximately 5,000 records are sent every 15 minutes in plaintext, delivered over HTTPS directly into an Amazon S3 bucket with server-side encryption. This feed contains sensitive credit card primary account number (PAN) data. The company needs to automatically mask the PAN before sending the data to another S3 bucket for additional internal processing. The company also needs to remove and merge specific fields, and then transform the record into JSON format. Additionally, extra feeds are likely to be added in the future, so any design needs to be easily expandable.

Which solution will meet these requirements?

  1. Invoke an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Invoke another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Invoke a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.
  2. Invoke an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record, and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate instance.
  3. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Invoke an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
  4. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.

Answer(s): C

Explanation:

Option C is correct for the following reasons:
1. Data transformation and processing: AWS Glue is a managed extract, transform, and load (ETL) service designed for processing and transforming large datasets. It is well suited to masking sensitive fields such as the PAN, removing and merging fields, and converting records into JSON format.
2. Integration with S3: A Glue crawler with a custom classifier automatically classifies the incoming feed and builds a matching table definition, giving the ETL job a schema for processing the records.
3. Scalability and flexibility: AWS Glue scales to handle varying data volumes. Because the company anticipates additional feeds, Glue can accommodate new data sources and formats without significant changes to the architecture.
4. Automation with Lambda: An AWS Lambda function triggered on file delivery starts the Glue ETL job as soon as new data arrives in S3, giving an event-driven pipeline and timely processing (a minimal trigger sketch follows this explanation).
5. Minimal operational overhead: AWS Glue is a fully managed service, so the company can focus on data processing rather than infrastructure management.
This combination of AWS Glue and Lambda effectively meets the company’s requirements for data transformation, security (masking sensitive data), and scalability while ensuring that the process is streamlined and efficient.
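For illustration, below is a minimal sketch (Python, boto3) of the Lambda trigger described in reason 4: it starts a Glue ETL job when a new feed object lands in the S3 bucket. The job name, bucket name, and argument keys are hypothetical placeholders, not values from the question.

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Invoked by the S3 ObjectCreated event on the feed landing bucket."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Start the pre-defined Glue ETL job; the job script masks the PAN,
        # removes/merges fields, and writes JSON to the internal bucket.
        run = glue.start_job_run(
            JobName="pan-masking-etl-job",  # hypothetical job name
            Arguments={
                "--source_bucket": bucket,
                "--source_key": key,
                "--target_bucket": "internal-processing-bucket",  # hypothetical
            },
        )
        print(f"Started Glue job run {run['JobRunId']} for s3://{bucket}/{key}")
```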



A company wants to use AWS to create a business continuity solution in case the company's main on-premises application fails. The application runs on physical servers that also run other applications. The on-premises application that the company is planning to migrate uses a MySQL database as a data store. All the company's on-premises applications use operating systems that are compatible with Amazon EC2.

Which solution will achieve the company's goal with the LEAST operational overhead?

  1. Install the AWS Replication Agent on the source servers, including the MySQL servers. Set up replication for all servers. Launch test instances for regular drills. Cut over to the test instances to fail over the workload in the case of a failure event.
  2. Install the AWS Replication Agent on the source servers, including the MySQL servers. Initialize AWS Elastic Disaster Recovery in the target AWS Region. Define the launch settings. Frequently perform failover and fallback from the most recent point in time.
  3. Create AWS Database Migration Service (AWS DMS) replication servers and a target Amazon Aurora MySQL DB cluster to host the database. Create a DMS replication task to copy the existing data to the target DB cluster. Create a local AWS Schema Conversion Tool (AWS SCT) change data capture (CDC) task to keep the data synchronized. Install the rest of the software on EC2 instances by starting with a compatible base AMI.
  4. Deploy an AWS Storage Gateway Volume Gateway on premises. Mount volumes on all on-premises servers. Install the application and the MySQL database on the new volumes. Take regular snapshots. Install all the software on EC2 instances by starting with a compatible base AMI. Launch a Volume Gateway on an EC2 instance. Restore the volumes from the latest snapshot. Mount the new volumes on the EC2 instances in the case of a failure event.

Answer(s): B

Explanation:

Option B is correct for the following reasons:
1. AWS Elastic Disaster Recovery: The service is purpose-built for disaster recovery. It replicates on-premises servers to AWS with minimal operational overhead, which matches the company's goal.
2. Comprehensive replication: Installing the AWS Replication Agent on the source servers, including the MySQL servers, continuously replicates all data to AWS. This minimizes data loss and keeps the latest state of the application available in case of a failure.
3. Simple cutover process: Elastic Disaster Recovery provides a straightforward failover to recovery instances in AWS. Predefined launch settings ensure the application is configured correctly during failover, simplifying operations during a disaster.
4. Testing and validation: The service supports regular drills of the failover and fallback process, which is critical for validating the business continuity plan without disrupting ongoing operations (see the sketch after this explanation).
5. Least operational overhead: The approach avoids manual intervention and complex setups such as separate DMS replication or Storage Gateway snapshot handling, leading to less management overhead than the other options.
Overall, option B addresses the company's requirement for a business continuity solution with the least operational complexity and ensures that the on-premises application can fail over to AWS smoothly.
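As a rough illustration of the drill and failover step, the sketch below (Python, boto3) uses the Elastic Disaster Recovery API (service name drs) to launch drill instances for all replicating source servers from the most recent point in time. The Region is a placeholder, pagination is omitted, and the exact request/response fields should be verified against the current API; installing the AWS Replication Agent on the source servers happens outside this code.

```python
import boto3

# AWS Elastic Disaster Recovery is "drs" in boto3; use the target AWS Region.
drs = boto3.client("drs", region_name="us-east-1")  # Region is a placeholder

def launch_recovery(is_drill: bool = True) -> None:
    """Launch recovery (or drill) instances from the most recent point in time
    for every replicating source server."""
    # Single page for brevity; paginate with nextToken in a real account.
    servers = drs.describe_source_servers(filters={})["items"]
    source_servers = [{"sourceServerID": s["sourceServerID"]} for s in servers]

    # isDrill=True launches isolated test instances for a rehearsal;
    # call with is_drill=False during an actual failure event.
    job = drs.start_recovery(isDrill=is_drill, sourceServers=source_servers)
    print(f"Started Elastic Disaster Recovery job {job['job']['jobID']}")
```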



A company is subject to regulatory audits of its financial information. External auditors who use a single AWS account need access to the company's AWS account. A solutions architect must provide the auditors with secure, read-only access to the company's AWS account. The solution must comply with AWS security best practices.

Which solution will meet these requirements?

  1. In the company's AWS account, create resource policies for all resources in the account to grant access to the auditors' AWS account. Assign a unique external ID to the resource policy.
  2. In the company's AWS account, create an IAM role that trusts the auditors' AWS account. Create an IAM policy that has the required permissions. Attach the policy to the role. Assign a unique external ID to the role's trust policy.
  3. In the company's AWS account, create an IAM user. Attach the required IAM policies to the IAM user. Create API access keys for the IAM user. Share the access keys with the auditors.
  4. In the company's AWS account, create an IAM group that has the required permissions. Create an IAM user in the company's account for each auditor. Add the IAM users to the IAM group.

Answer(s): B

Explanation:

Option B is correct for the following reasons:
1. Cross-account access: Creating an IAM role that trusts the auditors' AWS account enables secure cross-account access. Auditors can assume the role only from their specified account (a minimal setup sketch follows this explanation).
2. Least privilege: Attaching an IAM policy with only the necessary read-only permissions keeps the role aligned with the principle of least privilege; auditors get access to what they need for the audit and nothing more.
3. External ID: A unique external ID in the role's trust policy adds a further safeguard against the confused deputy problem, in which a third party that works with many customers could be tricked into using its access on behalf of the wrong one.
4. AWS security best practices: The approach avoids long-lived static credentials (option C) and ensures access can be tightly controlled and monitored through IAM roles.
5. Temporary, low-maintenance access: Because a role is assumed only when needed, there is no need to create and manage individual IAM users for each auditor (option D), which simplifies management and reduces overhead.
Overall, option B provides a secure, scalable, and compliant method for granting auditors read-only access to the company’s AWS resources, making it the best choice for this scenario.
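For illustration, here is a minimal sketch (Python, boto3) of how the role in option B might be created. The auditor account ID, external ID, and role name are placeholders, and the AWS managed ReadOnlyAccess policy stands in for whatever read-only permissions the audit actually requires.

```python
import json
import boto3

iam = boto3.client("iam")

AUDITOR_ACCOUNT_ID = "111122223333"   # auditors' AWS account ID (placeholder)
EXTERNAL_ID = "audit-unique-external-id"  # unique external ID agreed with the auditors (placeholder)

# Trust policy: only principals in the auditors' account that supply the
# agreed external ID can assume the role (mitigates the confused deputy problem).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{AUDITOR_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

iam.create_role(
    RoleName="ExternalAuditorReadOnly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Read-only cross-account access for external auditors",
)

# Grant read-only permissions with the AWS managed ReadOnlyAccess policy
# (or a narrower customer managed policy scoped to the audited resources).
iam.attach_role_policy(
    RoleName="ExternalAuditorReadOnly",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```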



A company has a latency-sensitive trading platform that uses Amazon DynamoDB as a storage backend. The company configured the DynamoDB table to use on-demand capacity mode. A solutions architect needs to design a solution to improve the performance of the trading platform. The new solution must ensure high availability for the trading platform.

Which solution will meet these requirements with the LEAST latency?

  1. Create a two-node DynamoDB Accelerator (DAX) cluster. Configure an application to read and write data by using DAX.
  2. Create a three-node DynamoDB Accelerator (DAX) cluster. Configure an application to read data by using DAX and to write data directly to the DynamoDB table.
  3. Create a three-node DynamoDB Accelerator (DAX) cluster. Configure an application to read data directly from the DynamoDB table and to write data by using DAX.
  4. Create a single-node DynamoDB Accelerator (DAX) cluster. Configure an application to read data by using DAX and to write data directly to the DynamoDB table.

Answer(s): B

Explanation:

Option B is correct for the following reasons:
1. High availability: A three-node DAX cluster provides redundancy across nodes. If one node fails, the remaining two continue serving requests, so the trading platform stays operational.
2. Low latency for reads: DAX caches frequently accessed items in memory, reducing read latency from single-digit milliseconds to microseconds for cache hits, which is exactly what a latency-sensitive trading platform needs.
3. Direct writes to DynamoDB: Writing directly to the table keeps writes on the shortest path and avoids routing write traffic through the cache (a write-around pattern). Reads served from DAX are eventually consistent, which is an accepted trade-off for this read-heavy, latency-sensitive workload (see the sketch after this explanation).
4. Performance and reliability: The combination of a three-node DAX cluster for reads and direct writes to DynamoDB balances minimal read latency with durable, highly available storage.
In summary, option B effectively enhances the performance and availability of the trading platform while ensuring low latency for read operations, making it the best choice for this scenario.
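A minimal sketch of the read-through-DAX, write-direct pattern is shown below in Python, assuming the amazon-dax-client package plus a placeholder cluster endpoint, table name, and key schema. It is illustrative only; the DAX client constructor follows the package's documented usage and should be verified against the version in use.

```python
import boto3
import botocore.session
import amazondax  # pip install amazon-dax-client

REGION = "us-east-1"  # placeholder Region
DAX_ENDPOINT = "my-dax-cluster.xxxxxx.dax-clusters.us-east-1.amazonaws.com:8111"  # placeholder
TABLE = "TradeOrders"  # hypothetical table with an "order_id" partition key

# Writes go straight to DynamoDB (write-around); reads go through the DAX cache.
dynamodb = boto3.client("dynamodb", region_name=REGION)
dax = amazondax.AmazonDaxClient(
    botocore.session.get_session(), region_name=REGION, endpoints=[DAX_ENDPOINT]
)

def put_order(order_id: str, price: str) -> None:
    # Direct write to the table for the lowest write latency and durability.
    dynamodb.put_item(
        TableName=TABLE,
        Item={"order_id": {"S": order_id}, "price": {"N": price}},
    )

def get_order(order_id: str):
    # Read through the three-node DAX cluster; cache hits return in microseconds.
    response = dax.get_item(TableName=TABLE, Key={"order_id": {"S": order_id}})
    return response.get("Item")
```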






Post your comments and discuss the Amazon AWS-SOLUTIONS-ARCHITECT-PROFESSIONAL exam with other community members:

Zak commented on June 28, 2024
@AppleKid, I managed to pass this exam after failing once. Do not sit for your exam without memorizing these questions. These are what you will see in the real exam.
Anonymous

Apple Kid commented on June 26, 2024
Did anyone take the exam recently, and can you tell if these are good?
Anonymous

Captain commented on June 26, 2024
This is so helpful
Anonymous

udaya commented on April 25, 2024
Still learning, and the questions seem to be helpful.
Anonymous

Jerry commented on February 18, 2024
Very good for the exam!
HONG KONG

AWS-Guy commented on February 16, 2024
Precise and to the point. I aced this exam and am now going for the next one. Very grateful to this site and its wonderful content.
CANADA

Jerry commented on February 12, 2024
very good exam stuff
HONG KONG

travis head commented on November 16, 2023
I took the Amazon SAP-C02 test and prepared from this site, as it has the latest mock tests available, which helped me evaluate my performance and score 919/1000.
Anonymous

Weed Flipper commented on October 07, 2020
This is good stuff man.
CANADA

IT-Guy commented on September 29, 2020
Xengine software is good and free. Too bad it is only in English, with no support for French.
FRANCE

pema commented on August 30, 2019
Can I have the latest version of this exam?
GERMANY

MrSimha commented on February 23, 2019
Thank you
Anonymous

Phil C. commented on November 12, 2018
Too soon to tell, but I will be back to post a review after my exam.
Anonymous

MD EJAZ ALI TANWIR commented on August 20, 2017
This is a valid dump in the US. Thank you guys for providing this.
UNITED STATES

flypig commented on June 02, 2017
The braindumps will shorten my prep time for this exam!
CHINA