Free AWS-SOLUTIONS-ARCHITECT-PROFESSIONAL Exam Braindumps (page: 41)

Page 41 of 134

A company is running an application that uses an Amazon ElastiCache for Redis cluster as a caching layer. A recent security audit revealed that the company has configured encryption at rest for ElastiCache. However, the company did not configure ElastiCache to use encryption in transit. Additionally, users can access the cache without authentication.

A solutions architect must make changes to require user authentication and to ensure that the company is using end-to-end encryption.

Which solution will meet these requirements?

  A. Create an AUTH token. Store the token in AWS Systems Manager Parameter Store as an encrypted parameter. Create a new cluster with AUTH, and configure encryption in transit. Update the application to retrieve the AUTH token from Parameter Store when necessary and to use the AUTH token for authentication.
  B. Create an AUTH token. Store the token in AWS Secrets Manager. Configure the existing cluster to use the AUTH token, and configure encryption in transit. Update the application to retrieve the AUTH token from Secrets Manager when necessary and to use the AUTH token for authentication.
  C. Create an SSL certificate. Store the certificate in AWS Secrets Manager. Create a new cluster, and configure encryption in transit. Update the application to retrieve the SSL certificate from Secrets Manager when necessary and to use the certificate for authentication.
  D. Create an SSL certificate. Store the certificate in AWS Systems Manager Parameter Store as an encrypted advanced parameter. Update the existing cluster to configure encryption in transit. Update the application to retrieve the SSL certificate from Parameter Store when necessary and to use the certificate for authentication.

Answer(s): B

Explanation:

The selected solution is:
B) Create an AUTH token. Store the token in AWS Secrets Manager. Configure the existing cluster to use the AUTH token, and configure encryption in transit. Update the application to retrieve the AUTH token from Secrets Manager when necessary and to use the AUTH token for authentication.
Reasoning:
1. Authentication: By creating an AUTH token, the solution implements user authentication for access to the Amazon ElastiCache for Redis cluster, which addresses the audit finding that users could access the cache without authentication.
2. Storing the AUTH token securely: Using AWS Secrets Manager to store the AUTH token keeps it secure and encrypted, which is a best practice for managing sensitive information. Secrets Manager also provides easy retrieval and rotation of the AUTH token.
3. Configuring encryption in transit: Enabling encryption in transit on the existing ElastiCache cluster ensures that data exchanged between the client application and the Redis cluster is protected from eavesdropping and tampering.
4. Least operational overhead: This approach modifies the existing setup with minimal disruption while meeting both the authentication and encryption requirements.
Overall, this solution addresses both security audit findings while remaining easy to manage and implement.
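As a rough sketch of the application-side change, the code below builds redis-py connection settings from a Secrets Manager SecretString. The secret layout (a JSON object with an "auth_token" key), the secret name, and the cluster endpoint are assumptions for illustration, not values from the question.

```python
import json

def redis_connection_kwargs(secret_string, host):
    """Build redis-py connection settings from a Secrets Manager SecretString.

    Assumes the secret is stored as JSON with an "auth_token" key.
    """
    token = json.loads(secret_string)["auth_token"]
    return {
        "host": host,
        "port": 6379,
        "password": token,  # Redis AUTH token for authentication
        "ssl": True,        # TLS, i.e. encryption in transit
    }

def fetch_secret(secret_id):
    """Retrieve the SecretString for a secret (requires AWS credentials)."""
    import boto3  # imported here so the sketch runs without boto3 installed
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=secret_id)["SecretString"]
```

The application would then connect with something like `redis.Redis(**redis_connection_kwargs(fetch_secret("prod/redis/auth"), cluster_endpoint))`, where the secret name is hypothetical.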



A company is running a compute workload by using Amazon EC2 Spot Instances that are in an Auto Scaling group. The launch template uses two placement groups and a single instance type.

Recently, a monitoring system reported Auto Scaling instance launch failures that correlated with longer wait times for system users. The company needs to improve the overall reliability of the workload.

Which solution will meet this requirement?

  A. Replace the launch template with a launch configuration to use an Auto Scaling group that uses attribute-based instance type selection.
  B. Create a new launch template version that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch template version.
  C. Update the Auto Scaling group's launch template to increase the number of placement groups.
  D. Update the launch template to use a larger instance type.

Answer(s): B

Explanation:

B) Create a new launch template version that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch template version.
1. Attribute-based instance type selection: This allows the Auto Scaling group to automatically select from multiple instance types, increasing the chances of successfully launching instances.
2. Improving reliability: Using multiple instance types helps mitigate launch failures, especially when Spot capacity for a single instance type is unavailable, enhancing overall reliability.
3. Flexibility: This method optimizes resource allocation and usage across different instance types, maintaining compute capacity without excessive cost.
4. Minimal changes required: Creating a new launch template version is straightforward and preserves the existing configuration.
This solution effectively addresses the need for improved workload reliability while keeping the cost benefits of Spot Instances.
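To make the mechanism concrete, the helper below builds the `MixedInstancesPolicy` structure that an Auto Scaling group uses for attribute-based instance type selection: an `InstanceRequirements` override (vCPU and memory ranges) replaces the single fixed instance type. The template ID and the vCPU/memory bounds are placeholder values, and the allocation strategy shown is one reasonable choice, not the only one.

```python
def mixed_instances_policy(template_id, min_vcpus, max_vcpus, min_mem_mib, max_mem_mib):
    """Sketch of a MixedInstancesPolicy using attribute-based instance
    type selection (InstanceRequirements) instead of a fixed instance type."""
    return {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": template_id,
                "Version": "$Latest",
            },
            "Overrides": [
                {
                    # Any instance type matching these ranges can be launched.
                    "InstanceRequirements": {
                        "VCpuCount": {"Min": min_vcpus, "Max": max_vcpus},
                        "MemoryMiB": {"Min": min_mem_mib, "Max": max_mem_mib},
                    }
                }
            ],
        },
        "InstancesDistribution": {
            # Draw Spot capacity from the pools with the most spare capacity,
            # which lowers the chance of launch failures and interruptions.
            "SpotAllocationStrategy": "capacity-optimized",
        },
    }
```

This dictionary could be passed as the `MixedInstancesPolicy` argument of the Auto Scaling `UpdateAutoScalingGroup` API (for example via boto3) after publishing the new launch template version.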



A company is migrating a document processing workload to AWS. The company has updated many applications to natively use the Amazon S3 API to store, retrieve, and modify documents that a processing server generates at a rate of approximately 5 documents every second. After the document processing is finished, customers can download the documents directly from Amazon S3.

During the migration, the company discovered that it could not immediately update the processing server that generates many documents to support the S3 API. The server runs on Linux and requires fast local access to the files that the server generates and modifies. When the server finishes processing, the files must be available to the public for download within 30 minutes.

Which solution will meet these requirements with the LEAST amount of effort?

  A. Migrate the application to an AWS Lambda function. Use the AWS SDK for Java to generate, modify, and access the files that the company stores directly in Amazon S3.
  B. Set up an Amazon S3 File Gateway and configure a file share that is linked to the document store. Mount the file share on an Amazon EC2 instance by using NFS. When changes occur in Amazon S3, initiate a RefreshCache API call to update the S3 File Gateway.
  C. Configure Amazon FSx for Lustre with an import and export policy. Link the new file system to an S3 bucket. Install the Lustre client and mount the document store to an Amazon EC2 instance by using NFS.
  D. Configure AWS DataSync to connect to an Amazon EC2 instance. Configure a task to synchronize the generated files to and from Amazon S3.

Answer(s): B

Explanation:

B) Set up an Amazon S3 File Gateway and configure a file share that is linked to the document store. Mount the file share on an Amazon EC2 instance by using NFS. When changes occur in Amazon S3, initiate a RefreshCache API call to update the S3 File Gateway.
1. Immediate access: The S3 File Gateway provides fast local file access while still storing the files in Amazon S3, so the processing server can keep running without immediate changes.
2. Seamless integration: By mounting the file share on the EC2 instance over NFS, the server interacts with the files in a familiar way while they are stored in S3.
3. Public accessibility: Once the documents are processed, they land in Amazon S3 and can be made available for public download within the required 30-minute window.
4. Minimal effort: This solution requires minimal changes to the existing infrastructure and avoids modifying the processing server to natively use the S3 API.
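For reference, the share would be mounted with an ordinary NFS mount (along the lines of `mount -t nfs -o nolock,hard <gateway-ip>:/<share-name> /mnt/docs`), and the RefreshCache step maps to the Storage Gateway `RefreshCache` API. The sketch below separates the call parameters from the boto3 call itself; the file share ARN is a placeholder.

```python
def refresh_cache_params(file_share_arn, folders=("/",)):
    """Arguments for Storage Gateway's RefreshCache call, which makes
    objects written to the S3 bucket by other clients visible on the
    gateway's NFS file share."""
    return {
        "FileShareARN": file_share_arn,
        "FolderList": list(folders),
        "Recursive": True,  # also refresh subfolders of each listed folder
    }

def refresh_share(file_share_arn):
    """Invoke RefreshCache on the share (requires AWS credentials)."""
    import boto3  # imported here so the sketch runs without boto3 installed
    client = boto3.client("storagegateway")
    return client.refresh_cache(**refresh_cache_params(file_share_arn))
```

In this scenario the refresh is mainly needed when other clients write to the bucket behind the gateway's back; files the server writes through the NFS mount are uploaded to S3 by the gateway itself.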



A delivery company is running a serverless solution in the AWS Cloud. The solution manages user data, delivery information, and past purchase details. The solution consists of several microservices. The central user service stores sensitive data in an Amazon DynamoDB table. Several of the other microservices store a copy of parts of the sensitive data in different storage services.

The company needs the ability to delete user information upon request. As soon as the central user service deletes a user, every other microservice must also delete its copy of the data immediately.

Which solution will meet these requirements?

  A. Activate DynamoDB Streams on the DynamoDB table. Create an AWS Lambda trigger for the DynamoDB stream that will post events about user deletion in an Amazon Simple Queue Service (Amazon SQS) queue. Configure each microservice to poll the queue and delete the user from the DynamoDB table.
  B. Set up DynamoDB event notifications on the DynamoDB table. Create an Amazon Simple Notification Service (Amazon SNS) topic as a target for the DynamoDB event notification. Configure each microservice to subscribe to the SNS topic and to delete the user from the DynamoDB table.
  C. Configure the central user service to post an event on a custom Amazon EventBridge event bus when the company deletes a user. Create an EventBridge rule for each microservice to match the user deletion event pattern and invoke logic in the microservice to delete the user from the DynamoDB table.
  D. Configure the central user service to post a message on an Amazon Simple Queue Service (Amazon SQS) queue when the company deletes a user. Configure each microservice to create an event filter on the SQS queue and to delete the user from the DynamoDB table.

Answer(s): C

Explanation:

C) Configure the central user service to post an event on a custom Amazon EventBridge event bus when the company deletes a user. Create an EventBridge rule for each microservice to match the user deletion event pattern and invoke logic in the microservice to delete the user from the DynamoDB table.
1. Event-driven architecture: Amazon EventBridge enables a decoupled design in which microservices react to specific events (here, user deletions) without being directly coupled to the central user service.
2. Immediate action: When the central user service deletes a user, it publishes an event to EventBridge. Each microservice's rule matches the event, so every service acts on the deletion immediately.
3. Scalability and flexibility: New microservices can be added simply by creating another rule on the event bus, with no changes to the central user service.
4. Reduced complexity: Each microservice only implements logic to handle deletion events rather than managing its own polling or direct notifications.
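The publish side and the matching rule can be sketched as follows. The event bus name, `source`, `detail-type`, and payload field are illustrative choices, not values fixed by the question; the entry dictionary is what the central user service would pass to EventBridge `PutEvents` (for example via boto3's `put_events`).

```python
import json

def user_deleted_event(user_id, bus_name="user-events"):
    """Entry for EventBridge PutEvents announcing a user deletion."""
    return {
        "EventBusName": bus_name,
        "Source": "central-user-service",
        "DetailType": "UserDeleted",
        "Detail": json.dumps({"userId": user_id}),
    }

# Event pattern each microservice's EventBridge rule would use to match
# the deletion events and trigger its own cleanup target.
USER_DELETED_PATTERN = {
    "source": ["central-user-service"],
    "detail-type": ["UserDeleted"],
}
```

Each microservice registers a rule with this pattern on the custom bus and points the rule at its own target (for example a Lambda function) that deletes the user's copied data from that service's data store.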


