Free MCIA-LEVEL-1 Exam Braindumps (page: 17)


In Anypoint Platform, a company wants to configure multiple identity providers (IdPs) for various lines of business (LOBs). Multiple business groups and environments have been defined for these LOBs. What Anypoint Platform feature can use multiple IdPs to control access to the company's business groups and environments?

  1. User management
  2. Roles and permissions
  3. Dedicated load balancers
  4. Client Management

Answer(s): D

Explanation:

The correct answer is Client Management.
* Anypoint Platform acts as a client provider by default, but you can also configure external client providers to authorize client applications.
* As an API owner, you can apply an OAuth 2.0 policy to authorize client applications that try to access your API. You need an OAuth 2.0 provider to use an OAuth 2.0 policy (see the token-request sketch after this list).
* You can configure more than one client provider and associate the client providers with different environments. If you configure additional client providers after you have already created environments, you can associate the new client providers with those environments.
* You should review the existing client configuration before reassigning client providers to avoid any downtime with existing assets or APIs.
* When you delete a client provider from your master organization, the client provider is no longer available in the environments that used it.
* Also, assets or APIs that used the client provider can no longer authorize users who want to access them.
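
To make the OAuth 2.0 bullet above concrete, here is a minimal, hedged sketch of how a client application registered with an external client provider might obtain a token with the client_credentials grant and then call a policy-protected API. The token endpoint, client ID/secret, and API URL are hypothetical placeholders, and the JSON parsing is elided; this illustrates the general OAuth 2.0 flow, not an Anypoint-specific API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClientCredentialsSketch {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // 1. Ask the (hypothetical) external OAuth 2.0 provider for an access token
        //    using the client_credentials grant issued by the client provider.
        HttpRequest tokenRequest = HttpRequest.newBuilder()
                .uri(URI.create("https://idp.example.com/oauth2/token"))   // placeholder token endpoint
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "grant_type=client_credentials"
                        + "&client_id=YOUR_CLIENT_ID"
                        + "&client_secret=YOUR_CLIENT_SECRET"))
                .build();
        HttpResponse<String> tokenResponse =
                http.send(tokenRequest, HttpResponse.BodyHandlers.ofString());

        // In a real client you would parse the JSON body and extract "access_token";
        // a placeholder value is used here to show the second step.
        String accessToken = "ACCESS_TOKEN_FROM_RESPONSE";

        // 2. Call the API protected by the OAuth 2.0 policy, presenting the bearer token.
        HttpRequest apiRequest = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders"))          // placeholder API endpoint
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
        HttpResponse<String> apiResponse =
                http.send(apiRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println(tokenResponse.statusCode() + " / " + apiResponse.statusCode());
    }
}
```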

MuleSoft Reference:
https://docs.mulesoft.com/access-management/managing-api-clients
https://www.folkstalk.com/2019/11/mulesoft-integration-and-platform.html



An organization is sizing an Anypoint VPC to extend their internal network to CloudHub.
For this sizing calculation, the organization assumes 150 Mule applications will be deployed among three (3) production environments and will use CloudHub's default zero-downtime feature. Each Mule application is expected to be configured with two (2) CloudHub workers. This is expected to result in several Mule application deployments per hour. What CIDR block should be chosen for this Anypoint VPC?

  1. 10.0.0.0/21 (2048 IPs)
  2. 10.0.0.0/22 (1024 IPs)
  3. 10.0.0.0/23 (512 IPs)
  4. 10.0.0.0/24 (256 IPs)

Answer(s): A

Explanation:

* When you create an Anypoint VPC, the range of IP addresses for the network must be specified in the form of a Classless Inter-Domain Routing (CIDR) block, using CIDR notation.
* This address space is reserved for Mule workers, so it cannot overlap with any address space used in your data center if you want to peer it with your VPC.
* To calculate the proper sizing for your Anypoint VPC, you first need to understand that the number of dedicated IP addresses is not the same as the number of workers you have deployed.
* For each worker deployed to CloudHub, the following IP assignment takes place: for better fault tolerance, the VPC subnet may be divided into up to four Availability Zones; a few IP addresses are reserved for infrastructure; and at least two IP addresses per worker are required to perform zero-downtime deployments.
* Hence, in this scenario, a /21 block (2048 IPs) is required to support the requirement (see the sizing sketch below).
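
A minimal sketch of the sizing arithmetic implied by the bullets above, assuming 2 CloudHub workers per application and at least 2 IP addresses per worker during a zero-downtime deployment; reserved infrastructure addresses and headroom for several parallel redeployments per hour are why a larger-than-minimum block is chosen:

```java
public class VpcSizingSketch {
    public static void main(String[] args) {
        int apps = 150;             // Mule applications across the 3 production environments
        int workersPerApp = 2;      // CloudHub workers per application
        int ipsPerWorker = 2;       // at least 2 IPs per worker during a zero-downtime deployment

        int rawWorkerIps = apps * workersPerApp * ipsPerWorker;   // 600 IPs before any overhead
        System.out.println("Raw worker IPs needed: " + rawWorkerIps);

        // Candidate CIDR block sizes from the answer options.
        int[] prefixes = {21, 22, 23, 24};
        for (int prefix : prefixes) {
            int totalIps = 1 << (32 - prefix);                    // /21 -> 2048, /22 -> 1024, ...
            System.out.printf("10.0.0.0/%d provides %d IPs%n", prefix, totalIps);
        }
        // A /23 (512 IPs) is already too small for the raw worker count; the answer key
        // selects the /21 (2048 IPs) block to leave room for reserved infrastructure
        // addresses, Availability Zone division, and frequent parallel redeployments.
    }
}
```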



Refer to the exhibit.


This Mule application is deployed to multiple CloudHub workers with persistent queues enabled. The retrievefile flow's event source reads a CSV file from a remote SFTP server and then publishes each record in the CSV file to a VM queue. The processCustomerRecords flow's VM Listener receives messages from the same VM queue and then processes each message separately.
How are messages routed to the CloudHub workers as they are received by the VM Listener?

  1. Each message is routed to ONE of the CloudHub workers in a DETERMINISTIC round-robin fashion, thereby EXACTLY BALANCING messages among the CloudHub workers
  2. Each message is routed to ONE of the available CloudHub workers in a NON-DETERMINISTIC, non-round-robin fashion, thereby APPROXIMATELY BALANCING messages among the CloudHub workers
  3. Each message is routed to the SAME CloudHub worker that retrieved the file, thereby BINDING ALL messages to ONLY that ONE CloudHub worker
  4. Each message is duplicated to ALL of the CloudHub workers, thereby SHARING EACH message with ALL the CloudHub workers.

Answer(s): B

Explanation:

With persistent queues enabled on CloudHub, the VM queue is backed by a shared, persistent queueing service, so every worker of the application can consume from the same queue. The workers compete for messages as they become available: each message is delivered to exactly one worker, but the distribution is non-deterministic and only approximately balanced rather than a strict round robin (see the sketch below).
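
As a simplified analogy for this competing-consumers behaviour (plain Java threads and an in-memory queue, not CloudHub's actual persistent queueing service), the sketch below has two "workers" pulling from one shared queue and prints how many messages each happened to take:

```java
import java.util.Map;
import java.util.concurrent.*;

public class CompetingConsumersDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> sharedQueue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 1000; i++) {
            sharedQueue.put("record-" + i);      // the "VM queue" holding one message per CSV record
        }

        int workers = 2;                          // stand-in for the CloudHub workers
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            String name = "worker-" + w;
            pool.submit(() -> {
                String msg;
                // Each worker polls the same shared queue; whichever worker is free takes the next message.
                while ((msg = sharedQueue.poll()) != null) {
                    counts.merge(name, 1, Integer::sum);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // Each message went to exactly ONE worker; the split is not a strict round robin
        // and varies from run to run, i.e. approximate, non-deterministic balancing.
        System.out.println(counts);
    }
}
```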



A Mule application is being designed to perform product orchestration. The Mule application needs to join together the responses from an Inventory API and a Product Sales History API with the least latency.
To minimize the overall latency, what is the most idiomatic (used for its intended purpose) design to call each API request in the Mule application?

  1. Call each API request in a separate lookup call from Dataweave reduce operator
  2. Call each API request in a separate route of a Scatter-Gather
  3. Call each API request in a separate route of a Parallel For Each scope
  4. Call each API request in a separate Async scope

Answer(s): B

Explanation:

Scatter-Gather sends a request message to multiple targets concurrently. It collects the responses from all routes, and aggregates them into a single message.


Reference:

https://docs.mulesoft.com/mule-runtime/4.3/scatter-gather-concept
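
To illustrate the pattern in plain Java (an analogy using CompletableFuture, not Mule's Scatter-Gather implementation), the sketch below starts both hypothetical API calls at the same time and joins their results, so the overall latency is roughly that of the slower call rather than the sum of both:

```java
import java.util.concurrent.CompletableFuture;

public class ScatterGatherAnalogy {
    public static void main(String[] args) {
        // Start both "API calls" concurrently (scatter).
        CompletableFuture<String> inventory =
                CompletableFuture.supplyAsync(ScatterGatherAnalogy::callInventoryApi);
        CompletableFuture<String> salesHistory =
                CompletableFuture.supplyAsync(ScatterGatherAnalogy::callSalesHistoryApi);

        // Wait for both and combine the responses into a single message (gather).
        String combined = inventory.thenCombine(salesHistory,
                (inv, sales) -> "{ \"inventory\": " + inv + ", \"salesHistory\": " + sales + " }")
                .join();

        System.out.println(combined);
    }

    // Hypothetical stand-ins for the real HTTP requests to the two APIs.
    private static String callInventoryApi() {
        sleep(300);                               // simulate network latency
        return "{ \"stock\": 42 }";
    }

    private static String callSalesHistoryApi() {
        sleep(500);
        return "{ \"unitsSoldLast30Days\": 17 }";
    }

    private static void sleep(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```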





