MuleSoft Certified Platform Architect - Level 1 Exam

Updated On: 11-Jan-2026

An Anypoint Platform organization has been configured with an external identity provider (IdP) for identity management and client management. What credentials or token must be provided to Anypoint CLI to execute commands against the Anypoint Platform APIs?

  A. The credentials provided by the IdP for identity management
  B. The credentials provided by the IdP for client management
  C. An OAuth 2.0 token generated using the credentials provided by the IdP for client management
  D. An OAuth 2.0 token generated using the credentials provided by the IdP for identity management

Answer(s): A

Explanation:

Correct Answer: The credentials provided by the IdP for identity management
*****************************************
>> Anypoint CLI does not support OAuth 2.0 tokens issued by the client provider or the identity provider. The only tokens it accepts are bearer tokens, and those can only be generated using the Anypoint organization/environment client ID and secret via https://anypoint.mulesoft.com/accounts/login, not the client credentials of the client provider. So the OAuth 2.0 options are ruled out. Moreover, such a token is mainly intended for API Manager purposes and is not associated with a user, so it cannot be used to call most platform APIs (for example, CloudHub), as per the MuleSoft Knowledge article.
>> The other option Anypoint CLI allows is client credentials. Using the client credentials of a client provider is possible, but it requires setting up Connected Apps in client management, and no such setup is mentioned in the scenario described in the question.
>> The only option left, therefore, is to use the user credentials provided by the identity provider.
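For illustration, the sketch below shows how a bearer token could be obtained programmatically from the login URL mentioned above using the identity-management user credentials (the same username/password you would hand to Anypoint CLI) and then used to call a platform API. This is a minimal sketch, not an Anypoint CLI command; the endpoint paths and response field names are assumptions to verify against the current Anypoint Platform documentation.

```python
# Minimal sketch: exchange IdP-federated user credentials for a bearer token,
# then call a platform API with it. Endpoints/fields are assumptions.
import requests

ANYPOINT_BASE = "https://anypoint.mulesoft.com"

def get_bearer_token(username: str, password: str) -> str:
    # Same username/password pair you would provide to Anypoint CLI.
    resp = requests.post(
        f"{ANYPOINT_BASE}/accounts/login",
        json={"username": username, "password": password},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]  # assumed response field name

if __name__ == "__main__":
    token = get_bearer_token("my.user", "my-password")
    # Example platform call: fetch the profile of the authenticated user.
    me = requests.get(
        f"{ANYPOINT_BASE}/accounts/api/me",  # assumed endpoint path
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    print(me.status_code)
```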


Reference:

https://docs.mulesoft.com/runtime-manager/anypoint-platform-cli#authentication



Say there is a legacy CRM system called CRM-Z that offers the functions below:
1. Customer creation
2. Amend details of an existing customer
3. Retrieve details of a customer
4. Suspend a customer
As an integration architect, which approach would you suggest for implementing System APIs over CRM-Z?

  A. Implement a System API named customerManagement which has all the functionalities wrapped in it as various operations/resources
  B. Implement different System APIs named createCustomer, amendCustomer, retrieveCustomer and suspendCustomer as they are modular and have separation of concerns
  C. Implement different System APIs named createCustomerInCRMZ, amendCustomerInCRMZ, retrieveCustomerFromCRMZ and suspendCustomerInCRMZ as they are modular and have separation of concerns

Answer(s): B

Explanation:

Correct Answer: Implement different System APIs named createCustomer, amendCustomer, retrieveCustomer and suspendCustomer as they are modular and have separation of concerns
*****************************************
>> It is quite normal to have a single API exposing different verb + resource combinations. However, this fits well for an Experience API or a Process API, but it is not the best architecture style for System APIs. So the option with just one customerManagement API is not the best choice here.
>> The option with APIs named in the createCustomerInCRMZ style is the next closest choice with respect to modularization and lower maintenance, but the API names are directly coupled to the legacy system. A more future-proof approach is to abstract the backend system name out of the API names, as that allows seamless replacement or migration of any backend system at any time. So this is not the correct choice either.
>> createCustomer, amendCustomer, retrieveCustomer and suspendCustomer is the right approach and the best fit among the options: these APIs are modular, their names are decoupled from the backend system, and they cover all the requirements a System API needs, as sketched below.
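To illustrate the decoupling argument (this is not MuleSoft code), the hypothetical sketch below shows a createCustomer operation whose name reflects the business capability while a backend adapter hides CRM-Z; swapping the adapter for a new CRM does not change the API name or contract. All class and method names here are illustrative assumptions.

```python
# Illustrative sketch: System API named by capability, backend hidden behind an adapter.
from typing import Protocol

class CustomerBackend(Protocol):
    def create(self, payload: dict) -> dict: ...

class CrmZAdapter:
    """Adapter that talks to the legacy CRM-Z system."""
    def create(self, payload: dict) -> dict:
        # Call CRM-Z's proprietary customer-creation function here.
        return {"backend": "CRM-Z", "customerId": "Z-001", **payload}

class NewCrmAdapter:
    """A future replacement backend; the System API name never changes."""
    def create(self, payload: dict) -> dict:
        return {"backend": "NewCRM", "customerId": "N-001", **payload}

def create_customer(backend: CustomerBackend, payload: dict) -> dict:
    """The 'createCustomer' System API operation, named by business capability."""
    return backend.create(payload)

if __name__ == "__main__":
    print(create_customer(CrmZAdapter(), {"name": "Ada"}))
    print(create_customer(NewCrmAdapter(), {"name": "Ada"}))  # seamless backend swap
```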



An organization wants MuleSoft-hosted runtime plane features (such as HTTP load balancing, zero downtime, and horizontal and vertical scaling) in its Azure environment. What runtime plane minimizes the organization's effort to achieve these features?

  A. Anypoint Runtime Fabric
  B. Anypoint Platform for Pivotal Cloud Foundry
  C. CloudHub
  D. A hybrid combination of customer-hosted and MuleSoft-hosted Mule runtimes

Answer(s): A

Explanation:

Correct Answer: Anypoint Runtime Fabric
*****************************************
>> When the customer already has an Azure environment, it is not ideal to go with a hybrid model in which some Mule runtimes are hosted on Azure and some on MuleSoft; this adds effort without benefit.
>> CloudHub is a MuleSoft-hosted runtime plane that runs on AWS; it cannot be pointed at the customer's Azure environment.
>> Anypoint Platform for Pivotal Cloud Foundry is specific to infrastructure provided by Pivotal Cloud Foundry.
>> Anypoint Runtime Fabric is the right answer: it is a container service that automates the deployment and orchestration of Mule applications and API gateways, and it runs within customer-managed infrastructure on AWS, Azure, virtual machines (VMs), and bare-metal servers.
Some of the capabilities of Anypoint Runtime Fabric include:
- Isolation between applications by running a separate Mule runtime per application.
- Ability to run multiple versions of the Mule runtime on the same set of resources.
- Scaling applications across multiple replicas.
- Automated application fail-over.
- Application management with Anypoint Runtime Manager.


Reference:

https://docs.mulesoft.com/runtime-fabric/1.7/



A company has started to create an application network and is now planning to implement a Center for Enablement (C4E) organizational model. What key factor would lead the company to decide upon a federated rather than a centralized C4E?

  A. When there are a large number of existing common assets shared by development teams
  B. When various teams responsible for creating APIs are new to integration and hence need extensive training
  C. When development is already organized into several independent initiatives or groups
  D. When the majority of the applications in the application network are cloud based

Answer(s): C

Explanation:

Correct Answer: When development is already organized into several independent initiatives or groups
*****************************************
>> It would require a lot of process effort for a single C4E team to coordinate with multiple development teams that are already organized into several independent initiatives. A single, centralized C4E works well when the different teams share at least a common initiative. So in this scenario a federated C4E works better than a centralized C4E.



A retail company is using an Order API to accept new orders. The Order API uses a JMS queue to submit orders to a backend order management service. The normal load for orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore. The CPU load of each CloudHub worker normally runs well below 70%. However, several times during the year the Order API gets four times (4x) the average number of orders. This causes the CloudHub worker CPU load to exceed 90% and the order submission time to exceed 30 seconds. The cause, however, is NOT the backend order management service, which still responds fast enough to meet the response SLA for the Order API. What is the MOST resource-efficient way to configure the Mule application's CloudHub deployment to help the company cope with this performance challenge?

  A. Permanently increase the size of each of the two (2) CloudHub workers by at least four times (4x) to one (1) vCore
  B. Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
  C. Permanently increase the number of CloudHub workers by four times (4x) to eight (8) CloudHub workers
  D. Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%

Answer(s): D

Explanation:

Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
*****************************************
The scenario clearly states that the usual traffic throughout the year is handled well by the existing worker configuration, with CPU running well below 70%. The problem occurs only occasionally, when there is a spike in the number of incoming orders.
Based on that, we need neither a permanent increase in the size of each worker nor a permanent increase in the number of workers; outside those occasional spikes the extra resources would sit idle and be wasted.
Two options remain: a horizontal CloudHub autoscaling policy that automatically increases the number of workers, or a vertical CloudHub autoscaling policy that automatically increases the vCore size of each worker.
Here, we need to take two things into consideration:
1. CPU
2. Order submission rate to the JMS queue
>> From a CPU perspective, both options (horizontal and vertical scaling) solve the issue; either brings utilization back below 90%.
>> However, with vertical scaling the application is still load balanced across only two workers, so the incoming request processing rate and the order submission rate to the JMS queue may not improve much; throughput stays roughly the same and only CPU utilization comes down.
>> With horizontal scaling, new workers are spawned and added to the load balancer, which increases throughput. This addresses both the CPU utilization and the order submission rate.
Hence, a horizontal CloudHub autoscaling policy is the right and best answer. A sketch of the trigger logic follows.
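To make the trigger concrete, here is a hypothetical sketch of the decision a horizontal autoscaling policy repeatedly applies: add a worker while sustained CPU stays above 70%, and release workers once load drops back. The thresholds, worker limits, and function names are illustrative assumptions, not CloudHub API calls.

```python
# Illustrative sketch of a horizontal autoscaling decision (not the CloudHub API).
CPU_SCALE_OUT = 0.70   # scale out above 70% sustained CPU, per the policy trigger
CPU_SCALE_IN = 0.30    # scale back in when load drops well below normal
MIN_WORKERS = 2        # the normal-load configuration from the scenario
MAX_WORKERS = 8        # headroom for the 4x seasonal spike

def next_worker_count(current_workers: int, sustained_cpu: float) -> int:
    """Return the worker count after one evaluation of the policy."""
    if sustained_cpu > CPU_SCALE_OUT and current_workers < MAX_WORKERS:
        return current_workers + 1   # an extra worker raises throughput to the JMS queue
    if sustained_cpu < CPU_SCALE_IN and current_workers > MIN_WORKERS:
        return current_workers - 1   # release idle capacity after the spike
    return current_workers

if __name__ == "__main__":
    workers, cpu_samples = 2, [0.92, 0.88, 0.75, 0.40, 0.25]
    for cpu in cpu_samples:
        workers = next_worker_count(workers, cpu)
        print(f"sustained CPU {cpu:.0%} -> {workers} workers")
```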





