Free MCIA-LEVEL-1-MAINTENANCE Exam Braindumps (page: 9)


What aspects of a CI/CD pipeline for Mule applications can be automated using MuleSoft-provided Maven plugins?

  1. Compile, package, unit test, validate unit test coverage, deploy
  2. Compile, package, unit test, deploy, integration test
  3. Compile, package, unit test, deploy, create associated API instances in API Manager
  4. Import from API designer, compile, package, unit test, deploy, publish to Anypoint Exchange

Answer(s): A

Explanation:

The correct answer is "Compile, package, unit test, validate unit test coverage, deploy." Anypoint Platform supports continuous integration and continuous delivery using industry-standard tools.

Mule Maven Plugin: The Mule Maven plugin can automate the building, packaging, and deployment of Mule applications from source projects. Using the Mule Maven plugin, you can automate your Mule application deployment to CloudHub, to Anypoint Runtime Fabric, or on-premises, using any of the following deployment strategies:
* CloudHub deployment
* Runtime Fabric deployment
* Runtime Manager REST API deployment
* Runtime Manager agent deployment

MUnit Maven Plugin: The MUnit Maven plugin can automate test execution, and it ties in with the Mule Maven plugin. It provides a full suite of integration and unit test capabilities, and is fully integrated with Maven and Surefire for integration with your continuous deployment environment. Since MUnit 2.x, the coverage-report goal is integrated with the Maven reporting section, and coverage reports are generated during Maven's site lifecycle by the coverage-report goal. One of the features of MUnit Coverage is to fail the build if a certain coverage level is not reached.

MUnit is not used for integration testing. Likewise, publishing to Anypoint Exchange and creating associated API instances in API Manager are not parts of a CI/CD pipeline that can be automated with MuleSoft-provided Maven plugins.
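For illustration, a pom.xml build section wiring both plugins together might look like the sketch below. The plugin versions, application name, and the 80% coverage threshold are assumptions for the example, not values from the source; the plugin coordinates and configuration elements are the documented ones.

<!-- Illustrative pom.xml build section; versions and thresholds are assumptions. -->
<build>
  <plugins>
    <!-- Mule Maven plugin: packages the app and deploys it (here, to CloudHub). -->
    <plugin>
      <groupId>org.mule.tools.maven</groupId>
      <artifactId>mule-maven-plugin</artifactId>
      <version>3.8.2</version>
      <extensions>true</extensions>
      <configuration>
        <cloudHubDeployment>
          <uri>https://anypoint.mulesoft.com</uri>
          <muleVersion>4.4.0</muleVersion>
          <applicationName>${app.name}</applicationName>
          <environment>Sandbox</environment>
        </cloudHubDeployment>
      </configuration>
    </plugin>
    <!-- MUnit Maven plugin: runs unit tests and fails the build when
         coverage drops below the configured level. -->
    <plugin>
      <groupId>com.mulesoft.munit.tools</groupId>
      <artifactId>munit-maven-plugin</artifactId>
      <version>2.3.11</version>
      <executions>
        <execution>
          <goals>
            <goal>test</goal>
            <goal>coverage-report</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <coverage>
          <runCoverage>true</runCoverage>
          <failBuild>true</failBuild>
          <requiredApplicationCoverage>80</requiredApplicationCoverage>
        </coverage>
      </configuration>
    </plugin>
  </plugins>
</build>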

In the architecture described in the question, multiple CloudHub workers must share a watermark. Persistent Object Store is the correct answer.

* Mule Object Stores: An object store is a facility for storing objects in or across Mule applications. Mule uses object stores to persist data for eventual retrieval.
Mule provides two types of object stores:
1) In-memory store: stores objects in local Mule runtime memory. Objects are lost on shutdown of the Mule runtime, so an in-memory store cannot be used in this scenario, where the watermark must be shared across all CloudHub workers.
2) Persistent store: Mule persists data when an object store is explicitly configured to be persistent, so the watermark remains available even if any of the workers goes down.
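As a minimal sketch of this idea (the store name, key, and stored expression below are illustrative assumptions), a persistent object store can be declared once and written to from a flow, so that every worker reads the same watermark:

<!-- Minimal sketch; store name, key, and value are illustrative assumptions. -->
<os:object-store name="watermarkStore" persistent="true" />

<flow name="update-watermark-flow">
  <!-- Persist the latest watermark so every CloudHub worker, and any
       restarted worker, can retrieve the same value. -->
  <os:store key="watermark" objectStore="watermarkStore">
    <os:value>#[now()]</os:value>
  </os:store>
</flow>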



What condition requires using a CloudHub Dedicated Load Balancer?

  1. When cross-region load balancing is required between separate deployments of the same Mule application
  2. When custom DNS names are required for API implementations deployed to customer-hosted Mule runtimes
  3. When API invocations across multiple CloudHub workers must be load balanced
  4. When server-side load-balanced TLS mutual authentication is required between API implementations and API clients

Answer(s): D

Explanation:

The correct answer is "When server-side load-balanced TLS mutual authentication is required between API implementations and API clients." CloudHub dedicated load balancers (DLBs) are an optional component of Anypoint Platform that enable you to route external HTTP and HTTPS traffic to multiple Mule applications deployed to CloudHub workers in a Virtual Private Cloud (VPC). Dedicated load balancers enable you to:
* Handle load balancing among the different CloudHub workers that run your application.
* Define SSL configurations to provide custom certificates and optionally enforce two-way SSL client authentication.
* Configure proxy rules that map your applications to custom domains. This enables you to host your applications under a single domain.
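A DLB itself is configured in Runtime Manager rather than in application XML, but as a rough sketch of what the equivalent two-way TLS setup looks like in Mule terms, a TLS context that both presents a server certificate and validates client certificates could be written as below. All file paths, names, and passwords are placeholder assumptions.

<!-- Illustrative only: sketches two-way TLS on an HTTPS listener;
     paths and passwords are placeholder assumptions. -->
<tls:context name="mutualTlsContext">
  <!-- Server identity presented to clients. -->
  <tls:key-store path="server-keystore.jks"
                 keyPassword="changeit"
                 password="changeit" />
  <!-- Trusted client certificates; configuring a trust store on a
       listener is what enforces client authentication. -->
  <tls:trust-store path="client-truststore.jks"
                   password="changeit" />
</tls:context>

<http:listener-config name="httpsListenerConfig">
  <http:listener-connection host="0.0.0.0" port="8082"
                            protocol="HTTPS"
                            tlsContext="mutualTlsContext" />
</http:listener-config>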



A company is building an application network and has deployed four Mule APIs: one experience API, one process API, and two system APIs. The logs from all the APIs are aggregated in an external log aggregation tool. The company wants to trace messages that are exchanged between multiple API implementations. What is the most idiomatic (based on its intended use) identifier that should be used to implement Mule event tracing across the multiple API implementations?

  1. Mule event ID
  2. Mule correlation ID
  3. Client's IP address
  4. DataWeave UUID

Answer(s): B

Explanation:

The correct answer is Mule correlation ID. By design, correlation IDs cannot be changed within a flow in Mule 4 applications and can be set only at the source. This ID is part of the event context and is generated as soon as the message is received by the application.

When an HTTP request is received, the request is inspected for an "X-Correlation-Id" header. If the header is present, the HTTP connector uses it as the correlation ID; if it is not present, a correlation ID is randomly generated.

For incoming HTTP requests: to set a custom correlation ID, the client invoking the HTTP request must set the "X-Correlation-Id" header. This ensures that the Mule flow uses that correlation ID.

For outgoing HTTP requests: you can also propagate the existing correlation ID to downstream APIs. By default, all outgoing HTTP requests send the "X-Correlation-Id" header; however, you can choose to set a different value for it or set "Send Correlation Id" to NEVER.
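As a small sketch (the flow name, config names, paths, and log message are illustrative assumptions), logging the event's correlation ID and explicitly propagating it to a downstream API might look like:

<!-- Illustrative sketch; flow and config names are assumptions. -->
<flow name="process-order-flow">
  <http:listener config-ref="httpListenerConfig" path="/orders" />

  <!-- correlationId is a built-in Mule 4 binding; logging it lets the
       external aggregator stitch together entries from all four APIs. -->
  <logger level="INFO"
          message="#['Received order, correlationId=' ++ correlationId]" />

  <!-- The X-Correlation-Id header is sent by default; ALWAYS makes the
       propagation explicit. -->
  <http:request config-ref="systemApiRequestConfig"
                method="POST" path="/inventory"
                sendCorrelationId="ALWAYS" />
</flow>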



A Mule application is deployed to a customer-hosted runtime. Asynchronous logging was implemented to improve the throughput of the system, but it was observed over a period of time that a few of the important exception log messages used to roll back transactions were not working as expected, causing huge losses to the organization. The organization wants to avoid these losses, but the application also has constraints due to which it cannot compromise much on throughput. What is the possible option in this case?

  1. Logging needs to be changed from asynchronous to synchronous
  2. External log appender needs to be used in this case
  3. Persistent memory storage should be used in such scenarios
  4. A mixed configuration of asynchronous and synchronous loggers should be used, logging exceptions synchronously

Answer(s): D

Explanation:

The correct approach is to use a mixed configuration of asynchronous and synchronous loggers, logging exceptions synchronously. Asynchronous logging poses a performance-reliability trade-off: you may lose some messages if Mule crashes before the logging buffers flush to disk. In this case, consider that you can have a mixed configuration of asynchronous and synchronous loggers in your app. The best practice is to use asynchronous logging over synchronous with a minimum logging level of WARN for a production application. In some cases, enable the INFO logging level when you need to confirm events such as successful policy installation or to perform troubleshooting. Configure your logging strategy by editing your application's src/main/resources/log4j2.xml file.
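As a minimal log4j2.xml sketch (package names, appender details, and log levels are illustrative assumptions), asynchronous loggers can be mixed with a synchronous logger dedicated to the exception/transaction category, keeping throughput high while critical messages are written in the calling thread:

<!-- Illustrative log4j2.xml sketch; logger and package names are assumptions. -->
<Configuration>
  <Appenders>
    <RollingFile name="file" fileName="logs/app.log"
                 filePattern="logs/app-%i.log">
      <PatternLayout pattern="%d [%t] %-5p %c - %m%n" />
      <Policies>
        <SizeBasedTriggeringPolicy size="10 MB" />
      </Policies>
    </RollingFile>
  </Appenders>
  <Loggers>
    <!-- Synchronous logger for the exception/transaction category:
         messages are written in the calling thread, so they are not
         lost in an async buffer if the runtime crashes. -->
    <Logger name="com.example.transactions" level="ERROR">
      <AppenderRef ref="file" />
    </Logger>
    <!-- Asynchronous loggers everywhere else preserve throughput. -->
    <AsyncLogger name="com.example" level="WARN">
      <AppenderRef ref="file" />
    </AsyncLogger>
    <AsyncRoot level="WARN">
      <AppenderRef ref="file" />
    </AsyncRoot>
  </Loggers>
</Configuration>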





