Free MCIA-LEVEL-1-MAINTENANCE Exam Braindumps

As part of a business requirement, an old CRM system needs to be integrated using a Mule application. The CRM system is capable of exchanging data only via the SOAP/HTTP protocol. As an integration architect who follows the API-led approach, which of the below steps will you perform so that you can share a document with the CRM team?

  1. Create RAML specification using Design Center
  2. Create SOAP API specification using Design Center
  3. Create WSDL specification using text editor
  4. Create WSDL specification using Design Center

Answer(s): C

Explanation:

The correct answer is: Create WSDL specification using text editor. SOAP services are specified using WSDL. A client program connecting to a web service can read the WSDL to determine what functions are available on the server. We cannot create a WSDL specification in Design Center; we need to use an external text editor to create the WSDL.
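As a minimal sketch, a hand-written WSDL for the CRM service might look like the following. The service name, operation, and namespace are illustrative placeholders, not details from the question:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical WSDL skeleton for a CRM "getCustomer" operation -->
<definitions name="CrmService"
             targetNamespace="http://example.com/crm"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="http://example.com/crm"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <!-- Request and response messages exchanged with the service -->
  <message name="GetCustomerRequest">
    <part name="customerId" type="xsd:string"/>
  </message>
  <message name="GetCustomerResponse">
    <part name="customerName" type="xsd:string"/>
  </message>
  <!-- Abstract interface: the functions a client can discover -->
  <portType name="CrmPortType">
    <operation name="getCustomer">
      <input message="tns:GetCustomerRequest"/>
      <output message="tns:GetCustomerResponse"/>
    </operation>
  </portType>
  <!-- Concrete SOAP/HTTP binding for the port type -->
  <binding name="CrmBinding" type="tns:CrmPortType">
    <soap:binding style="rpc"
                  transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="getCustomer">
      <soap:operation soapAction="http://example.com/crm/getCustomer"/>
      <input><soap:body use="literal" namespace="http://example.com/crm"/></input>
      <output><soap:body use="literal" namespace="http://example.com/crm"/></output>
    </operation>
  </binding>
  <service name="CrmService">
    <port name="CrmPort" binding="tns:CrmBinding">
      <soap:address location="http://example.com/crm"/>
    </port>
  </service>
</definitions>
```

A document like this, authored in a text editor, is what you would share with the CRM team so both sides agree on the contract.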



An insurance organization is planning to deploy a Mule application to the MuleSoft-hosted runtime plane. As part of the requirements, the application should be scalable and highly available. There is also a regulatory requirement which demands that logs be retained for at least 2 years. As an Integration Architect, what step will you recommend in order to achieve this?

  1. It is not possible to store logs for 2 years in CloudHub deployment. External log management system is required.
  2. When deploying an application to CloudHub , logs retention period should be selected as 2 years
  3. When deploying an application to CloudHub, worker size should be sufficient to store 2 years data
  4. Logging strategy should be configured accordingly in log4j file deployed with the application.

Answer(s): A

Explanation:

The correct answer is: It is not possible to store logs for 2 years in a CloudHub deployment; an external log management system is required. CloudHub has a specific log retention policy, as described in the documentation: the platform stores up to 100 MB of logs per app and per worker, or up to 30 days of logs, whichever limit is hit first. Once this limit has been reached, the oldest log information is deleted in chunks and is irretrievably lost. The recommended approach is to persist your logs to an external logging system of your choice (such as Splunk, for instance) using a log appender. Please note that this solution results in the logs no longer being stored on the platform, so any support cases you lodge will require you to provide the appropriate logs for review and case resolution.
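As a sketch of the log-appender approach, the application's `log4j2.xml` can forward log events to an external system over HTTP. The ingest URL below is a placeholder, and a real deployment would use the vendor-specific appender (for example, the Splunk HEC appender) with its own credentials:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: forward application logs to an external log store via the
     generic Log4j 2 Http appender. The URL is a hypothetical placeholder. -->
<Configuration status="WARN">
  <Appenders>
    <!-- Ships each log event as JSON to the external system -->
    <Http name="ExternalLogStore" url="https://logs.example.com/ingest">
      <JsonLayout compact="true" eventEol="true"/>
    </Http>
    <!-- Keep console output so logs still appear locally -->
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} [%t] %-5p %c - %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="Console"/>
      <AppenderRef ref="ExternalLogStore"/>
    </Root>
  </Loggers>
</Configuration>
```

The external system then owns retention, so the 2-year regulatory requirement can be met independently of CloudHub's 100 MB / 30-day limits.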



An organization is designing a Mule application which connects to a legacy backend. It has been reported that the backend services are not highly available and experience downtime quite often. As an integration architect, which of the below approaches would you propose to achieve the high-reliability goals?

  1. Alerts can be configured in the Mule runtime so that the backend team can be notified when services are down
  2. Until Successful scope can be implemented while calling backend APIs
  3. On Error Continue scope can be used to retry the call in case of an error
  4. Create a batch job with all requests being sent to the backend, using that job as per the availability of the backend APIs

Answer(s): B

Explanation:

The correct answer is: Until Successful scope can be implemented while calling backend APIs. The Until Successful scope repeatedly triggers the scope's components (including flow references) until they all succeed or until a maximum number of retries is exceeded. The scope provides options to control the maximum number of retries and the interval between retries. The scope can execute any sequence of processors that may fail for whatever reason and may succeed upon retry.
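The retry behavior described above can be sketched in Mule 4 configuration as follows. The flow name and backend URL are illustrative placeholders:

```xml
<!-- Sketch: wrap the backend call in an Until Successful scope so it is
     retried up to 5 times, waiting 10 seconds between attempts.
     The flow name and request URL are hypothetical. -->
<flow name="call-legacy-backend">
  <until-successful maxRetries="5" millisBetweenRetries="10000">
    <http:request method="GET"
                  url="http://legacy-backend.example.com/api/orders"/>
  </until-successful>
</flow>
```

If all retries are exhausted, the scope raises an error, which can then be handled by the flow's error handler.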



A Mule application contains a Batch Job scope with several Batch Step scopes. The Batch Job scope is configured with a batch block size of 25.

A payload with 4,000 records is received by the Batch Job scope.

When there are no errors, how does the Batch Job scope process records within and between the Batch Step scopes?

  1. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead to the next Batch Step scope over an earlier block of records. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed in parallel. All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope.
  2. The Batch Job scope processes each record block sequentially, one at a time. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed sequentially, one at a time. All 4,000 records must be completed before the blocks of records are available to the next Batch Step scope.
  3. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead to the next Batch Step scope over an earlier block of records. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed sequentially, one record at a time. All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope.
  4. The Batch Job scope processes multiple record blocks in parallel. Each Batch Step scope is invoked with a batch of 25 records in the payload of the received Mule event. For each Batch Step scope, all 4,000 records are processed in parallel. Individual records can jump ahead to the next Batch Step scope before the rest of the records finish processing in the current Batch Step scope.

Answer(s): A

Explanation:


Reference:

https://docs.mulesoft.com/mule-runtime/4.4/batch-processing-concept
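The scenario in the question can be sketched as a Mule 4 batch configuration. The flow, job, and step names are illustrative placeholders; only the block size of 25 comes from the question:

```xml
<!-- Sketch: a Batch Job with a block size of 25 and two Batch Steps.
     Names are hypothetical; blockSize="25" matches the question. -->
<flow name="process-records">
  <batch:job jobName="recordBatch" blockSize="25">
    <batch:process-records>
      <batch:step name="transformStep">
        <!-- per-record processing for the first step -->
      </batch:step>
      <batch:step name="loadStep">
        <!-- per-record processing for the second step -->
      </batch:step>
    </batch:process-records>
  </batch:job>
</flow>
```

With a 4,000-record payload and `blockSize="25"`, the runtime splits the input into 160 blocks of 25 records and queues them to the Batch Step scopes.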





