Free Oracle 1Z0-1084-23 Exam Questions (page: 3)

You want to push a new image in the Oracle Cloud Infrastructure (OCI) Registry.
Which TWO actions would you need to perform? (Choose two.)

  A. Generate an API signing key to complete the authentication via Docker CLI.
  B. Generate an auth token to complete the authentication via Docker CLI.
  C. Assign an OCI defined tag via OCI CLI to the image.
  D. Assign a tag via Docker CLI to the image.
  E. Generate an OCI tag namespace in your repository.

Answer(s): B,D

Explanation:

To push a new image to the Oracle Cloud Infrastructure (OCI) Registry, you need to perform the following two actions:

Assign a tag via Docker CLI to the image: Before pushing the image, you assign a tag to it using the Docker CLI. The tag identifies the image and associates it with a specific version or label.

Generate an auth token to complete the authentication via Docker CLI: To authenticate and authorize the push operation, you generate an auth token. This token is used to authenticate your Docker CLI with the OCI Registry, allowing you to push the image securely.
Note: Generating an API signing key, assigning an OCI defined tag via OCI CLI, and generating an OCI tag namespace are not required steps for pushing a new image to the OCI Registry.
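As a concrete illustration of these two steps, the sketch below drives the Docker CLI from Python. The region key, tenancy namespace, repository name, username, and image name are hypothetical placeholders, and the auth token is assumed to have been generated beforehand in the OCI Console; this is a minimal sketch of one way to tag and push, not the only workflow.

import subprocess

# Hypothetical placeholders -- substitute your own values.
REGION_KEY = "iad"                       # e.g. "iad" for us-ashburn-1
TENANCY_NAMESPACE = "mytenancynamespace"
REPO = "project/my-app"
TAG = "1.0.0"
USERNAME = f"{TENANCY_NAMESPACE}/jdoe@example.com"
AUTH_TOKEN = "<auth token generated in the OCI Console>"

registry = f"{REGION_KEY}.ocir.io"
remote_image = f"{registry}/{TENANCY_NAMESPACE}/{REPO}:{TAG}"

# 1. Log in to the OCI Registry; the auth token is used as the password.
subprocess.run(
    ["docker", "login", registry, "-u", USERNAME, "--password-stdin"],
    input=AUTH_TOKEN, text=True, check=True,
)

# 2. Tag the locally built image with the full registry path.
subprocess.run(["docker", "tag", "my-app:latest", remote_image], check=True)

# 3. Push the tagged image to the registry.
subprocess.run(["docker", "push", remote_image], check=True)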



You plan to implement logging in your services that will run in Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (OKE).
Which statement describes the appropriate logging approach?

  A. All services log to standard output only.
  B. Each service logs to its own log file.
  C. All services log to an external logging system.
  D. All services log to a shared log file.

Answer(s): A

Explanation:

The appropriate logging approach for services running in Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (OKE) is: "All services log to standard output only."

When running services in a containerized environment like OKE, it is recommended to follow the Twelve-Factor App methodology, which treats logs as event streams. Services should write their log events to standard output (stdout) instead of writing to log files. The container runtime (such as Kubernetes) can then collect and aggregate the logs generated by the services, and the logs can be accessed and managed through the runtime's logging infrastructure.

Logging to standard output offers several advantages in a containerized environment:

Simplicity and consistency: Standardizing on logging to stdout ensures a consistent approach across different services, making it easier to manage and analyze logs.

Log aggregation: The container runtime can collect the logs from all running containers and provide centralized log management, allowing you to access and search logs from different services in one place.

Scalability: Since logs are written to stdout, they can be handled by the container runtime's log management system, which can scale to handle large volumes of log data.

Separation of concerns: The responsibility for managing log files and their rotation shifts to the container runtime, allowing the services to focus on their core functionality.
While it is possible to log to log files or external logging systems, the recommended approach in a containerized environment like OKE is to log to standard output and leverage the logging infrastructure provided by the container runtime.
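For example, a service configured along these lines writes its log events to stdout and leaves collection, aggregation, and rotation to the cluster. The following minimal Python sketch is illustrative; the service name and log format are assumptions, not part of the exam material.

import logging
import sys

# Send all log records to stdout so the container runtime (Kubernetes/OKE)
# can collect them; no log files are opened inside the container.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

logger = logging.getLogger("orders-service")  # illustrative service name

def handle_order(order_id: str) -> None:
    # The application only emits events; aggregation and retention are the
    # responsibility of the cluster's logging stack.
    logger.info("processing order %s", order_id)

if __name__ == "__main__":
    handle_order("ord-123")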



Which is ONE of the differences between a microservice and a serverless function?

  A. Microservices are used for long running operations while serverless functions are used for short running operations.
  B. Microservices are triggered by events while serverless functions are not.
  C. Microservices are stateless while serverless functions are stateful.
  D. Microservices always use a data store while serverless functions never use a data store.

Answer(s): A

Explanation:

The correct answer is: Microservices are used for long-running operations while serverless functions are used for short-running operations.

One of the key differences between microservices and serverless functions is the duration of their execution. Microservices are typically designed to handle long-running operations; they run continuously, process requests as part of a larger system, and are deployed and managed as long-lived services. Serverless functions, on the other hand, are designed for short-lived operations or tasks that execute in response to specific events or triggers. They are event-driven and execute only when invoked, providing a lightweight and ephemeral computing model, and they are often used to run small, isolated pieces of code without the need to manage infrastructure or scaling concerns.
While both microservices and serverless functions can be stateless or stateful depending on the specific implementation, the key distinction lies in the typical duration and execution pattern of these components within an application architecture.
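To make the contrast concrete, a serverless function on OCI is typically a small, short-lived handler like the Python sketch below. It assumes the OCI Functions Python FDK (the fdk package) is available; the payload shape and greeting logic are illustrative assumptions.

import io
import json

from fdk import response  # OCI Functions Python FDK


def handler(ctx, data: io.BytesIO = None):
    # A short-lived, event-driven entry point: it is invoked per request,
    # does a small unit of work, and exits. Long-running work belongs in a
    # microservice (for example, a deployment on OKE), not here.
    name = "world"
    if data is not None:
        try:
            body = json.loads(data.getvalue())
            name = body.get("name", name)
        except ValueError:
            pass  # fall back to the default name on a bad payload

    return response.Response(
        ctx,
        response_data=json.dumps({"message": f"Hello, {name}"}),
        headers={"Content-Type": "application/json"},
    )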



What are the TWO main reasons you would choose to implement a serverless architecture? (Choose two.)

  A. No need for integration testing
  B. Automatic horizontal scaling
  C. Easier to run long-running operations
  D. Reduced operational cost
  E. Improved in-function state management

Answer(s): B,D

Explanation:

The two main reasons to choose a serverless architecture are:

Automatic horizontal scaling: Serverless architectures automatically provision and scale resources based on demand, ensuring that applications can handle varying workloads efficiently. This eliminates the need for manual scaling and optimizes resource utilization.

Reduced operational cost: Serverless architectures follow a pay-per-use model, where you are billed only for the actual execution time and resources consumed by your functions. You do not pay for idle resources, and because there are no servers to manage or maintain, operational overhead and its associated costs are also reduced.

The remaining options do not apply:

No need for integration testing: Integration testing is still necessary in serverless architectures. Serverless functions interact with various event sources, databases, and APIs, and testing is required to verify those integration points.

Improved in-function state management: Serverless architectures encourage stateless functions that operate on short-lived requests or events. While there are mechanisms to manage state within a function, serverless platforms are designed to be stateless by default, promoting scalability and fault tolerance.

Easier to run long-running operations: Serverless functions are designed for short-lived operations rather than long-running tasks. If you require long-running operations, a serverless architecture may not be the ideal choice, as it imposes execution time limits and may not provide the resources needed for extended execution.
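The reduced-cost argument can be made concrete with a back-of-the-envelope comparison. All prices and workload figures in the Python sketch below are purely illustrative assumptions, not actual OCI pricing.

# Illustrative comparison of an always-on VM vs. pay-per-use functions.
# Every number here is hypothetical, chosen only to show the billing model.

HOURS_PER_MONTH = 730

# Always-on VM: billed whether or not it is serving requests.
vm_hourly_price = 0.05                      # hypothetical $/hour
vm_monthly_cost = vm_hourly_price * HOURS_PER_MONTH

# Serverless: billed only for actual execution time and memory consumed.
invocations_per_month = 2_000_000
avg_duration_s = 0.2                        # 200 ms per invocation
memory_gb = 0.25                            # 256 MB per invocation
price_per_gb_second = 0.00001417            # hypothetical $/GB-second
fn_monthly_cost = (invocations_per_month * avg_duration_s
                   * memory_gb * price_per_gb_second)

print(f"VM (always on):      ${vm_monthly_cost:.2f}/month")
print(f"Functions (pay/use): ${fn_monthly_cost:.2f}/month")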





