Free Professional Cloud Network Engineer Exam Braindumps


You need to define an address plan for a new GKE cluster in your VPC. This will be a VPC-native cluster, and the default Pod IP range allocation will be used. You must pre-provision all the needed VPC subnets and their respective IP address ranges before cluster creation. The cluster will initially have a single node, but it will be scaled to a maximum of three nodes if necessary. You want to allocate the minimum number of Pod IP addresses.
Which subnet mask should you use for the Pod IP address range?

  1. /21
  2. /22
  3. /23
  4. /25

Answer(s): B

Explanation:

https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#cluster_sizing_secondary_range_pods

With the default Pod allocation, GKE reserves a /24 (256 addresses) from the Pod range for every node, which covers the default maximum of 110 Pods per node. A cluster that can scale to three nodes therefore needs at least 3 × 256 = 768 Pod addresses. A /23 (512 addresses) only covers two nodes, so the smallest range that fits three is a /22 (1,024 addresses); a /21 would over-allocate.


Reference:

https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips
https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr
https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#defaults_limits
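
As a rough sanity check, the sizing arithmetic above can be reproduced with a few lines of Python. The node count and per-node /24 allocation reflect the defaults described in the explanation; the helper function itself is illustrative and not part of any Google SDK.

```python
import math

def min_pod_range_prefix(max_nodes: int, per_node_prefix: int = 24) -> int:
    """Smallest Pod secondary-range prefix length for a GKE cluster.

    With the default maximum of 110 Pods per node, GKE assigns each node a
    /24 slice of the Pod range, so the range must hold max_nodes * 256
    addresses, rounded up to a power of two.
    """
    addresses_needed = max_nodes * 2 ** (32 - per_node_prefix)
    # Round up to the next power of two and convert back to a prefix length.
    return 32 - math.ceil(math.log2(addresses_needed))

print(min_pod_range_prefix(3))  # -> 22, i.e. a /22 Pod range for three nodes
```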



You have created a firewall with rules that only allow traffic over HTTP, HTTPS, and SSH ports.
While testing, you specifically try to reach the server over multiple ports and protocols; however, you do not see any denied connections in the firewall logs. You want to resolve the issue.

What should you do?

  1. Enable logging on the default Deny Any Firewall Rule.
  2. Enable logging on the VM Instances that receive traffic.
  3. Create a logging sink forwarding all firewall logs with no filters.
  4. Create an explicit Deny Any rule and enable logging on the new rule.

Answer(s): D

Explanation:

Firewall Rules Logging specifications (https://cloud.google.com/vpc/docs/firewall-rules-logging#specifications):

You can only enable Firewall Rules Logging for rules in a Virtual Private Cloud (VPC) network; legacy networks are not supported. Firewall Rules Logging only records TCP and UDP connections: although you can create a firewall rule for other protocols, you cannot log their connections. You cannot enable Firewall Rules Logging for the implied deny ingress and implied allow egress rules, which is why you must create an explicit deny rule and enable logging on it. Log entries are written from the perspective of virtual machine (VM) instances and are only created if a firewall rule has logging enabled and the rule applies to traffic sent to or from the VM. Entries are created on a best-effort basis according to the connection logging limits, which depend on the machine type. Changes to firewall rules can be viewed in VPC audit logs.

See also: https://cloud.google.com/vpc/docs/firewall-rules-logging#egress_deny_example
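
A hedged sketch of option D using the google-cloud-compute Python client is shown below. The project and network names are placeholders, and the generated message field names (for example `I_p_protocol`) should be double-checked against the client library version you use.

```python
from google.cloud import compute_v1

def create_logged_deny_all(project_id: str, network: str) -> None:
    """Create an explicit low-priority deny-all ingress rule with logging enabled."""
    rule = compute_v1.Firewall()
    rule.name = "deny-all-ingress-logged"
    rule.network = f"projects/{project_id}/global/networks/{network}"
    rule.direction = "INGRESS"
    rule.priority = 65534          # just above the implied deny, which cannot log
    rule.source_ranges = ["0.0.0.0/0"]

    denied = compute_v1.Denied()
    denied.I_p_protocol = "all"
    rule.denied = [denied]

    log_config = compute_v1.FirewallLogConfig()
    log_config.enable = True       # this is what makes denied connections visible
    rule.log_config = log_config

    compute_v1.FirewallsClient().insert(
        project=project_id, firewall_resource=rule
    ).result()                     # wait for the operation to finish

# create_logged_deny_all("my-project", "default")  # placeholder values
```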



In your company, two departments with separate GCP projects (code-dev and data-dev) in the same organization need to allow full cross-communication between all of their virtual machines in GCP. Each department has one VPC in its project and wants full control over their network. Neither department intends to recreate its existing computing resources. You want to implement a solution that minimizes cost.

Which two steps should you take? (Choose two.)

  1. Connect both projects using Cloud VPN.
  2. Connect the VPCs in project code-dev and data-dev using VPC Network Peering.
  3. Enable Shared VPC in one project (e.g., code-dev), and make the second project (e.g., data-dev) a service project.
  4. Enable firewall rules to allow all ingress traffic from all subnets of project code-dev to all instances in project data-dev, and vice versa.
  5. Create a route in the code-dev project to the destination prefixes in project data-dev, using the default gateway as the next hop, and vice versa.

Answer(s): B,D
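
VPC Network Peering only works when the subnet ranges on both sides do not overlap, so a quick pre-flight check of the two VPCs' CIDRs is useful before peering. The ranges below are hypothetical, and the helper uses only the Python standard library.

```python
from ipaddress import ip_network
from itertools import product

# Hypothetical subnet ranges for the two department VPCs.
code_dev_subnets = ["10.10.0.0/20", "10.10.16.0/20"]
data_dev_subnets = ["10.20.0.0/20", "10.20.16.0/20"]

def overlapping_pairs(ranges_a, ranges_b):
    """Return every pair of CIDRs that would block VPC Network Peering."""
    return [
        (a, b)
        for a, b in product(ranges_a, ranges_b)
        if ip_network(a).overlaps(ip_network(b))
    ]

conflicts = overlapping_pairs(code_dev_subnets, data_dev_subnets)
print("safe to peer" if not conflicts else f"overlapping ranges: {conflicts}")
```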



You need to create a GKE cluster in an existing VPC that is accessible from on-premises. You must meet the following requirements:

· IP ranges for pods and services must be as small as possible.
· The nodes and the master must not be reachable from the internet.
· You must be able to use kubectl commands from on-premises subnets to manage the cluster.

How should you create the GKE cluster?

  1. · Create a private cluster that uses VPC advanced routes.
    · Set the pod and service ranges as /24.
    · Set up a network proxy to access the master.
  2. · Create a VPC-native GKE cluster using GKE-managed IP ranges.
    · Set the pod IP range as /21 and service IP range as /24.
    · Set up a network proxy to access the master.
  3. · Create a VPC-native GKE cluster using user-managed IP ranges.
    · Enable a GKE cluster network policy, set the pod and service ranges as /24.
    · Set up a network proxy to access the master.
    · Enable master authorized networks.
  4. · Create a VPC-native GKE cluster using user-managed IP ranges.
    · Enable privateEndpoint on the cluster master.
    · Set the pod and service ranges as /24.
    · Set up a network proxy to access the master.
    · Enable master authorized networks.

Answer(s): D

Explanation:

Creating GKE private clusters with network proxies for controller access: when you create a GKE private cluster with a private cluster controller endpoint, the cluster's controller node is inaccessible from the public internet, but it still needs to be accessible for administration. By default, clusters can access the controller through its private endpoint, and authorized networks can be defined within the VPC network. Accessing the controller from on-premises or from another VPC network, however, requires additional steps, because the VPC network that hosts the controller is owned by Google and cannot be reached from resources connected through another VPC Network Peering connection, Cloud VPN, or Cloud Interconnect.

Reference: https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
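
The answer's key settings (a VPC-native cluster with user-managed secondary ranges, a private endpoint, and master authorized networks) map to the GKE API roughly as sketched below using the google-cloud-container Python client. All names, CIDRs, and secondary-range identifiers are placeholders, and the exact message fields should be verified against the current client library.

```python
from google.cloud import container_v1

def create_private_cluster(project_id: str, zone: str) -> None:
    """Sketch of a VPC-native private GKE cluster reachable only from on-premises."""
    cluster = container_v1.Cluster(
        name="private-cluster",                       # placeholder name
        initial_node_count=1,
        network="existing-vpc",                       # placeholder VPC
        subnetwork="gke-subnet",                      # placeholder subnet
        ip_allocation_policy=container_v1.IPAllocationPolicy(
            use_ip_aliases=True,                      # VPC-native cluster
            cluster_secondary_range_name="pods",      # user-managed /24 range
            services_secondary_range_name="services", # user-managed /24 range
        ),
        private_cluster_config=container_v1.PrivateClusterConfig(
            enable_private_nodes=True,                # nodes get no public IPs
            enable_private_endpoint=True,             # master has no public endpoint
            master_ipv4_cidr_block="172.16.0.0/28",   # placeholder control-plane range
        ),
        master_authorized_networks_config=container_v1.MasterAuthorizedNetworksConfig(
            enabled=True,
            cidr_blocks=[
                container_v1.MasterAuthorizedNetworksConfig.CidrBlock(
                    display_name="on-prem", cidr_block="192.168.0.0/16"  # placeholder
                )
            ],
        ),
    )
    container_v1.ClusterManagerClient().create_cluster(
        project_id=project_id, zone=zone, cluster=cluster
    )

# create_private_cluster("my-project", "us-central1-a")  # placeholder values
```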





