Free Google Professional Security Operations Engineer Exam Questions (page: 9)

Your company recently adopted Security Command Center (SCC) but is not using Google Security Operations (SecOps). Your organization has thousands of active projects. You need to detect anomalous behavior in your Google Cloud environment by windowing and aggregating data over a given time period, based on specific log events or advanced calculations. You also need to provide an interface for analysts to triage the alerts. How should you build this capability?

  A. Send the logs to Cloud SQL, and run a scheduled query against these events using a Cloud Run scheduled job. Configure an aggregated log filter to stream event-driven logs to a Pub/Sub topic. Configure a trigger to send an email alert when new events are sent to this feed.
  B. Sink the logs to BigQuery, and configure Cloud Run functions to execute a periodic job and generate normalized alerts in a Pub/Sub topic for findings. Use log-based metrics to generate event-driven alerts and send these alerts to the Pub/Sub topic. Write the alerts as findings using the SCC API.
  C. Use log-based metrics to generate event-driven alerts for the detection scenarios. Configure a Cloud Monitoring alert policy to send email alerts to your security operations team.
  D. Create a series of aggregated log sinks for each required finding, and send the normalized findings as JSON files to Cloud Storage. Use the write event to generate an alert.

Answer(s): B

Explanation:

The correct approach is to sink logs to BigQuery, where you can perform windowing and advanced aggregations over time. Then, use Cloud Run functions to periodically query BigQuery and generate normalized alerts published to a Pub/Sub topic. From there, alerts can be written back into SCC as findings via the SCC API, giving analysts a central interface for triage. This architecture supports large-scale environments, advanced calculations, and efficient integration with SCC.
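The write-back step can be sketched in Python. This is an illustrative sketch only: the row fields, source name, and IDs below are hypothetical, and the finding body is expressed as a plain dict mirroring the SCC v1 CreateFinding request shape. In the real pipeline, the Cloud Run job would query BigQuery with the google-cloud-bigquery client, publish the normalized alert to Pub/Sub, and submit the finding via the google-cloud-securitycenter client.

```python
def row_to_finding_request(row, source_name):
    """Normalize one aggregated BigQuery result row into an SCC v1
    CreateFinding request body (expressed here as a plain dict)."""
    return {
        "parent": source_name,  # e.g. organizations/ORG_ID/sources/SOURCE_ID
        "finding_id": row["event_id"],
        "finding": {
            "state": "ACTIVE",
            "category": row["category"],
            "resource_name": row["resource_name"],
            "event_time": row["event_time"],
            "severity": row.get("severity", "MEDIUM"),
        },
    }

# Example: a row produced by a windowed query counting anomalous list calls.
row = {
    "event_id": "anom-20240101-0001",
    "category": "ANOMALOUS_API_USAGE",
    "resource_name": "//cloudresourcemanager.googleapis.com/projects/demo-prj",
    "event_time": "2024-01-01T00:05:00Z",
}
request = row_to_finding_request(row, "organizations/123/sources/456")
print(request["finding"]["category"])
```

Keeping the normalization pure (dict in, dict out) makes the periodic job easy to unit test before wiring in the SCC client call.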



Your organization is a Google Security Operations (SecOps) customer and monitors critical assets using a SIEM dashboard. You need to dynamically monitor the assets based on a specific asset tag.
What should you do?

  A. Ask Cloud Customer Care to add a custom filter to the dashboard.
  B. Add a custom filter to the dashboard.
  C. Copy an existing dashboard and add a custom filter.
  D. Export the dashboard configuration to a file, modify the file to add a custom filter, and import the file into Google SecOps.

Answer(s): B

Explanation:

In Google SecOps, you can add a custom filter directly to the SIEM dashboard to dynamically monitor assets based on a specific asset tag. This approach is straightforward, requires no external intervention, and ensures that the dashboard updates automatically as assets with the tag change over time.



A business unit in your organization plans to use Vertex AI to develop models within Google Cloud. The security team needs to implement detective and preventative guardrails to ensure that the environment meets internal security control requirements. How should you secure this environment?

  A. Implement Assured Workloads by creating a folder for the business unit and assigning the relevant control package.
  B. Implement preconfigured and custom organization policies to meet the control requirements. Apply these policies to the business unit folder.
  C. Create a policy bundle representing the control requirements using Rego. Implement these policies using Workload Manager. Scope this scan to the business unit folder.
  D. Create a posture consisting of predefined and custom organization policies and predefined and Security Health Analytics (SHA) custom modules. Scope this posture to the business unit folder.

Answer(s): D

Explanation:

The correct approach is to create a posture in SCC that combines predefined and custom organization policies with predefined and custom Security Health Analytics (SHA) modules, and then scope it to the business unit folder. This ensures both preventative guardrails (organization policies) and detective guardrails (SHA findings) are enforced for the Vertex AI environment, aligning with internal security control requirements.
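A posture pairing both guardrail types might look like the sketch below, written as a Python dict mirroring the Security Posture API's YAML schema. The field names, the canned constraint ID, and the SHA module name are assumptions for illustration; confirm them against the posture documentation before deploying (e.g., with `gcloud scc postures create`).

```python
# Sketch of a posture definition: one preventative control (org policy
# constraint) plus one detective control (SHA module). Names are assumed.
posture = {
    "name": "organizations/123/locations/global/postures/vertex-ai-controls",
    "state": "ACTIVE",
    "policySets": [{
        "policySetId": "vertex-ai-guardrails",
        "policies": [
            {   # preventative guardrail: an organization policy constraint
                "policyId": "restrict-notebook-public-ip",
                "constraint": {
                    "orgPolicyConstraint": {
                        "cannedConstraintId": "ainotebooks.restrictPublicIp",
                        "policyRules": [{"enforce": True}],
                    }
                },
            },
            {   # detective guardrail: a Security Health Analytics module
                "policyId": "detect-cmek-disabled",
                "constraint": {
                    "securityHealthAnalyticsModule": {
                        "moduleName": "VERTEX_AI_DATASET_CMEK_DISABLED",
                        "moduleEnablementState": "ENABLED",
                    }
                },
            },
        ],
    }],
}
print(len(posture["policySets"][0]["policies"]))  # → 2 guardrails defined
```

A posture deployment would then scope this definition to the business unit folder, which is what satisfies the "detective and preventative" requirement in one construct.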



You are implementing Google Security Operations (SecOps) with multiple log sources. You want to closely monitor the health of the ingestion pipeline's forwarders and collection agents, and detect silent sources within five minutes.
What should you do?

  A. Create a notification in Cloud Monitoring using a metric-absence condition based on a sample policy for each collector_id.
  B. Create a Google SecOps SIEM dashboard to show the ingestion metrics for each log_type and collector_id.
  C. Create an ingestion notification for health metrics in Cloud Monitoring based on the total ingested log count for each collector_id.
  D. Create a Looker dashboard that queries the BigQuery ingestion metrics schema for each log_type and collector_id.

Answer(s): A

Explanation:

The best solution is to create a Cloud Monitoring notification with a metric-absence condition for each collector_id. A metric-absence alert triggers when expected ingestion metrics are missing within a defined period (e.g., five minutes), which quickly identifies silent sources or failed collectors. This provides near real-time detection of ingestion health issues in the SecOps pipeline.
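Such a policy could be generated as in the sketch below, which builds the JSON body one might pass to `gcloud alpha monitoring policies create --policy-from-file=policy.json`. The ingestion metric type and label name are assumptions; check the Google SecOps ingestion-metrics documentation for the exact names in your tenant.

```python
import json

def absence_policy(collector_id, duration_s=300):
    """Alert when a collector reports no ingestion metric for `duration_s`
    seconds (300 s covers the five-minute silent-source requirement).
    Metric type/label below are assumed names for illustration."""
    metric_filter = (
        'metric.type = "chronicle.googleapis.com/ingestion/log/record_count" '
        f'AND metric.labels.collector_id = "{collector_id}"'
    )
    return {
        "displayName": f"Silent collector: {collector_id}",
        "combiner": "OR",
        "conditions": [{
            "displayName": "Ingestion metric absent for 5 minutes",
            "conditionAbsent": {
                "filter": metric_filter,
                "duration": f"{duration_s}s",
            },
        }],
    }

policy = absence_policy("forwarder-eu-01")
print(json.dumps(policy, indent=2))
```

Generating one policy per collector_id keeps the alert attribution precise: the notification names exactly which forwarder or agent went silent.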



A Google Security Operations (SecOps) detection rule is generating frequent false positive alerts. The rule was designed to detect suspicious Cloud Storage enumeration by triggering an alert whenever the storage.objects.list API operation is called, using the api.operation UDM field. However, a legitimate backup automation tool uses the same API, causing the rule to fire unnecessarily. You need to reduce these false positives from this trusted backup tool while still detecting potentially malicious usage. How should you modify the rule to improve its accuracy?

  A. Add principal.user.email != "backup-bot@foobaa.com" to the rule condition to exclude the automation account.
  B. Replace api.operation with api.service_name = "storage.googleapis.com" to narrow the detection scope.
  C. Convert the rule into a multi-event rule that looks for repeated API calls across multiple buckets.
  D. Adjust the rule severity to LOW to deprioritize alerts from automation tools.

Answer(s): A

Explanation:

The most accurate way to reduce false positives is to exclude the known trusted backup automation account by adding a condition such as principal.user.email != "backup-bot@foobaa.com". This keeps the rule active for all other accounts, ensuring you still detect suspicious or malicious Cloud Storage enumeration while preventing unnecessary alerts from legitimate automation.
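The amended condition can be mirrored in plain Python to make the logic concrete. The real rule is written in YARA-L over UDM fields; this sketch just replays the same predicate over simplified event dicts with hypothetical field names.

```python
# Illustration of the exclusion logic: alert on storage.objects.list calls
# only when the principal is not on the trusted-automation allowlist.
TRUSTED = {"backup-bot@foobaa.com"}  # allowlisted automation account

def should_alert(event):
    """Fire only for enumeration calls from non-allowlisted principals."""
    return (
        event["api_operation"] == "storage.objects.list"
        and event["principal_email"] not in TRUSTED
    )

events = [
    {"api_operation": "storage.objects.list", "principal_email": "backup-bot@foobaa.com"},
    {"api_operation": "storage.objects.list", "principal_email": "attacker@example.com"},
    {"api_operation": "storage.objects.get",  "principal_email": "attacker@example.com"},
]
alerts = [e for e in events if should_alert(e)]
print(len(alerts))  # → 1 (only the non-allowlisted enumeration call)
```

Note how the allowlist narrows only the principal, not the operation, so coverage for every other identity is unchanged.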





