Free Splunk® SPLK-2002 Exam Questions (page: 5)

Which component in the splunkd.log will log information related to bad event breaking?

  1. Audittrail
  2. EventBreaking
  3. IndexingPipeline
  4. AggregatorMiningProcessor

Answer(s): D

Explanation:

The AggregatorMiningProcessor component in the splunkd.log file will log information related to bad event breaking. The AggregatorMiningProcessor is responsible for breaking the incoming data into events and applying the props.conf settings. If there is a problem with the event breaking, such as incorrect timestamps, missing events, or merged events, the AggregatorMiningProcessor will log the error or warning messages in the splunkd.log file. The Audittrail component logs information about the audit events, such as user actions, configuration changes, and search activity. The EventBreaking component logs information about the event breaking rules, such as the LINE_BREAKER and SHOULD_LINEMERGE settings. The IndexingPipeline component logs information about the indexing pipeline, such as the parsing, routing, and indexing phases. For more information, see About Splunk Enterprise logging and [Configure event line breaking] in the Splunk documentation.
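
For illustration, event breaking is governed by props.conf on the parsing tier. The stanza below is a minimal sketch for a hypothetical sourcetype (the stanza name and timestamp format are assumptions, not taken from the question):

    [acme:syslog]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = ^
    TIME_FORMAT = %b %d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 20

If these settings do not match the data, warnings from the AggregatorMiningProcessor component appear in splunkd.log and can be reviewed with a search such as index=_internal sourcetype=splunkd component=AggregatorMiningProcessor.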



Which Splunk server role regulates the functioning of indexer cluster?

  1. Indexer
  2. Deployer
  3. Master Node
  4. Monitoring Console

Answer(s): C

Explanation:

The master node is the Splunk server role that regulates the functioning of the indexer cluster. The master node coordinates the activities of the peer nodes, such as data replication, data searchability, and data recovery. The master node also manages the cluster configuration bundle and distributes it to the peer nodes. The indexer is the Splunk server role that indexes the incoming data and makes it searchable. The deployer is the Splunk server role that distributes apps and configuration updates to the search head cluster members. The monitoring console is the Splunk server role that monitors the health and performance of the Splunk deployment. For more information, see About indexer clusters and index replication in the Splunk documentation.
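
As a rough sketch, the cluster roles are set in server.conf; the replication factor, search factor, hostname, and secret below are illustrative assumptions:

    # server.conf on the master node (called the manager node in newer releases)
    [clustering]
    mode = master
    replication_factor = 3
    search_factor = 2
    pass4SymmKey = <cluster_secret>

    # server.conf on each peer node (indexer)
    [clustering]
    mode = slave
    master_uri = https://cm.example.com:8089
    pass4SymmKey = <cluster_secret>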



When adding or rejoining a member to a search head cluster, the following error is displayed:
Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member.

What corrective action should be taken?

  1. Restart the search head.
  2. Run the splunk apply shcluster-bundle command from the deployer.
  3. Run the clean raft command on all members of the search head cluster.
  4. Run the splunk resync shcluster-replicated-config command on this member.

Answer(s): D

Explanation:

When a member that is being added or rejoined to a search head cluster displays the error "Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member," the corrective action is to run the splunk resync shcluster-replicated-config command on that member. This command deletes the member's existing replicated configuration files and replaces them with the latest configuration from the captain, ensuring that the member has the same configuration as the rest of the cluster. Restarting the search head, running the splunk apply shcluster-bundle command from the deployer, or running the clean raft command on all members of the search head cluster are not the correct actions in this scenario. For more information, see Resolve configuration inconsistencies across cluster members in the Splunk documentation.
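
A minimal sketch of the recovery step, assuming the Splunk CLI is available on the affected member:

    # Run on the member that reported the error (not on the deployer or the captain)
    splunk resync shcluster-replicated-config

This destructive resync discards the member's locally replicated configuration and pulls a fresh copy from the captain.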



Which of the following commands is used to clear the KV store?

  1. splunk clean kvstore
  2. splunk clear kvstore
  3. splunk delete kvstore
  4. splunk reinitialize kvstore

Answer(s): A

Explanation:

The splunk clean kvstore command is used to clear the KV store. This command will delete all the collections and documents in the KV store and reset it to an empty state. This command can be useful for troubleshooting KV store issues or resetting the KV store data. The splunk clear kvstore, splunk delete kvstore, and splunk reinitialize kvstore commands are not valid Splunk commands. For more information, see Use the CLI to manage the KV store in the Splunk documentation.
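
A minimal sketch of how the command is typically run on the affected instance; stopping splunkd first, and any scoping flags, are details to confirm against the CLI help for your version:

    splunk stop
    splunk clean kvstore     # removes all KV store collections and documents on this instance
    splunk start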



Indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. There is ample CPU and memory available on the indexers.
Which of the following is most likely to improve indexing performance?

  1. Increase the maximum number of hot buckets in indexes.conf
  2. Increase the number of parallel ingestion pipelines in server.conf
  3. Decrease the maximum size of the search pipelines in limits.conf
  4. Decrease the maximum concurrent scheduled searches in limits.conf

Answer(s): B

Explanation:

Increasing the number of parallel ingestion pipelines in server.conf is most likely to improve indexing performance when indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. The parallel ingestion pipelines allow Splunk to process multiple data streams simultaneously, which increases the indexing throughput and reduces the indexing latency. Increasing the maximum number of hot buckets in indexes.conf will not improve indexing performance, but rather increase the disk space consumption and the bucket rolling time. Decreasing the maximum size of the search pipelines in limits.conf will not improve indexing performance, but rather reduce the search performance and the search concurrency. Decreasing the maximum concurrent scheduled searches in limits.conf will not improve indexing performance, but rather reduce the search capacity and the search availability. For more information, see Configure parallel ingestion pipelines in the Splunk documentation.
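
A minimal sketch of the relevant setting, applied in server.conf on each indexer (the value of 2 is illustrative; each additional pipeline consumes extra CPU cores, memory, and I/O, so it should only be raised when those resources are genuinely idle):

    # server.conf on each indexer
    [general]
    parallelIngestionPipelines = 2

A restart of the indexer is required for the change to take effect.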



The guidance Splunk gives for estimating size on disk for syslog data is 50% of the original data size. How does this divide between the files in the index?

  1. rawdata is: 10%, tsidx is: 40%
  2. rawdata is: 15%, tsidx is: 35%
  3. rawdata is: 35%, tsidx is: 15%
  4. rawdata is: 40%, tsidx is: 10%

Answer(s): B

Explanation:

The guidance Splunk gives for estimating size on disk for syslog data is 50% of the original data size. This divides between the files in the index as follows: rawdata is 15%, tsidx is 35%. The rawdata is the compressed copy of the original data, which typically takes about 15% of the original data size. The tsidx files contain the time-series metadata and the inverted index, which typically take about 35% of the original data size. Together, the rawdata and tsidx files come to about 50% of the original data size. For more information, see [Estimate your storage requirements] in the Splunk documentation.
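
As a worked example, assuming 100 GB/day of raw syslog (the daily volume is an assumption for illustration only):

    rawdata:  100 GB x 0.15 = 15 GB
    tsidx:    100 GB x 0.35 = 35 GB
    total:    15 GB + 35 GB = 50 GB  (roughly 50% of the original data size)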



In an existing Splunk environment, the new index buckets that are created each day are about half the size of the incoming data. Within each bucket, about 30% of the space is used for rawdata and about 70% for index files.
What additional information is needed to calculate the daily disk consumption, per indexer, if indexer clustering is implemented?

  1. Total daily indexing volume, number of peer nodes, and number of accelerated searches.
  2. Total daily indexing volume, number of peer nodes, replication factor, and search factor.
  3. Total daily indexing volume, replication factor, search factor, and number of search heads.
  4. Replication factor, search factor, number of accelerated searches, and total disk size across cluster.

Answer(s): B

Explanation:

The additional information needed to calculate the daily disk consumption per indexer, when indexer clustering is implemented, is the total daily indexing volume, the number of peer nodes, the replication factor, and the search factor. This information is required to estimate how much data is ingested, how many copies of raw data and searchable data are maintained, and how many indexers share that load. The number of accelerated searches, the number of search heads, and the total disk size across the cluster are not relevant to calculating the daily disk consumption per indexer. For more information, see [Estimate your storage requirements] in the Splunk documentation.
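
As an illustrative calculation, assume 100 GB/day of incoming data, 5 peer nodes, a replication factor of 3, and a search factor of 2 (all assumed values), combined with the ratios given in the question (buckets are 50% of raw volume, split 30% rawdata / 70% index files):

    rawdata per day:      100 GB x 0.5 x 0.30 = 15 GB   (kept RF = 3 times across the cluster)
    index files per day:  100 GB x 0.5 x 0.70 = 35 GB   (kept SF = 2 times across the cluster)
    cluster total:        (15 x 3) + (35 x 2) = 115 GB/day
    per indexer:          115 GB / 5 peers    = 23 GB/day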



A three-node search head cluster is skipping a large number of searches across time.
What should be done to increase scheduled search capacity on the search head cluster?

  1. Create a job server on the cluster.
  2. Add another search head to the cluster.
  3. server.conf captain_is_adhoc_searchhead = true.
  4. Change limits.conf value for max_searches_per_cpu to a higher value.

Answer(s): D

Explanation:

Changing the limits.conf value for max_searches_per_cpu to a higher value is the best option to increase scheduled search capacity on the search head cluster when a large number of searches are being skipped. This setting contributes to the total number of concurrent searches each member can run (roughly max_searches_per_cpu times the number of CPU cores, plus a base allowance), and the scheduler is allotted a share of that total, so raising it allows more scheduled searches to run at the same time and reduces the number of skipped searches. Creating a job server on the cluster, setting captain_is_adhoc_searchhead = true in server.conf, or adding another search head to the cluster are not the best options for increasing scheduled search capacity in this scenario. For more information, see [Configure limits.conf] in the Splunk documentation.
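
A minimal limits.conf sketch for the search head cluster members; the values shown are illustrative, and in practice the scheduler only receives a percentage of the total search concurrency these settings help determine:

    # limits.conf on each search head cluster member
    [search]
    # total concurrency is roughly (max_searches_per_cpu x CPU cores) + base_max_searches
    max_searches_per_cpu = 2
    base_max_searches = 6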


