You have a Kafka cluster and all the topics have a replication factor of 3. An intern at your company stopped a broker and accidentally deleted all of that broker's data on disk. What will happen if the broker is restarted?
Answer(s): B
Kafka's replication mechanism makes it resilient to scenarios where a broker loses its data on disk: the restarted broker recovers by replicating the data back from the other brokers. This makes Kafka amazing!
Select all that apply (select THREE)
Answer(s): C,D,F
acks is a producer setting; min.insync.replicas is a topic or broker setting and is only effective when acks=all.
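The split between the two settings can be sketched as two separate config maps. This is a minimal illustration, assuming the confluent-kafka Python client for the producer side (the broker address is hypothetical); the topic-level setting would be applied via the admin tools, not the producer.

```python
# Producer-side setting: acks controls how many acknowledgements
# the producer waits for before considering a write successful.
producer_config = {
    "bootstrap.servers": "localhost:9092",  # hypothetical broker address
    "acks": "all",  # wait for all in-sync replicas to acknowledge
}

# Topic/broker-side setting: the minimum number of in-sync replicas
# required for a write to succeed. Only enforced when acks=all.
topic_config = {
    "min.insync.replicas": "2",
}
```

With replication factor 3, acks=all, and min.insync.replicas=2, a write succeeds as long as at least two replicas are in sync.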
A customer has many consumer applications that process messages from a Kafka topic. Each consumer application can only process 50 MB/s. Your customer wants to achieve a target throughput of 1 GB/s. What is the minimum number of partitions you would suggest to the customer for that particular topic?
Each consumer can process only 50 MB/s, so we need at least 20 consumers, each consuming from one partition, so that 20 × 50 MB/s = 1000 MB/s (1 GB/s) is achieved. Since within a consumer group each partition is consumed by at most one consumer, at least 20 partitions are needed.
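The arithmetic above can be written as a small sizing helper (a sketch; the numbers are the ones from the question):

```python
import math

# One partition is consumed by at most one consumer within a group,
# so the partition count must be at least the consumer count.
target_mb_per_s = 1000       # 1 GB/s target throughput
per_consumer_mb_per_s = 50   # each consumer's processing limit

min_partitions = math.ceil(target_mb_per_s / per_consumer_mb_per_s)
print(min_partitions)  # 20
```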
Your producer is producing at a very high rate and the batches are completely full each time. How can you improve the producer throughput? (select two)
Answer(s): A,C
batch.size controls how many bytes of data to collect before sending messages to the Kafka broker. Set this as high as possible without exceeding available memory. Enabling compression can also make batches more compact and increase producer throughput. linger.ms will have no effect, as the batches are already full.
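The two effective settings can be sketched as a producer config map. The property names below are the Java client's producer configuration keys; the values are illustrative, not tuned recommendations.

```python
# Sketch: throughput-oriented producer settings (Java-client property names).
throughput_config = {
    "bootstrap.servers": "localhost:9092",  # hypothetical broker address
    "batch.size": 131072,          # bytes to collect per batch; default is 16384
    "compression.type": "lz4",     # compress batches so each send carries more records
    # "linger.ms" is deliberately omitted: it only adds wait time to fill a
    # batch, which is pointless when batches are already completely full.
}
```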
In Avro, adding a field to a record without a default is a schema evolution
Answer(s): A
Clients with the old schema will be able to read records saved with the new schema, i.e. the change is forward compatible.
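The evolution in question can be illustrated with two versions of a record schema (a sketch; the `User` record and its fields are hypothetical). Avro schemas are plain JSON, so they can be written as Python dicts:

```python
# v1: the original record schema.
v1 = {
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "id", "type": "long"},
    ],
}

# v2: a field added WITHOUT a "default" entry.
v2 = {
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "email", "type": "string"},  # no default value
    ],
}

# Old (v1) readers simply ignore the extra field in v2 data -> forward compatible.
# New (v2) readers cannot resolve v1 data: "email" is missing and has no
# default to fall back on -> NOT backward compatible.
```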