Free CCA-505 Braindumps


  • Exam Number: CCA-505
  • Provider: Cloudera
  • Questions: 45
  • Updated On: 18-Jun-2019

QUESTION: 1
You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN. You
have no dfs.hosts entries in your hdfs-site.xml configuration file. You configure a new worker
node by setting fs.default.name in its configuration files to point to the NameNode on your
cluster, and you start the DataNode daemon on that worker node.
What do you have to do on the cluster to allow the worker node to join, and start storing HDFS
blocks?

A. Nothing; the worker node will automatically join the cluster when the DataNode daemon is
started.
B. Without creating a dfs.hosts file or making any entries, run the command hadoop dfsadmin
-refreshHadoop on the NameNode
C. Create a dfs.hosts file on the NameNode, add the worker node's name to it, then issue the
command hadoop dfsadmin -refreshNodes on the NameNode
D. Restart the NameNode

Answer(s): A
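Because dfs.hosts is not set, the NameNode accepts any DataNode that contacts it, so the node joins automatically. For contrast, option C describes the include-file workflow used on clusters that do allow-list workers. A minimal sketch of that workflow follows; the hostname and file path are illustrative assumptions, not values from the question:

```shell
# Sketch of the dfs.hosts allow-list workflow described in option C.
# Assumes hdfs-site.xml contains (shown here as a comment):
#   <property>
#     <name>dfs.hosts</name>
#     <value>/etc/hadoop/conf/dfs.hosts</value>
#   </property>

# Add the new worker's hostname to the include file on the NameNode:
echo "worker05.example.com" >> /etc/hadoop/conf/dfs.hosts

# Tell the NameNode to re-read its include/exclude files (no restart needed):
hadoop dfsadmin -refreshNodes
```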
QUESTION: 2
Given:

You want to clean up this list by removing jobs whose state is KILLED. Which command do you
enter?

A. yarn application -kill application_1374638600275_0109
B. yarn rmadmin -refreshQueues
C. yarn application -refreshJobHistory
D. yarn rmadmin -kill application_1374638600275_0109

Answer(s): A
Reference:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_using-apache-hadoop/content/common_mrv2_commands.html
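The kill workflow from option A can be sketched as a short session. The application ID below is the one reused from the options; the `-list` and `-status` subcommands are standard parts of the Hadoop 2 `yarn application` CLI:

```shell
# Find and terminate a YARN application by its ID.
yarn application -list -appStates RUNNING                  # locate the target
yarn application -kill application_1374638600275_0109      # request termination
yarn application -status application_1374638600275_0109    # confirm final state
```

Note that `-kill` acts on applications that are still live; entries already in a terminal state simply age out of the ResourceManager's list.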
QUESTION: 3
Assuming a cluster running HDFS, MapReduce version 2 (MRv2) on YARN with all settings at
their default, what do you need to do when adding a new slave node to a cluster?

A. Nothing, other than ensuring that DNS (or the /etc/hosts files on all machines) contains an
entry for the new node.
B. Restart the NameNode and ResourceManager daemons and resubmit any running jobs
C. Increase the value of dfs.number.of.nodes in hdfs-site.xml
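At default settings, bringing a new slave node online amounts to installing Hadoop, copying the cluster's configuration directory, and starting the daemons; no master restart is required. A minimal sketch, assuming a Hadoop 2 tarball install (paths and script locations are assumptions):

```shell
# Run on the new slave node, after the cluster's config directory has been
# copied over (so the node knows the NameNode/ResourceManager addresses):
hadoop-daemon.sh start datanode      # joins HDFS; the NameNode is not restarted
yarn-daemon.sh start nodemanager     # registers with the ResourceManager
```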

Allbraindumps.com
 Test Questions PDF from Myitguides.com