Free H13-711_V3.0 Exam Braindumps (page: 28)


Which of the following descriptions about the basic operations of Hive SQL is correct?

  A. When loading data into Hive, the source data must be a path in HDFS
  B. To create an external table, you must specify location information
  C. Column delimiters can be specified when creating a table
  D. Create an external table using the external keyword; to create a normal table, you need to specify the internal keyword

Answer(s): C
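
For context on the answer, here is a minimal, hedged sketch of the relevant Hive DDL, issued through a Hive-enabled SparkSession. The table names, columns, and paths (demo_managed, demo_external, /tmp/demo.csv) are hypothetical examples, not from the exam.

```scala
import org.apache.spark.sql.SparkSession

object HiveDdlSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical Hive-enabled session; assumes a reachable Hive metastore.
    val spark = SparkSession.builder()
      .appName("hive-ddl-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // C: a column delimiter CAN be specified when creating a table.
    // There is no INTERNAL keyword; omitting EXTERNAL yields a managed table (contradicts D).
    spark.sql(
      """CREATE TABLE IF NOT EXISTS demo_managed (id INT, name STRING)
        |ROW FORMAT DELIMITED FIELDS TERMINATED BY ','""".stripMargin)

    // LOAD DATA also accepts a LOCAL path, so the source need not be in HDFS (contradicts A).
    spark.sql("LOAD DATA LOCAL INPATH '/tmp/demo.csv' INTO TABLE demo_managed")

    // The EXTERNAL keyword creates an external table. In native Hive the LOCATION clause
    // is optional (contradicts B); it is given here only because Spark's parser expects it.
    spark.sql(
      """CREATE EXTERNAL TABLE IF NOT EXISTS demo_external (id INT, name STRING)
        |ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
        |LOCATION '/tmp/hive/demo_external'""".stripMargin)

    spark.stop()
  }
}
```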



When installing the Streaming component of FusionInsight HD, on how many nodes must the Nimbus role be installed?

  A. 1
  B. 2
  C. 3
  D. 4

Answer(s): B



When ZooKeeper works together with YARN, after the Active ResourceManager fails, from which directory does the Standby ResourceManager obtain Application-related information?

  A. warehouse
  B. Metastore
  C. State store
  D. Storage

Answer(s): C
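
The "state store" here refers to the ResourceManager state store, which is kept in ZooKeeper. As a hedged sketch (not FusionInsight's actual installer), the standard Hadoop configuration keys below show how application state is persisted to ZooKeeper so that the Standby ResourceManager can recover it after a failover; the ZooKeeper addresses and znode path are placeholder values.

```scala
import org.apache.hadoop.yarn.conf.YarnConfiguration

object RmStateStoreSketch {
  def main(args: Array[String]): Unit = {
    val conf = new YarnConfiguration()

    // Enable RM HA and recovery so a Standby RM can take over running applications.
    conf.setBoolean("yarn.resourcemanager.ha.enabled", true)
    conf.setBoolean("yarn.resourcemanager.recovery.enabled", true)

    // Persist application state in ZooKeeper; this is the state store the
    // Standby ResourceManager reads after the Active ResourceManager fails.
    conf.set("yarn.resourcemanager.store.class",
      "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore")
    conf.set("yarn.resourcemanager.zk-address", "zk1:2181,zk2:2181,zk3:2181")

    // Parent znode under which application data is kept (default /rmstore).
    conf.set("yarn.resourcemanager.zk-state-store.parent-path", "/rmstore")

    println(conf.get("yarn.resourcemanager.store.class"))
  }
}
```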



Regarding DataSet, which of the following statements is incorrect?

  A. A DataSet is a strongly typed collection of domain-specific objects
  B. A DataSet can perform most operations without deserialization
  C. A DataSet needs to be deserialized to perform operations such as sort, filter, and shuffle
  D. DataSet is highly similar to RDD, but with better performance than RDD

Answer(s): C
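
A hedged Scala sketch illustrating why statement C is the incorrect one: thanks to Encoders, a typed Dataset can filter and sort on its serialized (Tungsten) representation without deserializing every object. The Person case class and the sample rows are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical domain object; a Dataset[Person] is strongly typed (statement A).
case class Person(name: String, age: Int)

object DatasetSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dataset-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val people = Seq(Person("Li", 30), Person("Wang", 25), Person("Zhao", 35)).toDS()

    // filter and sort operate on the encoded (serialized) Tungsten format,
    // without deserializing every object first (which is why statement C is wrong).
    val adultsByAge = people.filter($"age" > 26).sort($"age")
    adultsByAge.show()

    spark.stop()
  }
}
```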






Post your Comments and Discuss Huawei H13-711_V3.0 exam with other Community members:

Anon commented on October 25, 2023
Q53, The answer is A. Region, not ColumnFamily

Anon commented on October 24, 2023
Q51, answer is D. Not,

Anon commented on October 24, 2023
Which statement is correct about the client uploading files to the HDFS file system in the Hadoop system? A. The file data of the client is passed to the DataNode through the NameNode. B. The client divides the file into multiple blocks and writes them into each DataNode in order, according to the address information of the DataNode. C. The client writes the entire file to each DataNode in sequence according to the address information of the DataNode, and then the DataNode divides the file into multiple blocks. D. The client only uploads data to one DataNode, and then the NameNode is responsible for block replication. The answer is not B. In fact, all the statements are wrong. D is almost correct, but replication is done by the DN, not the NN.
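
To make the write path discussed in this comment concrete, here is a hedged sketch that uploads a local file with the Hadoop FileSystem API; the NameNode URI and paths are placeholders. The NameNode only supplies metadata and block locations, while the client streams block data to the first DataNode of a pipeline and the DataNodes replicate each block onward.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import java.net.URI

object HdfsUploadSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Hypothetical NameNode address; the NameNode hands out block locations,
    // but the file bytes flow from the client straight to the DataNodes.
    val fs = FileSystem.get(new URI("hdfs://namenode:8020"), conf)

    // The client-side stream splits the file into blocks and writes each block
    // to the first DataNode of a pipeline; that DataNode forwards the block to
    // the next replica (replication is done by the DataNodes, not the NameNode).
    fs.copyFromLocalFile(new Path("/tmp/demo.csv"), new Path("/user/demo/demo.csv"))

    fs.close()
  }
}
```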