
CCA-500 Questions and Answers

Question # 6

Your cluster has the following characteristics:

  • A rack-aware topology is configured and enabled
  • Replication is set to 3
  • Cluster block size is set to 64 MB

Which describes the file read process when a client application connects to the cluster and requests a 50 MB file?

A.

The client queries the NameNode for the locations of the block, and reads all three copies. The first copy to complete transfer to the client is the one the client reads, as part of Hadoop’s speculative execution framework.

B.

The client queries the NameNode for the locations of the block, and reads from the first location in the list it receives.

C.

The client queries the NameNode for the locations of the block, and reads from a random location in the list it receives, reducing network I/O load by balancing which nodes it retrieves data from at any given time.

D.

The client queries the NameNode, which retrieves the block from the DataNode nearest to the client and then passes that block back to the client.
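For context on the mechanics the options describe, here is a minimal sketch of a client read through the Hadoop Java FileSystem API. The NameNode address (hdfs://namenode:8020) and the file path are hypothetical; the comments note where metadata and data actually come from.

```java
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address: the client contacts the NameNode
        // only for metadata (the list of block locations), never for the data.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        // open() obtains the block-location list from the NameNode; the
        // stream then reads block data directly from DataNodes.
        byte[] buffer = new byte[4096];
        try (FSDataInputStream in = fs.open(new Path("/data/sample-50mb.bin"))) {
            int bytesRead;
            while ((bytesRead = in.read(buffer)) != -1) {
                // process buffer[0..bytesRead) here
            }
        }
    }
}
```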

Question # 7

You observe that the number of spilled records from map tasks far exceeds the number of map output records. Your child heap size is 1 GB and your io.sort.mb value is set to 1,000 MB. How would you tune your io.sort.mb value to achieve the maximum memory-to-disk I/O ratio?

A.

For a 1 GB child heap size, an io.sort.mb of 128 MB will always maximize memory-to-disk I/O

B.

Increase the io.sort.mb to 1GB

C.

Decrease the io.sort.mb value to 0

D.

Tune the io.sort.mb value until you observe that the number of spilled records equals (or comes as close as possible to) the number of map output records.
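As a hedged illustration of where this setting lives in a job, the sketch below sets the map-side sort buffer on a job configuration. The 256 MB value is purely illustrative, and mapreduce.task.io.sort.mb is the MRv2 name for the property this question calls io.sort.mb; the counters to compare after a run are "Map output records" and "Spilled Records".

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SortBufferTuningSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // io.sort.mb is the MRv1-era name; MRv2 exposes the same buffer as
        // mapreduce.task.io.sort.mb. The buffer must fit inside the map
        // task's child heap with headroom left for the task itself.
        conf.setInt("mapreduce.task.io.sort.mb", 256); // illustrative value

        // After each run, compare the "Spilled Records" counter against
        // "Map output records" to judge whether the buffer is large enough.
        Job job = Job.getInstance(conf, "sort-buffer-tuning-sketch");
        // ... set mapper, reducer, input and output paths before submitting ...
    }
}
```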

Question # 8

Cluster Summary:

45 files and directories, 12 blocks = 57 total. Heap size is 15.31 MB / 193.38 MB (7%)

Refer to the cluster summary above.

You configure a Hadoop cluster with seven DataNodes, and one of your monitoring UIs displays the details shown in the exhibit.

What does this tell you?

A.

The DataNode JVM on one host is not active

B.

Because your under-replicated block count matches the number of Live Nodes, one node is dead, and your DFS Used % equals 0%, you can’t be certain that your cluster has all the data you’ve written to it.

C.

Your cluster has lost all HDFS data that had blocks stored on the dead DataNode

D.

The HDFS cluster is in safe mode
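If you want to cross-check figures like these outside the web UI, here is a minimal sketch that asks the NameNode for its aggregate capacity view through the FileSystem API. The NameNode address is hypothetical, and the totals reflect only the DataNodes the NameNode currently considers live.

```java
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class ClusterCapacitySketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        // getStatus() reports the capacity the NameNode sees right now, i.e.
        // only DataNodes it considers live; a missing node shows up as
        // reduced capacity rather than as an error here.
        FsStatus status = fs.getStatus();
        System.out.printf("capacity:  %d bytes%n", status.getCapacity());
        System.out.printf("used:      %d bytes%n", status.getUsed());
        System.out.printf("remaining: %d bytes%n", status.getRemaining());
    }
}
```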

Question # 9

You are configuring a server running HDFS and MapReduce version 2 (MRv2) on YARN on Linux. How must you format the underlying file system of each DataNode?

A.

They must be formatted as HDFS

B.

They must be formatted as either ext3 or ext4

C.

They may be formatted in any Linux file system

D.

They must not be formatted; HDFS will format the file system automatically
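For background: a DataNode keeps its block files in ordinary local directories listed under dfs.datanode.data.dir, on a natively formatted local file system; HDFS itself does not format the disks. The sketch below simply prints that setting; the default shown is an assumption about a typical configuration, not a value taken from this question.

```java
import org.apache.hadoop.conf.Configuration;

public class DataNodeDirsSketch {
    public static void main(String[] args) {
        // Loads core-default.xml and any core-site.xml found on the classpath.
        Configuration conf = new Configuration();

        // dfs.datanode.data.dir lists plain local directories; the DataNode
        // writes its block files there on top of a native Linux file system.
        // The default shown here is only a fallback for this sketch.
        String dirs = conf.get("dfs.datanode.data.dir",
                "file://${hadoop.tmp.dir}/dfs/data");
        System.out.println("DataNode block directories: " + dirs);
    }
}
```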
