High quality CCA-500 Exam Questions and Answers 2019

It is almost impossible to pass the Cloudera CCA-500 exam without any help in the short term. Come to us soon and find the most advanced, correct and guaranteed CCA-500 questions and answers. You will get a surprising result with our CCA-500 practice materials.

Cloudera CCA-500 Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
You are configuring your cluster to run HDFS and MapReduce v2 (MRv2) on YARN. Which two daemons need to be installed on your cluster's master nodes? (Choose two)

  • A. HMaster
  • B. ResourceManager
  • C. TaskManager
  • D. JobTracker
  • E. NameNode
  • F. DataNode

Answer: BE

NEW QUESTION 2
Which two are features of Hadoop's rack topology? (Choose two)

  • A. Configuration of rack awareness is accomplished using a configuration file. You cannot use a rack topology script.
  • B. Hadoop gives preference to intra-rack data transfer in order to conserve bandwidth
  • C. Rack location is considered in the HDFS block placement policy
  • D. HDFS is rack aware but the MapReduce daemons are not
  • E. Even for small clusters on a single rack, configuring rack awareness will improve performance

Answer: BC
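
Rack awareness itself is normally configured by pointing the NameNode at a topology script in core-site.xml; a sketch (the script path and rack names are illustrative):
<property>
  <name>net.topology.script.file.name</name>
  <value>/etc/hadoop/conf/topology.sh</value>
</property>
The script receives DataNode IP addresses or hostnames as arguments and prints a rack path such as /rack1 for each.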

NEW QUESTION 3
Your cluster implements HDFS High Availability (HA). Your two NameNodes are named nn01 and nn02. What occurs when you execute the command: hdfs haadmin -failover nn01 nn02?

  • A. nn02 is fenced, and nn01 becomes the active NameNode
  • B. nn01 is fenced, and nn02 becomes the active NameNode
  • C. nn01 becomes the standby NameNode and nn02 becomes the active NameNode
  • D. nn02 becomes the standby NameNode and nn01 becomes the active NameNode

Answer: B

Explanation: failover - initiate a failover between two NameNodes. This subcommand causes a failover from the first provided NameNode to the second. If the first NameNode is in the Standby state, this command simply transitions the second to the Active state without error. If the first NameNode is in the Active state, an attempt will be made to gracefully transition it to the Standby state. If this fails, the fencing methods (as configured by dfs.ha.fencing.methods) will be attempted in order until one of the methods succeeds. Only after this process will the second NameNode be transitioned to the Active state. If no fencing method succeeds, the second NameNode will not be transitioned to the Active state, and an error will be returned.
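
For reference, a minimal command sequence against the two NameNodes from the question (the output descriptions are typical, not verbatim):
hdfs haadmin -getServiceState nn01    (reports "active" or "standby")
hdfs haadmin -failover nn01 nn02      (transitions nn02 to Active, fencing nn01 if a graceful transition fails)
hdfs haadmin -getServiceState nn02    (should now report "active")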

NEW QUESTION 4
Your cluster is running MapReduce version 2 (MRv2) on YARN. Your ResourceManager is configured to use the FairScheduler. Now you want to configure your scheduler such that a new user on the cluster can submit jobs into their own queue at application submission. Which configuration should you set?

  • A. You can specify the new queue name when the user submits a job, and the new queue can be created dynamically if the property yarn.scheduler.fair.allow-undeclared-pools = true
  • B. yarn.scheduler.fair.user-as-default-queue = false and yarn.scheduler.fair.allow-undeclared-pools = true
  • C. You can specify the new queue name when the user submits a job, and the new queue can be created dynamically if yarn.scheduler.fair.user-as-default-queue = false
  • D. You can specify the new queue name per application in the allocations.xml file and have new jobs automatically assigned to the application queue

Answer: A
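
A minimal yarn-site.xml sketch for this behavior (the property name is the actual Fair Scheduler setting; the queue name in the usage note is illustrative):
<property>
  <name>yarn.scheduler.fair.allow-undeclared-pools</name>
  <value>true</value>
</property>
With this set, a user can submit with -D mapreduce.job.queuename=alice and the queue is created on the fly.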

NEW QUESTION 5
You have a 20-node Hadoop cluster, with 18 slave nodes and 2 master nodes running HDFS High Availability (HA). You want to minimize the chance of data loss in your cluster. What should you do?

  • A. Add another master node to increase the number of nodes running the JournalNode which increases the number of machines available to HA to create a quorum
  • B. Set an HDFS replication factor that provides data redundancy, protecting against node failure
  • C. Run a Secondary NameNode on a different master from the NameNode in order to provide automatic recovery from a NameNode failure.
  • D. Run the ResourceManager on a different master from the NameNode in order to load-share HDFS metadata processing
  • E. Configure the cluster’s disk drives with an appropriate fault tolerant RAID level

Answer: B

Explanation: HDFS replication is what protects against data loss: each block is stored on multiple DataNodes (replication factor 3 by default), so the loss of a single node does not lose data. JournalNodes and standby NameNodes protect metadata availability, not block data, and RAID or daemon placement does not add HDFS block redundancy.
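
As a sketch, the cluster-wide replication factor lives in hdfs-site.xml, and existing paths can be re-replicated from the command line (the path is illustrative):
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
hdfs dfs -setrep -w 3 /data/important    (waits until the path reaches replication factor 3)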

NEW QUESTION 6
Your Hadoop cluster contains nodes in three racks. You have not configured the dfs.hosts property in the NameNode's configuration file. What is the result?

  • A. The NameNode will update the dfs.hosts property to include machines running the DataNode daemon on the next NameNode reboot or with the command dfsadmin -refreshNodes
  • B. No new nodes can be added to the cluster until you specify them in the dfs.hosts file
  • C. Any machine running the DataNode daemon can immediately join the cluster
  • D. Presented with a blank dfs.hosts property, the NameNode will permit DataNodes specified in mapred.hosts to join the cluster

Answer: C
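
For contrast, a sketch of how you would restrict DataNode membership (the include-file path is illustrative):
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.hosts.include</value>
</property>
hdfs dfsadmin -refreshNodes    (tells the NameNode to re-read the include/exclude files)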

NEW QUESTION 7
You are running a Hadoop cluster with all monitoring facilities properly configured. Which scenario will go undetected?

  • A. HDFS is almost full
  • B. The NameNode goes down
  • C. A DataNode is disconnected from the cluster
  • D. Map or reduce tasks that are stuck in an infinite loop
  • E. MapReduce jobs are causing excessive memory swaps

Answer: D

Explanation: Standard monitoring will detect a nearly full HDFS, a downed NameNode, a disconnected DataNode, and excessive memory swapping. A map or reduce task stuck in an infinite loop that keeps reporting progress looks healthy to the framework and to monitoring, so it goes undetected.

NEW QUESTION 8
Your cluster's mapred-site.xml includes the following parameters:
<name>mapreduce.map.memory.mb</name>
<value>4096</value>
<name>mapreduce.reduce.memory.mb</name>
<value>8192</value>
And your cluster's yarn-site.xml includes the following parameters:
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
What is the maximum amount of virtual memory allocated for each map task before YARN will kill its Container?

  • A. 4 GB
  • B. 17.2 GB
  • C. 8.9 GB
  • D. 8.2 GB
  • E. 24.6 GB

Answer: D
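
The arithmetic behind the listed answer: each map container is allotted mapreduce.map.memory.mb = 4,096 MB of physical memory, and YARN kills the container if its virtual memory exceeds that figure times yarn.nodemanager.vmem-pmem-ratio:
4,096 MB x 2.1 = 8,601.6 MB ≈ 8.4 GB
Of the listed choices, D (8.2 GB) is the closest.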

NEW QUESTION 9
You suspect that your NameNode is incorrectly configured, and is swapping memory to disk. Which Linux commands help you to identify whether swapping is occurring? (Select all that apply)

  • A. free
  • B. df
  • C. memcat
  • D. top
  • E. jps
  • F. vmstat
  • G. swapinfo

Answer: ADF

Explanation: Reference: http://www.cyberciti.biz/faq/linux-check-swap-usage-command/
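
Typical invocations of the three (all standard Linux tools; the vmstat interval is just an example):
free -m     (the Swap row shows total, used, and free swap in MB)
vmstat 5    (the si/so columns show pages swapped in/out per interval; sustained nonzero values mean active swapping)
top         (the summary header reports swap usage; watch for a growing "used" figure)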

NEW QUESTION 10
You observed that the number of spilled records from Map tasks far exceeds the number of map output records. Your child heap size is 1GB and your io.sort.mb value is set to 1000MB. How would you tune your io.sort.mb value to achieve maximum memory to disk I/O ratio?

  • A. For a 1GB child heap size an io.sort.mb of 128 MB will always maximize memory to disk I/O
  • B. Increase the io.sort.mb to 1GB
  • C. Decrease the io.sort.mb value to 0
  • D. Tune the io.sort.mb value until you observe that the number of spilled records equals (or is as close as possible to) the number of map output records.

Answer: D
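
A sketch of the knob in mapred-site.xml (io.sort.mb is the MRv1-era name used by the question; on MRv2 the equivalent is mapreduce.task.io.sort.mb, and the buffer must fit inside the child heap):
<property>
  <name>io.sort.mb</name>
  <value>256</value>    <!-- illustrative starting point; adjust while watching the "Spilled Records" job counter -->
</property>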

NEW QUESTION 11
Which scheduler would you deploy to ensure that your cluster allows short jobs to finish within a reasonable time without starting long-running jobs?

  • A. Complexity Fair Scheduler (CFS)
  • B. Capacity Scheduler
  • C. Fair Scheduler
  • D. FIFO Scheduler

Answer: C

Explanation: Reference: http://hadoop.apache.org/docs/r1.2.1/fair_scheduler.html
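
Enabling the Fair Scheduler on YARN is a one-property change in yarn-site.xml (this is the actual scheduler class name):
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>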

NEW QUESTION 12
You have recently converted your Hadoop cluster from a MapReduce 1 (MRv1) architecture to a MapReduce 2 (MRv2) on YARN architecture. Your developers are accustomed to specifying the number of map and reduce tasks (resource allocation) when they run jobs. A developer wants to know how to specify the number of reduce tasks when a specific job runs. Which method should you tell that developer to implement?

  • A. MapReduce version 2 (MRv2) on YARN abstracts resource allocation away from the idea of "tasks" into memory and virtual cores, thus eliminating the need for a developer to specify the number of reduce tasks, and indeed preventing the developer from specifying the number of reduce tasks.
  • B. In YARN, resource allocation is a function of megabytes of memory in multiples of 1024 MB. Thus, they should specify the amount of memory resource they need by executing -D mapreduce.reduce.memory.mb=2048
  • C. In YARN, the ApplicationMaster is responsible for requesting the resources required for a specific launch. Thus, executing -D yarn.applicationmaster.reduce.tasks=2 will specify that the ApplicationMaster launch two task containers on the worker nodes.
  • D. Developers specify reduce tasks in the exact same way for both MapReduce version 1 (MRv1) and MapReduce version 2 (MRv2) on YARN. Thus, executing -D mapreduce.job.reduces=2 will specify two reduce tasks.
  • E. In YARN, resource allocation is a function of virtual cores specified by the ApplicationManager making requests to the NodeManager, where a reduce task is handled by a single container (and thus a single virtual core). Thus, the developer needs to specify the number of virtual cores to the NodeManager by executing -p yarn.nodemanager.cpu-vcores=2

Answer: D
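
For example, using Hadoop's generic options with the bundled examples jar (the jar name and paths are illustrative):
hadoop jar hadoop-mapreduce-examples.jar wordcount -D mapreduce.job.reduces=2 /user/alice/input /user/alice/output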

NEW QUESTION 13
You decide to create a cluster which runs HDFS in High Availability mode with automatic failover, using Quorum Storage. What is the purpose of ZooKeeper in such a configuration?

  • A. It only keeps track of which NameNode is Active at any given time
  • B. It monitors an NFS mount point and reports if the mount point disappears
  • C. It both keeps track of which NameNode is Active at any given time and manages the Edits file, which is a log of changes to the HDFS filesystem
  • D. It only manages the Edits file, which is a log of changes to the HDFS filesystem
  • E. Clients connect to ZooKeeper to determine which NameNode is Active

Answer: A

Explanation: Reference: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/PDF/CDH4-High-Availability-Guide.pdf (page 15)
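
A sketch of the ZooKeeper-related pieces of an automatic-failover setup (the quorum hostnames are illustrative):
In core-site.xml:
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
In hdfs-site.xml:
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
hdfs zkfc -formatZK    (initializes the znode the failover controllers use to track the Active NameNode)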

NEW QUESTION 14
On a cluster running MapReduce v2 (MRv2) on YARN, a MapReduce job is given a directory of 10 plain text files as its input directory. Each file is made up of 3 HDFS blocks. How many Mappers will run?

  • A. We cannot say; the number of Mappers is determined by the ResourceManager
  • B. We cannot say; the number of Mappers is determined by the developer
  • C. 30
  • D. 3
  • E. 10
  • F. We cannot say; the number of mappers is determined by the ApplicationMaster

Answer: C

Explanation: With splittable plain text input, the framework creates one input split per HDFS block by default and runs one Mapper per split: 10 files x 3 blocks each = 30 splits, so 30 Mappers will run.

NEW QUESTION 15
Your company stores user profile records in an OLTP database. You want to join these records with web server logs you have already ingested into the Hadoop file system. What is the best way to obtain and ingest these user records?

  • A. Ingest with Hadoop streaming
  • B. Ingest using Hive's LOAD DATA command
  • C. Ingest with sqoop import
  • D. Ingest with Pig’s LOAD command
  • E. Ingest using the HDFS put command

Answer: C
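
A minimal sqoop import sketch (the JDBC URL, table, and account are illustrative):
sqoop import \
  --connect jdbc:mysql://dbhost.example.com/crm \
  --table user_profiles \
  --username dbuser -P \
  --target-dir /user/etl/user_profiles \
  --num-mappers 4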

NEW QUESTION 16
A slave node in your cluster has four 2 TB hard drives installed (4 x 2 TB). The DataNode is configured to store HDFS blocks on all disks. You set the value of the dfs.datanode.du.reserved parameter to 100 GB. How does this alter HDFS block storage?

  • A. 25GB on each hard drive may not be used to store HDFS blocks
  • B. 100GB on each hard drive may not be used to store HDFS blocks
  • C. All hard drives may be used to store HDFS blocks as long as at least 100 GB in total is available on the node
  • D. A maximum of 100 GB on each hard drive may be used to store HDFS blocks

Answer: B
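
The property takes a value in bytes and applies to each volume; a sketch for the 100 GB reservation in hdfs-site.xml:
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>107374182400</value>    <!-- 100 GB = 100 x 1024^3 bytes, reserved on every disk the DataNode uses -->
</property>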

100% Valid and Newest Version CCA-500 Questions & Answers shared by Certleader, Get Full Dumps HERE: https://www.certleader.com/CCA-500-dumps.html (New 60 Q&As)