1. What is the default HDFS block size and the default replication factor?
Answer»
By default, the HDFS block size is 64MB (in Hadoop 1.x; Hadoop 2.x and later default to 128MB). The default replication factor is 3.
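Both defaults can be overridden in hdfs-site.xml. A minimal sketch, using the Hadoop 2.x property names (the values shown are the stock defaults):

```xml
<configuration>
  <!-- Block size in bytes: 134217728 = 128MB (Hadoop 1.x used dfs.block.size, default 64MB) -->
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
  <!-- Number of replicas kept for each block -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

The effective values for an existing file can be checked with `hdfs dfs -stat "%o %r" /path/to/file` (block size and replication factor).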
Related Interview Solutions
How does the Sentry architecture help secure Hadoop services? How do Hive, Impala, HDFS, and Search work with Sentry?
How do LDAP, Active Directory, and Kerberos help secure the Hadoop environment?
Briefly describe a few techniques for optimizing Hive performance.
How do the Hive database and Impala work together in Cloudera?
In your MapReduce jobs, you consistently see that map tasks on your cluster run slowly because of excessive JVM garbage collection. How do you increase the JVM heap size to 3GB to optimize performance?
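A hedged sketch of one common answer: the map-task JVM heap is set through mapreduce.map.java.opts (the MRv2 name; MRv1 used mapred.child.java.opts), and the container size should be raised so the heap still fits inside it. The container value below is an illustrative assumption:

```xml
<!-- mapred-site.xml: give each map-task JVM a 3GB heap -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3072m</value>
</property>
<!-- Container must be large enough for the heap plus non-heap overhead -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
```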
You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN. You have no dfs.hosts entries in your hdfs-site.xml configuration file. You configure a new worker node by setting fs.default.name in its configuration files to point to the NameNode on your cluster, and you start the DataNode daemon on that worker node. What do you have to do on the cluster to allow the worker node to join, and start storing HDFS blocks?
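A hedged sketch of the mechanics this question is probing: when dfs.hosts is unset, the NameNode accepts any DataNode that registers, so no include-file edit is needed. If a cluster does manage membership with an include file, admitting the node would look roughly like this (the file path and hostname are assumptions):

```
# On the NameNode: add the new worker to the include file referenced by dfs.hosts
echo "worker-node.example.com" >> /etc/hadoop/conf/dfs.hosts

# Tell the NameNode to re-read its include/exclude lists
hdfs dfsadmin -refreshNodes
```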
You observe that the number of spilled records from map tasks far exceeds the number of map output records. Your child heap size is 1GB and your io.sort.mb value is set to 1000MB. How would you tune your io.sort.mb value to achieve the maximum memory-to-I/O ratio?
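A hedged sketch of one common answer: the in-memory sort buffer (io.sort.mb) is allocated out of the child JVM heap, so a 1000MB buffer inside a 1GB heap leaves almost nothing for the map task itself and forces early, repeated spills. Lowering io.sort.mb to a value that fits comfortably in the heap reduces spill I/O; the value below is illustrative, not prescriptive:

```xml
<!-- mapred-site.xml: keep the sort buffer well inside the 1GB child heap -->
<property>
  <name>io.sort.mb</name>
  <value>256</value>
</property>
```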
Each node in your Hadoop cluster runs YARN and has 140GB of memory and 40 cores. Your yarn-site.xml has the configuration shown below. You want YARN to launch a maximum of 100 containers per node. Enter the property value that would restrict YARN from launching more than 100 containers per node.

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>102400</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>48</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>what is the correct value here</value>
</property>
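The reasoning behind the blank can be checked with plain arithmetic: the number of containers per node is bounded by yarn.nodemanager.resource.memory-mb divided by yarn.scheduler.minimum-allocation-mb, so capping a 102400MB node at 100 containers implies a 1024MB minimum allocation. A quick check (no Hadoop required):

```python
# Containers per node is bounded by total NodeManager memory
# divided by the scheduler's minimum container allocation.
node_memory_mb = 102400          # yarn.nodemanager.resource.memory-mb from the question
max_containers = 100             # desired cap per node

min_allocation_mb = node_memory_mb // max_containers
print(min_allocation_mb)         # 1024 -> yarn.scheduler.minimum-allocation-mb

# With that value, no more than 100 containers fit on the node:
assert node_memory_mb // min_allocation_mb == max_containers
```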
What is an HDFS snapshot, and how does it help you recover data?
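A hedged sketch of the snapshot workflow the question is after (the commands are standard HDFS ones; the /data path, snapshot name, and file name are illustrative assumptions):

```
# Mark a directory as snapshottable (admin operation)
hdfs dfsadmin -allowSnapshot /data

# Take a read-only, point-in-time snapshot of the directory
hdfs dfs -createSnapshot /data snap-before-cleanup

# Recover an accidentally deleted file by copying it back out of the snapshot
hdfs dfs -cp /data/.snapshot/snap-before-cleanup/important.csv /data/
```

Because a snapshot is read-only and only records block lists and file metadata, it is cheap to create and lets you restore files deleted or corrupted after the snapshot was taken.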