1.
What are the key components of Spark that Spark internally requires to execute a job?
Answer»
Spark follows a master/slave architecture with two kinds of daemons:
Master Daemon: the master/driver process
Worker Daemon: the slave process
A Spark cluster has a single master, while any number of slaves (workers) run as commodity servers.
When we submit a Spark job, it triggers the Spark driver. Through the SparkContext, the driver is responsible for:
Getting the current status of the Spark application
Canceling a job
Canceling a stage
Running a job synchronously
Running a job asynchronously
Accessing a persistent RDD
Un-persisting an RDD
Programmable dynamic allocation
Related Interview Solutions
When submitting a Spark job, I notice that a few tasks are relatively taking longer time to get completed. What could be the cause of this and how to resolve this issue?
What is a broadcast variable in Spark? What purpose does it serve? How is it different from an accumulator?
Describe the Spark Memory model. What is the difference between Off-heap and On-heap memory?
What is checkpointing in Spark? How does it help Spark achieve exactly-once semantics? How does Checkpointing differ from Persistence?
What is Structured Streaming in Spark? What are the different output modes in Spark Structured Streaming?
List some of the use cases for Graph Analytics.
What is Graph Analytics in Spark?
List some of the use cases of Unsupervised Learning.
List some of the use cases of Supervised/Classification Algorithm.
What are the different types of Machine Learning?