Hadoop Interview Questions
Q11. What does a Mapper do?
Ans: Maps are the individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records. A given input pair may map to zero or many output pairs.
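A minimal sketch of such a Mapper, assuming the modern `org.apache.hadoop.mapreduce` API (requires the Hadoop libraries on the classpath; the class name `TokenMapper` is illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// The input pair type (LongWritable offset, Text line) differs from the
// output pair type (Text word, IntWritable count), and a single input line
// may map to zero or many output pairs.
public class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {       // a blank line emits zero pairs
                word.set(token);
                context.write(word, ONE); // one line can emit many pairs
            }
        }
    }
}
```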
Q12. What is the InputSplit in MapReduce?
Ans: An InputSplit is a logical representation of a unit (a chunk) of input work for a map task; e.g., a filename and a byte range within that file to process, or a row set in a text file.
Q13. What is the InputFormat?
Ans: The InputFormat is responsible for enumerating (itemising) the InputSplits and for producing a RecordReader, which turns those logical work units into actual physical input records.
Q14. Where do you specify the Mapper Implementation?
Ans: Generally, the Mapper implementation is specified in the Job itself, e.g. via Job.setMapperClass().
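A typical driver-side fragment showing where the Mapper (and related classes) are wired into the Job. This is a sketch assuming the `org.apache.hadoop.mapreduce` API; `TokenMapper` and the `/input` path are hypothetical names, and the fragment needs the Hadoop libraries to compile:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class DriverSketch {
    static Job configure() throws Exception {
        Job job = Job.getInstance(new Configuration(), "example-job");
        job.setJarByClass(DriverSketch.class);
        job.setMapperClass(TokenMapper.class);          // TokenMapper: a hypothetical Mapper subclass
        job.setInputFormatClass(TextInputFormat.class); // splits files into lines
        FileInputFormat.addInputPath(job, new Path("/input")); // hypothetical input path
        return job;
    }
}
```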
Q15. How is the Mapper instantiated in a running job?
Ans: The Mapper itself is instantiated in the running job, and will be passed a MapContext object which it can use to configure itself.
Q16. Which are the methods in the Mapper interface?
Ans: The Mapper contains the run() method, which calls its setup() method only once, then calls the map() method for each input key/value pair, and finally calls its cleanup() method. All of the above methods can be overridden in your code.
Q17. What happens if you don’t override the Mapper methods and keep them as they are?
Ans: If you do not override any methods (leaving even map() as-is), the Mapper acts as the identity function, emitting each input record unchanged as a separate output record.
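This identity behavior can be illustrated without the Hadoop runtime; the toy sketch below is a plain-Java stand-in (the names mirror Hadoop but are not the Hadoop API):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy stand-in for Hadoop's default Mapper.map(): for every input pair it
// simply does the equivalent of context.write(key, value), i.e. it emits
// each record unchanged.
class IdentityDemo {
    static <K, V> List<Map.Entry<K, V>> identityMap(List<Map.Entry<K, V>> input) {
        List<Map.Entry<K, V>> output = new ArrayList<>();
        for (Map.Entry<K, V> record : input) {
            // pass the (key, value) pair through untouched
            output.add(new SimpleEntry<>(record.getKey(), record.getValue()));
        }
        return output;
    }
}
```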
Q18. What is the use of Context object?
Ans: The Context object allows the Mapper to interact with the rest of the Hadoop system. It includes configuration data for the job, as well as interfaces which allow it to emit output.
Q19. How can you add arbitrary key-value pairs in your Mapper?
Ans: You can set arbitrary (key, value) pairs of configuration data in your Job, e.g. with Job.getConfiguration().set("myKey", "myVal"), and then retrieve this data in your Mapper with Context.getConfiguration().get("myKey"). This kind of retrieval is typically done in the Mapper's setup() method.
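Both sides of that round trip can be sketched as follows, assuming the `org.apache.hadoop.mapreduce` API (the class names, the `"myKey"`/`"myVal"` pair, and the `"defaultVal"` fallback are illustrative; the Hadoop libraries are needed to compile this):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class ConfigExample {
    // Driver side: stash an arbitrary (key, value) pair in the job configuration.
    static Job buildJob() throws Exception {
        Job job = Job.getInstance(new Configuration(), "config-example");
        job.getConfiguration().set("myKey", "myVal");
        return job;
    }

    // Mapper side: read the value back once, in setup(), rather than on every map() call.
    public static class ConfiguredMapper extends Mapper<LongWritable, Text, Text, Text> {
        private String myVal;

        @Override
        protected void setup(Context context) {
            // second argument is a default used when the key is absent
            myVal = context.getConfiguration().get("myKey", "defaultVal");
        }
    }
}
```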
Q20. How does Mapper’s run() method work?
Ans: The Mapper.run() method calls map(KeyInType, ValInType, Context) for each key/value pair in the InputSplit for that task.
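The setup-once, map-per-record, cleanup-once lifecycle that run() drives can be demonstrated in self-contained plain Java; the sketch below mirrors the shape of Hadoop's run() loop but is not the Hadoop API:

```java
import java.util.Iterator;
import java.util.Map;

// Simplified analogue of Mapper.run(): call setup() once, map() for each
// key/value pair, then cleanup() once. Names mirror Hadoop's Mapper but
// this is a toy class, not the Hadoop API.
abstract class MiniMapper<K, V> {
    protected void setup() {}
    protected abstract void map(K key, V value);
    protected void cleanup() {}

    public final void run(Iterator<Map.Entry<K, V>> records) {
        setup();                              // once, before any records
        while (records.hasNext()) {
            Map.Entry<K, V> r = records.next();
            map(r.getKey(), r.getValue());    // once per input pair
        }
        cleanup();                            // once, after all records
    }
}
```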
2014-2017 © HadoopExam.com | Don't Copy, it's bad Karma |