Course Outline
1: HDFS (17%)
- Describe the functions of HDFS daemons.
- Explain the normal operation of an Apache Hadoop cluster, covering both data storage and data processing.
- Identify current limitations in computing systems that necessitate a solution like Apache Hadoop.
- Classify the primary objectives of HDFS design.
- Evaluate scenarios to determine the appropriate use case for HDFS Federation.
- Identify the components and daemons within an HDFS HA-Quorum cluster.
- Analyze the role of HDFS security mechanisms, specifically Kerberos.
- Determine the optimal data serialization choice for a given scenario.
- Describe the file read and write paths.
- Identify the commands used to manipulate files within the Hadoop File System Shell.
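The File System Shell commands referenced above take the `hdfs dfs` form. A few representative operations are sketched below; the paths are hypothetical, and all of these require a configured client pointing at a running cluster:

```
# Illustrative Hadoop File System Shell usage (paths are examples only).
hdfs dfs -ls /user/alice                  # list a directory
hdfs dfs -mkdir -p /user/alice/data       # create a directory tree
hdfs dfs -put local.csv /user/alice/data/ # copy a local file into HDFS
hdfs dfs -cat /user/alice/data/local.csv  # print a file's contents
hdfs dfs -rm -r /user/alice/data          # delete recursively
```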
2: YARN and MapReduce version 2 (MRv2) (17%)
- Understand how upgrading a cluster from Hadoop 1 to Hadoop 2 impacts cluster settings.
- Comprehend the deployment process for MapReduce v2 (MRv2 / YARN), including all YARN daemons.
- Grasp the fundamental design strategy behind MapReduce v2 (MRv2).
- Determine how YARN manages resource allocations.
- Identify the workflow of a MapReduce job executing on YARN.
- Determine which files require modification and the necessary steps to migrate a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) running on YARN.
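One of the files that requires modification in such a migration is `mapred-site.xml`; the minimal fragment below points MapReduce job submission at YARN. This is a sketch only, as a real migration also touches `yarn-site.xml`, scheduler configuration, and daemon start scripts:

```xml
<!-- mapred-site.xml: run MapReduce jobs on YARN instead of MRv1.
     Minimal illustrative fragment; a full migration changes more files. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```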
3: Hadoop Cluster Planning (16%)
- Examine the key considerations when selecting hardware and operating systems to host an Apache Hadoop cluster.
- Analyze the factors involved in selecting an operating system.
- Understand the principles of kernel tuning and disk swapping.
- Given a specific scenario and workload pattern, identify a hardware configuration that aligns with the requirements.
- Given a scenario, determine the necessary ecosystem components to run on the cluster to meet the Service Level Agreement (SLA).
- Perform cluster sizing: given a scenario and execution frequency, identify specific workload requirements, including CPU, memory, storage, and disk I/O.
- Address disk sizing and configuration, including comparisons between JBOD and RAID, SANs, virtualization, and cluster-specific disk sizing requirements.
- Network Topologies: understand network usage within Hadoop (for both HDFS and MapReduce) and propose or identify key network design components for a given scenario.
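The cluster-sizing exercise described above often reduces to simple capacity arithmetic. The sketch below uses assumed figures (ingest rate, retention window, overhead factor, disk layout) purely for illustration; they are not values mandated by the course:

```python
import math

# Back-of-the-envelope HDFS capacity estimate (all inputs are assumptions).
REPLICATION = 3          # HDFS default replication factor
daily_ingest_tb = 2      # assumed raw data arriving per day, in TB
retention_days = 365     # assumed retention window
temp_overhead = 1.25     # headroom for intermediate and scratch data

raw_capacity_tb = daily_ingest_tb * retention_days * REPLICATION * temp_overhead

disks_per_node, disk_tb = 12, 4              # assumed JBOD layout: 12 x 4 TB
node_capacity_tb = disks_per_node * disk_tb  # usable TB per worker node
worker_nodes = math.ceil(raw_capacity_tb / node_capacity_tb)

print(raw_capacity_tb, worker_nodes)  # 2737.5 58
```

The same shape of calculation extends to memory and CPU: estimate peak concurrent containers from the workload's execution frequency, then divide by per-node capacity.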
4: Hadoop Cluster Installation and Administration (25%)
- Given a scenario, identify how the cluster handles disk and machine failures.
- Analyze logging configurations and logging configuration file formats.
- Understand the fundamentals of Hadoop metrics and cluster health monitoring.
- Identify the function and purpose of available tools for cluster monitoring.
- Install all ecosystem components in CDH 5, including but not limited to: Impala, Flume, Oozie, Hue, Cloudera Manager, Sqoop, Hive, and Pig.
- Identify the function and purpose of available tools for managing the Apache Hadoop file system.
5: Resource Management (10%)
- Understand the overarching design goals of each Hadoop scheduler.
- Given a scenario, determine how the FIFO Scheduler allocates cluster resources.
- Given a scenario, determine how the Fair Scheduler allocates cluster resources under YARN.
- Given a scenario, determine how the Capacity Scheduler allocates cluster resources.
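For the Fair Scheduler case, per-queue shares are typically declared in an allocation file. The fragment below shows the file's general shape; the queue names, weight, and minimum resources are illustrative assumptions, not course-prescribed values:

```xml
<!-- fair-scheduler.xml: example YARN Fair Scheduler allocation file.
     Queue names and figures here are illustrative only. -->
<allocations>
  <queue name="production">
    <weight>3.0</weight>
    <minResources>10000 mb,10 vcores</minResources>
  </queue>
  <queue name="adhoc">
    <weight>1.0</weight>
  </queue>
</allocations>
```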
6: Monitoring and Logging (15%)
- Understand the functions and features of Hadoop's metric collection capabilities.
- Analyze the NameNode and JobTracker Web UIs.
- Understand how to monitor cluster daemons.
- Identify and monitor CPU usage on master nodes.
- Describe how to monitor swap and memory allocation across all nodes.
- Identify methods to view and manage Hadoop's log files.
- Interpret log file contents.
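Hadoop daemon logging is driven by log4j, so managing log files largely means editing `log4j.properties`. The fragment below sketches a common adjustment, capping the rolling file appender so logs cannot fill a disk; the specific sizes are assumptions:

```
# log4j.properties fragment (illustrative values).
hadoop.root.logger=INFO,RFA
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=20
```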
Requirements
- Fundamental Linux administration skills
- Basic programming competencies
35 Hours
Testimonials (3)
I genuinely enjoyed the many hands-on sessions.
Jacek Pieczatka
Course - Administrator Training for Apache Hadoop
I genuinely appreciated the trainer's deep expertise.
Grzegorz Gorski
Course - Administrator Training for Apache Hadoop
I mostly liked the trainer giving real-life examples.