Overview


This 4-day training course is designed for developers who need to build applications that analyze Big Data stored in Apache Hadoop using Pig and Hive. Topics include Hadoop, YARN, HDFS, MapReduce, data ingestion, workflow definition, using Pig and Hive to perform data analytics on Big Data, and an introduction to Spark Core and Spark SQL.

Duration


4 days

Format


50% Lecture/Discussion
50% Hands-on Labs

Who the course is for


Software developers who need to understand and develop applications for Hadoop.

Prerequisites


Students should be familiar with programming principles and have experience in software development. SQL knowledge is also helpful. No prior Hadoop knowledge is required.

What you will learn


At the completion of the course, students will be able to:


↬ Describe Hadoop, YARN and use cases for Hadoop
↬ Describe Hadoop ecosystem tools and frameworks
↬ Describe the HDFS architecture
↬ Use the Hadoop client to input data into HDFS
↬ Transfer data between Hadoop and a relational database
↬ Explain YARN and MapReduce architectures
↬ Run a MapReduce job on YARN
↬ Use Pig to explore and transform data in HDFS
↬ Understand how Hive tables are defined and implemented
↬ Use Hive to explore and analyze data sets
↬ Use the new Hive windowing functions
↬ Use Hive to run SQL-like queries to perform data analysis
↬ Use Hive to join datasets using a variety of techniques
↬ Create and populate a Hive table that uses ORC file formats
↬ Explain and use the various Hive file formats
↬ Write efficient Hive queries
↬ Perform data analytics using the DataFu Pig library
↬ Explain the uses and purpose of HCatalog
↬ Use HCatalog with Pig and Hive
↬ Define and schedule an Oozie workflow
↬ Present the Spark ecosystem and high-level architecture
↬ Perform data analysis with Spark's Resilient Distributed Dataset API
↬ Explore Spark SQL and the DataFrame API
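One of the objectives above covers Hive's windowing functions, such as `SUM(value) OVER (PARTITION BY key ORDER BY ...)`. The following pure-Python sketch (illustrative only, not course material; the function and field names are hypothetical) shows the kind of per-partition running total such a query computes:

```python
from collections import defaultdict

def running_totals(rows):
    """For each (partition_key, value) row, emit the row plus a
    cumulative sum within its partition, mirroring what a Hive
    windowed SUM(...) OVER (PARTITION BY ...) would produce."""
    totals = defaultdict(int)  # per-partition accumulator
    out = []
    for key, value in rows:
        totals[key] += value
        out.append((key, value, totals[key]))
    return out
```

For example, `running_totals([("a", 1), ("a", 2), ("b", 4)])` yields one cumulative sum per row within each key's partition.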

Course Outline


Hands-on Labs


↬ Use HDFS commands to add/remove files and folders
↬ Use advanced Hive features: windowing, views, ORC files
↬ Use Sqoop to transfer data between HDFS and an RDBMS
↬ Use Hive analytics functions
↬ Run MapReduce and YARN application jobs
↬ Write a custom reducer in Python
↬ Explore, transform, split and join datasets using Pig
↬ Analyze clickstream data and compute quantiles with DataFu
↬ Use Pig to transform and export a dataset for use with Hive
↬ Use Hive to compute ngrams on Avro-formatted files
↬ Use HCatLoader and HCatStorer
↬ Define an Oozie workflow
↬ Use Hive to discover useful information in a dataset
↬ Use Spark Core to read files and perform data analysis
↬ Describe how Hive queries get executed as MapReduce jobs
↬ Create and join DataFrames with Spark SQL
↬ Perform a join of two datasets with Hive
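One lab above involves writing a custom reducer in Python. As a purely illustrative sketch (not part of the course materials), a Hadoop Streaming reducer reads tab-separated `key\tcount` lines from stdin, already sorted by key, and sums the counts per key:

```python
import sys
from itertools import groupby

def reduce_counts(lines):
    """Sum counts per key from sorted 'key\tcount' lines,
    as Hadoop Streaming delivers them to a reducer."""
    parsed = (line.rstrip("\n").split("\t", 1) for line in lines)
    return [(key, sum(int(count) for _, count in group))
            for key, group in groupby(parsed, key=lambda kv: kv[0])]

if __name__ == "__main__":
    # Hadoop Streaming pipes sorted mapper output into stdin.
    for key, total in reduce_counts(sys.stdin):
        print(f"{key}\t{total}")
```

Because input arrives grouped by key, a single `itertools.groupby` pass suffices; no dictionary of all keys is needed.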

Training Details

Date

May 8th - 11th

Location

Petaling Jaya, Malaysia

Time

9:00am - 5:00pm (MYT)

Contact Us

T: (+603) 92856711
E: [email protected]

HDP Developer: Apache Pig & Hive