Big Data Hadoop Certification Training
Intellipaat
Course Summary
Master Big Data Hadoop and Spark to get ready for the Cloudera CCA Spark and Hadoop Developer Certification (CCA175), and master Hadoop administration with 12 real-time, industry-oriented case-study projects.
Course Description
About Big Data Hadoop Certification Training Course
This comprehensive Hadoop Big Data training course is designed by industry experts, considering current industry job requirements, to provide in-depth learning on Big Data and Hadoop modules. It is an industry-recognized Big Data certification training course that combines the training courses in Hadoop development, Hadoop administration, Hadoop testing, and analytics. This Cloudera Hadoop training will prepare you to clear the Big Data certification.
What will you learn in this Big Data Hadoop online training course?
- Master the fundamentals of Hadoop 2.7 and YARN and write applications using them
- Set up pseudo-node and multi-node clusters on Amazon EC2
- Master HDFS, MapReduce, Hive, Pig, Oozie, Sqoop, Flume, ZooKeeper, and HBase
- Learn Spark, Spark SQL, Spark Streaming, DataFrames, RDDs, GraphX, and MLlib by writing Spark applications
- Master Hadoop administration activities such as cluster management, monitoring, administration, and troubleshooting
- Configure ETL tools such as Pentaho/Talend to work with MapReduce, Hive, Pig, etc.
- Gain a detailed understanding of Big Data analytics and configure Kerberos on a Hadoop cluster
- Test Hadoop applications using MRUnit and other automation tools
- Work with Avro data formats
- Practice real-life projects using Hadoop and Apache Spark
- Be equipped to clear the Big Data Hadoop certification
Who should take this Big Data Hadoop Online Training Course?
- Programming Developers and System Administrators
- Experienced working professionals and project managers
- Big Data Hadoop developers eager to learn other verticals such as testing, analytics, and administration
- Mainframe Professionals, Architects & Testing Professionals
- Business Intelligence, Data warehousing and Analytics Professionals
- Graduates and undergraduates eager to learn the latest Big Data technology can take this Big Data Hadoop certification online training
What are the prerequisites for taking this Hadoop Certification Training?
There are no prerequisites for taking this Big Data training and mastering Hadoop, but basic knowledge of UNIX, SQL, and Java would be helpful. At Intellipaat, we provide complimentary UNIX and Java courses with our Big Data certification training to brush up the required skills, so that you are set on your Hadoop learning path.
Why should you go for Big Data Hadoop online training?
- Global Hadoop Market to Reach $84.6 Billion by 2021 – Allied Market Research
- Shortage of 1.4-1.9 million Hadoop data analysts in the US alone by 2018 – McKinsey
- Hadoop Administrator in the US can get a salary of $123,000 – indeed.com
| Hadoop Courses | Developer | Admin | Architect |
|---|---|---|---|
| Proficiency | MapReduce, Spark, HBase | Cluster scheduling, monitoring, provisioning | Includes all components |
| Audience | Analytics, BI, ETL personnel, coders | Mainframe, QA personnel | Includes the audience of both |
| Average Salary | $100,000 | $123,000 | $172,000 |

What Hadoop projects will you be working on?
Project 1 – Working with MapReduce, Hive, and Sqoop
Topics: As part of this Big Data Hadoop certification training, you will work on a project involving Hadoop components such as MapReduce, Apache Hive, and Apache Sqoop. You will use Sqoop to import data from a relational database management system such as MySQL into HDFS, deploy Hive for summarizing, querying, and analyzing data, and convert SQL queries to HiveQL to run MapReduce on the transferred data. You will gain considerable proficiency in Hive and Sqoop after completing this project.

Project 2 – Work on MovieLens data to find the top records
Data: MovieLens dataset
Topics: In this project you will work exclusively on rating data collected through MovieLens. The project involves the following components (a Spark-based sketch of the computation follows this list):
- Writing a MapReduce program to find the top 10 movies from the data file
- Deploying Apache Pig to create the top 10 movies list by loading the data
- Working with Apache Hive to create the top 10 movies list by loading the data
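For illustration only, here is a minimal sketch of the same top-10 computation written as a Spark application in Scala (the project itself uses MapReduce, Pig, and Hive; the HDFS path and the `::`-separated `ratings.dat` layout are assumptions based on the public MovieLens dump):

```scala
import org.apache.spark.sql.SparkSession

object TopMovies {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("TopMovies").getOrCreate()
    val sc = spark.sparkContext

    // MovieLens ratings are lines of the form userId::movieId::rating::timestamp
    // (path and layout are assumptions based on the public MovieLens dataset)
    val ratings = sc.textFile("hdfs:///data/movielens/ratings.dat")
      .map(_.split("::"))
      .map(fields => (fields(1), 1))   // emit (movieId, 1) for every rating

    // Count ratings per movie and keep the 10 most-rated titles
    val top10 = ratings.reduceByKey(_ + _)
      .sortBy({ case (_, count) => count }, ascending = false)
      .take(10)

    top10.foreach { case (movieId, count) => println(s"$movieId\t$count") }
    spark.stop()
  }
}
```

The same ranking logic maps directly onto the MapReduce, Pig, and Hive versions you build in the project; only the execution engine changes.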
Project 3 – Hadoop YARN Project – End-to-End PoC
Topics: In this Big Data project you will work on a live Hadoop YARN project. YARN is the part of the Hadoop 2.0 ecosystem that decouples Hadoop from MapReduce, enabling more competitive processing engines and a wider array of applications. You will work with the YARN central Resource Manager. The salient features of this project include:
- Importing movie data
- Appending the data
- Using Sqoop commands to bring the data into HDFS
- End to End flow of transaction data
- Processing the movie data using a MapReduce program
Project 4 – Partitioning Tables in Hive
Topics: This project involves Hive table data partitioning. Choosing the right partitioning scheme helps read the data, deploy it on HDFS, and run MapReduce jobs much faster. Hive lets you partition data in several ways (see the sketch after this list):
- Manual Partitioning
- Dynamic Partitioning
- Bucketing
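As a minimal sketch, the three approaches might look like the following HiveQL, issued here from Scala through a Hive-enabled SparkSession; the `sales`/`staging_sales` tables and their columns are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

object HivePartitioningDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("HivePartitioningDemo")
      .enableHiveSupport()   // route the HiveQL below to the Hive metastore
      .getOrCreate()

    // Manual (static) partitioning: the partition value is fixed in the statement
    spark.sql("""CREATE TABLE IF NOT EXISTS sales (id INT, amount DOUBLE)
                 PARTITIONED BY (sale_year INT)""")
    spark.sql("""INSERT INTO sales PARTITION (sale_year = 2023)
                 SELECT id, amount FROM staging_sales WHERE year = 2023""")

    // Dynamic partitioning: Hive derives the partition value from the data itself
    spark.sql("SET hive.exec.dynamic.partition=true")
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    spark.sql("""INSERT INTO sales PARTITION (sale_year)
                 SELECT id, amount, year AS sale_year FROM staging_sales""")

    // Bucketing: hash the rows into a fixed number of files per partition
    spark.sql("""CREATE TABLE IF NOT EXISTS sales_bucketed (id INT, amount DOUBLE)
                 CLUSTERED BY (id) INTO 8 BUCKETS""")

    spark.stop()
  }
}
```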
Project 5 – Connecting Pentaho with the Hadoop Ecosystem
Topics: This project lets you connect Pentaho with the Hadoop ecosystem. Pentaho works well with HDFS, HBase, Oozie, and ZooKeeper. You will connect the Hadoop cluster with Pentaho Data Integration, Analytics, Pentaho Server, and Report Designer. Components of this project include:
- Clear, hands-on working knowledge of ETL and Business Intelligence
- Configuring Pentaho to work with Hadoop Distribution
- Loading, transforming, and extracting data into the Hadoop cluster
Project 6 – Multi-Node Cluster Setup
Topics: This project gives you the opportunity to work on a real-world Hadoop multi-node cluster setup in a distributed environment. The major components of this project involve:
- Setting up a Hadoop multi-node cluster on Amazon EC2 using 4 nodes
- Deploying and running MapReduce jobs on the Hadoop cluster
Project 7 – Hadoop Testing Using MRUnit
Topics: In this project you will gain proficiency in testing Hadoop MapReduce code with MRUnit. You will learn about real-world scenarios of deploying MRUnit, Mockito, and PowerMock. Important aspects of this project include (see the sketch after this list):
- Writing JUnit tests using MRUnit for MapReduce applications
- Mocking static methods using PowerMock and Mockito
- Using MapReduceDriver to test a map and reduce pair together
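A minimal sketch of such a JUnit test, written in Scala against the MRUnit `MapDriver` API; the `TokenizerMapper` word-count mapper and its inputs are illustrative:

```scala
import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.Mapper
import org.apache.hadoop.mrunit.mapreduce.MapDriver
import org.junit.Test

// A hypothetical word-count mapper under test
class TokenizerMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one = new IntWritable(1)
  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit =
    value.toString.split("\\s+").foreach(w => context.write(new Text(w), one))
}

class TokenizerMapperTest {
  @Test
  def mapperEmitsOnePerToken(): Unit = {
    // MRUnit feeds one record to the mapper and verifies the exact output pairs
    MapDriver.newMapDriver(new TokenizerMapper)
      .withInput(new LongWritable(1), new Text("big data"))
      .withOutput(new Text("big"), new IntWritable(1))
      .withOutput(new Text("data"), new IntWritable(1))
      .runTest()
  }
}
```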
Project 8 – Hadoop Weblog Analytics
Data: Weblogs
Topics: This project involves making sense of web log data in order to derive valuable insights from it. You will load the server data onto a Hadoop cluster using various techniques. The modules of this project include (a sketch follows the list):
- Aggregation of log data
- Processing of the data and generating analytics
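A minimal Spark-in-Scala sketch of the aggregation step, assuming Apache-style access logs whose first whitespace-separated field is the client IP (the HDFS path is a placeholder):

```scala
import org.apache.spark.sql.SparkSession

object WeblogCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("WeblogCounts").getOrCreate()

    // Apache-style log lines start with: client-IP ident user [timestamp] "request" status ...
    val logs = spark.sparkContext.textFile("hdfs:///data/weblogs/access.log")

    // Aggregate hits per client IP and rank the heaviest clients first
    val hitsPerIp = logs
      .map(line => (line.split(" ")(0), 1))
      .reduceByKey(_ + _)
      .sortBy({ case (_, hits) => hits }, ascending = false)

    hitsPerIp.take(20).foreach { case (ip, hits) => println(s"$ip\t$hits") }
    spark.stop()
  }
}
```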
Project 9 – Hadoop Maintenance
Topics: This project involves maintaining and managing the Hadoop cluster. You will work on a number of important tasks:
- Administration of the distributed file system
- Checking the file system
- Working with name node directory structure
- Audit logging, data node block scanner, balancer
- Learning about the properties of safe mode
- Entering and exiting safe mode
- HDFS federation and high availability
- Failover, fencing, DistCp, and Hadoop file formats
Apache Spark Projects
Project 1 – Movie Recommendation
Topics: This is a hands-on Apache Spark project for the real-world application of movie recommendation. It helps you gain essential knowledge of Spark MLlib, Spark's machine learning library: you will learn how to build collaborative filtering, regression, clustering, and dimensionality-reduction models using Spark MLlib. Upon finishing the project you will have first-hand experience of Apache Spark streaming data analysis, sampling, testing, and statistics, among other vital skills. (A minimal sketch appears after these project descriptions.)

Project 2 – Twitter API Integration for Tweet Analysis
Topics: This is a hands-on project that uses the Twitter API to analyze tweets. You will integrate the Twitter API and program in scripting languages such as Ruby, Python, and PHP to develop the essential server-side code. Finally, you will read the results of various operations by filtering, parsing, and aggregating the data according to the tweet-analysis requirements.

Project 3 – Data Exploration Using Spark SQL – Wikipedia Dataset
Topics: In this project you will use Spark SQL to analyze Wikipedia data. You will gain hands-on experience integrating Spark SQL for applications such as batch analysis, machine learning, visualization and processing of data, and ETL processes, along with real-time analysis of data.
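As a taste of Project 1 above, here is a minimal collaborative-filtering sketch using the ALS recommender from Spark MLlib; the ratings path and the `userId`/`movieId`/`rating` column names are assumptions:

```scala
import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.sql.SparkSession

object MovieRecommender {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("MovieRecommender").getOrCreate()

    // Expected columns: userId, movieId, rating (schema is an assumption)
    val ratings = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/movielens/ratings.csv")

    // Alternating Least Squares: collaborative filtering in Spark MLlib
    val als = new ALS()
      .setMaxIter(10)
      .setRegParam(0.1)
      .setUserCol("userId")
      .setItemCol("movieId")
      .setRatingCol("rating")

    val model = als.fit(ratings)

    // Produce the top-5 movie recommendations for every user
    model.recommendForAllUsers(5).show(truncate = false)

    spark.stop()
  }
}
```

In practice you would split the ratings into training and test sets and evaluate the model (for example with RMSE) before serving recommendations.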
Course Syllabus
Big Data Hadoop Course Content
Hadoop Installation & Setup
Hadoop 2.x cluster architecture, federation and high availability, a typical production cluster setup, Hadoop cluster modes, common Hadoop shell commands, Hadoop 2.x configuration files, Cloudera single-node cluster, Hive, Pig, Sqoop, Flume, Scala, and Spark.

Introduction to Big Data Hadoop, Understanding HDFS & MapReduce
Introducing Big Data and Hadoop, what Big Data is and where Hadoop fits in, the two important Hadoop ecosystem components, namely MapReduce and HDFS, in-depth Hadoop Distributed File System – replications, block size, Secondary NameNode, high availability, and in-depth YARN – Resource Manager and Node Manager.
Hands-on Exercise – Working with HDFS, replicating the data, determining the block size, and familiarizing yourself with the NameNode and DataNode.
Deep Dive in MapReduce
Detailed understanding of the working of MapReduce, the mapping and reducing process, the working of the Driver, Combiners, Partitioners, Input Formats, Output Formats, and Shuffle and Sort.
Hands-on Exercise – The detailed methodology for writing the Word Count program in MapReduce, writing a custom partitioner, MapReduce with a Combiner, Local Job Runner mode, unit tests, ToolRunner, Map-Side Join, Reduce-Side Join, using Counters, and joining two datasets using Map-Side and Reduce-Side Joins.

Introduction to Hive
Introducing Hadoop Hive, the detailed architecture of Hive, comparing Hive with Pig and RDBMS, working with Hive Query Language, creation of databases and tables, Group By and other clauses, the various types of Hive tables, HCatalog, storing Hive results, and Hive partitioning and buckets.
Hands-on Exercise – Creating a Hive database, dropping and changing databases, creating a Hive table, loading data, dropping and altering the table, writing Hive queries to pull data using filter conditions and Group By clauses, and partitioning Hive tables.

Advanced Hive & Impala
Indexing in Hive, the Map-Side Join in Hive, working with complex data types, Hive user-defined functions, introduction to Impala, comparing Hive with Impala, and the detailed architecture of Impala.
Hands-on Exercise – Working with Hive queries, writing indexes, joining tables, deploying external tables and sequence tables, and storing data in another table.

Introduction to Pig
Apache Pig introduction, its various features, the various data types and schema in Pig, the available functions in Pig, and Pig bags, tuples, and fields.
Hands-on Exercise – Working with Pig in MapReduce and local mode, loading data, limiting data to 4 rows, storing data into a file, and working with Group By, Filter By, Distinct, Cross, and Split in Pig.

Flume, Sqoop & HBase
Introduction to Apache Sqoop, Sqoop overview, basic imports and exports, how to improve Sqoop performance, the limitations of Sqoop, introduction to Flume and its architecture, introduction to HBase, and the CAP theorem.
Hands-on Exercise – Working with Flume to generate a sequence number and consume it, using a Flume agent to consume Twitter data, using Avro to create a Hive table, Avro with Pig, creating a table in HBase, and deploying Disable, Scan, and Enable Table.

Writing Spark Applications Using Scala
Using Scala for writing Apache Spark applications, a detailed study of Scala, the need for Scala, the concept of object-oriented programming, executing Scala code, Scala class concepts such as getters, setters, constructors, abstract classes, extending objects, and overriding methods, Java and Scala interoperability, the concept of functional programming and anonymous functions, the Bobsrockets package, and comparing mutable and immutable collections.
Hands-on Exercise – Writing a Spark application using Scala and understanding the robustness of Scala for Spark real-time analytics operations.

Spark Framework
Detailed Apache Spark, its various features, comparing Spark with Hadoop, the various Spark components, combining HDFS with Spark, Scalding, introduction to Scala, and the importance of Scala and RDDs.
Hands-on Exercise – The Resilient Distributed Dataset (RDD) in Spark and how it helps to speed up Big Data processing.
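To give a flavor of the Scala concepts listed above (constructors, getters and setters, anonymous functions, immutable collections), here is a small self-contained sketch; all names are illustrative:

```scala
// A simple Scala class: the constructor parameter becomes a field,
// and a custom getter/setter pair wraps a private variable
class Rocket(val name: String) {
  private var _fuel: Int = 0
  def fuel: Int = _fuel                 // getter
  def fuel_=(amount: Int): Unit =       // setter, invoked by `r.fuel = ...`
    _fuel = math.max(0, amount)
}

object ScalaBasics {
  def main(args: Array[String]): Unit = {
    val r = new Rocket("Bobsrockets-1")
    r.fuel = 100                        // calls fuel_=

    // Functional style: an anonymous function over an immutable collection
    val payloads = List(120, 300, 450)
    val doubled = payloads.map(p => p * 2)

    println(s"${r.name} fuel=${r.fuel} doubled=$doubled")
  }
}
```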
RDD in Spark
The RDD operations in Spark, Spark transformations, actions, data loading, comparing RDDs with MapReduce, and key-value pairs.
Hands-on Exercise – How to deploy an RDD with HDFS, using an in-memory dataset, using a file for an RDD, defining the base RDD from an external file, deploying an RDD via transformations, using the Map and Reduce functions, and working on word count and counting log severity.
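A minimal word-count sketch over an RDD, matching the hands-on exercise above; the input and output paths are placeholders:

```scala
import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("WordCount").getOrCreate()
    val sc = spark.sparkContext

    // Base RDD from an external file; transformations are lazy, so nothing
    // runs until an action (saveAsTextFile below) is triggered
    val lines = sc.textFile("hdfs:///data/input.txt")
    val counts = lines
      .flatMap(_.split("\\s+"))   // transformation: line -> words
      .map(word => (word, 1))     // transformation: word -> (word, 1)
      .reduceByKey(_ + _)         // transformation: sum the 1s per word

    counts.saveAsTextFile("hdfs:///data/wordcount-output")  // action
    spark.stop()
  }
}
```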
Data Frames and Spark SQL
Detailed Spark SQL, the significance of SQL in Spark for working with structured data processing, Spark SQL JSON support, working with XML data and Parquet files, creating a HiveContext, writing a Data Frame to Hive, reading JDBC files, the importance of Data Frames in Spark, creating Data Frames, manual schema inference, working with CSV files, reading JDBC tables, converting from Data Frame to JDBC, user-defined functions in Spark SQL, shared variables and accumulators, how to query and transform data in Data Frames, how Data Frames provide the benefits of both Spark RDDs and Spark SQL, and deploying Hive on Spark as the execution engine.
Hands-on Exercise – Data querying and transformation using Data Frames, and finding out the benefits of Data Frames over Spark SQL and Spark RDDs.

Machine Learning Using Spark (MLlib)
Different algorithms, the concept of the iterative algorithm in Spark, analyzing with Spark graph processing, introduction to K-Means and machine learning, various variables in Spark such as shared variables and broadcast variables, and learning about accumulators.
Hands-on Exercise – Writing Spark code using MLlib.
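A minimal sketch of the Data Frame workflow described above: reading JSON with automatic schema inference, registering a temporary view, and querying it with Spark SQL (the file path and fields are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object DataFrameDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("DataFrameDemo").getOrCreate()

    // Spark SQL's JSON support infers the schema automatically
    val people = spark.read.json("hdfs:///data/people.json")
    people.printSchema()

    // Register the Data Frame as a temporary view and query it with SQL
    people.createOrReplaceTempView("people")
    spark.sql("SELECT name, age FROM people WHERE age >= 18").show()

    // The same query through the Data Frame API instead of SQL text
    people.filter(people("age") >= 18).select("name", "age").show()

    spark.stop()
  }
}
```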
Spark Streaming
Introduction to Spark Streaming, the architecture of Spark Streaming, working with the Spark Streaming program, processing data using Spark Streaming, requesting count and DStream, multi-batch and sliding-window operations, and working with advanced data sources.
Hands-on Exercise – Deploying Spark Streaming for data in motion and checking whether the output is as per the requirement.
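A minimal DStream sketch with a sliding-window word count, assuming a socket source on a placeholder host and port:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StreamingWordCount")
    val ssc = new StreamingContext(conf, Seconds(5))  // 5-second micro-batches
    ssc.checkpoint("hdfs:///checkpoints/streaming")   // checkpoint dir (placeholder path)

    // Text stream from a socket source (host and port are placeholders)
    val lines = ssc.socketTextStream("localhost", 9999)

    // Sliding window: count words over the last 30 s, recomputed every 10 s
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))

    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```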
Hadoop Administration – Multi-Node Cluster Setup Using Amazon EC2
Creating a four-node Hadoop cluster setup, running MapReduce jobs on the Hadoop cluster, successfully running MapReduce code, and working with the Cloudera Manager setup.
Hands-on Exercise – Building a multi-node Hadoop cluster using Amazon EC2 instances and working with Cloudera Manager.
Hadoop Administration – Cluster Configuration
Overview of Hadoop configuration, the importance of Hadoop configuration files, the various configuration parameters and values, HDFS and MapReduce parameters, setting up the Hadoop environment, the Include and Exclude configuration files, administration and maintenance of the NameNode, DataNode directory structures and files, the file system image, and the edit log.
Hands-on Exercise – Performance tuning of a MapReduce program.
Hadoop Administration – Maintenance, Monitoring and Troubleshooting
Introduction to the checkpoint procedure, NameNode failure and how to ensure the recovery procedure, Safe Mode, metadata and data backup, various potential problems and solutions, what to look for, and how to add and remove nodes.
Hands-on Exercise – How to go about ensuring MapReduce file system recovery in various scenarios, JMX monitoring of the Hadoop cluster, how to use logs and stack traces for monitoring and troubleshooting, using the Job Scheduler for scheduling jobs in the same cluster, getting the MapReduce job submission flow, FIFO scheduling, and getting to know the Fair Scheduler and its configuration.
Securing the Hadoop Cluster with Kerberos and Other Advanced Topics
Advanced Hadoop administration functions, using the Quorum Journal Manager, configuring Hadoop federation and security, fundamentals of Hadoop platform security, working with Kerberos authentication, and configuring Kerberos on the Hadoop cluster.
Hands-on Exercise – The detailed procedure for configuring Kerberos authentication on the Hadoop cluster and checking the results of the configuration.
ETL Connectivity with the Hadoop Ecosystem
How ETL tools work in the Big Data industry, introduction to ETL and data warehousing, working with prominent use cases of Big Data in the ETL industry, and an end-to-end ETL PoC showing Big Data integration with an ETL tool.
Hands-on Exercise – Connecting to HDFS from an ETL tool and moving data from the local system to HDFS, moving data from a DBMS to HDFS, working with Hive from the ETL tool, and creating a MapReduce job in the ETL tool.
IBM Project Solution Discussion and Cloudera Certification Tips & Tricks
Working towards the solution of the Hadoop IBM project, its problem statements and the possible solution outcomes, preparing for the Cloudera certifications, points to focus on for scoring the highest marks, and tips for cracking Hadoop interview questions.
Hands-on Exercise – The IBM project of a real-world, high-value Big Data Hadoop application, and arriving at the right solution based on the criteria set by the IBM team.
The following topics are available only in self-paced mode.

Hadoop Application Testing
Why testing is important, unit testing, integration testing, performance testing, diagnostics, nightly QA tests, benchmark and end-to-end tests, functional testing, release certification testing, security testing, scalability testing, commissioning and decommissioning of data node testing, reliability testing, and release testing.
Roles and Responsibilities of a Hadoop Testing Professional
Understanding the requirement, preparing the testing estimation, test cases, test data, test bed creation, test execution, defect reporting, defect retesting, daily status report delivery, test completion, ETL testing at every stage (HDFS, Hive, HBase) while loading the input (logs, files, records, etc.) using Sqoop/Flume, which includes but is not limited to data verification and reconciliation, user authorization and authentication testing (groups, users, privileges, etc.), reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports, and validating new features and issues in core Hadoop.
MRUnit Framework for Testing MapReduce Programs
Reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports, and working with the MRUnit framework for testing MapReduce programs.
Unit Testing
Automation testing using Oozie, and data validation using the Query Surge tool.
Test Execution
Test plan for HDFS upgrade, test automation, and results.
Test Plan Strategy and Writing Test Cases for Testing Hadoop Applications
How to test the installation and configuration.