Hadoop Architect, Data Science and Apache Spark, Scala
Intellipaat
Course Summary
Our Hadoop, Data Science, Spark and Scala certification master's program lets you gain proficiency in Data Science, Hadoop and high-speed data processing with Apache Spark. You will work on real-life projects covering Hadoop MapReduce, HDFS, HBase, data analytics, statistical methods, visualization, Spark and Scala programming.
Course Description
About Data Science, Hadoop, Spark, Scala Training Course
What you will learn in this training course?
- Hadoop architecture, ecosystem, HDFS & MapReduce
- Integration of MapReduce and HBase
- Learn about the Apache Hadoop Architecture and Framework
- Complex MapReduce programs in MRv1 and MRv2
- Deploying Hadoop clusters for analysis, segmentation and prediction
- Integration of R with Hadoop
- Advanced Hadoop modules like YARN, Flume, Hive, Oozie, ZooKeeper and Hue
- Hadoop single-node and multi-node cluster setup on Amazon EC2
- Spark cluster implementation and application development using Java, Python and Scala
- Data Science basics and its various concepts
- Introduction to machine learning algorithms
- Impala installation steps
Who should take this IBM certified combo training course?
- Big Data Hadoop professionals, Data Scientists, Data Analysts and engineers
- Software developers, ETL and SQL developers, mainframe architects and administrators
What are the prerequisites for this training course?
There are no prerequisites for taking this training course; a basic knowledge of Java helps.
Why should you take this IBM certified training course?
- Data Scientist is the best job of the 21st century – Harvard Business Review
- Global Big Data market to reach $122B in revenue by 2025 – Frost & Sullivan
- The US alone could face a shortage of 1.4 to 1.9 million Big Data analysts by 2018 – McKinsey
Course Syllabus
Big Data Hadoop Course Content
Hadoop Installation & Setup
Hadoop 2.x cluster architecture, federation and high availability, a typical production cluster setup, Hadoop cluster modes, common Hadoop shell commands, Hadoop 2.x configuration files, Cloudera single-node cluster, Hive, Pig, Sqoop, Flume, Scala and Spark.
Introduction to Big Data Hadoop, Understanding HDFS & MapReduce
Introducing Big Data & Hadoop, what Big Data is and where Hadoop fits in, the two important Hadoop ecosystem components, namely MapReduce and HDFS, in-depth Hadoop Distributed File System – replication, block size, Secondary NameNode, high availability, in-depth YARN – ResourceManager, NodeManager.
Hands-on Exercise – Working with HDFS, replicating the data, determining the block size, familiarizing with the NameNode and DataNode.
Deep Dive in MapReduce
Detailed understanding of the working of MapReduce, the mapping and reducing process, the working of Driver, Combiners, Partitioners, Input Formats, Output Formats, Shuffle and Sort.
Hands-on Exercise – The detailed methodology for writing the Word Count program in MapReduce, writing a custom Partitioner, MapReduce with Combiner, Local Job Runner mode, unit tests, ToolRunner, Map-side join, Reduce-side join, using Counters, joining two datasets using Map-side join & Reduce-side join.
Introduction to Hive
Introducing Hadoop Hive, detailed architecture of Hive, comparing Hive with Pig and RDBMS, working with Hive Query Language, creation of databases, tables, Group By and other clauses, the various types of Hive tables, HCatalog, storing the Hive results, Hive partitioning and buckets.
Hands-on Exercise – Creating a Hive database, how to drop a database, changing the database, creating a Hive table, loading data, dropping and altering the table, writing Hive queries to pull data using filter conditions and Group By clauses, partitioning Hive tables.
Advanced Hive & Impala
Indexing in Hive, the Map-side join in Hive, working with complex data types, Hive user-defined functions, introduction to Impala, comparing Hive with Impala, the detailed architecture of Impala.
Hands-on Exercise – Working with Hive queries, writing indexes, joining tables, deploying external tables, sequence tables and storing data in another table.
Introduction to Pig
Apache Pig introduction, its various features, the various data types and schema in Pig, the available functions in Pig, bags, tuples and fields.
Hands-on Exercise – Working with Pig in MapReduce and local mode, loading data, limiting data to 4 rows, storing data into a file, working with Group By, Filter By, Distinct, Cross and Split in Pig.
Flume, Sqoop & HBase
Introduction to Apache Sqoop, Sqoop overview, basic imports and exports, how to improve Sqoop performance, the limitations of Sqoop, introduction to Flume and its architecture, introduction to HBase, the CAP theorem.
Hands-on Exercise – Working with Flume to generate a sequence number and consume it, using a Flume agent to consume Twitter data, using AVRO to create a Hive table, AVRO with Pig, creating a table in HBase, deploying Disable, Scan and Enable Table.
Writing Spark Applications Using Scala
Using Scala for writing Apache Spark applications, detailed study of Scala, the need for Scala, the concept of object-oriented programming, executing Scala code, the various classes in Scala like getters, setters, constructors, abstract classes, extending objects, overriding methods, Java and Scala interoperability, the concept of functional programming and anonymous functions, the Bobsrockets package, comparing mutable and immutable collections.
Hands-on Exercise – Writing a Spark application using Scala, understanding the robustness of Scala for Spark real-time analytics operations.
Spark Framework
Apache Spark in detail, its various features, comparing it with Hadoop, the various Spark components, combining HDFS with Spark, Scalding, introduction to Scala, the importance of Scala and RDDs.
Hands-on Exercise – The Resilient Distributed Dataset (RDD) in Spark and how it helps to speed up Big Data processing.
RDD in Spark
The RDD operations in Spark, Spark transformations, actions, data loading, comparing with MapReduce, key-value pairs.
Hands-on Exercise – How to deploy RDDs with HDFS, using an in-memory dataset, using a file for an RDD, how to define the base RDD from an external file, deploying RDDs via transformations, using the map and reduce functions, working on word count and counting log severity.
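The word-count exercise above maps cleanly onto a few RDD transformations. Here is a minimal sketch in Scala, assuming a local Spark installation; the input path and application name are placeholders rather than course material:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Local mode for experimentation; the input path below is hypothetical.
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Load a text file from HDFS (or the local filesystem) as an RDD of lines.
    val lines = sc.textFile("hdfs:///user/demo/input.txt")

    // Classic map/reduce pipeline: split into words, pair each with 1, sum the counts.
    val counts = lines
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.take(10).foreach(println)
    sc.stop()
  }
}
```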
Data Frames and Spark SQL
Spark SQL in detail, the significance of SQL in Spark for working with structured data processing, Spark SQL JSON support, working with XML data and Parquet files, creating a HiveContext, writing a Data Frame to Hive, reading JDBC files, the importance of Data Frames in Spark, creating Data Frames, manual schema inference, working with CSV files, reading JDBC tables, converting a Data Frame to JDBC, user-defined functions in Spark SQL, shared variables and accumulators, how to query and transform data in Data Frames, how Data Frames provide the benefits of both Spark RDD and Spark SQL, deploying Hive on Spark as the execution engine.
Hands-on Exercise – Data querying and transformation using Data Frames, finding out the benefits of Data Frames over Spark SQL and Spark RDD.
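As a companion to the Data Frames and Spark SQL module above, here is a hedged Scala sketch using the Spark 1.x-style SQLContext API that matches the module's HiveContext terminology; the JSON path, column names and table name are illustrative assumptions:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object DataFrameDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("DataFrameDemo").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // Read a JSON file into a DataFrame; the schema is inferred automatically.
    // The path and column names below are placeholders, not course data.
    val people = sqlContext.read.json("hdfs:///user/demo/people.json")

    // Query with the DataFrame API ...
    people.filter(people("age") > 21).select("name", "age").show()

    // ... or register it as a temporary table and use plain SQL.
    people.registerTempTable("people")
    sqlContext.sql("SELECT name, COUNT(*) AS cnt FROM people GROUP BY name").show()

    sc.stop()
  }
}
```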
Machine Learning Using Spark (MLlib)
Different algorithms, the concept of iterative algorithms in Spark, analyzing with Spark graph processing, introduction to K-Means and machine learning, various variables in Spark like shared variables and broadcast variables, learning about accumulators.
Hands-on Exercise – Writing Spark code using MLlib.
Spark Streaming
Introduction to Spark Streaming, the architecture of Spark Streaming, working with the Spark Streaming program, processing data using Spark Streaming, requesting counts and DStreams, multi-batch and sliding-window operations and working with advanced data sources.
Hands-on Exercise – Deploying Spark Streaming for data in motion and checking that the output is as per the requirement.
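The Spark Streaming topics above (DStreams, per-batch counts, sliding windows) can be summarized in a short sketch. This is an illustrative Scala example assuming a socket source on localhost:9999 (for example fed by `nc -lk 9999` while testing); it is not taken from the course material:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    // Micro-batches of 10 seconds; two local threads so the receiver and the
    // processing can run side by side.
    val conf = new SparkConf().setAppName("StreamingWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    val lines = ssc.socketTextStream("localhost", 9999)
    val words = lines.flatMap(_.split("\\s+")).map((_, 1))

    // Per-batch counts.
    words.reduceByKey(_ + _).print()

    // Sliding window: counts over the last 60 seconds, recomputed every 20 seconds.
    words.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(60), Seconds(20)).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```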
Hadoop Administration – Multi-Node Cluster Setup Using Amazon EC2
Creating a four-node Hadoop cluster setup, running MapReduce jobs on the Hadoop cluster, successfully running the MapReduce code, working with the Cloudera Manager setup.
Hands-on Exercise – The method to build a multi-node Hadoop cluster using Amazon EC2 instances, working with Cloudera Manager.
Hadoop Administration – Cluster Configuration
An overview of Hadoop configuration, the importance of Hadoop configuration files, the various configuration parameters and values, the HDFS parameters and MapReduce parameters, setting up the Hadoop environment, the Include and Exclude configuration files, administration and maintenance of the NameNode, DataNode directory structures and files, the file system image and edit log.
Hands-on Exercise – The method to do performance tuning of a MapReduce program.
Hadoop Administration – Maintenance, Monitoring and Troubleshooting
Introduction to the checkpoint procedure, NameNode failure and how to ensure the recovery procedure, Safe Mode, metadata and data backup, the various potential problems and solutions, what to look for, how to add and remove nodes.
Hands-on Exercise – How to ensure MapReduce file system recovery for various scenarios, JMX monitoring of the Hadoop cluster, how to use logs and stack traces for monitoring and troubleshooting, using the Job Scheduler for scheduling jobs in the same cluster, getting the MapReduce job submission flow, the FIFO scheduler, getting to know the Fair Scheduler and its configuration.
Securing the Hadoop Cluster with Kerberos and Other Advanced Topics
Advanced Hadoop administration functions, using the Quorum Journal Manager, configuring Hadoop federation and security, fundamentals of Hadoop platform security, working with Kerberos authentication, configuring Kerberos on the Hadoop cluster.
Hands-on Exercise – Detailed procedure for configuring Kerberos authentication with the Hadoop cluster and checking the results of the configuration.
ETL Connectivity with the Hadoop Ecosystem
How ETL tools work in the Big Data industry, introduction to ETL and data warehousing, working with prominent use cases of Big Data in the ETL industry, an end-to-end ETL PoC showing Big Data integration with an ETL tool.
Hands-on Exercise – Connecting to HDFS from an ETL tool and moving data from the local system to HDFS, moving data from a DBMS to HDFS, working with Hive with an ETL tool, creating a MapReduce job in an ETL tool.
IBM Project Solution Discussion and Cloudera Certification Tips & Tricks
Working towards the solution of the Hadoop IBM project, its problem statements and the possible solution outcomes, preparing for the Cloudera certifications, points to focus on for scoring the highest marks, tips for cracking Hadoop interview questions.
Hands-on Exercise – The IBM project of a real-world, high-value Big Data Hadoop application and getting the right solution based on the criteria set by the IBM team.
The following topics will be available only in self-paced mode.
Hadoop Application Testing
Why testing is important, unit testing, integration testing, performance testing, diagnostics, nightly QA tests, benchmark and end-to-end tests, functional testing, release certification testing, security testing, scalability testing, commissioning and decommissioning of DataNodes testing, reliability testing, release testing.
Roles and Responsibilities of a Hadoop Testing Professional
Understanding the requirements, preparation of the testing estimation, test cases, test data, test bed creation, test execution, defect reporting, defect retesting, daily status report delivery, test completion, ETL testing at every stage (HDFS, Hive, HBase) while loading the input (logs, files, records, etc.) using Sqoop/Flume, which includes but is not limited to data verification, reconciliation, user authorization and authentication testing (groups, users, privileges, etc.), reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports, validating new features and issues in core Hadoop.
MRUnit Framework for Testing MapReduce Programs
Reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports, responsibility for creating a testing framework (MRUnit) for testing MapReduce programs.
Unit Testing
Automation testing using Oozie, data validation using the Query Surge tool.
Test Execution
Test plan for HDFS upgrade, test automation and results.
Test Plan Strategy and Writing Test Cases for Testing a Hadoop Application
How to test the installation and configuration.
Data Science Course Content
Introduction to Data Science and Statistical Analytics
Introduction to Data Science, use cases, the need for business analytics, the Data Science life cycle, different tools available for Data Science.
Introduction to R
Installing R and RStudio, R packages, R operators, if statements and loops (for, while, repeat, break, next), switch case.
Data Exploration, Data Wrangling and R Data Structures
Importing and exporting data from external sources, exploratory data analysis, R data structures (vector, scalar, matrices, array, data frame, list), functions, apply functions.
Data Visualization
Bar graph (simple, grouped, stacked), histogram, pie chart, line chart, box (whisker) plot, scatter plot, correlogram.
Introduction to Statistics
Terminology of statistics, measures of center, measures of spread, probability, normal distribution, binomial distribution, hypothesis testing, chi-square test, ANOVA.
Predictive Modeling – 1 (Linear Regression)
Supervised learning – linear regression, bivariate regression, multiple regression analysis, correlation (positive, negative and neutral), industrial case study, machine learning use cases, machine learning process flow, machine learning categories.
Predictive Modeling – 2 (Logistic Regression)
Logistic regression.
Decision Trees
What classification is and its use cases, what a decision tree is, the algorithm for decision tree induction, creating a perfect decision tree, the confusion matrix.
Random Forest
Random forest, what Naive Bayes is.
Unsupervised Learning
What clustering is and its use cases, K-means clustering, canopy clustering, hierarchical clustering.
Association Analysis and Recommendation Engines
Market Basket Analysis (MBA), association rules, the Apriori algorithm for MBA, introduction to recommendation engines, types of recommendation – user-based and item-based, a recommendation use case.
Sentiment Analysis
Introduction to text mining, introduction to sentiment analysis, setting up an API bridge between R and a Twitter account, extracting tweets from the Twitter account, scoring the tweets.
Time Series
What time series data is, time series variables, different components of time series data, visualizing the data to identify time series components, implementing the ARIMA model for forecasting, exponential smoothing models, identifying different time series scenarios to which different exponential smoothing models can be applied, implementing the respective ETS model for forecasting.
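For the Predictive Modeling – 1 module above, bivariate linear regression reduces to two closed-form estimates: slope = Σ(x - x̄)(y - ȳ) / Σ(x - x̄)² and intercept = ȳ - slope·x̄. The course teaches this in R; the small plain-Scala sketch below, with made-up toy data, is only an illustration of the same arithmetic:

```scala
// Illustrative only: ordinary least squares for one predictor.
object SimpleLinearRegression {
  def fit(xs: Seq[Double], ys: Seq[Double]): (Double, Double) = {
    require(xs.length == ys.length && xs.nonEmpty, "need equally sized, non-empty samples")
    val meanX = xs.sum / xs.length
    val meanY = ys.sum / ys.length
    // slope = sum((x - meanX)(y - meanY)) / sum((x - meanX)^2)
    val slope = xs.zip(ys).map { case (x, y) => (x - meanX) * (y - meanY) }.sum /
      xs.map(x => math.pow(x - meanX, 2)).sum
    val intercept = meanY - slope * meanX
    (slope, intercept)
  }

  def main(args: Array[String]): Unit = {
    // Toy data where y is roughly 2x + 1.
    val x = Seq(1.0, 2.0, 3.0, 4.0, 5.0)
    val y = Seq(3.1, 4.9, 7.2, 9.0, 10.8)
    val (slope, intercept) = fit(x, y)
    println(f"y = $slope%.3f * x + $intercept%.3f") // roughly y = 1.950 * x + 1.150
  }
}
```

Scala Course Content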
Introduction to Scala
Introducing Scala and deployment of Scala for Big Data applications and Apache Spark analytics.
Pattern Matching
The importance of Scala, the concept of REPL (Read-Evaluate-Print Loop), a deep dive into Scala pattern matching, type inference, higher-order functions, currying, traits, the application space and Scala for data analysis.
Executing Scala Code
Learning about the Scala interpreter, static object timer in Scala, testing string equality in Scala, implicit classes in Scala, the concept of currying in Scala, various classes in Scala.
Classes Concept in Scala
Learning about the classes concept, understanding constructor overloading, the various abstract classes, the hierarchy types in Scala, the concept of object equality, the val and var methods in Scala.
Case Classes and Pattern Matching
Understanding sealed traits, wildcard, constructor, tuple, variable and constant patterns.
Concepts of Traits with Examples
Understanding traits in Scala, the advantages of traits, linearization of traits, the Java equivalent and avoiding boilerplate code.
Scala-Java Interoperability
Implementation of traits in Scala and Java, handling extension of multiple traits.
Scala Collections
Introduction to Scala collections, classification of collections, the difference between Iterator and Iterable in Scala, an example of a list sequence in Scala.
Mutable Collections vs. Immutable Collections
The two types of collections in Scala, mutable and immutable collections, understanding lists and arrays in Scala, the list buffer and array buffer, Queue in Scala, the double-ended queue Deque, Stacks, Sets, Maps and Tuples in Scala.
Use Case: Bobsrockets Package
Introduction to Scala packages and imports, selective imports, Scala test classes, introduction to the JUnit test class, the JUnit interface via the JUnit 3 suite for ScalaTest, packaging of Scala applications in the directory structure, examples of Spark Split and Spark Scala.
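Several of the Scala modules above (case classes, pattern matching, higher-order functions and currying, immutable collections) fit into one small self-contained sketch. The Shape example below is purely illustrative and not taken from the course:

```scala
// Case classes give immutable data with structural equality for free,
// and a sealed trait lets the compiler check that a match is exhaustive.
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rectangle(width: Double, height: Double) extends Shape

object PatternMatchingDemo {
  // Pattern matching deconstructs each case class directly in the match.
  def area(shape: Shape): Double = shape match {
    case Circle(r)       => math.Pi * r * r
    case Rectangle(w, h) => w * h
  }

  // A curried higher-order function: applying the factor first yields a reusable function.
  def scale(factor: Double)(value: Double): Double = factor * value

  def main(args: Array[String]): Unit = {
    val shapes = List(Circle(1.0), Rectangle(2.0, 3.0))

    // Immutable collections plus an anonymous function passed to map.
    val areas = shapes.map(s => area(s))
    println(areas) // List(3.141592653589793, 6.0)

    val double = scale(2.0) _ // partially applied, curried function
    println(areas.map(double))
  }
}
```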
Spark Course Content
Introduction to Spark
Introduction to Spark, how Spark overcomes the drawbacks of MapReduce, understanding in-memory MapReduce, Spark on Hadoop YARN, HDFS revision, YARN revision, an overview of Spark and how it is better than Hadoop, deploying Spark without Hadoop.
Spark Basics
Spark installation guide, working with the Spark shell, the concept of Resilient Distributed Datasets (RDDs), learning to do functional programming in Spark, the architecture of Spark.
Working with RDDs in Spark
A deep dive into Spark RDDs, the general RDD operations, a read-only partitioned collection of records, using the concept of RDDs for faster and more efficient data processing.
Aggregating Data with Pair RDDs
Understanding the concept of key-value pairs in RDDs, learning how Spark makes MapReduce operations faster, various operations on RDDs.
Writing and Deploying Spark Applications
Comparing Spark applications with the Spark shell, creating a Spark application using Scala or Java, deploying a Spark application, the web user interface of a Spark application, a real-world example of Spark and configuring Spark.
Parallel Processing
Learning about Spark parallel processing, deploying on a cluster, introduction to Spark partitions, file-based partitioning of RDDs, understanding HDFS and data locality, mastering the technique of parallel operations.
Spark RDD Persistence
Understanding the RDD persistence overview, distributed persistence, RDD lineage.
Basic Spark Streaming
Understanding Spark Streaming, creating a Spark Streaming application, processing a Spark stream, streaming request counts and DStreams.
Advanced Spark Streaming
Introduction to Spark multi-batch operations, state operations, sliding-window operations and advanced data sources.
Common Patterns in Spark Data Processing
Learning about common Spark use cases, the concept of iterative algorithms in Spark, analyzing with Spark graph processing, introduction to K-Means and machine learning.
Improving Spark Performance
Introduction to various variables in Spark like shared variables and broadcast variables, learning about accumulators, common performance issues and troubleshooting performance problems.
Spark SQL and Data Frames
Learning about Spark SQL, the context of SQL in Spark for providing structured data processing, understanding Data Frames in Spark, learning to query and transform data in Data Frames, how Data Frames provide the benefits of both Spark RDD and Spark SQL, deploying Hive on Spark as the execution engine.
Scheduling/Partitioning
Learning about scheduling and partitioning in Spark, scheduling within and around applications, static partitioning, dynamic sharing, fair scheduling, Spark master high availability, standby masters with ZooKeeper, single-node recovery with the local file system, higher-order functions.
Capacity Planning in Spark
Understanding how to design capacity planning in Spark, creation of maps, transformations, the concept of concurrency in Java and Scala.
Log Analysis
Understanding log analysis with Spark, first log analyzers in Spark, working with various buffers like array, compact and protocol buffers.
What Hadoop Projects Will You Be Working On?
Project 1 – Working with MapReduce, Hive and Sqoop
This project involves working with various Hadoop components like MapReduce, Apache Hive and Apache Sqoop. You will work with Sqoop to import data from a relational database management system like MySQL into HDFS, and deploy Hive for summarizing data, querying and analysis.
You will convert SQL queries into HiveQL to deploy MapReduce on the transferred data, and will gain considerable proficiency in Hive and Sqoop after completing this project.
Project 2 – Work on MovieLens Data for Finding Top Records
Data – MovieLens dataset
In this project you will work exclusively on rating data collected through the publicly available MovieLens data sets. The project involves the following important components:
- You will write a MapReduce program in order to find the top 10 movies by working on the data file
- Learn to deploy Apache Pig to create the top 10 movies list by loading the data
- Work with Apache Hive to create the top 10 movies list by loading the data
- Importing of Movie data
- Appending the data
- Using Sqoop commands to bring the data into HDFS
- End to End flow of transaction data
- Processing the movie data using a MapReduce program
- Manual Partitioning
- Dynamic Partitioning
- Bucketing
- Clear hands-on working knowledge of ETL and Business Intelligence
- Configuring Pentaho to work with Hadoop Distribution
- Loading, Transforming and Extracting data into Hadoop cluster
- Running a Hadoop multi-node setup using a 4-node cluster on Amazon EC2
- Deploying MapReduce jobs on the Hadoop cluster
- Hadoop multi-node cluster setup using Amazon EC2 – creating a 4-node cluster setup
- Running MapReduce jobs on the cluster
- Writing JUnit tests using MRUnit for MapReduce applications
- Mocking static methods using PowerMock & Mockito
- MapReduceDriver for testing the map and reduce pair
- Aggregation of log data
- Processing the log data and generating analytics (see the sketch after this list)
- Administration of distributed file system
- Checking the file system
- Working with name node directory structure
- Audit logging, data node block scanner, balancer
- Learning about the properties of safe mode
- Entering and exiting safe mode
- HDFS federation and high availability
- Failover, fencing, DISTCP, Hadoop file format
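As referenced in the log-aggregation items above, here is a minimal sketch of generating analytics from log data, written with Spark in Scala rather than plain MapReduce; the HDFS path and the log format (severity as the second whitespace-separated field) are assumptions made only for illustration:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object LogLevelCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("LogLevelCount").setMaster("local[*]"))

    // Assumes plain-text log lines in HDFS such as "2017-01-01 ERROR something failed";
    // the path and layout are hypothetical.
    val logs = sc.textFile("hdfs:///user/demo/logs/*.log")

    // Count how many lines were logged at each severity level.
    val severityCounts = logs
      .map(_.split("\\s+"))
      .filter(_.length >= 2)
      .map(fields => (fields(1), 1))
      .reduceByKey(_ + _)

    severityCounts.collect().foreach { case (level, n) => println(s"$level\t$n") }
    sc.stop()
  }
}
```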
Data Science Project
Project 1 – Understanding the Cold Start Problem in Data Science
Topics: This project involves understanding the cold start problem associated with recommender systems. You will gain hands-on experience in information filtering, working on systems with zero historical data to refer to, as in the case of launching a new product. You will gain proficiency in working with personalized applications like movie, book, song and news recommendations. This project includes the following:
- Algorithms for recommenders
- Ways of Recommendation
- Types of recommendation – collaborative filtering-based recommendation, content-based recommendation
- Complete mastery in working with the Cold Start Problem.
- Movie recommendation
- Two Types of Predictions – Rating Prediction, Item Prediction
- Important Approaches: Memory Based and Model-Based
- Understanding user-based methods in K-Nearest Neighbors
- Understanding item-based methods
- Matrix Factorization
- Singular Value Decomposition
- Data Science Project discussion
- Collaborative Filtering
- Business Variables Overview
Apache Spark – Scala Projects
Project 1: Movie Recommendation
Topics – In this project you will gain hands-on experience in deploying Apache Spark for movie recommendation. You will be introduced to the Spark Machine Learning Library (MLlib), with a guide to its algorithms and coding. You will understand how to deploy collaborative filtering, clustering, regression and dimensionality reduction in MLlib. Upon completion of the project you will gain experience in working with streaming data, sampling, testing and statistics.
Project 2: Twitter API Integration for Tweet Analysis
Topics – In this project you will learn to integrate the Twitter API for analyzing tweets. You will write server-side code in a scripting language such as PHP, Ruby or Python to request the Twitter API and get the results in JSON format. You will then read the results and perform operations like aggregation, filtering and parsing as needed to produce the tweet analysis.
Project 3: Data Exploration Using Spark SQL – Wikipedia Data Set
Topics – This project lets you work with Spark SQL. You will gain experience in working with Spark SQL for combining it with ETL applications, real-time analysis of data, performing batch analysis, deploying machine learning, creating visualizations and processing graphs.
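As a pointer to what the MLlib side of Project 1 typically involves, the sketch below shows collaborative filtering with ALS in Scala. The ratings path, delimiter and hyperparameters are illustrative assumptions, not the official project solution:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.recommendation.{ALS, Rating}

object MovieRecommendationDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("MovieRec").setMaster("local[*]"))

    // Assumes MovieLens-style lines "userId::movieId::rating::timestamp";
    // the path and delimiter are placeholders.
    val ratings = sc.textFile("hdfs:///user/demo/ratings.dat").map { line =>
      val fields = line.split("::")
      Rating(fields(0).toInt, fields(1).toInt, fields(2).toDouble)
    }

    // Train a matrix-factorization model with ALS (rank 10, 10 iterations, lambda 0.01).
    val model = ALS.train(ratings, 10, 10, 0.01)

    // Recommend 5 movies for a sample user; user id 1 is just an example.
    model.recommendProducts(1, 5).foreach(println)

    sc.stop()
  }
}
```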