
Big Data Architect Masters Program

Course Summary

Our Big Data Architect master's program lets you gain proficiency in Big Data. You will work on real-world projects in Hadoop development, Hadoop administration, Hadoop analysis, Hadoop testing, Spark, Pentaho, Python, MongoDB, Apache Storm, NoSQL databases and more. In this program you will cover 13 courses and 28 industry-based projects.


    Course Syllabus

    Big Data Hadoop Course Content

    Hadoop Installation & Setup
    Hadoop 2.x Cluster Architecture, Federation and High Availability, a typical production cluster setup, Hadoop cluster modes, common Hadoop shell commands, Hadoop 2.x configuration files, Cloudera single-node cluster, Hive, Pig, Sqoop, Flume, Scala and Spark.
    Introduction to Big Data Hadoop, Understanding HDFS & MapReduce

    Introducing Big Data & Hadoop, what Big Data is and where Hadoop fits in, the two important Hadoop ecosystem components, namely MapReduce and HDFS, in-depth Hadoop Distributed File System – Replications, Block Size, Secondary Name Node, High Availability, in-depth YARN – Resource Manager, Node Manager. Hands-on Exercise – Working with HDFS, replicating the data, determining block size, familiarizing with Namenode and Datanode.

    Deep Dive in Mapreduce
    Detailed understanding of the working of MapReduce, the mapping and reducing process, the working of Driver, Combiners, Partitioners, Input Formats, Output Formats, Shuffle and Sort. Hands-on Exercise – The detailed methodology for writing the Word Count program in MapReduce, writing a custom partitioner, MapReduce with Combiner, Local Job Runner Mode, Unit Test, ToolRunner, Map-Side Join, Reduce-Side Join, using Counters, joining two datasets using Map-Side Join & Reduce-Side Join.
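
    To make the Word Count exercise concrete, here is a minimal Python sketch in the Hadoop Streaming style, where the mapper and reducer read stdin and emit tab-separated key/value pairs. The syllabus itself works with the Java MapReduce API, so the file name and invocation here are only illustrative assumptions.

    # wordcount.py – hypothetical file name; run via Hadoop Streaming or locally as:
    #   cat input.txt | python wordcount.py map | sort | python wordcount.py reduce
    import sys
    from itertools import groupby

    def mapper(stream):
        # Emit (word, 1) for every word in the input.
        for line in stream:
            for word in line.strip().split():
                print(f"{word}\t1")

    def reducer(stream):
        # Hadoop sorts mapper output by key, so equal words arrive consecutively.
        pairs = (line.rstrip("\n").split("\t") for line in stream)
        for word, group in groupby(pairs, key=lambda kv: kv[0]):
            print(f"{word}\t{sum(int(count) for _, count in group)}")

    if __name__ == "__main__":
        mapper(sys.stdin) if sys.argv[1] == "map" else reducer(sys.stdin)
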
    Introduction to Hive
    Introducing Hadoop Hive, detailed architecture of Hive, comparing Hive with Pig and RDBMS, working with Hive Query Language, creation of databases, tables, Group By and other clauses, the various types of Hive tables, HCatalog, storing the Hive results, Hive partitioning and Buckets. Hands-on Exercise – Creating a Hive database, how to drop a database, changing the database, creating a Hive table, loading data, dropping and altering the table, writing Hive queries to pull data using filter conditions, Group By clauses, partitioning Hive tables.
    Advanced Hive & Impala
    Indexing in Hive, the Map-Side Join in Hive, working with complex data types, Hive user-defined functions, introduction to Impala, comparing Hive with Impala, the detailed architecture of Impala. Hands-on Exercise – Working with Hive queries, writing indexes, joining tables, deploying external tables, sequence tables and storing data in another table.
    Introduction to Pig
    Apache Pig introduction, its various features, the various data types and schema in Pig, the available functions in Pig, Pig Bags, Tuples and Fields. Hands-on Exercise – Working with Pig in MapReduce and local mode, loading data, limiting data to 4 rows, storing the data into a file, working with Group By, Filter By, Distinct, Cross and Split in Pig.
    Flume, Sqoop & HBase
    Introduction to Apache Sqoop, Sqoop overview, basic imports and exports, how to improve Sqoop performance, the limitations of Sqoop, introduction to Flume and its architecture, introduction to HBase, the CAP theorem. Hands-on Exercise – Working with Flume to generate a sequence number and consume it, using the Flume Agent to consume Twitter data, using AVRO to create a Hive table, AVRO with Pig, creating a table in HBase, deploying Disable, Scan and Enable operations on a table.
    Writing Spark Applications using Scala
    Using Scala for writing Apache Spark applications, detailed study of Scala, the need for Scala, the concept of object-oriented programming, executing the Scala code, the various classes in Scala like Getters, Setters, Constructors, Abstract Classes, Extending Objects, Overriding Methods, the Java and Scala interoperability, the concept of functional programming and anonymous functions, the Bobsrockets package, comparing mutable and immutable collections. Hands-on Exercise – Writing a Spark application using Scala, understanding the robustness of Scala for Spark real-time analytics operations.
    Spark framework

    A detailed look at Apache Spark, its various features, comparison with Hadoop, the various Spark components, combining HDFS with Spark, Scalding, introduction to Scala, the importance of Scala and RDDs. Hands-on Exercise – The Resilient Distributed Dataset (RDD) in Spark and how it helps to speed up big data processing.

    RDD in Spark

    The RDD operations in Spark, Spark transformations, actions, data loading, comparison with MapReduce, Key-Value pairs. Hands-on Exercise – How to deploy RDDs with HDFS, using an in-memory dataset, using a file for an RDD, how to define the base RDD from an external file, deploying RDDs via transformations, using the Map and Reduce functions, working on word count and count log severity.
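
    As a sketch of these RDD operations, the PySpark fragment below defines a base RDD from an external file, applies lazy transformations, and triggers them with an action; the file path is a placeholder, and PySpark is used here for illustration although the course also covers Scala.

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "rdd-demo")

    # Define the base RDD from an external file (placeholder path).
    lines = sc.textFile("hdfs:///data/sample.txt")

    # Transformations are lazy: nothing executes until an action is called.
    words = lines.flatMap(lambda line: line.split())
    pairs = words.map(lambda word: (word, 1))          # Key-Value pair RDD
    counts = pairs.reduceByKey(lambda a, b: a + b)     # word count

    # take() is an action and triggers the whole computation.
    print(counts.take(10))
    sc.stop()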

    Data Frames and Spark SQL
    A detailed look at Spark SQL, the significance of SQL in Spark for working with structured data processing, Spark SQL JSON support, working with XML data and Parquet files, creating a HiveContext, writing a Data Frame to Hive, reading JDBC files, the importance of Data Frames in Spark, creating Data Frames, manual schema inference, working with CSV files, reading JDBC tables, converting from a Data Frame to JDBC, user-defined functions in Spark SQL, shared variables and accumulators, how to query and transform data in Data Frames, how a Data Frame provides the benefits of both Spark RDD and Spark SQL, deploying Hive on Spark as the execution engine. Hands-on Exercise – Data querying and transformation using Data Frames, finding out the benefits of Data Frames over Spark SQL and Spark RDD.
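
    A short PySpark sketch of the Data Frame workflow described above: reading JSON with automatic schema inference, querying through the Data Frame API, and registering a temporary view for SQL. The file path and the column names (age, city) are assumptions.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("df-demo").getOrCreate()

    # Spark SQL infers the schema of JSON input automatically (placeholder path).
    df = spark.read.json("hdfs:///data/people.json")
    df.printSchema()

    # Query and transform data through the DataFrame API.
    df.filter(df.age > 30).groupBy("city").count().show()

    # Register a temporary view and run plain SQL over it.
    df.createOrReplaceTempView("people")
    spark.sql("SELECT city, AVG(age) FROM people GROUP BY city").show()
    spark.stop()
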
    Machine Learning using Spark (MLlib)

    Different algorithms, the concept of an iterative algorithm in Spark, analyzing with Spark graph processing, introduction to K-Means and machine learning, various variables in Spark like shared variables and broadcast variables, learning about accumulators. Hands-on Exercise – Writing Spark code using MLlib.
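
    As an illustration of MLlib and the iterative K-Means algorithm mentioned above, here is a minimal PySpark sketch on toy two-dimensional points; a real exercise would load its data from HDFS.

    from numpy import array
    from pyspark import SparkContext
    from pyspark.mllib.clustering import KMeans

    sc = SparkContext("local[*]", "kmeans-demo")

    # Toy 2-D points forming two obvious clusters.
    points = sc.parallelize([array([0.0, 0.0]), array([1.0, 1.0]),
                             array([9.0, 8.0]), array([8.0, 9.0])])

    # K-Means iterates over the RDD until convergence or maxIterations.
    model = KMeans.train(points, k=2, maxIterations=10)
    print(model.clusterCenters)
    sc.stop()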

    Spark Streaming

    Introduction to Spark Streaming, the architecture of Spark Streaming, working with the Spark Streaming program, processing data using Spark Streaming, requesting count and DStream, multi-batch and sliding window operations and working with advanced data sources. Hands-on Exercise – Deploying Spark Streaming for data in motion and checking that the output meets the requirements.
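
    A minimal Spark Streaming sketch of the DStream and sliding-window ideas above, assuming a socket source fed by something like `nc -lk 9999`; the host, port and checkpoint directory are placeholders.

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext("local[2]", "streaming-demo")   # 2 threads: receiver + worker
    ssc = StreamingContext(sc, 5)                     # 5-second micro-batches
    ssc.checkpoint("/tmp/streaming-checkpoint")       # required for window operations

    lines = ssc.socketTextStream("localhost", 9999)

    # Windowed word count: a 30-second window sliding every 10 seconds.
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKeyAndWindow(lambda a, b: a + b,   # add new batches
                                         lambda a, b: a - b,   # subtract old ones
                                         30, 10))
    counts.pprint()

    ssc.start()
    ssc.awaitTermination()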

    Hadoop Administration – Multi Node Cluster Setup using Amazon EC2

    Creating a four-node Hadoop cluster setup, running MapReduce jobs on the Hadoop cluster, successfully running the MapReduce code, working with the Cloudera Manager setup. Hands-on Exercise – The method to build a multi-node Hadoop cluster using Amazon EC2 instances, working with the Cloudera Manager.

    Hadoop Administration – Cluster Configuration

    An overview of Hadoop configuration, the importance of Hadoop configuration files, the various parameters and values of configuration, HDFS parameters and MapReduce parameters, setting up the Hadoop environment, the 'Include' and 'Exclude' configuration files, the administration and maintenance of Name Node, Data Node directory structures and files, File System image and Edit log. Hands-on Exercise – The method to do performance tuning of a MapReduce program.

    Hadoop Administration – Maintenance, Monitoring and Troubleshooting

    Introduction to the checkpoint procedure, Name Node failure and how to ensure the recovery procedure, Safe Mode, metadata and data backup, the various potential problems and solutions, what to look for, how to add and remove nodes. Hands-on Exercise – How to go about ensuring MapReduce File System recovery for various scenarios, JMX monitoring of the Hadoop cluster, how to use the logs and stack traces for monitoring and troubleshooting, using the Job Scheduler for scheduling jobs in the same cluster, getting the MapReduce job submission flow, the FIFO scheduler, getting to know the Fair Scheduler and its configuration.

    Securing Hadoop Cluster with Kerberos and other Advanced Topics

    Advanced Hadoop administration functions, using the Quorum Journal Manager, configuring Hadoop federation and security, fundamentals of Hadoop platform security, working with Kerberos authentication, configuring Kerberos on the Hadoop cluster. Hands-on Exercise – Detailed procedure for configuring Kerberos authentication on the Hadoop cluster and checking the results of the configuration.

    ETL Connectivity with Hadoop Ecosystem

    How ETL tools work in the Big Data industry, introduction to ETL and data warehousing, working with prominent use cases of Big Data in the ETL industry, an end-to-end ETL PoC showing big data integration with an ETL tool. Hands-on Exercise – Connecting to HDFS from the ETL tool and moving data from the local system to HDFS, moving data from a DBMS to HDFS, working with Hive using the ETL tool, creating a MapReduce job in the ETL tool.

    IBM Project Solution Discussion and Cloudera Certification Tips & Tricks

    Working towards the solution of the IBM Hadoop project, its problem statements and the possible solution outcomes, preparing for the Cloudera certifications, points to focus on for scoring the highest marks, tips for cracking Hadoop interview questions. Hands-on Exercise – The IBM project of a real-world, high-value Big Data Hadoop application and getting the right solution based on the criteria set by the IBM team.

    The following topics are available only in self-paced mode.
    Hadoop Application Testing

    Why testing is important, unit testing, integration testing, performance testing, diagnostics, nightly QA tests, benchmark and end-to-end tests, functional testing, release certification testing, security testing, scalability testing, commissioning and decommissioning of Data Nodes testing, reliability testing, release testing.

    Roles and Responsibilities of Hadoop Testing Professional

    Understanding the requirements, preparation of the testing estimation, test cases, test data, test bed creation, test execution, defect reporting, defect retesting, daily status report delivery, test completion, ETL testing at every stage (HDFS, Hive, HBase) while loading the input (logs, files, records, etc.) using Sqoop/Flume, including but not limited to data verification, reconciliation, user authorization and authentication testing (groups, users, privileges, etc.), reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports, validating new features and issues in core Hadoop.

    The MRUnit Framework for Testing MapReduce Programs

    Reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports, responsibility for creating a testing framework called MRUnit for testing MapReduce programs.

    Unit Testing

    Automation testing using Oozie, data validation using the Query Surge tool.

    Test Execution

    Test plan for an HDFS upgrade, test automation and results.

    Test Plan Strategy and writing Test Cases for testing Hadoop Application

    How to test installation and configuration.

    Scala Course Content

    Introduction to Scala

    Introducing Scala and deployment of Scala for Big Data applications and Apache Spark analytics.

    Pattern Matching

    The importance of Scala, the concept of REPL (Read-Evaluate-Print Loop), deep dive into Scala pattern matching, type inference, higher-order functions, currying, traits, application space and Scala for data analysis.

    Executing the Scala code

    Learning about the Scala Interpreter, static object timer in Scala, testing String equality in Scala, Implicit classes in Scala, the concept of currying in Scala, various classes in Scala.

    Classes concept in Scala

    Learning about the Classes concept, understanding the constructor overloading, the various abstract classes, the hierarchy types in Scala, the concept of object equality, the val and var methods in Scala.

    Case classes and pattern matching

    Understanding sealed traits, wildcard, constructor, tuple, variable and constant patterns.

    Concepts of traits with example

    Understanding traits in Scala, the advantages of traits, linearization of traits, the Java equivalent, and avoiding boilerplate code.

    Scala-Java Interoperability

    Implementation of traits in Scala and Java, handling the extension of multiple traits.

    Scala collections

    Introduction to Scala collections, classification of collections, the difference between Iterator and Iterable in Scala, an example of a list sequence in Scala.

    Mutable collections vs. Immutable collections

    The two types of collections in Scala, Mutable and Immutable collections, understanding lists and arrays in Scala, the list buffer and array buffer, Queue in Scala, double-ended queue Deque, Stacks, Sets, Maps, Tuples in Scala.

    Use Case – The Bobsrockets Package

    Introduction to Scala packages and imports, selective imports, Scala test classes, introduction to the JUnit test class, the JUnit interface via the JUnit 3 suite for Scala test, packaging of Scala applications in the directory structure, examples of Spark Split and Spark Scala.

    Spark Course Content

    Introduction to Spark

    Introduction to Spark, how Spark overcomes the drawbacks of MapReduce, understanding in-memory MapReduce, Spark on Hadoop YARN, HDFS revision, YARN revision, an overview of Spark and how it is better than Hadoop, deploying Spark without Hadoop.

    Spark Basics

    Spark installation guide, working with Spark Shell, the concept of Resilient Distributed Datasets (RDD), learning to do functional programming in Spark, the architecture of Spark.

    Working with RDDs in Spark

    Deep dive into Spark RDDs, the RDD general operations, a read-only partitioned collection of records, using the concept of RDD for faster and efficient data processing.

    Aggregating Data with Pair RDDs

    Understanding the concept of Key-Value pair in RDDs, learning how Spark makes MapReduce operations faster, various operations of RDD.

    Writing and Deploying Spark Applications

    Comparing Spark applications with the Spark Shell, creating a Spark application using Scala or Java, deploying a Spark application, the web user interface of a Spark application, a real-world example of Spark, and configuring Spark.

    Parallel Processing

    Learning about Spark parallel processing, deploying on a cluster, introduction to Spark partitions, file-based partitioning of RDDs, understanding of HDFS and data locality, mastering the technique of parallel operations.

    Spark RDD Persistence

    Understanding the RDD persistence overview, distributed persistence, RDD lineage

    Basic Spark Streaming

    Understanding the Spark streaming, creating a Spark stream application, processing of Spark stream, streaming request count and DStreams.

    Advanced Spark Streaming

    Introduction to Spark multi-batch operations, state operations, sliding window operations and advanced data sources.

    Common Patterns in Spark Data Processing

    Learning about the Spark common use cases, the concept of iterative algorithm in Spark, analyzing with Spark graph processing, introduction to K-Means and machine learning.

    Improving Spark Performance

    Introduction to various variables in Spark like shared variables, broadcast variables, learning about accumulators, the common performance issues and troubleshooting the performance problems.

    Spark SQL and Data Frames

    Learning about Spark SQL, the context of SQL in Spark for providing structured data processing, understanding the Data Frames in Spark, learning to query and transform data in Data Frames, how Data Frame provides the benefit of both Spark RDD and Spark SQL, deploying Hive on Spark as the execution engine.

    Scheduling/ Partitioning

    Learning about scheduling and partitioning in Spark, scheduling within and around applications, static partitioning, dynamic sharing, fair scheduling, Spark master high availability, standby masters with ZooKeeper, single-node recovery with the local file system, higher-order functions.

    Capacity planning in Spark

    Understanding how to design capacity planning in Spark, creation of Maps, Transformations, the concept of concurrency in Java and Scala.

    Log analysis

    Understanding about log analysis with Spark, first log analyzers in Spark, working with various buffers like array, compact and protocol buffer.

    Pentaho Course Content

    Introduction to Pentaho Tool

    The Pentaho user console, overview of Pentaho Business Intelligence and Analytics tools, database dimensional modelling, using a Star Schema for querying large data sets, understanding fact tables and dimension tables, the Snowflake Schema, principles of Slowly Changing Dimensions, knowledge of how high availability is supported for the DI server and BA server, managing Pentaho artifacts, knowledge of big data solution architectures. Hands-on Exercise – Schedule a report using the user console, create a model using database dimensional modelling techniques, create a Star Schema for querying large data sets, use fact tables and dimension tables, manage Pentaho artifacts.

    Data Architecture
    Designing data models for reporting, Pentaho support for predictive analytics, designing a Streamlined Data Refinery (SDR) solution for a client. Hands-on Exercise – Design data models for reporting, perform predictive analytics on a data set, design a Streamlined Data Refinery (SDR) solution for a dummy client.
    Clustering in Pentaho
    Understanding the basics of clustering in Pentaho Data Integration, creating a database connection, moving a CSV file input to table output and Microsoft Excel output, moving from Excel to data grid and log. Hands-on Exercise – Create a database connection, move a CSV file input to table output and Microsoft Excel output, move data from Excel to data grid and log.
    Data Transformation
    The Pentaho Data Integration transformation steps, adding a sequence, understanding the calculator, Pentaho number range, string replace, selecting field values, sorting and splitting rows, string operations, unique rows and value mapper, usage of metadata injection. Hands-on Exercise – Practice various steps to perform data integration transformation, add a sequence, use the calculator, work on number range, select field values, sort and split rows, perform string operations, use unique rows and value mapper, use metadata injection.
    Pentaho Flow
    Working with the secure socket command, Pentaho null value and error handling, Pentaho mail, row filter and priority streams. Hands-on Exercise – Work with the secure socket command, handle null values in the data, perform error handling, send email, get row-filtered data, set stream priorities.
    Deploying SCD

    Understanding Slowly Changing Dimensions, making ETL dynamic, dynamic transformation, creating folders, scripting, bulk loading, file management, working with Pentaho file transfer, Repository, XML, Utility and File encryption. Hands-on Exercise – Make an ETL transformation dynamic, create folders, write scripts, load bulk data, perform file management operations, work with Pentaho file transfer, XML utility and file encryption.

    Type of Repository in Pentaho

    Creating dynamic ETL, passing variables and values from job to transformation, deploying parameters with transformation, the importance of the Repository in Pentaho, database connection, environment variables and repository import. Hands-on Exercise – Create dynamic ETL, pass variables and values from job to transformation, deploy parameters with transformation, connect to a database, set Pentaho environment variables, import a repository into the Pentaho workspace.

    Pentaho Repository & Report Designing

    Working with Pentaho dashboards and reports, the effect of row bending, designing a report, working with Pentaho Server, creation of line, bar and pie charts in Pentaho, how to achieve localization in reports. Hands-on Exercise – Create a Pentaho dashboard and report, check the effect of row bending, design a report, work with Pentaho Server, create line, bar and pie charts in Pentaho, implement localization in a report.

    Pentaho Dashboard

    Working with the Pentaho Dashboard, passing parameters in reports and dashboards, drill-down of reports, deploying Cubes for report creation, working with Excel sheets, Pentaho data integration for report creation. Hands-on Exercise – Pass parameters in a report and dashboard, deploy Cubes for report creation, drill down in a report to understand the entries, import data from an Excel sheet, perform data integration for report creation.

    Understanding Cube

    What is a Cube? Creation and benefits of Cubes, working with Cubes, report and dashboard creation with Cubes. Hands-on Exercise – Create a Cube, create a report and dashboard with the Cube.

    Multi Dimensional Expression

    Understanding the basics of Multi-Dimensional Expressions (MDX), basics of MDX, understanding Tuples, their implicit dimensions, MDX sets, levels, members, dimension referencing, hierarchical navigation and metadata. Hands-on Exercise – Work with MDX; use MDX sets, levels, members, dimension referencing, hierarchical navigation and metadata.

    Pentaho Analyzer

    Pentaho analytics for discovering and blending various data types and sizes, including advanced analytics for visualizing data across multiple dimensions, extending Analyzer functionality, embedding BA server reports, Pentaho REST APIs. Hands-on Exercise – Blend various data types and sizes, perform advanced analytics for visualizing data across multiple dimensions, embed a BA server report.

    Pentaho Data Integration (PDI) Development
    Knowledge of the PDI steps used to create an ETL job, describing the PDI steps to create an ETL transformation, describing the use of property files. Hands-on Exercise – Create an ETL transformation using PDI steps, use property files.
    Hadoop ETL Connectivity

    Deploying ETL capabilities for working on the Hadoop ecosystem, integrating with HDFS and moving data from the local file system to the distributed file system, deploying Apache Hive, designing MapReduce jobs, complete Hadoop integration with the ETL tool. Hands-on Exercise – Deploy ETL capabilities for working on the Hadoop ecosystem, integrate with HDFS and move data from a local file to the distributed file system, deploy Apache Hive, design MapReduce jobs.

    Creating dashboards in Pentaho

    Creating interactive dashboards for visualizing highly graphical representations of data for improving key business performance. Hands-on Exercise – Create interactive dashboards for visualizing graphical representations of data.

    Performance Tuning
    Managing BA server logging, tuning Pentaho reports, monitoring the performance of a job or a transformation, auditing in Pentaho. Hands-on Exercise – Manage logging in the BA server, fine-tune a Pentaho report, monitor the performance of an ETL job.
    Security
    Integrating user security with other enterprise systems, extending BA server content security, securing data, Pentaho's support for multi-tenancy, using Kerberos with Pentaho. Hands-on Exercise – Configure security settings to implement high-level security.

    Python Course Content

    Introduction to Python
    What the Python language is and its features, why Python and how it differs from other languages, installation of Python, the Anaconda Python distribution for Windows, Mac and Linux, running a sample Python script, working with Python IDEs, running basic Python commands – data types, variables, keywords, etc. Hands-on Exercise – Install the Anaconda Python distribution for your OS (Windows/Linux/Mac).
    Basic constructs of Python language
    Indentation (tabs and spaces) and code comments (pound # character); variables and names; built-in data types in Python – numeric: int, float, complex – containers: list, tuple, set, dict – text sequence: str (string) – others: modules, classes, instances, exceptions, the null object, the Ellipsis object – constants: False, True, None, NotImplemented, Ellipsis, __debug__; basic operators: arithmetic, comparison, assignment, logical, bitwise, membership, identity; slicing and the slice operator [n:m]; control and loop statements: if, for, while, range(), break, continue, else. Hands-on Exercise – Write your first Python program, write a Python function (with and without parameters), use a lambda expression, write a class, create a member function and a variable, create an object, write a for loop to print all odd numbers.
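
    A small sketch covering the hands-on items above – a function with and without parameters, a lambda expression, a class with a member function and variable, an object, and a for loop printing odd numbers; all names are illustrative.

    # A function with a default (i.e. optional) parameter.
    def greet(name="world"):
        return f"Hello, {name}!"

    # A lambda expression.
    square = lambda x: x * x

    class Counter:
        def __init__(self, start=0):
            self.value = start      # member variable

        def increment(self):        # member function
            self.value += 1
            return self.value

    counter = Counter()             # create an object
    counter.increment()

    # A for loop that prints all odd numbers below 20.
    for n in range(1, 20, 2):
        print(n)

    print(greet("Python"), square(4), counter.value)
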
    Writing Object-Oriented Programs in Python and Connecting with a Database
    Classes – classes and objects, access modifiers, instance and class members; the OOP paradigm – inheritance, polymorphism and encapsulation in Python; functions: parameters and return types; lambda expressions; making a connection with a database to pull data.
    File Handling, Exception Handling in Python
    Opening a file, reading from a file, writing into a file; resetting the current position in a file; Pickle (serialize and deserialize Python objects); Shelve (overcoming the limitation of Pickle); what an exception is; raising an exception; catching an exception. Hands-on Exercise – Open a text file and read the contents, write a new line in the opened file, use pickle to serialize a Python object and deserialize the object, raise an exception and catch it.
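
    The exercise above maps onto a few lines of standard-library Python; the file names here are placeholders.

    import pickle

    # Write into a file, append a new line, then read the contents back.
    with open("notes.txt", "w") as f:
        f.write("first line\n")
    with open("notes.txt", "a") as f:
        f.write("a new line\n")
    with open("notes.txt") as f:
        print(f.read())

    # Serialize a Python object with pickle, then deserialize it.
    with open("data.pkl", "wb") as f:
        pickle.dump({"course": "Python", "week": 4}, f)
    with open("data.pkl", "rb") as f:
        restored = pickle.load(f)
    print(restored)

    # Raise an exception and catch it.
    try:
        raise ValueError("demo error")
    except ValueError as err:
        print(f"caught: {err}")
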
    Mathematical Computing with Python (NumPy)
    Arrays and matrices, the ND-array object, array indexing, datatypes, array math, broadcasting, standard deviation, conditional probability, covariance and correlation. Hands-on Exercise – Import the NumPy module, create an array using ND-array, calculate the standard deviation of an array of numbers, calculate the correlation between two variables.
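
    A minimal NumPy sketch of the hands-on items – creating an ND-array, computing a standard deviation, and correlating two variables; the sample numbers are arbitrary.

    import numpy as np

    # Create an ND-array and inspect its datatype.
    scores = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
    print(scores.dtype)

    # Population standard deviation of the array (this sample gives exactly 2.0).
    print(np.std(scores))

    # Correlation between two variables; the off-diagonal entry of the
    # 2x2 matrix returned by corrcoef is the correlation coefficient.
    x = np.array([1, 2, 3, 4, 5])
    y = np.array([2, 4, 6, 8, 10])
    print(np.corrcoef(x, y)[0, 1])   # 1.0 – perfectly correlated
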
    Scientific Computing with Python (SciPy)
    How SciPy builds on top of NumPy, SciPy and its characteristics, subpackages: cluster, fftpack, linalg, signal, integrate, optimize, stats; Bayes' theorem using SciPy. Hands-on Exercise – Import SciPy, apply Bayes' theorem using SciPy on the given dataset.
    Data Visualization (Matplotlib)
    Plotting graphs and charts (line, pie, bar, scatter, histogram, 3-D); subplots; the Matplotlib API. Hands-on Exercise – Plot line, pie, scatter, histogram and other charts using Matplotlib.
    Data Analysis and Machine Learning (Pandas) OR Data Manipulation with Python
    Data frames, converting a NumPy array to a data frame; importing data (CSV, JSON, Excel, SQL database); data operations: view, select, filter, sort, group by, cleaning, join/combine, handling missing values; introduction to machine learning (ML); linear regression; time series. Hands-on Exercise – Import Pandas, use it to import data from a JSON file, select records by a group and apply a filter on top of that, view the records, perform linear regression analysis, create a time series.
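
    A short Pandas sketch of the import-select-filter-group flow above; the file name and the column names (amount, region) are assumptions for illustration.

    import pandas as pd

    # Import data from a JSON file (placeholder name).
    df = pd.read_json("sales.json")
    print(df.head())                                   # view

    high_value = df[df["amount"] > 1000]               # filter
    by_region = (df.groupby("region")["amount"]        # group by
                   .sum()
                   .sort_values())                     # sort

    df["amount"] = df["amount"].fillna(0)              # handle missing values
    print(by_region)
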
    Natural Language Processing, Machine Learning (Scikit-Learn)
    Introduction to Natural Language Processing (NLP); the NLP approach for text data; environment setup (Jupyter Notebook); sentence analysis; ML algorithms in Scikit-Learn; what the Bag of Words model is; feature extraction from text; model training; search grids; multiple parameters; building a pipeline. Hands-on Exercise – Set up the Jupyter Notebook environment, load a dataset in Jupyter, use an algorithm from the Scikit-Learn package to perform ML techniques, train a model, create a search grid.
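
    As a sketch of the Bag of Words model, pipeline and search grid named above, here is a tiny Scikit-Learn example on an invented four-document corpus.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import Pipeline
    from sklearn.model_selection import GridSearchCV

    # Invented corpus; a real exercise would load a dataset in Jupyter.
    texts = ["great movie", "terrible plot", "loved it", "waste of time"]
    labels = [1, 0, 1, 0]

    # Bag-of-words feature extraction feeding a Naive Bayes classifier.
    pipeline = Pipeline([
        ("vectorizer", CountVectorizer()),
        ("classifier", MultinomialNB()),
    ])

    # Search over multiple parameters with a grid.
    grid = GridSearchCV(pipeline,
                        {"vectorizer__ngram_range": [(1, 1), (1, 2)],
                         "classifier__alpha": [0.5, 1.0]},
                        cv=2)
    grid.fit(texts, labels)
    print(grid.best_params_, grid.predict(["great plot"]))
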
    Web Scraping for Data Science
    What web scraping is; web scraping libraries (Beautifulsoup, Scrapy); installation of Beautifulsoup; installing the lxml Python parser; making a Soup object using an input HTML; navigating Py objects in the soup tree; searching the tree; printing output; parsing full or partial documents. Hands-on Exercise – Install Beautifulsoup and the lxml Python parser, make a Soup object using an input HTML file, navigate Py objects in the soup tree, search the tree, print the output.
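
    A minimal Beautifulsoup sketch – making a Soup object from an input HTML string with the lxml parser, navigating and searching the tree, and printing the output; the HTML itself is invented.

    from bs4 import BeautifulSoup

    html = """
    <html><body>
      <h1>Demo Page</h1>
      <a href="https://example.com/a">first</a>
      <a href="https://example.com/b">second</a>
    </body></html>
    """

    # Make a Soup object using the input HTML and the lxml parser.
    soup = BeautifulSoup(html, "lxml")

    # Navigate and search the tree, then print the output.
    print(soup.h1.text)
    for link in soup.find_all("a"):
        print(link["href"], link.get_text())
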
    Python on Hadoop
    Understanding Hadoop and its various components; the Hadoop ecosystem and Hadoop Common; HDFS and MapReduce architecture; Python scripting for MapReduce jobs on the Hadoop framework. Hands-on Exercise – Write a basic MapReduce job in Python and connect with the Hadoop framework to perform the task.
    Writing Spark code using Python
    What Spark is, understanding RDDs, Spark libraries, writing Spark code using Python, the Spark machine learning library MLlib, regression, classification and clustering using Spark MLlib. Hands-on Exercise – Implement a sandbox, run Python code in the sandbox, work with the HDFS file system from the sandbox.

    MongoDB Course Content

    Introduction to NoSQL and MongoDB

    RDBMS, types of relational databases, challenges of RDBMS, NoSQL databases, their significance, how NoSQL suits Big Data needs, introduction to MongoDB and its advantages, MongoDB installation, JSON features, data types and examples.

    MongoDB Installation

    Installing MongoDB, basic MongoDB commands and operations, MongoChef (Mongo GUI) installation, MongoDB data types. Hands-on Exercise – Install MongoDB, install MongoChef (Mongo GUI).

    Importance of NoSQL

    The need for NoSQL, types of NoSQL databases, OLTP, OLAP, limitations of RDBMS, ACID properties, the CAP Theorem, the BASE property, learning about JSON/BSON, database collections & documents, MongoDB uses, MongoDB Write Concern – Acknowledged, Replica Acknowledged, Unacknowledged, Journaled, Fsync. Hands-on Exercise – Write a JSON document.

    CRUD Operations

    Understanding CRUD and its functionality, CRUD concepts, MongoDB query and syntax, read and write queries and query optimization. Hands-on Exercise – Use an insert query to create a data entry, use a find query to read data, use update and replace queries to update, use delete query operations on a DB file.
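
    The CRUD cycle above looks like this in a minimal PyMongo sketch (PyMongo is one common Python client; the connection string, database and collection names are placeholders).

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
    students = client["courseware"]["students"]         # hypothetical db/collection

    # Create – insert a data entry.
    students.insert_one({"name": "Asha", "score": 91})

    # Read – find documents matching a filter.
    for doc in students.find({"score": {"$gt": 80}}):
        print(doc)

    # Update – modify fields of a matching document.
    students.update_one({"name": "Asha"}, {"$set": {"score": 95}})

    # Delete – remove the document.
    students.delete_one({"name": "Asha"})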

    Data Modeling & Schema Design

    Concepts of data modeling, the difference between MongoDB and RDBMS modeling, the model tree structure, operational strategies, monitoring and backup. Hands-on Exercise – Write a data model tree structure for a family hierarchy.

    Data Management & Administration

    In this module you will learn MongoDB administration activities such as health checks, backup, recovery, database sharding and profiling, data import/export, performance tuning, etc. Hands-on Exercise – Use shard keys and hashed shard keys, perform backup and recovery of a dummy dataset, import data from a CSV file, export data to a CSV file.

    Data Indexing and Aggregation

    Concepts of data aggregation and its types, data indexing concepts, properties and variations. Hands-on Exercise – Do aggregation using a pipeline with sort, skip and limit, create an index on data using a single key and a multikey.
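
    A small PyMongo sketch of an aggregation pipeline with sort, skip and limit stages, plus a single-key and a compound index; all names are placeholders.

    from pymongo import MongoClient, ASCENDING

    orders = MongoClient("mongodb://localhost:27017")["shop"]["orders"]

    # Aggregation using a pipeline with sort, skip and limit.
    pipeline = [
        {"$match": {"status": "shipped"}},
        {"$sort": {"total": -1}},
        {"$skip": 5},
        {"$limit": 10},
    ]
    for doc in orders.aggregate(pipeline):
        print(doc)

    # Index on a single key, then a compound index over two keys.
    orders.create_index([("customer_id", ASCENDING)])
    orders.create_index([("customer_id", ASCENDING), ("created_at", ASCENDING)])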

    MongoDB Security

    Understanding database security risks, the MongoDB security concept and security approach, MongoDB integration with Java and Robomongo. Hands-on Exercise – Integrate MongoDB with Java and Robomongo.

    Working with Unstructured Data

    Implementing techniques to work with a variety of unstructured data like images, videos, log data and others, understanding GridFS, the MongoDB file system for storing data. Hands-on Exercise – Work with a variety of unstructured data like images, videos and log data.

    Java Programming Course Content

    Core Java Concepts

    Introduction to Java Programming, Defining Java, Need for Java, Platform Independence in Java, Defining the JRE, JVM and JDK, Important Features and Evolution of Java

    Writing Java Programs using Java Principles

    Overview of Coding basics, Setting up the required environment, Knowing the available IDEs, Writing a Basic-level Java Program, Define Package, What are Java Comments?, Understanding the concept of Reserved Words, Introduction to Java Statements, What are Blocks in Java, Explain a Class, Different Methods

    Language Conceptuals
    Overview of the Language, Defining Identifiers, What are Constraints and Variables, What is an Encoding Set?, Concept of Separators, Define Primitives, How to make Primitive Conversions?, Various Operators in Java
    Operating with Java Statements
    Module Overview, Learn how to write If Statement, Understanding While Statement, Working with Do-while Statement, How to use For Statement?, Using Break Statement, What is Continue Statement, Working of Switch Statement
    Concept of Objects and Classes
    General Review of the Module, Defining Object and Classes in Java, What are Encapsulation, Static Members and Access Control?, Use and importance of the 'this' Keyword, Defining Method Overloading with an example, 'By Value' vs. 'By Reference', Loading, Defining Initialization and Linking, How to Compare Objects in Java?, What is Garbage Collector?
    Introduction to Core Classes
    General Review, Concept of Object in Java, Define Core Class, What is System?, Explain String Classes, How do Arrays work?, Concept of Boxing & Unboxing, Use of ‘varargs’, ‘format’ and ‘printf’ Methods
    Inheritance in Java
    Introduction, Define Inheritance with an example, Accessibility concept, Method Overriding, Learning how to call a Superclass' Constructor, What is Type Casting?, Familiarity with 'instanceof' Keyword
    Exception Handling in Detail
    Getting started with exception Handling, Defining an Exception, How to use Constructs to deal with exceptions?, Classification of exceptions, Throw Exceptions, How to create an exception class?, stack Trace analysis
    Getting started with Interfaces and Abstract Classes
    General Review, Defining Interface, Use and Create and Interface, Concept of Extending interfaces, How to implement multiple interfaces?, What are abstract classes?, How to create and use abstract classes?, Comparison between interface and abstract classes, Concept of Nested Classes, What are Nested Classes?, Nested Classes Types, Working of an Inner Class, What is a Local Inner Class?, Anonymous Classes in java, What is a Static Nested Class
    Overview of Nested Classes
    What are Nested Classes?, Types of Nested Classes, What is an Inner Class?, Understanding local inner class, Anonymous Inner Class, Nested Class – Static
    Getting started with Java Threads
    What is a Thread?, How to create and start a Thread?, States of a Thread, Blocking the Execution of a Thread, Concept of Sleep Thread, Understanding the priorities in a thread, Synchronisation in Java Threads, Interaction between threads
    Overview of Java Collections
    Introduction to Collection Framework, Preeminent Interfaces, What are Comparable and Comparator?, Working with Lists, Working with Maps, Working with Sets, Working with Queues
    Understanding JDBC
    Define JDBC, Different types of Drivers, How to access the drivers?, What is Connection in Java?, What is a Statement?, Explaining CRUD Operations with examples, Prepared Statement and Callable Statement
    Java Generics
    Overview of important topics included, Important and Frequently-Used Features, Defining Generic List, What is Generic Map in Java?, Java Generic Classes & Methods, For Loop Generic, What is Generic Wild Card?
    Input/Output in Java
    Brief Introduction, Learning about Input and output streams in java, Concept of byte Oriented Streams, Defining Character Oriented Streams?, Explain Object Serialisation, Input and Output Based on Channel
    Getting started with Java Annotations
    Introduction and Definition of Annotations, How are they useful for Java programmers?, Placements in Annotations, What are Built-in Java Annotations, Defining Custom Annotations
    Reflection and its Usage
    Getting started, Defining Java Reflection, What is a Class Object?, Concept of Constructors, Using Fields, Applying Methods, Implementing Annotations in Your Java Program

    Apache Storm Course Content

    Understanding Architecture of Storm

    Big Data characteristics, understanding Hadoop distributed computing, the Bayesian Law, deploying Storm for real time analytics, the Apache Storm features, comparing Storm with Hadoop, Storm execution, learning about Tuple, Spout, Bolt.

    Installation of Apache Storm

    Installing Apache Storm, the various run modes of Storm.

    Introduction to Apache Storm

    Understanding Apache Storm and the data model.

    Apache Kafka Installation

    Installation of Apache Kafka and its configuration.

    Apache Storm Advanced

    Understanding of advanced Storm topics like Spouts, Bolts, Stream Groupings, Topology and its Life cycle, learning about Guaranteed Message Processing.

    Storm Topology

    Various grouping types in Storm, reliable and unreliable messages, Bolt structure and life cycle, understanding Trident topology for failure handling and processing, a Call Log Analysis topology for analyzing call logs for calls made from one number to another.

    Overview of Trident

    Understanding of Trident Spouts and its different types, the various Trident Spout interface and components, familiarizing with Trident Filter, Aggregator and Functions, a practical and hands-on use case on solving call log problem using Storm Trident.

    Storm Components & classes

    Various components, classes and interfaces in Storm like the BaseRichBolt class, the IRichBolt interface, the IRichSpout interface, the BaseRichSpout class and the various methodologies of working with them.

    Cassandra Introduction

    Understanding Cassandra, its core concepts, its strengths and deployment.

    Bootstrapping

    Twitter bootstrapping, detailed understanding of bootstrapping, concepts of Storm, the Storm development environment.

    Apache HBASE Course Content

    HBase Overview

    Getting started with HBase, Core concepts of HBase, Understanding HBase with an Example

    Architecture of NoSQL

    Why HBase?, Where to use HBase?, What is NoSQL?

    HBase Data Modeling

    HDFS vs. HBase, HBase Use Cases, Data Modeling HBase

    HBase Cluster Components

    HBase Architecture, Main components of HBase Cluster

    HBase API and Advanced Operations

    HBase Shell, HBase API, Primary Operations, Advanced Operations

    Integration of Hive with HBase

    Create a Table and Insert Data into it, Integration of Hive with HBase, Load Utility

    File Loading with Both Load Utilities

    Putting a folder onto the VM, file loading with both load utilities.

    Cassandra Course Content

    Advantages and Usage of Cassandra

    Introduction to Cassandra, its strengths and deployment areas

    CAP Theorem and NoSQL Databases

    The significance of NoSQL, RDBMS replication, key challenges, types of NoSQL, benefits and drawbacks, salient features of NoSQL databases, the CAP Theorem, consistency.

    Cassandra fundamentals, Data model, Installation and setup

    Installation, introduction to Cassandra, key concepts and deployment of a non-relational database, column-oriented databases, the Data Model – column, column family.

    Cassandra Configuration

    Token calculation, Configuration overview, Node tool, Validators, Comparators, Expiring column, QA

    Summarization, node tool commands, cluster, Indexes, Cassandra & MapReduce, Installing Ops-center

    How Cassandra modelling varies from Relational database modelling, Cassandra modelling steps, introduction to Time Series modelling, comparing Column family Vs. Super Column family, Counter column family, Partitioners, Partitioners strategies, Replication, Gossip protocols, Read operation, Consistency, Comparison

    Multi Cluster setup

    Creation of multi node cluster, node settings, Key and Row cache, System Key space, understanding of Read Operation, Cassandra Commands overview, VNodes, Column family

    Thrift/Avro/Json/Hector Client

    JSON, Hector client, AVRO, Thrift, JAVA code writing method, Hector tag

    DataStax Installation and Secondary Index

    Cassandra management, node tool commands, MapReduce and Cassandra, secondary indexes, DataStax installation.

    Advance Modelling

    Rules of Cassandra data modelling, increasing data writes, duplication, and reducing data reads, modelling data around queries, creating table for data queries

    Deploying the IDE for Cassandra applications

    Understanding the Java application creation methodology, learning about key drivers, deploying the IDE for Cassandra applications, cluster connection and data query implementation.
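
    The course builds this in Java; as a compact illustration of driver-based cluster connection and data queries, here is the equivalent with the DataStax Python driver (the contact point, keyspace and table are placeholders).

    from cassandra.cluster import Cluster

    # Connect to the cluster (placeholder contact point).
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS demo
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.set_keyspace("demo")
    session.execute("CREATE TABLE IF NOT EXISTS users (id int PRIMARY KEY, name text)")

    # Parameterized insert followed by a simple data query.
    session.execute("INSERT INTO users (id, name) VALUES (%s, %s)", (1, "Asha"))
    for row in session.execute("SELECT id, name FROM users"):
        print(row.id, row.name)

    cluster.shutdown()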

    Cassandra Administration

    Learning about Node Tool Utility, cluster management using Command Line Interface, Cassandra management and monitoring via DataStax Ops Center.

    Cassandra API and Summarization and Thrift

    Cassandra client connectivity, connection pool internals, API, important features and concepts of Hector client, Thrift, JAVA code, Summarization.

    Kafka Course Content

    What is Kafka – An Introduction

    Understanding what Apache Kafka is, the various components and use cases of Kafka, implementing Kafka on a single node.

    Multi Broker Kafka Implementation

    Learning about the Kafka terminology, deploying single node Kafka with independent Zookeeper, adding replication in Kafka, working with Partitioning and Brokers, understanding Kafka consumers, the Kafka Writes terminology, various failure handling scenarios in Kafka.

    Multi Node Cluster Setup

    Introduction to multi-node cluster setup in Kafka, the various administration commands, leadership balancing and partition rebalancing, graceful shutdown of Kafka brokers and tasks, working with the Partition Reassignment Tool, cluster expansion, assigning custom partitions, removing a broker and improving the replication factor of partitions.

    Integrate Flume with Kafka

    Understanding the need for Kafka Integration, successfully integrating it with Apache Flume, steps in integration of Flume with Kafka as a Source.

    Kafka API

    Detailed understanding of the Kafka and Flume Integration, deploying Kafka as a Sink and as a Channel, introduction to PyKafka API and setting up the PyKafka Environment.

    Producers & Consumers

    Connecting to Kafka using PyKafka, writing your own Kafka producers and consumers, writing a random JSON producer, writing a consumer to read the messages from a topic, writing and working with a file reader producer, writing a consumer to store topic data into a file.
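
    A minimal sketch of the producer/consumer exercises using the PyKafka API introduced above; the broker address, topic name and output file are placeholders.

    import json
    import random
    from pykafka import KafkaClient

    client = KafkaClient(hosts="127.0.0.1:9092")    # placeholder broker
    topic = client.topics[b"demo-topic"]            # placeholder topic

    # A random JSON producer.
    with topic.get_sync_producer() as producer:
        for i in range(5):
            event = {"event_id": i, "value": random.randint(0, 100)}
            producer.produce(json.dumps(event).encode("utf-8"))

    # A consumer that reads the messages back and stores them in a file.
    consumer = topic.get_simple_consumer(consumer_timeout_ms=5000)
    with open("events.log", "w") as out:
        for message in consumer:
            if message is not None:
                out.write(f"{message.offset}\t{message.value.decode('utf-8')}\n")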

    What Hadoop projects will you be working on?
    Project 1 – Working with MapReduce, Hive, Sqoop

    This project involves working with various Hadoop components like MapReduce, Apache Hive and Apache Sqoop. Work with Sqoop to import data from a relational database management system like MySQL into HDFS. Deploy Hive for summarizing data, querying and analysis. Convert SQL queries using HiveQL for deploying MapReduce on the transferred data. You will gain considerable proficiency in Hive and Sqoop after completion of this project.

    Project 2 – Work on MovieLens data for finding top records

    Data – MovieLens dataset

    In this project you will work exclusively on data collected through the publicly available MovieLens rating data sets. The project involves the following important components:

    • You will write a MapReduce program in order to find the top 10 movies by working on the data file
    • Learn to deploy Apache Pig to create the top 10 movies list by loading the data
    • Work with Apache Hive and create the top 10 movies list by loading the data

    Project 3 – Hadoop YARN Project – End to End PoC

    In this project you will work on a live Hadoop YARN project. YARN is part of the Hadoop 2.0 ecosystem that lets Hadoop decouple from MapReduce and deploy more competitive processing and a wider array of applications. You will work on the YARN central Resource Manager. The salient features of this project include:

    • Importing of Movie data
    • Appending the data
    • Using Sqoop commands to bring the data into HDFS
    • End to End flow of transaction data
    • Processing data using MapReduce program in terms of the movie data, etc.

    Project 4 – Partitioning Tables in Hive

    This project involves working with Hive table data partitioning. Ensuring the right partitioning helps to read the data, deploy it on the HDFS, and run the MapReduce jobs at a much faster rate. Hive lets you partition data in multiple ways like:

    • Manual Partitioning
    • Dynamic Partitioning
    • Bucketing

    This will give you hands-on experience in partitioning of Hive tables manually, deploying single SQL execution in dynamic partitioning, bucketing of data so as to break it into manageable chunks.

    Project 5 – Connecting Pentaho with Hadoop Ecosystem

    This project lets you connect Pentaho with the Hadoop ecosystem. Pentaho works well with HDFS, HBase, Oozie and Zookeeper. You will connect the Hadoop cluster with Pentaho data integration, analytics, Pentaho server and report designer. Some of the components of this project include the following:

    • Clear hands-on working knowledge of ETL and Business Intelligence
    • Configuring Pentaho to work with Hadoop Distribution
    • Loading, Transforming and Extracting data into Hadoop cluster

    Project 6 – Multi-node cluster setup

    This is a project that gives you the opportunity to work on a real-world Hadoop multi-node cluster setup in a distributed environment. The major components of this project involve:

    • Running a Hadoop multi-node setup using a 4-node cluster on Amazon EC2
    • Deploying a MapReduce job on the Hadoop cluster

    You will get a complete demonstration of working with various Hadoop cluster master and slave nodes, installing Java as a prerequisite for running Hadoop, installation of Hadoop and mapping the nodes in the Hadoop cluster.

    • Hadoop Multi-Node Cluster Setup using Amazon EC2 – creating a 4-node cluster setup
    • Running MapReduce jobs on the cluster

    Project 7 – Hadoop Testing using MR

    In this project you will gain proficiency in Hadoop MapReduce code testing using MRUnit. You will learn about real world scenarios of deploying MRUnit, Mockito, and PowerMock. Some of the important aspects of this project include:

    • Writing JUnit tests using MRUnit for MapReduce applications
    • Mocking static methods using PowerMock & Mockito
    • MapReduceDriver for testing the map and reduce pair

    After completion of this project you will be well-versed in test-driven development and will be able to write lightweight test units that work specifically on the Hadoop architecture.

    Project 8 – Hadoop Weblog Analytics

    Data – Weblogs

    This project involves making sense of all the web log data in order to derive valuable insights from it. You will work on loading the server data onto a Hadoop cluster using various techniques. The various modules of this project include:

    • Aggregation of log data
    • Processing of the data and generating analytics

    The web log data can include various URLs visited, cookie data, user demographics, location, date and time of web service access, etc. In this project you will transport the data using Apache Flume or Kafka, and handle workflow and data cleansing using MapReduce, Pig or Spark. The insights thus derived can be used for analyzing customer behavior and predicting buying patterns.

    Project 9 – Hadoop Maintenance

    This project involves maintaining and managing the Hadoop cluster. You will work on a number of important tasks like:

    • Administration of distributed file system
    • Checking the file system
    • Working with name node directory structure
    • Audit logging, data node block scanner, balancer
    • Learning about the properties of safe mode
    • Entering and exiting safe mode
    • HDFS federation and high availability
    • Failover, fencing, DISTCP, Hadoop file format
    Apache Spark – Scala Projects

    Project 1 – Movie Recommendation

    Topics – This is a project wherein you will gain hands-on experience in deploying Apache Spark for movie recommendation. You will be introduced to the Spark Machine Learning Library (MLlib) and a guide to MLlib algorithms and coding. Understand how to deploy collaborative filtering, clustering, regression and dimensionality reduction in MLlib. Upon completion of the project you will gain experience in working with streaming data, sampling, testing and statistics.

    Project 2 – Twitter API Integration for Tweet Analysis

    Topics – With this project you will learn to integrate the Twitter API for analyzing tweets. You will write code on the server side using any of the scripting languages like PHP, Ruby or Python to request the Twitter API and get the results in JSON format. You will then read the results and perform various operations like aggregation, filtering and parsing as per the need to come up with tweet analysis.

    Project 3 – Data Exploration Using Spark SQL – Wikipedia Data Set

    Topics – This project lets you work with Spark SQL. You will gain experience in working with Spark SQL for combining it with ETL applications, real-time analysis of data, performing batch analysis, deploying machine learning, creating visualizations and processing of graphs.
    Pentaho Projects

    Project 1– Pentaho Interactive Report

    Data– Sales, Customer, Product

    Objective – In this Pentaho project you will be working exclusively on creating Pentaho interactive reports for sales, customer and product data fields. As part of the project you will learn to create a data source and build a Mondrian cube, which is represented in an XML file. You will gain advanced experience in managing data sources, building and formatting Pentaho reports, changing the report template and scheduling reports.

    Project 2– Pentaho Interactive Report

    Domain– Retail

    Objective – Build a complex dashboard with drill-down reports and charts for analysing business trends.


Course Fee:
USD 1228

Course Type:

Self-Study

Course Status:

Active

Workload:

1 - 4 hours / week
