ETL Tool Training

Intellipaat
Course Summary
Our ETL tools master program will help you gain proficiency in top ETL tools like Informatica, SSIS, OBIEE, Talend, DataStage and Pentaho. You will work on real-world projects in data warehousing, data integration, Hadoop connectivity, data modeling, SCDs and data schemas.
Course Description
About Course
List of Courses included:
- Informatica
- SSIS
- OBIEE
- Talend
- DataStage
- Pentaho
What will you learn in this training course?
- Introduction to ETL and its importance for data warehousing
- Setting up and Installation of various ETL Tools
- Optimizing the ETL tools for the best results based on specific job requirements
- Learning all about OLAP, ETL and OLTP systems
- Mastering data modeling, dimensional modeling
- Working with star and snowflake schemas
- Dimensions, measures and fact tables in data warehousing
- Data types, performance tuning in ETL
- SCD types and various methods of handling SCD data
- Various types of data transformation techniques
- Source qualifier transformation and mappings
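Several of the concepts above — fact tables, dimensions and the star schema — can be made concrete with a minimal sketch. The tables and data below are invented for illustration, and Python's built-in sqlite3 stands in for a real warehouse database:

```python
import sqlite3

# Minimal, hypothetical star schema: one fact table joined to two dimensions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_sales  (product_id INTEGER, date_id INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'Widget'), (2, 'Gadget');
INSERT INTO dim_date    VALUES (10, 2023), (11, 2024);
INSERT INTO fact_sales  VALUES (1, 10, 100.0), (1, 11, 150.0), (2, 11, 75.0);
""")

# A typical star-schema query: aggregate the measures in the fact table,
# grouped by attributes taken from the dimension tables.
rows = cur.execute("""
SELECT p.name, d.year, SUM(f.amount)
FROM fact_sales f
JOIN dim_product p ON f.product_id = p.product_id
JOIN dim_date    d ON f.date_id    = d.date_id
GROUP BY p.name, d.year
ORDER BY p.name, d.year
""").fetchall()
print(rows)  # → [('Gadget', 2024, 75.0), ('Widget', 2023, 100.0), ('Widget', 2024, 150.0)]
```

In a snowflake schema the dimension tables themselves would be normalized into further tables (e.g. `dim_product` referencing a separate `dim_category`), at the cost of extra joins.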
Who should take this ETL training course?
- ETL & SQL Developers and Business Intelligence Professionals
- Database, Data Warehousing, Mainframe Professionals, Project Managers
What are the prerequisites for taking this ETL training course?
Anybody can take this training course. Having a basic knowledge of SQL can be helpful.
Why should you take this training course?
The process of ETL is of absolute importance in any data warehousing and business intelligence scenario. Getting the ETL process right has a direct bearing on the data that is loaded into the data warehouse, which in turn affects the quality of the business intelligence derived and, ultimately, the business insights gained. This Intellipaat training is your one stop for mastering some of the best ETL tools available in the market today. Upon completion of this course you can command top salaries in the ETL, data warehousing and business intelligence domains in leading MNCs around the world.
Course Syllabus
Informatica Course Content
Data Warehousing and Cleansing Concepts
Introduction to data warehousing, what ETL is, an overview of data cleansing, data scrubbing and data aggregation, and what Informatica PowerCenter ETL is.
Informatica Installation and Configuration
Overview of Informatica configuration, Integration Services, installation of Informatica, operational administration activities.
Hands-on Exercise – Install PowerCenter.
Working with active and passive transformation
Learn what active and passive transformations are and the differences between the two.
Working with expression transformation
Learning about expression transformation, a connected passive transformation to calculate a value on a single row.
Hands-on Exercise – Calculate a value on a single row using a connected passive transformation.
Working with Sorter, Sequence Generator, Filter transformation
The different types of transformations like Sorter, Sequence Generator and Filter, the characteristics of each and where they are used.
Hands-on Exercise – Transform data using the Filter technique, use a Sequence Generator, use a Sorter.
Working with Joiner Transformation
Joiner transformation to bring data from heterogeneous data sources.
Hands-on Exercise – Use the Joiner transformation to bring data from heterogeneous data sources.
Working with Ranking and Union Transformation
Understanding the Ranking and Union transformations, their characteristics and deployment.
Hands-on Exercise – Perform Ranking and Union transformations.
Syntax for Rank and Dense Rank
Learn about the rank and dense rank functions and the syntax for them.
Hands-on Exercise – Perform rank and dense rank functions.
Router Transformation
Understanding how the Router transformation works and its key features.
Hands-on Exercise – Perform a Router transformation.
Source Qualifier Transformation and Mappings
Lookup transformation overview and the different types of lookup transformation: connected, unconnected, dynamic and static.
Hands-on Exercise – Perform Lookup transformations: connected, unconnected, dynamic and static.
Slowly Changing Dimension in Informatica
What is SCD?, processing in XML, learning how to handle flat files, listing and defining various transformations, implementing a 'for loop' in PowerCenter, concepts of pushdown optimization and partitioning, what is constraint-based loading?, what is incremental aggregation?
Hands-on Exercise – Load data from a flat file, implement a 'for loop' in PowerCenter, use pushdown optimization and partitioning, do constraint-based data loading, use the incremental aggregation technique to aggregate data.
Mapplet and loading to multiple targets
Different types of designer objects: mapplet, worklet, target load plan, loading to multiple targets, linking property.
Hands-on Exercise – Create a mapplet and a worklet, plan a target load, load multiple targets.
Performance Tuning in Informatica
Objectives of performance tuning, defining performance tuning, learning the sequence for tuning.
Hands-on Exercise – Do performance tuning by following different techniques.
Repository Manager
Managing the repository, the Repository Manager client tool, functionalities of previous versions, important tasks in Repository Manager.
Hands-on Exercise – Manage tasks in Repository Manager.
Best Practices in Informatica
Understanding and adopting best practices for managing the repository.
Workflow in Informatica
Common tasks in Workflow Manager, creating dependencies, the scope of Workflow Monitor.
Hands-on Exercise – Create a workflow with dependencies of nodes.
Parameters & Variables
Defining variables and parameters in Informatica, parameter files and their scope, mapping parameters, worklet and session parameters, workflow and service variables, basic development errors.
Hands-on Exercise – Define variables and parameters in functions, use mapping parameters, use worklet and session parameters, use workflow and service variables.
Error handling and recovery in Informatica
Session and workflow logs, using debuggers, the error-handling framework in Informatica, failover and high availability.
Hands-on Exercise – Debug development errors, read workflow logs, use the error-handling framework.
High Availability & Failover in Informatica
Configurations and mechanisms in recovery, checking the health of the PowerCenter environment.
Hands-on Exercise – Configure recovery options, check the health of the PowerCenter environment.
Working with different utilities in Informatica
infacmd, pmrep, infasetup, processing of flat files.
Hands-on Exercise – Use the commands infacmd, pmrep and infasetup.
Flat file processing (advanced transformations)
Fixed-length and delimited files, expression transformations – sequence numbers, dynamic targeting using transaction control.
Hands-on Exercise – Perform expression transformations – sequence numbers, dynamic targeting using transaction control.
Dynamic targeting
Dynamic targets with the use of transaction control, indirect loading.
Hands-on Exercise – Use transaction control with a dynamic target, indirect loading.
Working with Java transformations
Importance of Java transformations to extend PowerCenter capabilities, transforming data, active and passive mode.
Hands-on Exercise – Use Java transformations to extend PowerCenter capabilities.
Unconnected Stored Procedure usage
Understanding unconnected stored procedures in Informatica, the different scenarios of unconnected stored procedure usage.
Hands-on Exercise – Use unconnected stored procedures in Informatica in different scenarios.
Advanced Concepts in SCD
Use of SQL transformation (active and passive).
Hands-on Exercise – Use SQL transformation (active and passive).
Incremental Data Loading and Aggregation
Understanding incremental loading and aggregation and a comparison between them.
Hands-on Exercise – Do incremental loading and aggregation.
Constraint-based loading
Working with database constraints using PowerCenter, understanding constraint-based loading and target load order.
Hands-on Exercise – Perform constraint-based loading in a given order.
XML Transformation and active lookup
The various types of XML transformation in Informatica, configuring a lookup as active.
Hands-on Exercise – Perform an XML transformation, configure a lookup as active.
Profiling in PowerCenter
Understanding what data profiling in Informatica is, its significance in validating content, ensuring the quality and structure of data as per business requirements.
Hands-on Exercise – Create a data profile in Informatica and validate the content.
Workflow Creation and Deletion
Understanding a workflow as a group of instructions/commands for the Integration Service, learning how to create and delete workflows in Informatica.
Hands-on Exercise – Create and delete a workflow in Informatica.
Database Connection
Understanding database connections, creating a new database connection in Informatica, the various steps involved.
Hands-on Exercise – Create a new database connection in Informatica.
Relational Database Tables
Working with relational database tables in Informatica, mapping for loading data from flat files to relational database tables.
Hands-on Exercise – Create a mapping for loading data from flat files to relational database tables.
LinkedIn Connection
Understanding how to deploy PowerCenter for seamless LinkedIn connectivity with Informatica PowerCenter.
Hands-on Exercise – Deploy PowerCenter for seamless LinkedIn connectivity.
Connection with Sources
Connecting Informatica PowerCenter with various data sources such as social media channels like Facebook and Twitter.
Hands-on Exercise – Connect Informatica PowerCenter with data sources such as Facebook and Twitter.
Pushdown optimization & Partitioning
Pushdown optimization for load balancing on the server for better performance, the various types of partitioning for optimizing performance.
Hands-on Exercise – Optimize using the pushdown technique for load balancing on the server, create various types of partitioning for optimizing performance.
Cache management
Understanding the session cache, the importance of cache creation, implementing the session cache, calculating the cache requirement.
Hands-on Exercise – Implement cache creation, work with the session cache.
MSBI SSIS Course Content
What is BI?
Introduction to Business Intelligence, understanding the concept of data modeling, data cleaning, learning about data analysis, data representation and data transformation.
ETL Overview
Introduction to ETL, the various steps involved – Extract, Transform, Load – using a user's email ID to read a flat file, extracting the user ID from the email ID, loading the data into a database table.
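The ETL flow just described — reading email IDs from a flat file, extracting the user ID, and loading it into a database table — can be sketched outside SSIS. This is an illustrative Python version using only the standard library; the file contents and table name are made up:

```python
import csv
import io
import sqlite3

# Hypothetical flat-file input: one email address per row.
flat_file = io.StringIO("email\nalice@example.com\nbob@example.com\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id TEXT, email TEXT)")

# Extract: read the flat file. Transform: take the part before '@'.
# Load: insert the result into the target table.
for row in csv.DictReader(flat_file):
    user_id = row["email"].split("@", 1)[0]
    conn.execute("INSERT INTO users VALUES (?, ?)", (user_id, row["email"]))
conn.commit()

print(conn.execute("SELECT user_id FROM users ORDER BY user_id").fetchall())
# → [('alice',), ('bob',)]
```

In SSIS the same three phases would be a Flat File Source, a Derived Column transformation, and an OLE DB Destination.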
Working with Connection Managers
Introduction to Connection Managers – the logical representation of a connection, the various types of Connection Managers – flat file, database, understanding how to load faster with OLE DB, comparing the performance of OLE DB and ADO.NET, learning about Bulk Insert, working with Excel Connection Managers and identifying the problems.
Data Transformations
Learning what data transformation is, converting data from one format to another, understanding the concepts of Character Map, Data Column and Copy Column transformations, Import and Export Column transformations, Script and OLE DB Command transformations, understanding Row Sampling, Aggregate and Sort transformations, Percentage and Row Sampling.
Advanced Data Transformation
Understanding Pivot and Unpivot transformations, understanding Audit and Row Count transformations, working with Split and Join transformations, studying Lookup and Cache transformations.
Slowly Changing Dimensions
Understanding data that slowly changes over time, learning the process of how new data is written over old data, best practices, and a detailed explanation of the three types of SCDs – Type 1, Type 2 and Type 3 – and their differences.
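The difference between the SCD types can be sketched in a few lines. Below is a hypothetical Type 2 update in plain Python: instead of overwriting the old value (as Type 1 would), the current row is expired and a new version is appended, preserving history. The field names, dates and data are invented for illustration:

```python
from datetime import date

# Hypothetical dimension table; is_current marks the active version of a row.
dim_customer = [
    {"customer_id": 1, "city": "Pune", "valid_from": date(2020, 1, 1),
     "valid_to": None, "is_current": True},
]

def scd_type2_update(dim, customer_id, new_city, change_date):
    """SCD Type 2: expire the current row and append a new version,
    keeping full history. (Type 1 would simply overwrite 'city';
    Type 3 would keep one 'previous_city' column instead of full history.)"""
    for row in dim:
        if row["customer_id"] == customer_id and row["is_current"]:
            row["valid_to"] = change_date   # close out the old version
            row["is_current"] = False
    dim.append({"customer_id": customer_id, "city": new_city,
                "valid_from": change_date, "valid_to": None, "is_current": True})

scd_type2_update(dim_customer, 1, "Mumbai", date(2023, 6, 1))
print(len(dim_customer))  # → 2 (history preserved)
print([r["city"] for r in dim_customer if r["is_current"]])  # → ['Mumbai']
```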
Overview of Fuzzy Lookup Transformation, Lookup and Term Extraction
Understanding how the Fuzzy Lookup transformation varies from the Lookup transformation, the concept of fuzzy matching.
Concepts of Logging & Configuration
Learning about error-row configuration, package logging, defining package configuration, understanding constraints and event handlers.
OBIEE Course Content
Data Modeling Concepts
Introduction to OBIEE, installation of OBIEE, what data models are and why you need them, the scope, reach and benefits of data modeling, data warehousing, a sample OBIEE report, the business requirement intrinsic in data modeling, various case studies, the data modeling implications and the impact of data modeling on business intelligence.
Business Intelligence Concepts
Introduction to Business Intelligence, the architecture of data flow, OBIEE architecture, stack description of BI technology, BI Server, BI Scheduler, displaying reports with data, the need for reporting in business, the distinction between OLTP and OLAP, the BI platform in the BI technology stack, the product and dimension hierarchy, multidimensional and relational analytical processing, types of reports, multidimensional modelling.
Understanding OLAP
Online Analytical Processing, the OBIEE admin tools, RPD, the important concepts and terminology, the significance of OLAP in the business intelligence life cycle, understanding various data schemas like star, snowflake and constellation, designing with the Star Schema, creation of the physical layer and a simple RPD, the enterprise information model, aggregate and calculated measures.
Oracle Business Intelligence Suite
Introduction to Oracle Business Intelligence Enterprise Edition, overview of the OBIEE product, the architecture of OBIEE, key features and components, creating a simple report, business model, hierarchy, presentation and mapping.
Oracle BI Repository
Understanding what the Oracle Business Intelligence Repository is, installation of OBIEE on a Windows system, directory structure installation, services, analytics and interactive reporting, dashboard creation, multiple report creation, formula editing, altering column properties.
BI Repository Business Model
Understanding how to build the Business Model and Mapping Layer in the BI Repository, creating the Presentation Layer, formatting of data, conditional formatting, saving the report, creating and sharing folders. Topics – data format, conditional format, removing filters, Like, Advanced, saving the report, shared folder and my folder, creating a new folder.
Business Model Continued
Working with the Enterprise Manager, testing and validating the Repository, cache disabling, dashboard prompts, filtering, editing a dashboard with an action link, the waterfall model.
Working with Repository
Working with the Repository, creating a test report, adding calculations, deploying OBIEE analysis, coming up with the landing page UI and its features, repository variables, session and presentation variables.
BI Presentation Catalog
Learning about the Oracle BI Presentation Catalog, accessing and managing objects, report archiving and exporting, data grouping and limiting in analyses, data formatting, conditional formatting, master-detail reports, report creation with multiple subject areas, data mashup, Visual Analyzer, performance tiles, BI functionality, the waterfall model, graphs, pivot tables, pie charts, KPI watchlists.
Dashboard Creation
The OBIEE dashboard setup, basics of dashboards and dashboard pages, deploying Dashboard Builder for building dashboards, editing, sharing and saving dashboard analyses, cache creation and clearing, ODBC functions in OBIEE, Logical Table Source, summary and detail reports.
OBIEE Security & Management
Securing the Oracle Business Intelligence Suite with Enterprise Manager, creating alerts, managing grouping and maintenance, administration, the various types of security in OBIEE, object, task and folder level security, report scheduling.
Talend For Hadoop Course Content
Getting started with Talend
Working of Talend, introduction to Talend Open Studio and its usability, what is metadata?
Jobs
Creating a new Job, concept and creation of delimited files, using metadata and its significance, what is propagation?, data integration schemas, creating Jobs using tFilterRow and string filters, input delimited file creation.
Overview of Schema and Aggregation
Job design and its features, what is a tMap?, data aggregation, introduction to tReplicate and its working, significance and working of tLogRow, tMap and its properties.
Connectivity with Data Source
Extracting data from the source, source and target in a database (MySQL), creating a connection, importing a schema or metadata.
Getting started with Routines/Functions
Calling and using functions, what are routines?, use of XML files in Talend, working of format-data functions, what is type casting?
Data Transformation
Defining context variables, learning parameterization in ETL, writing an example using tRowGenerator, defining and implementing sorting, what is an aggregator?, using tFlow for publishing data, running a Job in a loop.
Connectivity with Hadoop
Learning to start the Thrift Server, connecting the ETL tool with Hadoop, defining the ETL method, implementation of Hive, data import into Hive with an example, an example of partitioning in Hive, the reason behind not overwriting the customer table, components of ETL, Hive vs. Pig, data loading using demo customer data, ETL tools, parallel data execution.
Introduction to Hadoop and its Ecosystem, MapReduce and HDFS
Big Data and the factors constituting Big Data, Hadoop and the Hadoop ecosystem, MapReduce – concepts of map, reduce, ordering, concurrency, shuffle and reducing, Hadoop Distributed File System (HDFS) concepts and their importance, deep dive into MapReduce – execution framework, partitioner, combiner, data types, key-value pairs, HDFS deep dive – architecture, data replication, NameNode, DataNode, data flow, parallel copying with DistCp, Hadoop archives.
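The map-shuffle-reduce sequence listed above can be illustrated with a toy, in-memory word count in Python. This is a sketch of the concept only, not of Hadoop's actual execution framework; the documents are invented:

```python
from itertools import groupby

documents = ["big data big insight", "data pipeline"]

# Map phase: emit a (word, 1) key-value pair for every word.
def map_phase(doc):
    for word in doc.split():
        yield (word, 1)

# Shuffle phase: group intermediate pairs by key (the framework sorts them).
pairs = sorted(p for doc in documents for p in map_phase(doc))

# Reduce phase: sum the values for each key.
counts = {key: sum(v for _, v in group)
          for key, group in groupby(pairs, key=lambda p: p[0])}
print(counts)  # → {'big': 2, 'data': 2, 'insight': 1, 'pipeline': 1}
```

In real Hadoop the map and reduce tasks run on different nodes and the shuffle moves data across the network; the grouping-by-key invariant is the same.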
Hands-on Exercises
Installing Hadoop in pseudo-distributed mode, understanding the important configuration files, their properties and daemon threads, accessing HDFS from the command line.
MapReduce – basic exercises, understanding the Hadoop ecosystem, introduction to Sqoop – use cases and installation, introduction to Hive – use cases and installation, introduction to Pig – use cases and installation, introduction to Oozie – use cases and installation, introduction to Flume – use cases and installation, introduction to YARN.
Mini Project – Importing MySQL data using Sqoop and querying it using Hive.
Deep Dive in MapReduce
How to develop a MapReduce application, writing unit tests, best practices for developing and writing, debugging MapReduce applications, joining data sets in MapReduce.
Hive
A. Introduction to Hive
What is Hive?, Hive schema and data storage, comparing Hive to traditional databases, Hive vs. Pig, Hive use cases, interacting with Hive.
B. Relational Data Analysis with Hive
Hive databases and tables, basic HiveQL syntax, data types, joining data sets, common built-in functions. Hands-On Exercise: Running Hive queries on the shell, in scripts and in Hue.
C. Hive Data Management
Hive data formats, creating databases and Hive-managed tables, loading data into Hive, altering databases and tables, self-managed tables, simplifying queries with views, storing query results, controlling access to data. Hands-On Exercise: Data management with Hive.
D. Hive Optimization
Understanding query performance, partitioning, bucketing, indexing data.
E. Extending Hive
Topics: User-Defined Functions.
F. Hands-on Exercises – Working with huge data sets and querying extensively.
G. User-Defined Functions, optimizing queries, tips and tricks for performance tuning.
Pig
A. Introduction to Pig
What is Pig?, Pig's features, Pig use cases, interacting with Pig.
B. Basic Data Analysis with Pig
Pig Latin syntax, loading data, simple data types, field definitions, data output, viewing the schema, filtering and sorting data, commonly used functions. Hands-On Exercise: Using Pig for ETL processing.
C. Processing Complex Data with Pig
Complex/nested data types, grouping, iterating grouped data. Hands-On Exercise: Analyzing data with Pig.
D. Multi-Data Set Operations with Pig
Techniques for combining data sets, joining data sets in Pig, set operations, splitting data sets. Hands-On Exercise.
E. Extending Pig
Macros and imports, UDFs, using other languages to process data with Pig. Hands-On Exercise: Extending Pig with streaming and UDFs.
F. Pig Jobs
Impala
A. Introduction to Impala
What is Impala?, how Impala differs from Hive and Pig, how Impala differs from relational databases, limitations and future directions, using the Impala shell.
B. Choosing the best (Hive, Pig, Impala)
Major Project – Putting It All Together and Connecting the Dots
Working with large data sets, steps involved in analyzing large data.
ETL Connectivity with Hadoop Ecosystem
How ETL tools work in the big data industry, connecting to HDFS from an ETL tool and moving data from the local system to HDFS, moving data from a DBMS to HDFS, working with Hive from an ETL tool, creating a MapReduce job in an ETL tool, an end-to-end ETL PoC showing Hadoop integration with the ETL tool.
Job and Certification Support
Major project, Hadoop development, Cloudera certification tips and guidance, mock interview preparation, practical development tips and techniques, certification preparation.
DataStage Course Content
Information Server
Introduction to the IBM Information Server architecture, the Server Suite components, the various tiers in the Information Server.
InfoSphere DataStage
Understanding IBM InfoSphere DataStage, the job life cycle to develop, test, deploy and run data jobs, the high-performance parallel framework, real-time data integration.
DataStage Features
Introduction to the design elements, various DataStage jobs, creating a massively parallel framework, scalable ETL features, working with DataStage jobs.
DataStage Job
Understanding the DataStage job, creating a job that can effectively extract, transform and load data, cleansing and formatting data to improve its quality.
Parallelism, Partitioning and Collecting
Learning about data parallelism – pipeline parallelism and partitioning parallelism, the two types of data partitioning – key-based partitioning and keyless partitioning, a detailed understanding of partitioning techniques like round robin, entire, hash key, range and DB2 partitioning, data collecting techniques and types like round robin, ordered, sorted merge and same collecting methods.
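The distinction between keyless and key-based partitioning can be sketched in a few lines of Python. This is a conceptual illustration, not actual DataStage behavior: `ord(key) % N` stands in for a real hash function, and the row data is invented.

```python
# Toy rows keyed by a single character.
rows = [{"key": k, "val": i} for i, k in enumerate("ABABCA")]
N = 2  # number of partitions

# Keyless round-robin partitioning: rows are dealt out evenly in turn,
# so partition sizes stay balanced regardless of the data.
round_robin = [[] for _ in range(N)]
for i, row in enumerate(rows):
    round_robin[i % N].append(row)

# Key-based hash partitioning: the partition is a function of the key,
# so every row with the same key lands on the same partition.
hash_part = [[] for _ in range(N)]
for row in rows:
    hash_part[ord(row["key"]) % N].append(row)

print([len(p) for p in round_robin])                     # → [3, 3]
print([sorted({r["key"] for r in p}) for p in hash_part])  # → [['B'], ['A', 'C']]
```

The trade-off shown here is the general one: round robin balances load but scatters keys, while hash keeps keys together (needed for joins and aggregations) at the risk of skewed partition sizes.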
Job Stages of InfoSphere DataStage
Understanding the various job stages – data source, transformer, final database – and the various parallel stages – general objects, debug and development stages, processing stage, file stage types, database stage, real-time stage, restructure stage, data quality and sequence stages of InfoSphere DataStage.
Stage Editor
Understanding the parallel job stage editors, the important types of stage editors in DataStage.
Sequential File
Working with the Sequential File stages, understanding runtime column propagation, working with RCP in Sequential File stages, using the Sequential File stage as a source stage and as a target stage.
Dataset and Fileset
Understanding the difference between a dataset and a fileset and how DataStage works in each scenario.
Sample Job Creation
Creation of a sample DataStage job using the dataset and fileset types of data.
Properties of Sequential File Stage and Data Set Stage
Learning about the various properties of the Sequential File stage and the Data Set stage.
Lookup File Set Stage
Creating a lookup file set, working in a parallel or sequential stage, learning about the single input and output link.
Transformer Stage
Studying the Transformer stage in DataStage, the basic working of this stage, its characteristics – a single input, any number of outputs and a reject link – how it differs from other processing stages, the significance of the Transformer Editor, and the evaluation sequence in this stage.
Transformer Stage Functions & Features
Deep dive into Transformer functions – string, type conversion, null handling, mathematical and utility functions – understanding the various features like constraints, system variables, conditional job aborting, operators and the Trigger tab.
Looping Functionality
Understanding the looping functionality in the Transformer stage, output with multiple rows for a single input row, the procedure for looping, loop variable properties.
Teradata Enterprise Stage
Connecting to the Teradata Enterprise stage, properties of the connection.
Single partition and parallel execution
Generating data using the Row Generator sequentially in a single partition, configuring it to run in parallel.
Aggregator Stage
Understanding the Aggregator stage in DataStage, the two types of aggregation – hash mode and sort mode.
Different Stages of Processing
A deep dive into the various stages in DataStage, the importance of the Copy, Filter and Modify stages in reducing the number of Transformer stages.
Parameters and Value File
Understanding Parameter Sets, storing DataStage and QualityStage job parameters and default values in files, the procedure to deploy the Parameter Sets function and its advantages.
Funnel Stage
Introduction to the Funnel stage, copying multiple input data sets into a single output data set, the three modes – continuous funnel, sort funnel and sequence.
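Two of the funnel modes can be mimicked with Python's standard library as a rough sketch. The inputs are toy lists; in a real parallel engine the continuous funnel interleaves rows as they arrive, whereas this sketch simply concatenates them:

```python
import heapq
from itertools import chain

# Two input "links", each already sorted on the key.
input_a = [1, 4, 7]
input_b = [2, 3, 9]

# Continuous funnel: combine inputs into one output without ordering
# guarantees (here, plain concatenation stands in for interleaving).
continuous = list(chain(input_a, input_b))

# Sort funnel: merge inputs that are pre-sorted on the key,
# producing a single globally sorted output.
sort_funnel = list(heapq.merge(input_a, input_b))

print(continuous)   # → [1, 4, 7, 2, 3, 9]
print(sort_funnel)  # → [1, 2, 3, 4, 7, 9]
```

The third mode, sequence, would copy all rows from the first input, then all rows from the second, and so on, in a fixed link order.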
Join Stage
Understanding the Join stage and its types, Join stage partitioning, performing various join operations.
Lookup Stage
Understanding the Lookup stage for processing using lookup operations, knowing when to use the Lookup stage, the partitioning method for the Lookup stage, comparing normal and sparse lookup, doing a lookup for a range of values using Range Lookup.
Merge Stage
Learning about the Merge stage, multiple input links and a single output link, the need for key-partitioned and sorted input data sets, specifying several reject links in the Merge stage, comparing the Join vs. Lookup vs. Merge stages of processing.
FTP Enterprise Stage
Studying the FTP Enterprise stage, transferring multiple files in parallel, invoking the FTP client, transferring to or from a remote host using the FTP protocol, FTP Enterprise stage properties.
Sort Stage
Understanding the Sort stage, performing complex sort operations, learning about stable sort, removing duplicates.
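A stable sort followed by duplicate removal, as performed by the Sort stage, can be sketched in Python. The field names and rows are invented; Python's `sorted()` is guaranteed stable, which parallels the stage's Stable Sort option:

```python
rows = [
    {"dept": "HR", "name": "Asha"},
    {"dept": "IT", "name": "Ben"},
    {"dept": "HR", "name": "Carl"},
]

# Stable sort on the key: rows with equal keys keep their input order,
# so Asha still precedes Carl within dept HR.
stable = sorted(rows, key=lambda r: r["dept"])

# Remove duplicates on the key by keeping the first row per key value.
seen, deduped = set(), []
for r in stable:
    if r["dept"] not in seen:
        seen.add(r["dept"])
        deduped.append(r)

print([r["name"] for r in stable])   # → ['Asha', 'Carl', 'Ben']
print([r["name"] for r in deduped])  # → ['Asha', 'Ben']
```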
Teradata Connector
Working with the Teradata Connector in DataStage, configuring it as a source, target or in a parallel lookup context for parallel or server jobs, learning about the Teradata Parallel Transporter direct API for bulk operations and the operators deployed.
Connector Stages
Learning about the various database Connector stages for working with the Balanced Optimization tool.
ABAP Extract Stage
Understanding the ABAP Extract stage, extracting data from SAP data repositories, generating ABAP extraction programs, executing SQL queries and sending data to the DataStage Server.
Development/Debug Stages
The various stages for debugging parallel job designs, controlling the flow of multiple activities in a job sequence, understanding the various data sampling stages in a Debug/Development stage like the Head stage, Tail stage and Sample stage.
Job Activity Stage
Learning about the Job Activity stage, which specifies a DataStage server or parallel job to execute.
Pentaho Course Content
Introduction to Pentaho Tool
The Pentaho User Console, overview of Pentaho Business Intelligence and Analytics tools, database dimensional modelling, using a Star Schema for querying large data sets, understanding fact tables and dimension tables, Snowflake Schema, principles of Slowly Changing Dimensions, knowledge of how high availability is supported for the DI server and BA server, managing Pentaho artifacts, knowledge of big data solution architectures.
Hands-on Exercise – Schedule a report using the User Console, create a model using database dimensional modeling techniques, create a Star Schema for querying large data sets, use fact tables and dimension tables, manage Pentaho artifacts.
Data Architecture
Designing data models for reporting, Pentaho support for predictive analytics, designing a Streamlined Data Refinery (SDR) solution for a client.
Hands-on Exercise – Design data models for reporting, perform predictive analytics on a data set, design a Streamlined Data Refinery (SDR) solution for a dummy client.
Clustering in Pentaho
Understanding the basics of clustering in Pentaho Data Integration, creating a database connection, moving a CSV file input to table output and Microsoft Excel output, moving from Excel to data grid and log.
Hands-on Exercise – Create a database connection, move a CSV file input to table output and Microsoft Excel output, move data from Excel to data grid and log.
Data Transformation
The Pentaho Data Integration transformation steps: adding a sequence, understanding the calculator, Pentaho number range, string replace, selecting field values, sorting and splitting rows, string operations, unique rows and value mapper, usage of metadata injection.
Hands-on Exercise – Practice the various steps to perform a data integration transformation, add a sequence, use the calculator, work on number range, selecting field values, sorting and splitting rows, string operations, unique rows and value mapper, use metadata injection.
Pentaho Flow
Working with the secure socket command, Pentaho null value and error handling, Pentaho mail, row filter and priorities stream.
Hands-on Exercise – Work with the secure socket command, handle null values in the data, perform error handling, send email, get row-filtered data, set stream priorities.
Deploying SCD
Understanding Slowly Changing Dimensions, making ETL dynamic, dynamic transformation, creating folders, scripting, bulk loading, file management, working with Pentaho file transfer, repository, XML, utility and file encryption.
Hands-on Exercise – Make ETL transformation dynamic, create folders, write scripts, load bulk data, perform file management operations, work with Pentaho file transfer, XML utility and file encryption.
Type of Repository in PentahoCreating dynamic ETL, passing variable and value from job to transformation, deploying parameter with transformation, importance of Repository in Pentaho, database connection, environmental variable and repository import.Hands-on Exercise – Create dynamic ETL, pass variable and value from job to transformation, deploy parameter with transformation, connect to a database, set pentaho environmental variables, import a repository in the pentaho workspace
Pentaho Repository & Report DesigningWorking with Pentaho dashboard and Report, effect of row bending, designing a report, working with Pentaho Server, creation of line, bar and pie chart in Pentaho, How to achieve localization in reportsHands-on Exercise – Create Pentaho dashboard and report, check effect of row bending, design a report, work with Pentaho Server, create line, bar and pie chart in Pentaho, Implement localization in a report
Pentaho DashboardWorking with Pentaho Dashboard, passing parameters in Report and Dashboard, drill-down of Report, deploying Cubes for report creation, working with Excel sheet, Pentaho data integration for report creation.Hands-on Exercise – Pass parameters in Report and Dashboard, deploy Cubes for report creation, drill-down in report to understand the entries, import data from an excel sheet, Perform data integration for report creation
Understanding CubeWhat is a Cube? Creation and benefit of Cube, working with Cube, Report and Dashboard creation with Cube.Hands-on Exercise – Create a Cube, create report and dashboard with Cube
Multi Dimensional ExpressionUnderstanding the basics of Multi Dimensional Expression (MDX), basics of MDX, understanding Tuple, its implicit dimensions, MDX sets, level, members, dimensions referencing, hierarchical navigation, and meta data.Hands-on Exercise – Work with MDX, Use MDX sets, level, members, dimensions referencing, hierarchical navigation, and meta data
Pentaho AnalyzerPentaho analytics for discovering, blending various data types and sizes, including advanced analytics for visualizing data across multiple dimensions, extending Analyzer functionality, embedding BA server reports, Pentaho REST APIsHands-on Exercise – Blend various data types and sizes, Perform advanced analytics for visualizing data across multiple dimensions, Embed BA server report
Pentaho Data Integration (PDI) DevelopmentKnowledge of the PDI steps used to create an ETL job, Describing the PDI steps to create an ETL transformation, Describing the use of property filesHands-on Exercise – Create an ETL transformation using PDI steps, Use property filesHadoop ETL ConnectivityDeploying ETL capabilities for working on the Hadoop ecosystem, integrating with HDFS and moving data from local file to distributed file system, deploying Apache Hive, designing MapReduce jobs, complete Hadoop integration with ETL tool.Hands-on Exercise – Deploy ETL capabilities for working on the Hadoop ecosystem, Integrate with HDFS and move data from local file to distributed file system, deploy Apache Hive, design MapReduce jobs
Creating Dashboards in Pentaho
Creating interactive dashboards that present highly graphical representations of data for improving key business performance.
Hands-on Exercise – Create an interactive dashboard with graphical representations of data
Performance Tuning
Managing BA Server logging, tuning Pentaho reports, monitoring the performance of a job or a transformation, auditing in Pentaho.
Hands-on Exercise – Manage logging in the BA Server, fine-tune a Pentaho report, monitor the performance of an ETL job
Security
Integrating user security with other enterprise systems, extending BA Server content security, securing data, Pentaho's support for multi-tenancy, using Kerberos with Pentaho.
Hands-on Exercise – Configure security settings to implement high-level security
Informatica Project
Project – Working with Admin Console
This project will help you understand the Informatica Admin Console. You will work on a real-world scenario of creating and managing roles and responsibilities for various users and designing sessions and workflows in the Informatica ETL environment. The project also covers assigning users to groups, creating a collaborative environment, and lock handling on Informatica repository objects, so that you develop complete mastery of working with the Informatica tool.
SSIS Project
Project – Configuration and Logging
Topics – In this SQL Server Integration Services (SSIS) project you will work extensively on loading data from multiple heterogeneous sources into SQL Server. As part of the project you will learn to clean and standardize data and automate administrative work. Some of the tasks you will perform are adding logging to an SSIS package and saving its configuration to an XML file. Upon completion of the project you will have hands-on experience in handling constraints, error row configuration, and event handlers.
OBIEE Project
Project – Report Formatting
Data – Revenue
Topics – This Oracle Business Intelligence project involves creating complex dashboards and formatting reports. You will gain hands-on experience in filtering and sorting report data according to business requirements. The project will also help you understand how to convert data into graphs for easy visualization and analysis. As part of the project you will gain experience in calculating subtotals and grand totals in a business scenario while finding the revenue generated.
Talend for Hadoop Project
Project Work
1. Project – Jobs
Problem Statement – Create a job using metadata. This includes the following actions:
Create an XML file, create a delimited file, create an Excel file, create a database connection
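One of the actions above, creating a delimited file, can be sketched outside Talend to show what the metadata-driven component ultimately produces. The rows and delimiter here are invented for illustration:

```python
import csv
import io

# Illustrative only: Talend generates this kind of output through its
# metadata-driven file components; here we write a ";"-delimited file
# by hand using Python's csv module. Data is made up.
rows = [["id", "name"], ["1", "Ana"], ["2", "Bo"]]

buf = io.StringIO()            # stands in for a file on disk
csv.writer(buf, delimiter=";").writerows(rows)
print(buf.getvalue())
```

In Talend the schema (columns, delimiter, encoding) lives in repository metadata, so the same definition is reused across jobs instead of being hard-coded.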
2. Hadoop Projects
A. Project – Working with MapReduce, Hive, Sqoop
Problem Statement – Import MySQL data using Sqoop, query it using Hive, and run the word count MapReduce job.
B. Project – Connecting Pentaho with the Hadoop Ecosystem
Problem Statement – It includes:
Quick overview of ETL and BI, configuring Pentaho to work with a Hadoop distribution, loading data into the Hadoop cluster, transforming data in the Hadoop cluster, extracting data from the Hadoop cluster
DataStage Project
Project – SCD2 Implementation
Data – Supplier data
Topics – This project involves working with Slowly Changing Dimension type 2 (SCD2), where the entire history is stored in the database. You will learn how to create a surrogate key generator for implementing SCD, which involves creating additional dimension rows and segmenting old and new values for extraction.
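The core SCD2 move — end-date the old version, insert a new row under a fresh surrogate key — can be sketched in a few lines. This is an illustrative toy, not DataStage code; the supplier attribute and dates are invented:

```python
from datetime import date

# Illustrative SCD type 2: when a tracked attribute changes, the current
# row is expired and a new row with a new surrogate key becomes current,
# so the full history stays in the dimension table.
dim = []          # the supplier dimension table
next_key = [1]    # stand-in for a surrogate key generator

def apply_scd2(natural_key, city, as_of):
    current = next((r for r in dim
                    if r["supplier_id"] == natural_key and r["current"]), None)
    if current and current["city"] == city:
        return                      # no change: nothing to do
    if current:                     # expire the old version
        current["current"] = False
        current["end_date"] = as_of
    dim.append({"sk": next_key[0], "supplier_id": natural_key, "city": city,
                "start_date": as_of, "end_date": None, "current": True})
    next_key[0] += 1

apply_scd2("S1", "Pune",  date(2023, 1, 1))
apply_scd2("S1", "Delhi", date(2023, 6, 1))  # change -> old row expired, new row added
print(len(dim))  # 2
```

Segmenting old and new values this way is what lets reports ask "what was this supplier's city as of last March?" rather than only seeing the latest value.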
Pentaho Projects
Project 1 – Pentaho Interactive Report
Data – Sales, Customer, Product
Objective – In this Pentaho project you will work exclusively on creating Pentaho interactive reports for sales, customer, and product data fields. As part of the project you will learn to create a data source and build a Mondrian cube, which is represented in an XML file. You will gain advanced experience in managing data sources, building and formatting Pentaho reports, changing the report template, and scheduling reports.
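The Mondrian cube mentioned above is declared in an XML schema file. A minimal sketch of its shape is below; every table, column, and name is hypothetical, chosen only to show the structure:

```xml
<!-- Illustrative Mondrian schema fragment; all names are invented -->
<Schema name="SalesSchema">
  <Cube name="SalesCube">
    <Table name="fact_sales"/>                       <!-- the fact table -->
    <Dimension name="Product" foreignKey="product_id">
      <Hierarchy hasAll="true" primaryKey="product_id">
        <Table name="dim_product"/>
        <Level name="Product Name" column="product_name"/>
      </Hierarchy>
    </Dimension>
    <Measure name="Sales" column="sales_amount" aggregator="sum"/>
  </Cube>
</Schema>
```

Pentaho reads this file to know which fact columns are measures and how dimension tables join in, so reports and dashboards can slice the cube without hand-written SQL.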
Project 2 – Pentaho Interactive Report
Domain – Retail
Objective – Build a complex dashboard with drill-down reports and charts for analyzing business trends.
Project 3 – Pentaho Interactive Report
Domain – BI
Objective – Perform automation testing in an ETL environment: check the correctness of data transformations, verify data loading into the data warehouse without any loss or truncation, reject, replace, and report invalid data, and create unit tests to target exceptions.
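The kinds of checks this project targets can be sketched as ordinary unit tests. The `transform` function and its rules below are assumptions invented for the example, not course-provided code:

```python
import unittest

# Illustrative ETL validation: transform() trims names and casts amounts,
# returning None for invalid rows so they are rejected and reported
# instead of loaded. All names and rules here are made up.
def transform(row):
    try:
        return {"name": row["name"].strip(), "amount": int(row["amount"])}
    except (ValueError, KeyError):
        return None  # rejected row

class EtlTests(unittest.TestCase):
    def test_transformation_correctness(self):
        self.assertEqual(transform({"name": " Ana ", "amount": "10"}),
                         {"name": "Ana", "amount": 10})

    def test_invalid_data_is_rejected(self):
        self.assertIsNone(transform({"name": "Bo", "amount": "oops"}))

    def test_load_without_loss_or_truncation(self):
        source = [{"name": "Ana", "amount": "10"}, {"name": "Bo", "amount": "20"}]
        loaded = [t for t in map(transform, source) if t is not None]
        self.assertEqual(len(loaded), len(source))  # no valid rows lost

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(EtlTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In practice the same pattern is pointed at staging tables: one test per transformation rule, plus row-count reconciliation between source and target to catch silent loss or truncation.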