Kafka Training

Intellipaat
Course Summary
Our Apache Kafka training course will give you hands-on experience in mastering the real-time stream processing platform. We provide the best online classes to help you learn the Kafka API, architecture, configuration, installation, and integration with Hadoop, Spark, and Storm. You will also work on real-life projects.
Course Description
About Kafka Training Course
What you will learn in this Kafka training?
- Kafka characteristics and salient features
- Kafka cluster deployment on Hadoop and YARN
- Understanding real-time Kafka streaming
- Introduction to the Kafka API
- Storing records with Kafka in a fault-tolerant way
- Producing and consuming messages from feeds such as Twitter
- Solving Big Data problems in messaging systems
- Kafka's high throughput, scalability, durability and fault tolerance
- Deploying Kafka in real world business scenarios
Who should take this Kafka training course?
- Big Data Hadoop Developers, Architects and other professionals
- Testing Professionals, Project Managers, and Messaging and Queuing System professionals
What are the prerequisites for taking this training course?
Anybody can take this training course. Having a background in Java is beneficial.
Why should you take this Apache Kafka training?
Apache Kafka is a powerful distributed streaming platform for working with extremely large volumes of data. A single Kafka broker can handle hundreds of megabytes of reads and writes per second from a large number of clients. Kafka is highly scalable and offers exceptionally high throughput, making it ideal for enterprises tackling Big Data problems in messaging systems. This Intellipaat training will fully equip you to work in challenging roles in the Apache Kafka domain for top salaries.
Course Syllabus
Kafka Course Content
What is Kafka – An Introduction
Understanding what Apache Kafka is, the various components and use cases of Kafka, and implementing Kafka on a single node.
Multi-Broker Kafka Implementation
Learning the Kafka terminology, deploying a single-node Kafka with an independent ZooKeeper, adding replication in Kafka, working with partitioning and brokers, understanding Kafka consumers, the Kafka writes terminology, and various failure-handling scenarios in Kafka.
Multi-Node Cluster Setup
Introduction to multi-node cluster setup in Kafka, the various administration commands, leadership balancing and partition rebalancing, graceful shutdown of Kafka brokers and tasks, working with the Partition Reassignment Tool, cluster expansion, assigning custom partitions, removing a broker, and improving the replication factor of partitions.
Integrate Flume with Kafka
Understanding the need for Kafka integration, successfully integrating it with Apache Flume, and the steps for integrating Flume with Kafka as a source.
Kafka API
Detailed understanding of the Kafka and Flume integration, deploying Kafka as a sink and as a channel, introduction to the PyKafka API, and setting up the PyKafka environment.
Producers & Consumers
Connecting to Kafka using PyKafka, writing your own Kafka producers and consumers, writing a random JSON producer, writing a consumer to read messages from a topic, writing and working with a file-reader producer, and writing a consumer to store topic data in a file.
Kafka Project
Type: Multi-Broker Kafka Implementation
Topics: In this project you will learn about Apache Kafka, a platform for handling real-time data feeds. You will work exclusively with Kafka brokers, understand partitioning, Kafka consumers, the terminology used for Kafka writes and failure handling in Kafka, and learn how to deploy a single-node Kafka with an independent ZooKeeper.
Upon completing the project you will have gained considerable experience working in a real-world scenario, processing streaming data within an enterprise infrastructure.
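The Producers & Consumers module covers writing a random JSON producer and a consumer with PyKafka. A minimal sketch of that exercise is shown below; the broker address, topic name, and record fields are illustrative assumptions, not part of the course material, and a running Kafka broker plus `pip install pykafka` are required to actually produce and consume.

```python
import json
import random
import time


def make_record():
    """Build one random JSON payload, as in the 'random JSON producer' exercise.
    The field names here are assumptions for illustration."""
    return json.dumps({
        "sensor_id": random.randint(1, 10),
        "reading": round(random.uniform(0.0, 100.0), 2),
        "ts": int(time.time()),
    }).encode("utf-8")


def run(broker="127.0.0.1:9092", topic_name=b"demo"):
    # Assumed broker address and topic name; requires a running Kafka broker.
    from pykafka import KafkaClient

    client = KafkaClient(hosts=broker)
    topic = client.topics[topic_name]

    # Produce a few random JSON messages synchronously.
    with topic.get_sync_producer() as producer:
        for _ in range(5):
            producer.produce(make_record())

    # Consume them back; give up if no message arrives within 5 seconds.
    consumer = topic.get_simple_consumer(consumer_timeout_ms=5000)
    for message in consumer:
        if message is not None:
            print(message.offset, json.loads(message.value))


# run()  # uncomment once a Kafka broker is available
```

The same topic object serves both sides: `get_sync_producer()` blocks until each message is acknowledged, while `get_simple_consumer()` iterates over messages from the earliest stored offset.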
This course is listed under Open Source, Data Centre Management, Development & Implementations, Data & Information Management, IT Strategy & Management, Project & Service Management, and Quality Assurance & Testing.