Data Streaming and Real Time Data Processing Training Course
Course Overview
This course offers a practical and structured introduction to constructing real-time data streaming systems. It explores core concepts, architectural patterns, and industry-standard tools essential for processing continuous data at scale. Participants will acquire the skills to design, implement, and optimize streaming pipelines using modern frameworks. The curriculum advances from foundational principles to practical applications, empowering learners to confidently develop production-ready real-time solutions.
Training Format
• Instructor-led sessions with guided explanations
• Concept walkthroughs accompanied by real-world examples
• Hands-on demonstrations and coding exercises
• Progressive labs aligned with daily topics
• Interactive discussions and Q&A sessions
Course Objectives
• Gain a solid understanding of real-time data streaming concepts and system architecture
• Distinguish between batch and streaming data processing models
• Design scalable and fault-tolerant streaming pipelines
• Utilize distributed streaming tools and frameworks
• Apply event time processing, windowing, and stateful operations
• Build and optimize real-time data solutions tailored to business use cases
This course is available as onsite live training in Uzbekistan or online live training.
Course Outline
Day 1
• Introduction to data streaming concepts
• Fundamentals of batch vs. real-time processing (see the sketch after this list)
• Basics of event-driven architecture
• Common industry use cases
• Overview of the streaming ecosystem
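To make the batch vs. real-time distinction concrete, the sketch below runs a similar aggregation in both modes using PySpark (the framework featured in several related courses further down this page). This is an illustration only, not course material: the input path is hypothetical, and the built-in rate source stands in for a real event stream.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("batch-vs-stream").getOrCreate()

# Batch: read a bounded dataset once and compute a single, final answer.
# (The input path is a hypothetical placeholder.)
batch_counts = spark.read.json("/data/events/").groupBy("action").count()
batch_counts.show()

# Streaming: the same kind of aggregation over an unbounded source; the
# result table is updated continuously as new rows arrive. The built-in
# rate source stands in for a real event stream.
stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()
query = (stream.groupBy((stream.value % 3).alias("bucket")).count()
         .writeStream.outputMode("complete").format("console").start())
query.awaitTermination()
```

The essential difference is that the streaming query never finishes: its results are revised every time new data arrives.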
Day 2
• Streaming architecture design patterns
• Fundamentals of distributed messaging systems
• Producers and consumers (illustrated in the sketch after this list)
• Topics, partitions, and data flow
• Data ingestion strategies
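As a taste of the producer/consumer model, here is a minimal sketch using the kafka-python client against a distributed log such as Apache Kafka. The broker address, topic name, and consumer group are illustrative assumptions; the course does not mandate a specific messaging system.

```python
import json
from kafka import KafkaProducer, KafkaConsumer  # kafka-python client

# A producer writes JSON events to a topic; broker address and topic
# name ("events") are illustrative placeholders.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"))
producer.send("events", {"user": "alice", "action": "click"})
producer.flush()

# A consumer in a consumer group reads the same topic; each message
# carries its partition and offset.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")))
for message in consumer:
    print(message.partition, message.offset, message.value)
```

Partitions and offsets are the coordinates that distributed messaging systems use for ordering, parallelism, and replay, which is why they anchor the Day 2 material.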
Day 3
• Concepts and frameworks for stream processing
• Event time vs. processing time
• Windowing techniques and their applications (see the sketch after this list)
• Stateful stream processing
• Basics of fault tolerance and checkpointing
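Several Day 3 topics fit in a few lines of PySpark Structured Streaming. The sketch below is a hedged illustration rather than course material: the rate source stands in for a real event stream, and the watermark and window sizes are arbitrary choices.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("windowing-demo").getOrCreate()

# The rate source emits (timestamp, value) rows; the watermark bounds
# how late events may arrive before a window's state is finalized.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

counts = (events
          .withWatermark("timestamp", "30 seconds")           # tolerate 30s lateness
          .groupBy(window(col("timestamp"), "1 minute", "30 seconds"))
          .count())                                           # stateful aggregation

query = (counts.writeStream
         .outputMode("update")       # emit only windows changed in each batch
         .format("console")
         .option("truncate", False)
         .start())
query.awaitTermination()
```

Grouping by an event-time window rather than arrival time is exactly the event time vs. processing time distinction covered on Day 3, and the windowed counts are the state that checkpointing protects.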
Day 4
• Data transformation within streaming pipelines
• ETL and ELT in real-time systems
• Schema management and evolution
• Stream joins and enrichment (see the sketch after this list)
• Introduction to cloud-based streaming services
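Stream enrichment is often easiest to grasp as a stream-static join. Below is a minimal PySpark sketch under stated assumptions: a hypothetical device lookup table stands in for real reference data, and the rate source stands in for a sensor stream.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.appName("enrichment-demo").getOrCreate()

# Static lookup table with hypothetical device metadata.
devices = spark.createDataFrame(
    [(0, "thermostat"), (1, "camera")], ["device_id", "device_type"])

# Rate source stands in for a stream of readings; a device_id is
# derived from the running counter purely for illustration.
readings = (spark.readStream.format("rate").option("rowsPerSecond", 5).load()
            .withColumn("device_id", expr("value % 2")))

# Stream-static join: every micro-batch is enriched against the
# dimension table.
enriched = readings.join(devices, "device_id")

query = enriched.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```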
Day 5
• Monitoring and observability in streaming systems (see the sketch after this list)
• Basics of security and access control
• Performance tuning and optimization
• End-to-end pipeline design review
• Real-world use cases such as fraud detection and IoT processing
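For a first look at observability, Structured Streaming exposes per-batch progress reports that can be polled from the query handle. A minimal sketch; the noop sink and the polling loop are illustrative, and production systems would export these metrics to a monitoring stack instead of printing them.

```python
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("monitoring-demo").getOrCreate()

# A trivial query to observe; the rate source is a stand-in stream and
# the noop sink simply discards rows.
stream = spark.readStream.format("rate").option("rowsPerSecond", 50).load()
query = stream.writeStream.format("noop").start()

# Poll the engine's built-in progress reports.
for _ in range(5):
    time.sleep(10)
    p = query.lastProgress   # dict for the latest micro-batch, or None
    if p:
        print(p.get("inputRowsPerSecond"), p.get("processedRowsPerSecond"))
query.stop()
```

Comparing input rate against processed rate is one of the simplest tuning signals: a sustained gap indicates back-pressure and is a natural starting point for the performance work covered on Day 5.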
Testimonials (1)
Hands on exercises. Class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from working with NiFi already
James - BHG Financial
Course - Apache NiFi for Administrators
Related Courses
Administrator Training for Apache Hadoop
35 Hours
Audience:
This course is designed for IT specialists seeking solutions to store and process large datasets within a distributed system environment.
Goal:
To develop in-depth expertise in Hadoop cluster administration.
Big Data Analytics in Health
21 Hours
Big data analytics refers to the process of analysing large and diverse datasets to uncover correlations, hidden patterns, and other valuable insights.
The healthcare sector contains vast quantities of complex, heterogeneous medical and clinical data. Applying big data analytics to health data offers significant potential for generating insights that can improve healthcare delivery. However, the sheer size and complexity of these datasets present considerable challenges in terms of analysis and practical implementation within clinical environments.
In this instructor-led, live remote training, participants will learn how to conduct big data analytics in the health domain by progressing through a series of hands-on, live-lab exercises.
By the end of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the key characteristics of medical data
- Apply big data techniques to manage and analyse medical data
- Examine big data systems and algorithms in the context of healthcare applications
Audience
- Developers
- Data Scientists
Course Format
- A combination of lecture, discussion, exercises, and extensive hands-on practice.
Note
- To request a customised training session for this course, please contact us to make arrangements.
Hadoop For Administrators
21 Hours
Apache Hadoop is the most widely adopted framework for processing Big Data across clusters of servers. In this three-day course (optionally extended to four days), participants will explore the business benefits and real-world use cases of Hadoop and its ecosystem. They will learn how to plan cluster deployment and growth, as well as how to install, maintain, monitor, troubleshoot, and optimize Hadoop environments. The course also includes hands-on practice with bulk data loading, familiarisation with various Hadoop distributions, and managing tools within the Hadoop ecosystem. The programme concludes with a discussion on securing the cluster using Kerberos.
“…The materials were exceptionally well-prepared and comprehensively covered. The lab sessions were highly beneficial and meticulously organised.”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop administrators
Format
Lectures and hands-on labs, with an approximate balance of 60% lectures and 40% labs.
Hadoop for Developers (4 days)
28 Hours
Apache Hadoop stands as the leading framework for processing Big Data across server clusters. This course provides developers with an introduction to the key components of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, and HBase.
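The course labs are not tied to any single client language, but the map/reduce model itself can be illustrated compactly from Python using the mrjob library (an assumption for illustration only, not a course requirement):

```python
from mrjob.job import MRJob

class MRWordCount(MRJob):
    """Classic word count: map emits (word, 1), reduce sums the counts."""

    def mapper(self, _, line):
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        yield word, sum(counts)

if __name__ == "__main__":
    # Runs locally by default; "-r hadoop" submits to a Hadoop cluster.
    MRWordCount.run()
```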
Advanced Hadoop for Developers
21 Hours
Apache Hadoop is one of the most popular frameworks for processing Big Data on clusters of servers. This course delves into data management in HDFS, advanced Pig, Hive, and HBase. These advanced programming techniques will be beneficial to experienced Hadoop developers.
Audience: developers
Duration: three days
Format: lectures (50%) and hands-on labs (50%).
Hadoop Administration on MapR
28 Hours
Target Audience:
This course aims to simplify big data and Hadoop technologies, demonstrating that they are accessible and easy to understand.
Hadoop and Spark for Administrators
35 Hours
This instructor-led, live training in Uzbekistan (online or onsite) is aimed at system administrators who wish to learn how to set up, deploy and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as a storage engine for on-premises Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3, as well as NoSQL databases such as Redis, Elasticsearch, Couchbase, and Aerospike (see the sketch after this list).
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
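As a flavour of the alternative-storage objective above, a configuration sketch: it assumes the hadoop-aws (s3a) connector is on the Spark classpath, and the bucket path is hypothetical.

```python
from pyspark.sql import SparkSession

# Point the s3a filesystem at the default AWS credential chain; both the
# connector and the bucket below are assumptions for illustration.
spark = (SparkSession.builder
         .appName("s3-storage-demo")
         .config("spark.hadoop.fs.s3a.aws.credentials.provider",
                 "com.amazonaws.auth.DefaultAWSCredentialsProviderChain")
         .getOrCreate())

df = spark.read.parquet("s3a://example-bucket/events/")
df.printSchema()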
HBase for Developers
21 Hours
This course provides an introduction to HBase, a NoSQL database built on top of Hadoop. It is designed for developers who intend to build applications using HBase, as well as administrators responsible for managing HBase clusters.
The course guides developers through HBase architecture, data modeling, and application development. It also covers the integration of MapReduce with HBase and addresses key administrative topics related to performance optimization. The training is highly practical, featuring numerous lab exercises.
Duration: 3 days
Audience: Developers & Administrators
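For a quick feel of the developer-facing API, below is a sketch using the happybase Python client; the Thrift server, table name, and column family are illustrative assumptions, and course labs may equally use the native Java API.

```python
import happybase

# Assumes an HBase Thrift server on localhost and a pre-created table
# 'events' with column family 'cf' (all hypothetical).
connection = happybase.Connection("localhost", port=9090)
table = connection.table("events")

# Writes are keyed by row; columns live inside column families.
table.put(b"row-1", {b"cf:user": b"alice", b"cf:action": b"click"})

# Scans stream rows back in key order.
for key, data in table.scan(row_prefix=b"row-"):
    print(key, data)

connection.close()
```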
Informatica with Big Data (BDM)
7 Hours
Informatica with Big Data (BDM) is a program designed to equip data professionals with the skills to develop, manage, and analyze large datasets, utilizing the latest technologies and architectures in the Big Data field. The course covers the entire data lifecycle, from ingestion, integration, cleansing, and curation to analytics and the exposure and consumption of big data services.
In this course, participants will explore solutions that process large datasets using Big Data technologies and architectures such as Apache Hive, Apache Hadoop, and Apache Spark. The course also provides hands-on experience with Informatica tools like Bloombox, Big Data Management, and iData Fabric to understand big data technologies such as MapReduce and Hadoop. By the end of the course, learners will be capable of creating end-to-end data solutions using Informatica and its associated Big Data offerings.
Apache NiFi for Administrators
21 Hours
Apache NiFi is an open-source, flow-based data integration and event-processing platform. It enables automated, real-time data routing, transformation, and system mediation between disparate systems, with a web-based UI and fine-grained control.
This instructor-led, live training (onsite or remote) is aimed at intermediate-level administrators and engineers who wish to deploy, manage, secure, and optimize NiFi dataflows in production environments.
By the end of this training, participants will be able to:
- Install, configure, and maintain Apache NiFi clusters.
- Design and manage dataflows from varied sources and sinks.
- Implement flow automation, routing, and transformation logic.
- Optimize performance, monitor operations, and troubleshoot issues.
Format of the Course
- Interactive lecture with real-world architecture discussion.
- Hands-on labs: building, deploying, and managing flows.
- Scenario-based exercises in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Apache NiFi for Developers
7 Hours
In this instructor-led, live training in Uzbekistan, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
Python and Spark for Big Data for Banking (PySpark)
14 Hours
Renowned for its clear syntax and code readability, Python is a high-level programming language. Spark serves as a powerful engine for processing big data, enabling efficient querying, analysis, and transformation. PySpark bridges the two, allowing users to interface Spark with Python.
Target Audience: This course is designed for intermediate-level banking professionals who are already familiar with Python and Spark and wish to enhance their expertise in big data processing and machine learning.
PySpark and Machine Learning
21 Hours
This training offers a hands-on introduction to developing scalable data processing and Machine Learning workflows with PySpark. Participants will learn how Apache Spark functions within contemporary Big Data ecosystems and how to efficiently manage large datasets using distributed computing principles.
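A representative, if tiny, PySpark MLlib workflow: assemble feature columns into the vector column MLlib expects, then fit a model. The dataset is fabricated purely for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("pyspark-ml-demo").getOrCreate()

# A tiny illustrative dataset; real workflows would load distributed data.
df = spark.createDataFrame(
    [(1.0, 2.0, 5.0), (2.0, 1.0, 6.0), (3.0, 4.0, 13.0), (4.0, 3.0, 14.0)],
    ["x1", "x2", "y"])

# Assemble the feature columns into a single vector column.
features = VectorAssembler(inputCols=["x1", "x2"],
                           outputCol="features").transform(df)

model = LinearRegression(featuresCol="features", labelCol="y").fit(features)
print(model.coefficients, model.intercept)
```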
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led live training in Uzbekistan, participants will learn how to leverage Python and Spark together to analyze big data while completing hands-on exercises.
By the end of this training, participants will be able to:
- Master the use of Spark with Python to analyze Big Data.
- Complete exercises that simulate real-world scenarios.
- Apply various tools and techniques for big data analysis using PySpark (see the sketch below).
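A minimal example of the kind of PySpark analysis the exercises build toward; the dataset here is an in-memory stand-in for data that real exercises would read from distributed storage.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-analysis-demo").getOrCreate()

# Hypothetical transaction records.
df = spark.createDataFrame(
    [("alice", 120.0), ("bob", 75.5), ("alice", 30.0), ("carol", 210.0)],
    ["customer", "amount"])

# Group, aggregate, and rank: the bread and butter of DataFrame analysis.
(df.groupBy("customer")
   .agg(F.sum("amount").alias("total"),
        F.count("*").alias("n_transactions"))
   .orderBy(F.desc("total"))
   .show())
```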
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio is a data-centric platform that integrates big data, AI, and governance into a single solution. Its Rocket and Intelligence modules enable rapid data exploration, transformation, and advanced analytics in enterprise environments.
This instructor-led, live training (online or onsite) is aimed at intermediate-level data professionals who wish to use the Rocket and Intelligence modules in Stratio effectively with PySpark, focusing on looping structures, user-defined functions, and advanced data logic.
By the end of this training, participants will be able to:
- Navigate and work within the Stratio platform using Rocket and Intelligence modules.
- Apply PySpark in the context of data ingestion, transformation, and analysis.
- Use loops and conditional logic to control data workflows and feature engineering tasks.
- Create and manage user-defined functions (UDFs) for reusable data operations in PySpark (see the sketch after this list).
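As a sketch of the UDF objective, assuming plain PySpark (Stratio-specific APIs are covered in the course itself); the thresholds and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-demo").getOrCreate()

# A reusable UDF that buckets transaction amounts; the thresholds are
# illustrative, not Stratio-specific.
@udf(returnType=StringType())
def risk_band(amount):
    if amount is None:
        return "unknown"
    return "high" if amount > 1000 else "low"

df = spark.createDataFrame([(1, 250.0), (2, 4300.0)], ["id", "amount"])
df.withColumn("band", risk_band("amount")).show()
```

Built-in functions are generally preferred where they exist, since Python UDFs bypass Spark's optimizer; UDFs earn their keep for logic that has no native equivalent.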
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.