In data engineering we often default to processing data in nightly or hourly batches, but that pattern no longer meets expectations. Our customers know that information is generated continuously, and they expect it to be available much sooner. While the move to stream processing adds complexity, today's tools make it achievable for teams of any size.
This session introduces both beginners and experienced tech professionals to techniques for implementing real-time stream processing with Azure Event Hubs or Apache Kafka. We’ll dive into three options for building streaming data pipelines: Azure Databricks, Azure Stream Analytics, and Confluent Cloud.