Google Dataflow: The new open model for batch and stream processing

Scale
06/07/2016 - 16:30 to 17:10
Maschinenhaus
long talk (40 min)
Intermediate

Session abstract: 

In 2004 Google published the MapReduce paper, a programming model that kick-started big data as we know it. Ten years later, Google introduced Dataflow, a new paradigm for big data that integrates batch and stream processing in one common abstraction. This time the offer was more than a paper: it also included an open source Java SDK and a managed cloud service to run it. In 2016 big data players such as Cask, Cloudera, data Artisans, PayPal, Slack, and Talend joined Google in proposing Dataflow for incubation at the Apache Software Foundation, where it has since been accepted as Apache Beam. Dataflow is here, unifying not only batch and streaming, but also the big data world.

In this talk we will review Dataflow's differentiating elements and why they matter. We'll demonstrate Dataflow's capabilities through a real-time demo, with practical insights on how to manage and visualize streaming data flows.

Video: 

Slide: