
What is Apache Beam? [closed]

Tags:

apache-beam

I was going through the Apache posts and found a new term called Beam. Can anybody explain what exactly Apache Beam is? I tried to Google it but was unable to get a clear answer.

asked Feb 08 '16 by Viswa

People also ask

What does Apache Beam do?

Apache Beam is an open source, unified model for defining both batch and streaming data-parallel processing pipelines. The Apache Beam programming model simplifies the mechanics of large-scale data processing. Using one of the Apache Beam SDKs, you build a program that defines the pipeline.

Is Apache Beam open source?

Apache Beam is an open source, unified model for defining both batch and streaming data-parallel processing pipelines.

Is Apache Beam the future?

Conclusion. We firmly believe Apache Beam is the future of streaming and batch data processing. We hope it will lead to a healthy ecosystem of sophisticated runners that compete by making users happy, not by maximizing market share via API lock-in.

What is the difference between Apache Beam and airflow?

Airflow shines in data orchestration and pipeline dependency management, while Beam is a unified tool for building big data pipelines, which can be executed in the most popular data processing systems such as Spark or Flink.


2 Answers

Apache Beam is an open source, unified model for defining and executing both batch and streaming data-parallel processing pipelines, as well as a set of language-specific SDKs for constructing pipelines and runtime-specific Runners for executing them.
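To make that concrete, here is a minimal word-count pipeline sketch in the Beam Java SDK. It is illustrative rather than from the answer itself: the file paths are placeholders, and the transform names (TextIO, FlatMapElements, Count, MapElements) assume a modern Beam 2.x release rather than the original 2016 API.

    import java.util.Arrays;

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.TextIO;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Count;
    import org.apache.beam.sdk.transforms.FlatMapElements;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.KV;
    import org.apache.beam.sdk.values.TypeDescriptors;

    public class MinimalWordCount {
      public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline p = Pipeline.create(options);

        p.apply("ReadLines", TextIO.read().from("input.txt"))      // placeholder input path
         .apply("SplitWords", FlatMapElements
             .into(TypeDescriptors.strings())
             .via((String line) -> Arrays.asList(line.split("\\s+"))))
         .apply("CountWords", Count.perElement())                  // yields KV<word, count>
         .apply("FormatResults", MapElements
             .into(TypeDescriptors.strings())
             .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
         .apply("WriteCounts", TextIO.write().to("wordcounts"));   // sharded output files

        p.run().waitUntilFinish();
      }
    }

The same program can then be handed to any of the Runners; the pipeline definition itself does not change.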

History: The model behind Beam evolved from a number of internal Google data processing projects, including MapReduce, FlumeJava, and MillWheel. This model was originally known as the “Dataflow Model” and first implemented as Google Cloud Dataflow -- including a Java SDK on GitHub for writing pipelines and a fully managed service for executing them on Google Cloud Platform. Others in the community began writing extensions, including a Spark Runner, Flink Runner, and Scala SDK. In January 2016, Google and a number of partners submitted the Dataflow Programming Model and SDKs portion as an Apache Incubator Proposal, under the name Apache Beam (unified Batch + strEAM processing). Apache Beam graduated from incubation in December 2016.

Additional resources for learning the Beam Model:

  • The Apache Beam website
  • The VLDB 2015 paper (using the original naming Dataflow model)
  • Streaming 101 and Streaming 102 posts on O’Reilly’s Radar site
  • A Beam podcast on Software Engineering Radio
answered Oct 08 '22 by Frances


Apache Beam (Batch + strEAM) is a model and set of APIs for doing both batch and streaming data processing. It was open-sourced by Google (with Cloudera and PayPal) in 2016 via an Apache incubator project.

The page Dataflow/Beam & Spark: A Programming Model Comparison - Cloud Dataflow contrasts the Beam API with Apache Spark, which has been hugely successful at bringing a modern, flexible API and set of optimization techniques for both batch and streaming to the Hadoop world and beyond.

Beam tries to take all that a step further via a model that makes it easy to describe the various aspects of out-of-order processing, which is often an issue when combining batch and streaming processing, as described in that Programming Model Comparison.

In particular, to quote from the comparison, the Dataflow model is designed to address, elegantly and in a way that is more modular, robust, and easier to maintain (see the sketch after this list):

... the four critical questions all data processing practitioners must attempt to answer when building their pipelines:

  • What results are calculated? Sums, joins, histograms, machine learning models?
  • Where in event time are results calculated? Does the time each event originally occurred affect results? Are results aggregated in fixed windows, sessions, or a single global window?
  • When in processing time are results materialized? Does the time each event is observed within the system affect results? When are results emitted? Speculatively, as data evolve? When data arrive late and results must be revised? Some combination of these?
  • How do refinements of results relate? If additional data arrive and results change, are they independent and distinct, do they build upon one another, etc.?
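As a hedged sketch of how these four questions surface in code (the method names are from recent Beam 2.x Java SDK releases, and the input PCollection is assumed), each question maps almost one-to-one onto the API:

    import org.apache.beam.sdk.transforms.Sum;
    import org.apache.beam.sdk.transforms.windowing.AfterPane;
    import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
    import org.apache.beam.sdk.transforms.windowing.FixedWindows;
    import org.apache.beam.sdk.transforms.windowing.Window;
    import org.apache.beam.sdk.values.KV;
    import org.apache.beam.sdk.values.PCollection;
    import org.joda.time.Duration;

    // "events" is an assumed, pre-existing PCollection<KV<String, Long>>.
    PCollection<KV<String, Long>> sums = events
        .apply(Window.<KV<String, Long>>into(
                FixedWindows.of(Duration.standardMinutes(1)))        // Where: 1-minute event-time windows
            .triggering(AfterWatermark.pastEndOfWindow()             // When: once the watermark passes...
                .withLateFirings(AfterPane.elementCountAtLeast(1)))  // ...then again for each late element
            .withAllowedLateness(Duration.standardMinutes(10))
            .accumulatingFiredPanes())                               // How: later firings accumulate into earlier ones
        .apply(Sum.longsPerKey());                                   // What: a per-key sum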

The pipelines described in Beam can in turn be run on Spark, Flink, Google's Dataflow offering in the cloud, and other "runtimes", including a "Direct" local machine option.
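Switching among those runtimes is typically just a matter of pipeline options; the pipeline code itself is untouched. A minimal sketch, assuming recent Beam 2.x runner names and that the chosen runner's artifact is on the classpath:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    // Inside main(String[] args). The same pipeline runs anywhere; only the flag changes:
    //   --runner=DirectRunner     local machine, for development and testing
    //   --runner=FlinkRunner      Apache Flink cluster
    //   --runner=SparkRunner      Apache Spark cluster
    //   --runner=DataflowRunner   Google Cloud Dataflow service
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
    Pipeline p = Pipeline.create(options);
    // ... apply transforms as usual ...
    p.run().waitUntilFinish();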

The architecture supports a variety of languages. The Java SDK is available now, a Dataflow Python SDK is nearing release, and others, such as a Scala SDK, are envisioned.

See the source at Mirror of Apache Beam

answered Oct 08 '22 by nealmcb