 

Spark Streaming: stateless overlapping windows vs. keeping state


What would be some considerations for choosing stateless sliding-window operations (e.g. reduceByKeyAndWindow) vs. choosing to keep state (e.g. via updateStateByKey or the new mapWithState) when handling a stream of sequential, finite event sessions with Spark Streaming?

For example, consider the following scenario:

A wearable device tracks physical exercises performed by the wearer. The device automatically detects when an exercise starts and emits a message; emits additional messages while the exercise is in progress (e.g. heart rate); and finally, emits a message when the exercise is done.

The desired result is a stream of aggregated records per exercise session, i.e. all events of the same session should be aggregated together (e.g. so that each session can be saved in a single DB row). Note that each session has a finite length, but the entire stream from multiple devices is continuous. For convenience, let's assume the device generates a GUID for each exercise session.

I can see two approaches for handling this use-case with Spark Streaming:

  1. Using non-overlapping windows, and keeping state. A state is saved per GUID, with all events matching it. When a new event arrives, the state is updated (e.g. using mapWithState), and if the event is "end of exercise session", an aggregated record based on the state is emitted and the key removed.

  2. Using overlapping sliding windows, and emitting only sessions that started in the first half of each window. Assume a sliding window of length 2 and interval 1 (see diagram below). Also assume that the window length is 2 x (maximal possible exercise time). On each window, events are aggregated by GUID, e.g. using reduceByKeyAndWindow. Then, all sessions which started in the second half of the window are discarded, and the remaining sessions are emitted (a sketch follows the diagram below). This uses each event exactly once, and ensures all events belonging to the same session are aggregated together.

Diagram for approach #2:

Only sessions starting in the areas marked with \\\\ will be emitted.

-----------
|window 1 |
|\\\\|    |
-----------
     -----------
     |window 2 |
     |\\\\|    |
     -----------
          -----------
          |window 3 |
          |\\\\|    |
          -----------
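For concreteness, here is a rough sketch of what approach #2 could look like. The event fields, durations and aggregation structure are illustrative assumptions, not requirements:

import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Minutes, Time}
import org.apache.spark.streaming.dstream.DStream

case class ExerciseEvent(guid: String, timestampMs: Long, heartRate: Int)
case class SessionAgg(startMs: Long, endMs: Long, events: Seq[ExerciseEvent])

val windowLen = Minutes(60)              // 2 x the assumed maximal exercise time (30 min)
val slide     = Minutes(30)              // window slides by half its length

val events: DStream[ExerciseEvent] = ??? // the input stream from the devices

val aggregatedByGuid = events
  .map(e => (e.guid, SessionAgg(e.timestampMs, e.timestampMs, Seq(e))))
  .reduceByKeyAndWindow(
    (a: SessionAgg, b: SessionAgg) => SessionAgg(
      math.min(a.startMs, b.startMs), math.max(a.endMs, b.endMs), a.events ++ b.events),
    windowLen, slide)

// Emit only sessions that started in the first half of the window, so that each
// session is emitted exactly once and with all of its events present.
val completedSessions = aggregatedByGuid.transform {
  (rdd: RDD[(String, SessionAgg)], batchTime: Time) =>
    val windowStart = batchTime.milliseconds - windowLen.milliseconds
    val cutoff      = windowStart + slide.milliseconds
    rdd.filter { case (_, agg) => agg.startMs >= windowStart && agg.startMs < cutoff }
}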

Pros and cons I see:

Approach #1 is less computationally expensive, but requires saving and managing state (e.g. if the number of concurrent sessions increases, the state might grow larger than memory). However, if the maximal number of concurrent sessions is bounded, this might not be an issue.

Approach #2 is twice as expensive (each event is processed twice) and has higher latency (2 x the maximal exercise time), but it is simpler and easier to manage, as no state is retained.

What would be the best way to handle this use case - is either of these approaches the "right" one, or are there better ways?

What other pros/cons should be taken into consideration?

Asked Jan 06 '16 by etov




1 Answer

Normally there is no single right approach; each has trade-offs. I'll therefore add an additional approach to the mix and outline my take on the pros and cons of each, so you can decide which one is most suitable for you.

External state approach (approach #3)

You can accumulate the state of the events in external storage; Cassandra is quite often used for that. You can handle final and ongoing events separately, for example like below:

val stream = ...   // DStream of exercise events

val ongoingEventsStream = stream.filter(event => !isFinalEvent(event))
val finalEventsStream   = stream.filter(event => isFinalEvent(event))

ongoingEventsStream.foreachRDD { rdd =>
  /* accumulate state in Cassandra */
}
finalEventsStream.foreachRDD { rdd =>
  /* finalize state in Cassandra, move to the final destination if needed */
}
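For the writes themselves, the saveToCassandra method from the spark-cassandra-connector could be used inside foreachRDD. A minimal sketch, with hypothetical keyspace/table/column names and assuming the events expose guid, timestampMs and heartRate fields:

import com.datastax.spark.connector._   // spark-cassandra-connector

ongoingEventsStream.foreachRDD { rdd =>
  // upsert the latest per-session state; keyspace, table and column names are made up for illustration
  rdd.map(e => (e.guid, e.timestampMs, e.heartRate))
     .saveToCassandra("exercise", "ongoing_sessions", SomeColumns("guid", "event_ts", "heart_rate"))
}

Since Cassandra writes are upserts on the primary key, re-running a micro-batch after a worker failure is largely idempotent as long as the writes are plain upserts keyed by the session GUID, which mitigates the exactly-once concern listed below.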

mapWithState approach (approach #1.1)

This might potentially be the optimal solution for you, as it removes the drawbacks of updateStateByKey, but given that it was only just released as part of Spark 1.6 (it was called trackStateByKey in the 1.6 previews), it could be risky as well (for some reason it is not heavily advertised). You can use the link as a starting point if you want to find out more.
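For reference, a minimal sketch of what the mapWithState variant might look like. The event shape and the isFinalEvent helper are assumptions for illustration, and checkpointing must be enabled on the StreamingContext:

import org.apache.spark.streaming.{State, StateSpec}
import org.apache.spark.streaming.dstream.DStream

case class ExerciseEvent(guid: String, timestampMs: Long, heartRate: Int)
def isFinalEvent(e: ExerciseEvent): Boolean = ???   // detects the "end of exercise" message

// Accumulate events per session GUID; emit the aggregated session only when the
// final event arrives, then drop the key from the state store.
def trackSession(guid: String,
                 event: Option[ExerciseEvent],
                 state: State[Seq[ExerciseEvent]]): Option[(String, Seq[ExerciseEvent])] = {
  val events = state.getOption.getOrElse(Seq.empty) ++ event.toSeq
  if (event.exists(isFinalEvent)) {
    state.remove()            // session finished: discard state, emit the aggregate
    Some((guid, events))
  } else {
    state.update(events)      // session still ongoing: keep accumulating
    None
  }
}

val events: DStream[ExerciseEvent] = ???             // the input stream from the devices

val completedSessions = events
  .map(e => (e.guid, e))
  .mapWithState(StateSpec.function(trackSession _))
  .flatMap(_.toList)                                 // keep only finished sessions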

Pros/Cons

Approach #1 (updateStateByKey)

Pros

  • Easy to understand and explain (to the rest of the team, newcomers, etc.) (subjective)
  • Storage: Better use of memory, as only the latest state of each exercise is stored
  • Storage: Will keep only ongoing exercises, and discard them as soon as they finish
  • Latency is limited only by the processing time of each micro-batch

Cons

  • Storage: If the number of keys (concurrent exercises) is large, it may not fit into the memory of your cluster
  • Processing: It will run the update function for each key in the state map, so if the number of concurrent exercises is large, performance will suffer

Approach #2 (window)

While it is possible to achieve what you need with windows, it looks significantly less natural in your scenario.

Pros

  • Processing: in some cases (depending on the data) it might be more efficient than updateStateByKey, due to updateStateByKey's tendency to run the update function on every key even when there are no actual updates

Cons

  • "maximal possible exercise time" - this sounds like a huge risk - it could be pretty arbitrary duration based on a human behaviour. Some people might forget to "finish exercise". Also depends on kinds of exercise, but could range from seconds to hours, when you want lower latency for quick exercises while would have to keep latency as high as longest exercise potentially could exist
  • Feels like harder to explain to others on how it will work (subjective)
  • Storage: Will have to keep all data within the window frame, not only the latest one. Also will free the memory only when window will slide away from this time slot, not when exercise is actually finished. While it might be not a huge difference if you will keep only last two time slots - it will increase if you try to achieve more flexibility by sliding window more often.

Approach #3 (external state)

Pros

  • Easy to explain, etc. (subjective)
  • A pure stream-processing approach, meaning Spark acts on each individual event and does not try to store state (subjective)
  • Storage: Not limited by the cluster's memory for storing state - can handle a huge number of concurrent exercises
  • Processing: State is updated only when there are actual updates to it (unlike updateStateByKey)
  • Latency is similar to updateStateByKey and limited only by the time required to process each micro-batch

Cons

  • Extra component in your architecture (unless you already use Cassandra for your final output)
  • Processing: slower by default than processing purely in Spark, since the state is not in memory and the data has to be transferred over the network
  • Output: you'll have to implement exactly-once semantics when writing the data into Cassandra (for the case of a worker failure during foreachRDD)

Suggested approach

I'd try the following:

  • test the updateStateByKey approach on your data and your cluster
  • see whether memory consumption and processing time are acceptable even with a large number of concurrent exercises (expected at peak hours)
  • fall back to the approach with Cassandra if not
Answered Oct 18 '22 by Alex Larikov