Latency goes subsecond in Apache Spark Structured Streaming

Apache Spark Structured Streaming is the leading open source stream processing platform. It is also the core technology that powers streaming on the Databricks Lakehouse Platform and provides a unified API for batch and stream processing. As the adoption of streaming grows rapidly, a diverse set of applications want to take advantage of it for real-time decision making. Some of these applications, especially those operational in nature, demand lower latency. While Spark's design enables high throughput and ease of use at a lower cost, it has not been optimized for sub-second latency.

In this blog, we will focus on the improvements we have made around offset management to lower the inherent processing latency of Structured Streaming. These improvements primarily target operational use cases such as real-time monitoring and alerting that are simple and stateless.

Extensive evaluation of these improvements indicates that latency has improved by 68-75%, or as much as 3X, from 700-900 ms down to 150-250 ms, at throughputs of 100K events/sec, 500K events/sec and 1M events/sec. Structured Streaming can now achieve latencies lower than 250 ms, satisfying the SLA requirements of a large percentage of operational workloads.

This article assumes that the reader has a basic understanding of Spark Structured Streaming. Refer to the following documentation to learn more:


Apache Spark Structured Streaming is a distributed stream processing engine built on top of the Apache Spark SQL engine. It provides an API that allows developers to process data streams by writing streaming queries in the same way as batch queries, making it easier to reason about and test streaming applications. According to Maven downloads, Structured Streaming is the most widely used open source distributed streaming engine today. One of the main reasons for its popularity is performance: high throughput at a lower cost with an end-to-end latency under a few seconds. Structured Streaming gives users the flexibility to balance the tradeoff between throughput, cost and latency.

As the adoption of streaming grows rapidly in the enterprise, there is a need to enable a diverse set of applications to use a streaming data architecture. In our conversations with many customers, we have encountered use cases that require consistent sub-second latency. Such low latency use cases arise from applications like operational alerting and real-time monitoring, a.k.a. "operational workloads." In order to accommodate these workloads into Structured Streaming, in 2022 we launched a performance improvement initiative under Project Lightspeed. This initiative identified potential areas and techniques that can be used to improve processing latency. In this blog, we outline one such area of improvement in detail: offset management for progress tracking and how it achieves sub-second latency for operational workloads.

What are Operational Workloads?

Streaming workloads can be broadly classified into analytical workloads and operational workloads. Figure 1 illustrates both. Analytical workloads typically ingest, transform, process and analyze data in real time and write the results into Delta Lake backed by object storage like AWS S3, Azure Data Lake Gen2 and Google Cloud Storage. These results are consumed by downstream data warehousing engines and visualization tools.


Figure 1. Analytical vs Operational Workloads

Some examples of analytical workloads include:

  • Customer Behavior Analysis: A marketing firm may use streaming analytics to analyze customer behavior in real time. By processing clickstream data, social media feeds, and other sources of information, the system can detect patterns and preferences that can be used to target customers more effectively.
  • Sentiment Analysis: A company might use streaming data from its social media accounts to analyze customer sentiment in real time. For example, the company might look for customers who are expressing positive or negative sentiment about the company's products or services.
  • IoT Analytics: A smart city may use streaming analytics to monitor traffic flow, air quality, and other metrics in real time. By processing data from sensors embedded throughout the city, the system can detect trends and make decisions about traffic patterns or environmental policies.

Operational workloads, on the other hand, ingest and process data in real time and automatically trigger a business process. Some examples of such workloads include:

  • Cybersecurity: A company might use streaming data from its network to monitor for security or performance problems. For example, the company might look for spikes in traffic or for unauthorized access to networks and send an alert to the security department.
  • Personally Identifiable Information Leaks: A company might monitor its microservice logs, parse them to detect whether any personally identifiable information (PII) is being leaked and, if it is, inform the owner of the microservice by email.
  • Elevator Dispatch: A company might use the streaming data from an elevator to detect when an elevator alarm button is activated. If activated, it might look up additional elevator information to enrich the data and send a notification to security personnel.
  • Proactive Maintenance: Using the streaming data from a power generator, monitor the temperature and, when it exceeds a certain threshold, inform the supervisor.

Operational streaming pipelines share the following characteristics:

  • Latency expectations are usually sub-second
  • The pipelines read from a message bus
  • The pipelines usually perform simple computation, either data transformation or data enrichment
  • The pipelines write to a message bus like Apache Kafka or Apache Pulsar, or to fast key-value stores like Apache Cassandra or Redis, for downstream integration into a business process

For these use cases, when we profiled Structured Streaming, we identified that the offset management used to track the progress of micro-batches consumes substantial time. In the next section, we review the existing offset management, and in subsequent sections we outline how we improved it.

What’s Offset Administration?

To track how far the data has been processed, Spark Structured Streaming relies on persisting and managing offsets, which are used as progress indicators. Typically, an offset is defined concretely by the source connector, as different systems have different ways to represent progress or positions in data. For example, a concrete implementation of an offset might be the line number in a file, indicating how far the data in the file has been processed. Durable logs (as depicted in Figure 2) are used to store these offsets and mark the completion of micro-batches.
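As a concrete illustration, the durable logs live under the query's checkpoint location, with one file per micro-batch. The sketch below shows the rough layout and, for a Kafka source, the kind of per-partition ending offsets an entry records. The exact directory names, JSON fields and the topic name `in` here are illustrative assumptions; the details vary by connector and Spark version:

```
checkpoint/
  offsets/        <- offsetLog: one entry per planned micro-batch
    0
    1
  commits/        <- commitLog: one entry per completed micro-batch
    0
    1

# illustrative contents of offsets/1 for a Kafka source reading topic "in":
# a version marker, a batch metadata line, then ending offsets per partition
v1
{"batchWatermarkMs":0,"batchTimestampMs":1680000000000}
{"in":{"0":1050,"1":1048}}
```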

Figure 2. Offset log

In Structured Streaming, data is processed in units of micro-batches. There are two offset management operations performed for each micro-batch: one at the beginning of every micro-batch and one at the end.

  • At the beginning of every micro-batch (before any data processing actually begins), an offset is calculated based on what new data can be read from the target system. This offset is persisted to a durable log called the "offsetLog" in the checkpoint directory. It is used to calculate the range of data that will be processed in "this" micro-batch.
  • At the end of every micro-batch, an entry is persisted in a durable log called the "commitLog" to indicate that "this" micro-batch has been successfully processed.
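The two-step protocol can be sketched as follows. This is a deliberately simplified simulation, not Spark's actual implementation: the durable HDFS-backed logs are stood in for by in-memory maps, and the source and processing step are hypothetical placeholders.

```scala
import scala.collection.mutable

// Hypothetical in-memory stand-ins for Spark's durable checkpoint logs.
val offsetLog = mutable.Map[Long, Long]()    // batchId -> ending offset
val commitLog = mutable.Map[Long, Boolean]() // batchId -> committed?

// Pretend the source currently has 300 records available.
def latestAvailableOffset(): Long = 300L

def runMicroBatch(batchId: Long, startOffset: Long): Long = {
  // 1. Before any processing: decide this batch's end offset and persist it.
  val endOffset = latestAvailableOffset()
  offsetLog(batchId) = endOffset

  // 2. Process the range [startOffset, endOffset) -- elided here.

  // 3. After processing: persist a commit entry marking the batch complete.
  commitLog(batchId) = true
  endOffset
}

val end = runMicroBatch(batchId = 0L, startOffset = 0L)
```

On restart, a batch whose offsetLog entry exists but whose commitLog entry is missing is known to be incomplete and is re-run over the same offset range.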

Figure 3 below depicts the current offset management operations that take place.

Figure 3. Offset Management in Structured Streaming

Another offset management operation is performed at the end of every micro-batch. This is a cleanup operation that deletes/truncates old and unnecessary entries from both the offsetLog and commitLog so that these logs do not grow in an unbounded fashion.

Figure 4. Log Purging
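A minimal sketch of such a retention-based purge is below. The semantics shown (drop every log entry older than a threshold batch id) match the description above; the in-memory map and the threshold choice are illustrative assumptions, since Spark's real implementation works on files and its own retention settings.

```scala
import scala.collection.mutable

// Hypothetical in-memory stand-in for offsetLog/commitLog entries,
// keyed by batch id.
val logEntries = mutable.SortedMap[Long, String](
  0L -> "...", 1L -> "...", 2L -> "...", 3L -> "..."
)

// Delete every entry strictly older than the given batch id.
def purge(log: mutable.SortedMap[Long, String], thresholdBatchId: Long): Unit =
  log.keys.toList.filter(_ < thresholdBatchId).foreach(log.remove)

purge(logEntries, thresholdBatchId = 2L)
// Only entries for batches 2 and 3 remain.
```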

These offset management operations are performed on the critical path, inline with the actual processing of the data. This means that the duration of these operations directly impacts processing latency, and no data processing can occur until they are complete. This directly impacts cluster utilization as well.

Through our benchmarking and performance profiling efforts, we have identified that these offset management operations can take up a majority of the processing time, especially for stateless, single-stage pipelines that are typically used in the operational alerting and real-time monitoring use cases.

Performance Improvements in Structured Streaming

Asynchronous Progress Tracking

This feature was created to address the latency overhead of persisting offsets for progress tracking purposes. When enabled, it allows Structured Streaming pipelines to checkpoint progress, i.e. update the offsetLog and commitLog, asynchronously and in parallel with the actual data processing within a micro-batch. In other words, the actual data processing will not be blocked by these offset management operations, which can significantly improve the latency of applications. Figure 5 below depicts this new behavior for offset management.

Figure 5. Async Progress Tracking

Along with performing updates asynchronously, users can configure the frequency at which progress is checkpointed. This is helpful for scenarios in which offset management operations occur at a higher rate than they can be processed, which happens in pipelines where the time spent actually processing data is significantly less than the time spent on these offset management operations. In such scenarios, an ever-growing backlog of offset management operations builds up. To stem this growing backlog, data processing would have to be blocked or slowed down, which essentially reverts the processing behavior to being the same as if the offset management operations were executed inline with the data processing. A user will typically not need to configure or set the checkpoint frequency, as an adequate default value is set. It is important to note that failure recovery time increases with the checkpoint interval: in case of failure, a pipeline has to reprocess all the data since the previous successful checkpoint. Users should consider this trade-off between lower latency during regular processing and recovery time in case of failure.

The following configurations are introduced to enable and configure this feature:

  • asyncProgressTrackingEnabled - enable or disable asynchronous progress tracking
  • asyncProgressCheckpointingInterval - the interval at which we commit offsets and completion commits

The following code sample illustrates how to enable this feature:

val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "in")
  .load()

val query = stream.writeStream
  .format("kafka")
  .option("topic", "out")
  .option("checkpointLocation", "/tmp/checkpoint")
  .option("asyncProgressTrackingEnabled", "true")
  .start()

Note that this feature will not work with Trigger.Once or Trigger.AvailableNow, as these triggers execute pipelines in a manual/scheduled fashion, so asynchronous progress tracking is not relevant. The query will fail if it is submitted using either of these triggers.

Applicability and Limitations

There are a few limitations in the current version(s) that may change as we evolve the feature:

  • Currently, asynchronous progress tracking is only supported in stateless pipelines using the Kafka sink.
  • Exactly-once end-to-end processing is not supported with asynchronous progress tracking, because the offset ranges for a batch can change in case of failure. However, many sinks, such as the Kafka sink, only support at-least-once guarantees, so this may not be a new limitation.

Asynchronous Log Purging

This feature was created to address the latency overhead of the log cleanups that were executed inline within a micro-batch. By making this log cleanup/purge operation asynchronous and performing it in the background, we remove the latency overhead this operation incurs on actual data processing. Also, these purges do not need to be executed with every micro-batch and can occur on a more relaxed schedule.

Note that this feature/improvement does not place any limitations on the type of pipelines or workloads that can use it; thus, it is enabled in the background by default for all Structured Streaming pipelines.


In order to understand the performance of async progress tracking and async log purging, we created a few benchmarks. Our goal with the benchmarks is to understand the difference in performance that the improved offset management provides in an end-to-end streaming pipeline. The benchmarks are divided into two categories:

  • Rate Source to Stat Sink – In this benchmark, we used a basic, stateless, stats-collecting source and sink, which is useful for isolating the difference in core engine performance without any external dependencies.
  • Kafka Source to Kafka Sink – For this benchmark, we move data from a Kafka source to a Kafka sink. This is comparable to a real-world setup and shows what the difference would be in a production scenario.

For both of these benchmarks, we measured the end-to-end latency (50th percentile, 99th percentile) at different data input rates (100K events/sec, 500K events/sec, 1M events/sec).

Benchmark Methodology

The main methodology was to generate data from a source at a specific constant throughput. The generated records contain information about when they were created. On the sink side, we use the Apache DataSketches library to collect, in every batch, the difference between the time the sink processes a record and the time it was created. This is used to calculate the latency. We used the same cluster with the same number of nodes for all experiments.
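The latency computation can be sketched as below. For clarity this sketch computes an exact percentile over the per-record latencies (processed time minus creation time); the actual benchmark feeds the same samples into an Apache DataSketches quantile sketch instead of retaining them all, but answers the same p50/p99 query. The record shape and sample values are hypothetical.

```scala
// Each record carries its creation time; the sink notes when it saw it.
case class Sample(createdAtMs: Long, processedAtMs: Long)

// Exact nearest-rank percentile over the latencies. The benchmark replaces
// this with a memory-efficient approximate DataSketches quantile sketch.
def percentile(latenciesMs: Seq[Long], p: Double): Long = {
  val sorted = latenciesMs.sorted
  sorted(math.min(sorted.size - 1, (p * sorted.size).toInt))
}

val samples = Seq(
  Sample(1000, 1150), Sample(1005, 1180),
  Sample(1010, 1260), Sample(1015, 1900)
)
// End-to-end latency of each record, in milliseconds.
val latencies = samples.map(s => s.processedAtMs - s.createdAtMs)

val p50 = percentile(latencies, 0.50)
val p99 = percentile(latencies, 0.99)
```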

Note: For the Kafka benchmark, we set aside some nodes of the cluster for running Kafka and generating the data fed to Kafka. We calculate the latency of a record only after it has been successfully published into Kafka (at the sink).

Rate Source to Stat Sink Benchmark

For this benchmark, we used a Spark cluster of 7 worker nodes (i3.2xlarge - 4 cores, 61 GiB memory) running the Databricks runtime (11.3). We measured the end-to-end latency for the following scenarios to quantify the contribution of each improvement.

  • Current Structured Streaming – this is the baseline latency without any of the aforementioned improvements
  • Async Log Purge – this measures the latency after applying asynchronous log purging only
  • Async Progress – this measures the latency after applying asynchronous progress tracking only
  • Async Progress + Async Log Purge – this measures the latency after applying both improvements

The results of these experiments are shown in Figures 6, 7 and 8. As you can see, async log purging consistently reduces the latency by roughly 50%. Similarly, async progress tracking alone improves latency by roughly 65%. Combined, the latency is reduced by 85-86% and drops below 100 ms.

Figure 6. Chart showing performance improvement using async configs at a throughput of 100K events/sec
Figure 7. Chart showing performance improvement using async configs at a throughput of 500K events/sec
Figure 8. Chart showing performance improvement using async configs at a throughput of 1M events/sec

Kafka Source to Kafka Sink Benchmark

For the Kafka benchmarks, we used a Spark cluster of 5 worker nodes (i3.2xlarge - 4 cores, 61 GiB memory), a separate cluster of 3 nodes to run Kafka, and an additional 2 nodes to generate data for the Kafka source. Our Kafka topic has 40 partitions and a replication factor of 3.

The data generator publishes records into a Kafka topic, and the Structured Streaming pipeline consumes the data and republishes it into another Kafka topic. The results of the performance evaluation are shown in Figures 9, 10 and 11. As one can see, after applying async progress tracking and async log purging, the latency is reduced by 65-75%, or 3-3.5X, across different throughputs.

Figure 9. Chart showing performance improvement using async configs at a throughput of 100K events/sec
Figure 10. Chart showing performance improvement using async configs at a throughput of 500K events/sec
Figure 11. Chart showing performance improvement using async configs at a throughput of 1M events/sec

Summary of Performance Results

With the new asynchronous progress tracking and asynchronous log purging, we can see that each config reduces latency, by as much as 3X. Working together, latency is significantly reduced across all throughputs. The charts also show that the amount of time saved is usually a constant amount (200-250 ms for each config), and together they can shave off around 500 ms across the board (leaving enough time for batch planning and query processing).


These performance improvements are available in the Databricks Lakehouse Platform from DBR 11.3 onwards. Async log purging is enabled by default in DBR 11.3 and subsequent releases. Furthermore, these improvements have been contributed to open source Spark and are available from Apache Spark 3.4 onwards.

Future Work

There are currently some limitations to the types of workloads and sinks supported by the asynchronous progress tracking feature. We will be looking into supporting more types of workloads with this feature in the future.

This is only the beginning of the predictable low latency solutions we are building in Structured Streaming as part of Project Lightspeed. In addition, we will continue to benchmark and profile Structured Streaming to find more areas of improvement. Stay tuned!

Join us at the Data and AI Summit in San Francisco, June 26-29, to learn more about Project Lightspeed and data streaming on the Databricks Lakehouse Platform.
