Common streaming data enrichment patterns in Amazon Managed Service for Apache Flink

Stream data processing lets you act on data in real time. Real-time data analytics can help you have on-time and optimized responses while improving the overall customer experience.
Apache Flink is a distributed computation framework that allows for stateful real-time data processing. It provides a single set of APIs for building batch and streaming jobs, making it easy for developers to work with bounded and unbounded data. Apache Flink provides different levels of abstraction to cover a variety of event processing use cases.
Amazon Managed Service for Apache Flink (Amazon MSF) is an AWS service that provides a serverless infrastructure for running Apache Flink applications. It makes it easy for developers to build highly available, fault-tolerant, and scalable Apache Flink applications without needing to become an expert in building, configuring, and maintaining Apache Flink clusters on AWS.
Data streaming workloads often require data in the stream to be enriched via external sources (such as databases or other data streams). For example, assume you are receiving coordinate data from a GPS device and need to understand how those coordinates map to physical geographic locations; you need to enrich it with geolocation data. You can use several approaches to enrich your real-time data in Amazon MSF depending on your use case and Apache Flink abstraction level. Each method has different effects on throughput, network traffic, and CPU (or memory) utilization. In this post, we cover these approaches and discuss their benefits and drawbacks.
Data enrichment patterns
Data enrichment is a process that appends additional context and enhances the collected data. The additional data is often collected from a variety of sources, and the format and frequency of the data updates may range from once a month to many times a second. The following table shows a few examples of different sources, formats, and update frequencies.
| Data | Format | Update Frequency |
| --- | --- | --- |
| IP address ranges by country | CSV | Once a month |
| Company organization chart | JSON | Twice a year |
| Machine names by ID | CSV | Once a day |
| Employee information | Table (Relational database) | A few times a day |
| Customer information | Table (Non-relational database) | A few times an hour |
| Customer orders | Table (Relational database) | Many times a second |
Based on the use case, your data enrichment application may have different requirements in terms of latency, throughput, or other factors. The remainder of the post dives deeper into different patterns of data enrichment in Amazon MSF, which are listed in the following table with their key characteristics. You can choose the best pattern based on the trade-off of these characteristics.
| Enrichment Pattern | Latency | Throughput | Accuracy if Reference Data Changes | Memory Utilization | Complexity |
| --- | --- | --- | --- | --- | --- |
| Pre-load reference data in Apache Flink Task Manager memory | Low | High | Low | High | Low |
| Partitioned pre-loading of reference data in Apache Flink state | Low | High | Low | Low | Low |
| Periodic partitioned pre-loading of reference data in Apache Flink state | Low | High | Medium | Low | Medium |
| Per-record asynchronous lookup with unordered map | Medium | Medium | High | Low | Low |
| Per-record asynchronous lookup from an external cache system | Low or Medium (depending on cache storage and implementation) | Medium | High | Low | Medium |
| Enriching streams using the Table API | Low | High | High | Low to Medium (depending on the selected join operator) | Low |
Enrich streaming data by pre-loading the reference data
When the reference data is small in size and static in nature (for example, country data that includes country codes and country names), it's recommended to enrich your streaming data by pre-loading the reference data, which you can do in several ways.
To see the code implementation for pre-loading reference data in the various ways, refer to the GitHub repo. Follow the instructions in the GitHub repository to run the code and understand the data model.
Pre-loading of reference data in Apache Flink Task Manager memory
The simplest and also fastest enrichment method is to load the enrichment data into each of the Apache Flink task managers' on-heap memory. To implement this method, you create a new class by extending the RichFlatMapFunction abstract class. You define a global static variable in your class definition. The variable can be of any type; the only limitation is that it should extend java.io.Serializable (for example, java.util.HashMap). Within the open() method, you define the logic that loads the static data into your defined variable. The open() method is always called first, during the initialization of each task in Apache Flink's task managers, which makes sure the whole reference data is loaded before the processing begins. You then implement your processing logic by overriding the flatMap() method, where you access the reference data by its key from the defined global variable.
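The following is a minimal sketch of this pattern. The Customer, Location, and EnrichedCustomer classes are simplified stand-ins for the data model described in the GitHub repository, and loadReferenceData() is a hypothetical placeholder for the actual load (for example, reading a CSV file from Amazon S3).

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Simplified stand-ins for the data model in the GitHub repository
class Customer { String role; String getRole() { return role; } }
class Location { String role; String buildingNo; Location(String r, String b) { role = r; buildingNo = b; } }
class EnrichedCustomer { Customer customer; Location location; EnrichedCustomer(Customer c, Location l) { customer = c; location = l; } }

public class PreLoadEnrichment extends RichFlatMapFunction<Customer, EnrichedCustomer> {

    // Shared reference data, loaded once per task; the chosen type must be serializable
    private static Map<String, Location> locationRefData;

    @Override
    public void open(Configuration parameters) {
        // Runs during task initialization, before any record is processed:
        // load the whole reference data set into heap memory
        locationRefData = loadReferenceData();
    }

    @Override
    public void flatMap(Customer customer, Collector<EnrichedCustomer> out) {
        // Look up the reference data by key and emit the enriched record
        out.collect(new EnrichedCustomer(customer, locationRefData.get(customer.getRole())));
    }

    private static Map<String, Location> loadReferenceData() {
        // Hypothetical placeholder for reading the reference file (for example, from Amazon S3)
        Map<String, Location> refData = new HashMap<>();
        refData.put("Engineer", new Location("Engineer", "Building 1"));
        return refData;
    }
}
```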
The following architecture diagram shows the full reference data load in each task slot of the task manager:

This method has the following benefits:
- Easy to implement
- Low latency
- Can support high throughput
However, it has the following disadvantages:
- If the reference data is large in size, the Apache Flink task manager may run out of memory.
- Reference data can become stale over a period of time.
- Multiple copies of the same reference data are loaded in each task slot of the task manager.
- Reference data should be small enough to fit in the memory allotted to a single task slot. In Amazon MSF, each Kinesis Processing Unit (KPU) has 4 GB of memory, of which 3 GB can be used for heap memory. If ParallelismPerKPU in Amazon MSF is set to 1, one task slot runs in each task manager, and the task slot can use the whole 3 GB of heap memory. If ParallelismPerKPU is set to a value greater than 1, the 3 GB of heap memory is distributed across multiple task slots in the task manager. If you're deploying Apache Flink in Amazon EMR or in a self-managed mode, you can tune taskmanager.memory.task.heap.size to increase the heap memory of a task manager.
Partitioned pre-loading of reference data in Apache Flink state
In this approach, the reference data is loaded and kept in the Apache Flink state store at the start of the Apache Flink application. To optimize the memory utilization, first the main data stream is divided by a specified field via the keyBy() operator across all task slots. Furthermore, only the portion of the reference data that corresponds to each task slot is loaded in the state store.
This is achieved in Apache Flink by creating the class PartitionPreLoadEnrichmentData, extending the RichFlatMapFunction abstract class. Within the open method, you define a ValueStateDescriptor to create a state handle. In the referenced example, the descriptor is named locationRefData, the state key type is String, and the value type is Location. In this code, we use ValueState rather than MapState because we only hold the location reference data for a particular key. For example, when we query Amazon S3 to get the location reference data, we query for the specific role and get a specific location as a value.
In Apache Flink, ValueState is used to hold a specific value for a key, whereas MapState is used to hold a combination of key-value pairs. This technique is useful when you have a large static dataset that is difficult to fit in memory as a whole for each partition.
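A condensed sketch of this pattern follows, reusing the hypothetical Customer, Location, and EnrichedCustomer types from the earlier sketch; lookupLocationForRole() is a placeholder for the query against the reference data source (for example, Amazon S3). The complete PartitionPreLoadEnrichmentData class is in the GitHub repository.

```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class PartitionPreLoadEnrichmentData extends RichFlatMapFunction<Customer, EnrichedCustomer> {

    // Keyed state: one Location value per key (role) of the partitioned stream
    private transient ValueState<Location> locationRefData;

    @Override
    public void open(Configuration parameters) {
        ValueStateDescriptor<Location> descriptor =
                new ValueStateDescriptor<>("locationRefData", Location.class);
        locationRefData = getRuntimeContext().getState(descriptor);
    }

    @Override
    public void flatMap(Customer customer, Collector<EnrichedCustomer> out) throws Exception {
        Location location = locationRefData.value();
        if (location == null) {
            // Load only this partition's slice of the reference data, the first time the key is seen
            location = lookupLocationForRole(customer.getRole());
            locationRefData.update(location);
        }
        out.collect(new EnrichedCustomer(customer, location));
    }

    private Location lookupLocationForRole(String role) {
        // Hypothetical placeholder for the call to the reference data source
        return new Location(role, "Building 1");
    }
}
```

Because the state is keyed, the function must run on a keyed stream, for example customers.keyBy(Customer::getRole).flatMap(new PartitionPreLoadEnrichmentData()).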
The following architecture diagram shows the load of reference data for the specific key for each partition of the stream.

For example, the reference data in the sample GitHub code has roles that are mapped to each building. Because the stream is partitioned by role, only the specific building information per role needs to be loaded for each partition as the reference data.
This method has the following benefits:
- Low latency.
- Can support high throughput.
- Reference data for the specific partition is loaded in the keyed state.
- In Amazon MSF, the default state store configured is RocksDB. RocksDB can utilize a significant portion of the 1 GB of managed memory and the 50 GB of disk space provided by each KPU. This provides enough room for the reference data to grow.
However, it has the following disadvantages:
- Reference data can become stale over a period of time
Periodic partitioned pre-loading of reference data in Apache Flink state
This approach is a refinement of the previous technique, where each partition's reference data is reloaded on a periodic basis to refresh the reference data. This is useful if your reference data changes occasionally.
The following architecture diagram shows the periodic load of reference data for the specific key for each partition of the stream:

In this approach, the class PeriodicPerPartitionLoadEnrichmentData is created, extending the KeyedProcessFunction class. Similar to the previous pattern, in the context of the GitHub example, ValueState is recommended here because each partition only loads a single value for the key. In the same way as mentioned earlier, in the open method, you define the ValueStateDescriptor to handle the value state and obtain a runtime context to access the state.
Within the processElement method, load the value state and attach the reference data (in the referenced GitHub example, we attached buildingNo to the customer data). Also register a timer service to be invoked when the processing time passes the given time. In the sample code, the timer service is scheduled to be invoked periodically (for example, every 60 seconds). In the onTimer method, update the state by making a call to reload the reference data for the specific role.
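A trimmed sketch of this refresh logic, under the same hypothetical data model and placeholder lookup as the earlier sketches, could look like the following; the full PeriodicPerPartitionLoadEnrichmentData class is in the GitHub repository.

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class PeriodicPerPartitionLoadEnrichmentData
        extends KeyedProcessFunction<String, Customer, EnrichedCustomer> {

    private static final long REFRESH_INTERVAL_MS = 60_000L; // for example, every 60 seconds

    private transient ValueState<Location> locationRefData;

    @Override
    public void open(Configuration parameters) {
        locationRefData = getRuntimeContext().getState(
                new ValueStateDescriptor<>("locationRefData", Location.class));
    }

    @Override
    public void processElement(Customer customer, Context ctx, Collector<EnrichedCustomer> out)
            throws Exception {
        Location location = locationRefData.value();
        if (location == null) {
            // First record for this key: load the reference data and schedule a refresh timer
            location = lookupLocationForRole(customer.getRole());
            locationRefData.update(location);
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + REFRESH_INTERVAL_MS);
        }
        out.collect(new EnrichedCustomer(customer, location));
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<EnrichedCustomer> out)
            throws Exception {
        // Reload the reference data for this key and schedule the next refresh
        locationRefData.update(lookupLocationForRole(ctx.getCurrentKey()));
        ctx.timerService().registerProcessingTimeTimer(timestamp + REFRESH_INTERVAL_MS);
    }

    private Location lookupLocationForRole(String role) {
        // Hypothetical placeholder for the call to the reference data source
        return new Location(role, "Building 1");
    }
}
```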
This method has the following benefits:
- Low latency.
- Can support high throughput.
- Reference data for specific partitions is loaded in the keyed state.
- Reference data is refreshed periodically.
- In Amazon MSF, the default state store configured is RocksDB, and each KPU also provides 50 GB of disk space. This provides enough room for the reference data to grow.
However, it has the following disadvantages:
- If the reference data changes frequently, the application still has stale data depending on how frequently the state is reloaded
- The application can face load spikes during the reload of reference data
Enrich streaming data using per-record lookup
Although pre-loading of reference data provides low latency and high throughput, it may not be suitable for certain types of workloads, such as the following:
- Reference data updates with high frequency
- Apache Flink needs to make an external call to compute the business logic
- Accuracy of the output is important and the application shouldn't use stale data
Typically, for these types of use cases, developers trade off high throughput and low latency for data accuracy. In this section, you learn about a few common implementations for per-record data enrichment and their benefits and disadvantages.
Per-record asynchronous lookup with unordered map
In a synchronous per-record lookup implementation, the Apache Flink application has to wait until it receives the response after sending each request. This causes the processor to stay idle for a significant portion of the processing time. Instead, the application can send requests for other elements in the stream while it waits for the response for the first element. This way, the wait time is amortized across multiple requests, which increases the process throughput. Apache Flink provides asynchronous I/O for external data access. While using this pattern, you have to decide between unorderedWait (where it emits the result to the next operator as soon as the response is received, disregarding the order of the elements on the stream) and orderedWait (where it waits until all in-flight I/O operations complete, then sends the results to the next operator in the same order as the original elements were placed on the stream). Usually, when downstream consumers disregard the order of the elements in the stream, unorderedWait provides better throughput and less idle time. Visit Enrich your data stream asynchronously using Managed Service for Apache Flink to learn more about this pattern.
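A rough sketch of this pattern under the same hypothetical data model follows. The timeout and capacity values are illustrative only, and a real implementation would replace the placeholder lookup with the non-blocking client of the external store (for example, the DynamoDB asynchronous client).

```java
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncLookupEnrichment extends RichAsyncFunction<Customer, EnrichedCustomer> {

    @Override
    public void asyncInvoke(Customer customer, ResultFuture<EnrichedCustomer> resultFuture) {
        // Issue the lookup without blocking the operator and complete the future when the response arrives
        CompletableFuture
                .supplyAsync(() -> lookupLocationForRole(customer.getRole()))
                .thenAccept(location -> resultFuture.complete(
                        Collections.singleton(new EnrichedCustomer(customer, location))));
    }

    private Location lookupLocationForRole(String role) {
        // Hypothetical placeholder for a call to the external store (for example, Amazon DynamoDB)
        return new Location(role, "Building 1");
    }

    // Wiring into the pipeline with unorderedWait: results are emitted as soon as
    // each response returns, regardless of the order of the input elements
    public static DataStream<EnrichedCustomer> enrich(DataStream<Customer> customers) {
        return AsyncDataStream.unorderedWait(
                customers, new AsyncLookupEnrichment(), 1_000, TimeUnit.MILLISECONDS, 100);
    }
}
```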
The following architecture diagram shows how an Apache Flink application on Amazon MSF makes asynchronous calls to an external database engine (for example, Amazon DynamoDB) for every event in the main stream:

This method has the following benefits:
- Still fairly simple and easy to implement
- Reads the most up-to-date reference data
However, it has the following disadvantages:
- It generates a heavy read load for the external system (for example, a database engine or an external API) that hosts the reference data
- Overall, it might not be suitable for systems that require high throughput with low latency
Per-record asynchronous lookup from an external cache system
A way to enhance the previous pattern is to use a cache system to improve the read time for every lookup I/O call. You can use Amazon ElastiCache for caching, which accelerates application and database performance, or as a primary data store for use cases that don't require durability, like session stores, gaming leaderboards, streaming, and analytics. ElastiCache is compatible with Redis and Memcached.
For this pattern to work, you must implement a caching pattern for populating data in the cache storage. You can choose between a proactive or reactive approach depending on your application objectives and latency requirements. For more information, refer to Caching patterns.
The following architecture diagram shows how an Apache Flink application reads the reference data from an external cache storage (for example, Amazon ElastiCache for Redis). Data changes must be replicated from the main database (for example, Amazon Aurora) to the cache storage by implementing one of the caching patterns.

Implementation for this data enrichment pattern is similar to the per-record asynchronous lookup pattern; the only difference is that the Apache Flink application makes a connection to the cache storage instead of connecting to the primary database.
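For illustration only, the following sketch shows this variant using the open-source Lettuce Redis client against an ElastiCache for Redis endpoint; the endpoint, the key layout, and the assumption that the cache stores the building number as a plain string are all hypothetical and not part of the referenced example.

```java
import java.util.Collections;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class CachedAsyncLookupEnrichment extends RichAsyncFunction<Customer, EnrichedCustomer> {

    private transient RedisClient redisClient;
    private transient StatefulRedisConnection<String, String> connection;
    private transient RedisAsyncCommands<String, String> commands;

    @Override
    public void open(Configuration parameters) {
        // Connect once per task to the cache endpoint (hypothetical ElastiCache for Redis address)
        redisClient = RedisClient.create("redis://my-cache-endpoint:6379");
        connection = redisClient.connect();
        commands = connection.async();
    }

    @Override
    public void asyncInvoke(Customer customer, ResultFuture<EnrichedCustomer> resultFuture) {
        // Non-blocking GET against the cache, which is kept in sync with the primary
        // database by one of the caching patterns mentioned earlier
        commands.get("building-by-role:" + customer.getRole())
                .thenAccept(buildingNo -> resultFuture.complete(Collections.singleton(
                        new EnrichedCustomer(customer, new Location(customer.getRole(), buildingNo)))));
    }

    @Override
    public void close() {
        connection.close();
        redisClient.shutdown();
    }
}
```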
This method has the following benefits:
- Better throughput, because caching can accelerate application and database performance
- Protects the primary data source from the read traffic created by the stream processing application
- Can provide lower read latency for every lookup call
- Overall, might be suitable for medium to high throughput systems that want to improve data freshness
However, it has the following disadvantages:
- Additional complexity of implementing a cache pattern for populating and syncing the data between the primary database and the cache storage
- There is a chance for the Apache Flink stream processing application to read stale reference data, depending on which caching pattern is implemented
- Depending on the chosen cache pattern (proactive or reactive), the response time for each enrichment I/O may differ, so the overall processing time of the stream can be unpredictable
Alternatively, you can avoid these complexities by using the Apache Flink JDBC connector for Flink SQL APIs. We discuss enriching stream data via Flink SQL APIs in more detail later in this post.
Enrich stream data via another stream
In this pattern, the data in the main stream is enriched with the reference data in another data stream. This pattern is good for use cases in which the reference data is updated frequently and it is possible to perform change data capture (CDC) and publish the events to a data streaming service such as Apache Kafka or Amazon Kinesis Data Streams. This pattern is useful in the following use cases, for example:
- Customer purchase orders are published to a Kinesis data stream, and then joined with customer billing information in a DynamoDB stream
- Data events captured from IoT devices should be enriched with reference data in a table in Amazon Relational Database Service (Amazon RDS)
- Network log events should be enriched with the machine name at the source (and the destination) IP addresses
The following architecture diagram shows how an Apache Flink application on Amazon MSF joins data in the main stream with the CDC data in a DynamoDB stream.

To enrich streaming data from another stream, we use a common stream-to-stream join pattern, which we explain in the following sections.
Enrich streams using the Table API
Apache Flink Table APIs provide a higher abstraction for working with data events. With Table APIs, you can define your data stream as a table and attach the data schema to it.
In this pattern, you define tables for each data stream and then join those tables to achieve the data enrichment goals. Apache Flink Table APIs support different types of join conditions, like inner join and outer join. However, you want to avoid those if you're dealing with unbounded streams because they are resource intensive. To limit the resource utilization and run joins effectively, you should use either interval or temporal joins. An interval join requires one equi-join predicate and a join condition that bounds the time on both sides. To better understand how to implement an interval join, refer to Get started with Amazon Managed Service for Apache Flink (Table API).
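For illustration, the following sketch shows what such an interval join could look like when expressed in SQL through the Table API; the orders and shipments tables, their columns, and the 4-hour bound are hypothetical and assumed to be registered with event-time attributes.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class IntervalJoinExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // orders and shipments are assumed to be registered already, for example via
        // CREATE TABLE statements with Kinesis or Kafka connectors and watermarks
        Table joined = tableEnv.sqlQuery(
                "SELECT o.order_id, o.customer_id, s.ship_time "
              + "FROM orders o, shipments s "
              + "WHERE o.order_id = s.order_id "          // equi-join predicate
              + "AND s.ship_time BETWEEN o.order_time "   // time bound on both sides
              + "AND o.order_time + INTERVAL '4' HOUR");

        joined.execute().print();
    }
}
```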
Compared to interval joins, temporal table joins don't work with a time period within which different versions of a record are kept. Records from the main stream are always joined with the corresponding version of the reference data at the time specified by the watermark. Therefore, fewer versions of the reference data are kept in the state. Note that the reference data may or may not have a time element associated with it. If it doesn't, you may need to add a processing time element for the join with the time-based stream.
In the following example code snippet, the update_time column is added to the currency_rates reference table from the change data capture metadata (such as Debezium), and it is used to define a watermark strategy for the table.
CREATE TABLE currency_rates (
    currency STRING,
    conversion_rate DECIMAL(32, 2),
    update_time TIMESTAMP(3) METADATA FROM `values.source.timestamp` VIRTUAL,
    WATERMARK FOR update_time AS update_time,
    PRIMARY KEY(currency) NOT ENFORCED
) WITH (
    'connector' = 'kafka',
    'value.format' = 'debezium-json',
    /* ... */
);
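Building on that snippet, the following sketch shows what the temporal table join itself could look like; the orders table, its columns, and its order_time event-time attribute are assumptions for illustration. Each order is joined with the version of currency_rates that was valid at the order's event time.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TemporalJoinExample {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // currency_rates is assumed to be defined as in the CREATE TABLE statement above,
        // and orders is a hypothetical table with an order_time event-time attribute
        tableEnv.executeSql(
                "SELECT o.order_id, "
              + "o.price * r.conversion_rate AS converted_price "
              + "FROM orders AS o "
              + "JOIN currency_rates FOR SYSTEM_TIME AS OF o.order_time AS r "
              + "ON o.currency = r.currency").print();
    }
}
```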
This method has the following benefits:
- Easy to implement
- Low latency
- Can support high throughput when reference data is a data stream
SQL APIs provide higher abstractions over how the data is processed. For more complex logic around how the join operator should process the data, we recommend you always start with SQL APIs first and use DataStream APIs only if you really need to.
Conclusion
In this post, we demonstrated different data enrichment patterns in Amazon MSF. You can use these patterns to find the one that addresses your needs and quickly develop a stream processing application.
For further reading on Amazon MSF, visit the official product page.
