This guest post was co-authored with Kostas Diamantis from Skroutz.
At Skroutz, we’re passionate about our product, and it’s always our top priority. We’re constantly working to improve and evolve it, supported by a large and talented team of software engineers. Our product’s continuous innovation and evolution lead to frequent updates, often necessitating changes and additions to the schemas of our operational databases.
When we decided to build our own data platform to meet our data needs, such as supporting reporting, business intelligence (BI), and decision-making, the main challenge, and also a strict requirement, was to make sure it wouldn’t block or delay our product development.
We chose Amazon Redshift to promote data democratization, empowering teams across the organization with seamless access to data, enabling faster insights and more informed decision-making. This choice supports a culture of transparency and collaboration, as data becomes readily available for analysis and innovation across all departments.
However, keeping up with schema changes from our operational databases while updating the data warehouse, without constantly coordinating with development teams, delaying releases, or risking data loss, became a new challenge for us.
In this post, we share how we handled real-time schema evolution in Amazon Redshift with Debezium.
Solution overview
Most of our data resides in our operational databases, such as MariaDB and MongoDB. Our approach involves using the change data capture (CDC) technique, which automatically handles the schema evolution of the data stores being captured. For this, we used Debezium together with a Kafka cluster. This solution enables schema changes to be propagated without disrupting the Kafka consumers.
However, handling schema evolution in Amazon Redshift became a bottleneck, prompting us to develop a strategy to address this challenge. It’s important to note that, in our case, changes in our operational databases primarily involve adding new columns rather than breaking changes like altering data types. Therefore, we implemented a semi-manual process to resolve this issue, along with a mandatory alerting mechanism to notify us of any schema changes. This two-step process consists of handling schema evolution in real time and handling data updates in an asynchronous manual step. The following architectural diagram illustrates a hybrid deployment model, integrating both on-premises and cloud-based components.

The data flow begins with data from MariaDB and MongoDB, captured using Debezium for CDC in near real-time mode. The captured data is streamed to a Kafka cluster, where Kafka consumers (built on the Ruby Karafka framework) read the changes and write them to the staging area, either in Amazon Redshift or Amazon Simple Storage Service (Amazon S3). From the staging area, DataLoaders promote the data to production tables in Amazon Redshift. At this stage, we apply the slowly changing dimension (SCD) concept to these tables, using Type 7 for most of them.
In data warehousing, an SCD is a dimension that stores data which, although generally stable, can change over time. Various methodologies address the complexities of SCD management. SCD Type 7 places both the surrogate key and the natural key into the fact table. This allows the user to select the appropriate dimension records based on:
- The primary effective date on the fact record
- The most recent or current information
- Other dates associated with the fact record
Afterwards, analytical jobs run to create reporting tables, enabling BI and reporting processes. The following diagram provides an example of the data modeling process from a staging table to a production table.

The architecture depicted in the diagram shows only our CDC pipeline, which fetches data from our operational databases; it doesn’t include other pipelines, such as those for fetching data through APIs, scheduled batch processes, and many more. Also note that our convention is that dw_* columns are used to hold SCD metadata and other metadata in general. In the following sections, we discuss the key components of the solution in more detail.
Real-time workflow
For the schema evolution part, we focus on the column dw_md_missing_data, which captures schema evolution changes that occur in the source databases in near real time. When a new change is produced to the Kafka cluster, the Kafka consumer is responsible for writing this change to the staging table in Amazon Redshift. For example, a message produced by Debezium to the Kafka cluster might have the following structure when a new shop entity is created:
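A minimal sketch of such an event follows; the column names (id, name, created_at) are illustrative, and a real Debezium envelope also carries a schema section and richer source metadata:

```json
{
  "payload": {
    "before": null,
    "after": {
      "id": 1,
      "name": "My shop",
      "created_at": "2024-01-15T10:00:00Z"
    },
    "source": { "table": "shops" },
    "op": "c"
  }
}
```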
The Kafka consumer is responsible for preparing and executing the SQL INSERT statement:
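A sketch of the resulting statement, using the same illustrative table and column names rather than our exact schema:

```sql
INSERT INTO staging.shops (id, name, created_at)
VALUES (1, 'My shop', '2024-01-15 10:00:00');
```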
After that, let’s say a new column called new_column is added to the source table, with the value new_value.
The new message produced to the Kafka cluster will have the following format:
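Sketched with the same illustrative fields as before, the event now carries the extra key:

```json
{
  "payload": {
    "before": null,
    "after": {
      "id": 2,
      "name": "Another shop",
      "created_at": "2024-01-16T09:30:00Z",
      "new_column": "new_value"
    },
    "source": { "table": "shops" },
    "op": "c"
  }
}
```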
Now the SQL INSERT statement executed by the Kafka consumer will be as follows:
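A sketch of the statement, assuming the consumer serializes the unknown keys as JSON and loads them into the SUPER column with Amazon Redshift’s JSON_PARSE function:

```sql
INSERT INTO staging.shops (id, name, created_at, dw_md_missing_data)
VALUES (2, 'Another shop', '2024-01-16 09:30:00',
        JSON_PARSE('{"new_column": "new_value"}'));
```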
The consumer performs an INSERT as it would for the known schema, and anything new is added to the dw_md_missing_data column as key-value JSON. After the data is promoted from the staging table to the production table, it will have the following structure.

At this point, the data flow keeps running without any data loss or any need to coordinate with the teams responsible for maintaining the schema in the operational databases. However, this data might not be easily accessible to data consumers, analysts, or other personas. It’s worth noting that dw_md_missing_data is defined as a column of the SUPER data type, which was introduced in Amazon Redshift to store semistructured data or documents as values.
Monitoring mechanism
To track new columns added to a table, we have a scheduled process that runs weekly. This process checks for tables in Amazon Redshift with values in the dw_md_missing_data column and generates a list of tables requiring manual action to make this data accessible through a structured schema. A notification is then sent to the team.
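The per-table check can be expressed as a simple query; the following sketch shows the idea for a single table (our actual process iterates over the candidate tables, and the names here are illustrative):

```sql
SELECT COUNT(*) AS rows_with_missing_data
FROM production.shops
WHERE dw_md_missing_data IS NOT NULL;
```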
Manual remediation steps
In the aforementioned example, the manual steps to make this column accessible would be:
- Add the new columns to both the staging and production tables:
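For example, assuming a VARCHAR column for the string value in our example:

```sql
ALTER TABLE staging.shops ADD COLUMN new_column VARCHAR(255);
ALTER TABLE production.shops ADD COLUMN new_column VARCHAR(255);
```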
- Update the Kafka consumer’s known schema. In this step, we just need to add the new column name to a simple array list. For example:
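A sketch of the idea in Ruby, assuming the consumer keeps the known schema as a plain array (the column names other than new_column, and the helper method, are illustrative, not our exact implementation):

```ruby
# Known columns for the shops staging table; any key in an incoming
# Debezium payload that is not listed here is routed into the
# dw_md_missing_data SUPER column instead of a dedicated column.
SHOPS_KNOWN_COLUMNS = %w[
  id
  name
  created_at
  new_column
].freeze

# Split an incoming payload hash into known columns and
# missing-schema extras destined for dw_md_missing_data.
def split_payload(payload, known_columns)
  known, missing = payload.partition { |key, _| known_columns.include?(key) }
  [known.to_h, missing.to_h]
end
```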
- Update the DataLoader’s SQL logic for the new column. A DataLoader is responsible for promoting the data from the staging area to the production table.
- Transfer the data that was loaded in the meantime from the dw_md_missing_data SUPER column to the newly added column, and then clean it up. In this step, we just need to run a data migration like the following:
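A sketch of such a migration, assuming the value is a string; the cast and table names follow the illustrative example above:

```sql
-- Backfill the structured column from the SUPER column,
-- then clear the SUPER column for the migrated rows.
UPDATE production.shops
SET new_column = dw_md_missing_data.new_column::VARCHAR,
    dw_md_missing_data = NULL
WHERE dw_md_missing_data.new_column IS NOT NULL;
```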
While performing the preceding operations, we make sure no one else makes changes to the production.shops table, because we don’t want any new data to be added to the dw_md_missing_data column.
Conclusion
The solution discussed in this post enabled Skroutz to manage schema evolution in its operational databases while seamlessly updating the data warehouse. This removed the need for constant coordination with development teams and eliminated the risk of data loss during releases, ultimately fostering innovation rather than stifling it.
As Skroutz’s migration to the AWS Cloud approaches, discussions are underway on how the current architecture can be adapted to align more closely with AWS-centered principles. To that end, one of the changes being considered is Amazon Redshift streaming ingestion from Amazon Managed Streaming for Apache Kafka (Amazon MSK) or open source Kafka, which will make it possible for Skroutz to process large volumes of streaming data from multiple sources with low latency and high throughput to derive insights in seconds.
If you face similar challenges, discuss them with an AWS representative and work backward from your use case to arrive at the most suitable solution.
About the authors
Konstantina Mavrodimitraki is a Senior Solutions Architect at Amazon Web Services, where she assists customers in designing scalable, robust, and secure systems in global markets. With deep expertise in data strategy, data warehousing, and big data systems, she helps organizations transform their data landscapes. A passionate technologist and people person, Konstantina loves exploring emerging technologies and supports the local tech communities. Additionally, she enjoys reading books and playing with her dog.
Kostas Diamantis is the Head of the Data Warehouse at Skroutz. With a background in software engineering, he transitioned into data engineering, using his technical expertise to build scalable data solutions. Passionate about data-driven decision-making, he focuses on optimizing data pipelines, enhancing analytics capabilities, and driving business insights.
