Using Kappa Architecture to Reduce Data Integration Costs

Kappa Architectures are becoming a popular way of unifying real-time (streaming) and historical (batch) analytics, giving you a faster path to realizing business value with your pipelines.

Treating batch and streaming as separate pipelines for separate use cases drives up complexity, cost, and ultimately deters data teams from solving business problems that truly require data streaming architectures.

Kappa Architecture combines streaming and batch while simultaneously turning data warehouses and data lakes into near real-time sources of truth.

Diagram: how Kappa unifies batch and streaming pipelines

The development of Kappa architecture has changed data processing by giving teams a fast, cost-effective way to reduce data integration costs. Kappa architecture enables near-real-time data processing, making it ideal for companies that need to process large amounts of data quickly. Striim offers an easy-to-use platform with drag-and-drop functionality and pre-built components that make it simple to build a kappa architecture. In this article, we will look at the benefits and drawbacks of kappa architecture, how Striim makes it easier to use, what infrastructure you need for your kappa architecture, and how you can start designing your own with a free version of Striim’s unified data integration and streaming platform.

Overview of kappa architecture

Kappa architecture is a data processing architecture that enables near-real-time processing. Unlike Lambda architecture, which maintains two separate systems (one for streaming data and another for batch processing), Kappa handles both workloads with a single stream-processing pipeline: historical results are produced by replaying the event log through the same code that processes live data. This lets companies process large volumes of data quickly and efficiently, even with frequent changes in the data structure. Stream processors, storage layers, message brokers, and databases make up the basic components of this architecture.
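
Kappa’s central idea, one codebase serving both live and historical processing, can be sketched in a few lines of plain Python. This is a toy illustration, not Striim code: the `EventLog` class stands in for a durable broker such as Kafka, and `process` stands in for a stream processor.

```python
# Toy illustration of the Kappa idea: one processing function serves both
# "streaming" (new events) and "batch" (replay from offset 0) because both
# read the same append-only log. A real system would use a broker like Kafka.

class EventLog:
    """Append-only event log, a stand-in for a durable message broker."""
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

    def replay(self, from_offset=0):
        """Re-read the log from any offset; offset 0 means full reprocessing."""
        return iter(self.events[from_offset:])


def process(events):
    """The single codebase: identical logic for live and replayed events."""
    return sum(e["amount"] for e in events)


log = EventLog()
for amount in (10, 20, 30):
    log.append({"amount": amount})

live_total = process(log.replay(from_offset=2))  # only the newest event
historical_total = process(log.replay())         # replay the whole history
```

Replaying the log through the same `process` function is what lets Kappa drop the separate batch system: a logic change is handled by redeploying the processor and replaying from the start.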

The goal of kappa architecture is to reduce the cost of data integration by providing an efficient, real-time way of managing large datasets. By replacing hand-maintained batch ETL (extract-transform-load) jobs with continuous pipelines, companies can save time and money while still leveraging advanced technologies like machine learning and artificial intelligence (AI). Striim offers an intuitive UI with drag-and-drop functionality as well as prebuilt components to help users design their own custom kappa architectures. With a free version also available, businesses can start building their own system right away without expensive consultants or weeks spent configuring complex systems.

Kappa architectures have changed the way businesses approach big data solutions, allowing them to take advantage of cutting-edge technologies while reducing the costs associated with maintaining separate batch ETL systems. With Striim’s unified platform making it easier than ever to build a custom kappa architecture tailored to your business needs, you can get started designing your own system today.

Benefits of kappa architecture for data integration

Kappa architecture is quickly gaining popularity due to its ability to enable near-real-time data processing and reduce the complexity associated with data integration. By utilizing a single codebase for both streaming and batch processing, businesses can reap multiple benefits from this solution. This simplification drastically cuts down on development resources needed as well as infrastructure setup and maintenance costs. Additionally, it allows for efficient processing of both real-time and historical data which eliminates the need for multiple versions of the same dataset or manually managed systems.

The versatility of kappa architectures makes them suitable for many industries, including healthcare, finance, retail, telecoms, energy, and more. Companies can leverage this technology to create analytics solutions tailored to their individual needs, capable of handling substantial amounts of streaming data with low latency. Moreover, users can design their own system with Striim’s unified platform, which features an intuitive UI with drag-and-drop functionality, and a free version is available so businesses can get started straight away.

In short, kappa architectures offer real advantages for those looking to reduce their data integration costs while using cutting-edge technologies. With Striim’s unified platform, businesses have access to a range of features that make designing their own system straightforward, at an affordable cost or even free.

Drawbacks of kappa architecture

Kappa architecture has changed the way businesses process and store data, allowing them to take advantage of cutting-edge technologies while reducing costs associated with manual processes. However, this technology is not without its drawbacks.

The complexity of setting up and maintaining a kappa architecture can be very high, requiring specialized engineers to ensure that all components are properly configured and functioning correctly. Additionally, without a centralized system for managing data, it can be difficult for businesses to maintain data governance across their organization. This lack of centralization also means that each component must be independently managed, leading to higher costs in terms of additional computing resources.

Another limitation of kappa architecture is scalability. As more data is processed through the system, it will require more computing resources in order to remain efficient and effective. This makes scaling the architecture complex and costly, as businesses will need to invest in additional hardware or cloud computing services in order to handle larger volumes of data processing.

Finally, kappa architectures are not suitable for all types of data processing tasks. While they are well suited for near-real-time analytics applications, they may not be the best choice for batch processing jobs or those that require intensive computation or machine learning algorithms. It’s important for businesses to assess their individual needs before deciding if kappa architectures are the right choice for reducing their data integration costs.

How Striim overcomes these drawbacks to make Kappa simple and affordable

Kappa architecture is an incredibly powerful tool for businesses looking to quickly and cost-effectively reduce data integration costs, but it does have some drawbacks that can make it difficult to use. Striim’s platform overcomes these drawbacks by making it easy and affordable to build a Kappa architecture.

Striim’s real-time streaming capabilities allow users to capture data from over 150 sources in near real time, which eliminates the need for manual processes. Striim users can also see cost reductions of over 90% when using its smart data pipelines.

In addition, Striim has a range of pricing plans available, so businesses can find the plan that best suits their needs, from the free Striim Developer tier to the Mission Critical offering, the industry’s only horizontally scalable, unified data streaming platform delivered as a managed service for maximum uptime SLAs and performance.

The intuitive UI, drag-and-drop functionality, and pre-built components make building a Kappa architecture quick and easy. This reduces the complexity associated with configuration and maintenance, allowing users to get up and running in no time. Striim’s free version lets users start designing their kappa architecture without any upfront cost, making it suitable for businesses of all sizes, and the platform also provides granular control over data contracts for data delivery and schema SLAs. With its real-time streaming capabilities, cloud integration options, pricing plans that fit various budgets, intuitive UI with drag-and-drop functionality, and pre-built components, as well as its free version, Striim makes building a Kappa architecture simple and affordable for businesses looking to reduce their data integration costs while taking advantage of cutting-edge technologies.

Choosing the right infrastructure for kappa architecture

When setting up a kappa architecture, businesses have to choose between cloud and on-premise solutions. Cloud-based architectures are more cost-effective but lack the control of an on-premise setup. On the other hand, an on-premise architecture provides more control but can be more expensive and difficult to manage. Each option has its own advantages and disadvantages, so companies should carefully weigh their needs before deciding which type of infrastructure is right for them.

The components needed to create a successful kappa architecture vary depending on the setup chosen, but generally include storage, compute, networking resources, and some form of data integration software. Companies should ensure they have enough resources available in order to avoid any performance issues as data volumes increase over time. Additionally, businesses should plan for scalability and high availability in order to ensure that their system can handle large amounts of data without disruption or loss of service.

Cost optimization is also an important consideration when building a kappa architecture. Companies need to balance performance requirements with financial constraints in order to get the most out of their investment while still ensuring reliability and stability. Additionally, they should follow industry best practices such as using containerized workloads for portability and leveraging managed services such as databases and message brokers whenever possible. Finally, companies should keep abreast of emerging trends in kappa architectures such as serverless computing or streaming automation tools that could help them further reduce costs while improving efficiency and scalability.

Ultimately, choosing the right infrastructure for a kappa architecture requires careful consideration of individual needs while keeping cost optimization in mind. Businesses should assess their performance requirements alongside financial constraints in order to build a reliable system that meets both goals while taking advantage of industry best practices and emerging trends wherever possible.

Leveraging Striim’s unified data integration and streaming platform to build your kappa architecture

Building a kappa architecture with Striim’s unified data integration and streaming platform is an easy and cost-effective solution that can help businesses reduce their data integration costs. With its intuitive UI, drag-and-drop functionality and pre-built components, Striim’s platform makes it simple to construct the architecture quickly.

The platform is optimized to support a wide range of data sources, including both structured and unstructured data. This allows users to easily manage all their data in one place, while also allowing them to scale up or down as needed for peak performance. Additionally, Striim’s platform provides cloud integration options for popular cloud platforms like Amazon Web Services and Microsoft Azure.

Striim’s platform is designed with scalability in mind, making it easy for businesses to handle large volumes of real-time streaming data without any latency issues or downtime. Additionally, the platform provides automated monitoring capabilities that enable companies to ensure their architecture remains reliable and stable. Furthermore, the platform also offers several other features that make it easier for businesses to manage their kappa architectures such as advanced analytics tools, machine learning algorithms, security features and more.

In addition to its powerful features, Striim’s unified data integration and streaming platform comes with a free version that allows users to get started quickly and cost-effectively – without having to pay any upfront costs. This makes it an ideal choice for businesses looking for ways to reduce their data integration costs while taking advantage of cutting edge technologies like kappa architectures.

Start architecting your Kappa Architecture today by talking to one of our specialists or trying Striim for free.

John Kutay, Head of Product at Striim, joins theCUBE hosts Lisa Martin and Dustin Kirkland at Google Cloud Next 2023.

Transcript

Lisa Martin 0:06
Good morning, everyone, and welcome to theCUBE’s day one coverage of Google Cloud Next, live at Moscone South in San Francisco. I’m Lisa Martin. Dustin Kirkland is my CUBE analyst co-host. We’re here with about 20,000 people; you can hear the din of the buzz behind us. There was a tremendous amount of announcements this morning, and lots of great Google Cloud execs, customers, and partners. We’re here with Striim joining us next. John Kutay, the head of product, joins us. John, great to have you. Thank you so much for joining us on The Cube.

John Kutay 0:36
Thanks so much for having me. Super excited for this discussion.

Lisa Martin 0:39
Yeah, I would love to share with the audience more about Striim. What do you guys do? Mission, vision? Help us understand that.

John Kutay 0:46
Striim is unified data streaming for generative AI, analytics, and operations. We love helping our customers infuse real-time data into their decisions and their operations, and now generative AI, which is becoming a top priority for many of the enterprise data teams that we’re working with.

Lisa Martin 1:04
It is. Gen AI is probably the hottest topic on the planet, or one of them. You talked about real time, and I think one of the things we’ve learned in the last few years is that access to real-time data is no longer a nice-to-have for companies. It’s an imperative for every industry, and it’s really hard to do. But I’m curious what some of the gaps in the market were when Striim was launched, that you guys saw and thought, we can solve this?

John Kutay 1:27
So the company’s CEO and CTO came from GoldenGate Software, which at the time of its acquisition by Oracle was the number one database replication product in the market. But it was very pigeonholed into just copying data between databases, and there was this obvious demand in the market to not only move data in real time but to analyze it. And now with this big wave of generative AI, it’s not about data going into some warehouse where you wait for someone to pull up a report. Now you want data automatically making decisions for you. You want your customers to talk to a smart, AI-driven bot that knows everything about them and can answer questions for them. And this all requires real-time data.

Lisa Martin 2:07
Absolutely. And every company, whether it’s the grocery store, the gas station, or Starbucks, has my data, and I expect that they not only use it responsibly and securely, but also use it to give me that real-time, relevant, personalized experience that I want. Every company has to be, I’ve heard people say, data driven. And I heard someone last week say no, not data driven, insight driven. There’s a difference there. Talk to us about how Striim is working with enterprise data teams to really help them extract the value of data, especially working with Gen AI.

John Kutay 2:40
Macys.com, who we presented with previously at a Google Cloud Next session. Remember, they’re not in the business of doing data, right? They’re trying to sell clothes. They had a digital-first initiative, and Striim helped them go from their existing investments in their on-premise, you could call it legacy, infrastructure, and made sure that data is in Google Cloud within seconds. Because if they’re building new digital applications, that data has to be there. So we’re really proud to have customers like that. And then we have other examples, airlines, for example. They want to run their operations on time, they need good customer experiences, and they need to make sure the aircraft are safe. We help American Airlines do exactly that. We were presenting with them at a Data and AI Summit, and with Striim, Databricks, and MongoDB, they were able to take their aircraft telemetry and action it for the operational teams that maintain the aircraft, make sure everything’s safe, everything’s ready to go, and best of all, everything’s on time.

Lisa Martin 3:45
Yeah, that’s what it’s all about, right? Being on time these days.

Dustin Kirkland 3:48
And along those lines, talk to me a little bit about the velocity. In terms of, you know, how teams integrate this: how fast, how long does it take, how long until we see results from integrating Striim?

John Kutay 3:55
Absolutely. We’re really proud of being able to get our customers into production in a matter of weeks, even when it’s complex. It’s breaking down long-standing data silos within the enterprise, a lot of technical complexity. For instance, at our presentation with American Airlines at the Data and AI Summit, they were really proud of the fact that they went to production at global scale within 12 weeks with Striim. And it’s because Striim’s a unified data streaming platform, meaning the connectors, the data movement, the modeling, the processing, streaming into your target systems, whether it’s Google Cloud infrastructure, Databricks, or Snowflake, all that data has to be there with quality and uptime SLAs that the business can trust.

Lisa Martin 4:39
Where are your customer conversations these days? Are you talking with Chief Data Officers, CIOs? Is it all of the above? I imagine it can vary depending on the organization. But every company is so data rich, and they have to figure out: how do we get access to this now?

John Kutay 4:53
It’s really important to be a catalyst for internal collaboration, meaning you have to work with the CIOs and the Chief Data Officers all the way down to the people who are in the trenches building the pipelines, and build alignment there. And that’s something that we’re also really proud of. Because at the end of the day, yeah, you’re solving technical problems, but you’re delivering on business use cases and initiatives, and that’s the most critical thing.

Lisa Martin 5:16
What are some of the key use cases that you see that maybe have more horizontal play across industries that Striim is involved in?

John Kutay 5:24
Yeah, that’s an amazing question. So right now, data teams, you know, they already had a year’s worth of initiatives on their plate. And now with generative AI, and all the innovation that’s happening here at Google Cloud Next and across the various platforms, there is a very high-priority mandate for data teams to adopt generative AI: bring their data into generative AI, and then do the reverse, which is bring generative AI to where their data is today. So those are some of the use cases they’re looking at, in terms of making sure that data is making decisions on its own. Yeah.

Lisa Martin 5:59
Can you share a little bit about the partnership with Google, what you guys are doing together, and how you’re helping customers really unlock the value of AI and Gen AI?

John Kutay 6:06
Absolutely, we’re really proud of our partnership with Google. If you’re a BigQuery user, you go into the Add Data button and Striim’s right there; you can launch it from your console as a Google Cloud native product. Our CTO, Alok Pareek, has presented up here at Google Cloud Next since the beginning, when they started doing these shows. And, you know, we’re really proud of helping enterprises quickly realize the value of Google Cloud by complementing their existing enterprise investments, getting that data into Google Cloud, and making sure that it’s reliable and the business can build on top of it, using the modern infrastructure that Google is providing.

Lisa Martin 6:44
Yeah, that modern infrastructure, they talked a lot about that this morning. For providers, it was probably music to their ears.

John Kutay 6:50
Yes, Striim’s clearly an important piece of that, for sure.

Dustin Kirkland
How do your customers think about the return on investment, you know, of Striim and Google Cloud, all that, making their investment in you and seeing a return?

John Kutay
Look, when I work with a data team, I try to work with them on their goals and OKRs and things along those lines. If their goal is to move data from A to B, that’s not good enough; we have to talk about what your actual business initiatives are and how this data project or, you know, tactic is going to help you there. So the example I brought up with Macy’s, right, they can tie that to customer experiences: having fresher, more reliable data is critical. American Airlines, it’s their aircraft moving, making sure those are operating as efficiently as possible, aircraft taking off on time, well maintained. And that’s really where you see the ROI: how is data helping your business meet its mission statements?

Lisa Martin 7:51
When you’re in customer or prospect conversations, John, and they say, why Striim, what do you say? What are those key differentiators that really shine a light on the value prop?

John Kutay 8:02
Yeah, absolutely. The fact that it’s simply a unified platform. In just a couple of clicks, we’re spinning up a lot of complex infrastructure that you don’t have to know about as an end user, making sure that it’s very reliable and very fast. You know, instead of, I mean, companies are pulling in six, seven vendors to do the same thing. Now you get the whole thing in one single pane of glass: you get your connectors, your data processing, your data delivery, monitoring, data quality and freshness, so that the data stakeholders know that there’s ultimately trust in that data.

Lisa Martin 8:37
And that trust is currency these days, right? It absolutely has to be there. It sounds like what Striim is doing is really helping companies kick out six to seven other vendors, so from what I’m hearing, workforce productivity and cost efficiencies are the why, as Dustin was talking about. It seems like those are some of the big outcomes, in general, that organizations can achieve with Striim.

John Kutay 8:59
The way I always think about it is very purpose driven. You know, you have a specific business problem you’re trying to solve, and rather than it taking years of development and expensive investments, you can get your initiatives off the ground and into production very quickly. And, you know, that’s just with the power of the platform and the way that we can partner with data teams as well, to make sure that they’re tying it to their business initiatives and getting that value out of it.

Lisa Martin 9:22
Yeah, it’s all about trust: making sure the data is trustworthy, responsible, and secure, and extracting that value. Last question, John, before we wrap: anything new and exciting coming up that we should be looking for? Any events, any webinars, things that you want to plug?

John Kutay 9:35
Yeah, in fact, tonight we’re doing a What’s New in Data live. This is a thought leadership session that I run. Really excited to have Bruno Aziza, who was formerly head of data analytics; now he’s at CapitalG, Alphabet’s CapitalG. And we have Sanjeev Mohan, who was previously at Gartner. And we have Ridhima Khan, VP at Dapper Labs, who is going to talk about modern digital consumer experiences. So that’s tonight at Salesforce Tower, and we’re going to record it, so it’ll be made available to everyone. And we’re going on tour with What’s New in Data, bringing all the data practitioners and data leaders together to really talk about how they’re innovating with data and meeting all these business goals they’re trying to deliver.

Lisa Martin 10:16
Awesome. Lots of stuff going on. Best of luck tonight. Sanjeev is a CUBE analyst from time to time, and we know Bruno; he’s been on the show. So lots of great folks. That’s what we were talking about before we went live: in tech, it’s just like two degrees of separation. John, it’s great to have you. You’re now officially a CUBE alumni; I can probably get you a sticker. So appreciate you sharing with us what’s going on at Striim with Google and how you’re really enabling those data teams to maximize value and use Gen AI. Thank you so much.

John Kutay 10:41
Thank you for having me.

Lisa Martin 10:42
Our pleasure for John Kutay and Dustin Kirkland. I’m Lisa Martin, and you’re watching The Cube live day one of our three days of coverage of Google Cloud Next. Dustin and I are going to be right back with our next guest. So don’t go anywhere.

 

Real-Time Data for Generative AI

Power AI models by capturing, transforming, and delivering real-time data

Benefits

Pave the way for informed decision-making and data-driven insights

Capture, transform, and cleanse data for model ingest 

Refine raw ML data and securely store it in Google Cloud Storage (GCS)


Striim’s unified data streaming platform empowers organizations to infuse real-time data into AI, analytics, customer experiences and operations. In this blog post, we’ll delve into how Striim’s real-time ingestion and transformation capabilities can be leveraged to refine raw ML data and securely store it in Google Cloud Storage (GCS). This guide will walk you through the steps needed to create a data pipeline that refines and enriches data before storing it in GCS for further analysis and training. 

Prerequisite: Before we embark on our data transformation journey, ensure you have a running instance of Striim and access to its console. 

Striim Developer: https://signup-developer.striim.com/

Step-by-Step Guide: Transforming Raw ML Data with Striim

The transformation pipeline consists of four key components, each playing a critical role in reading the incoming data and transforming raw data into refined ML data. Before creating the Striim pipeline, however, we will begin by examining the PostgreSQL table that serves as the ML data repository.

Iris Dataset Table:

Table "dms_sample.iris_dataset"

    Column     |  Type
 --------------+---------
  id           | integer
  sepal_length | integer
  sepal_width  | integer
  petal_length | integer
  petal_width  | integer
  species      | text

This table is named “iris_dataset”, and it contains measurements of various characteristics of iris plants: sepal length, sepal width, petal length, and petal width. The purpose of collecting this information is to use it later to train a classification model that accurately categorizes different iris species. Unfortunately, the application responsible for ingesting these records into the “iris_dataset” table leaves NULL values in the measurement columns and writes species codes rather than species names.

In this scenario, Striim is used for real-time data transformation on the ‘iris_dataset’ table. This involves replacing NULL values with 0 and mapping species codes to their respective names. After this cleansing process, the data is formatted as delimiter-separated values (DSV), securely stored in GCS, and used to train a classification model, such as a Random Forest classifier, whose main goal is to predict iris species from the provided characteristics.
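
The cleansing logic can be prototyped outside Striim in ordinary Python before the pipeline is built. The tuple layout and the code-to-name mapping below are illustrative stand-ins that mirror the transformations just described, not Striim APIs:

```python
# Prototype of the cleansing the pipeline will perform: replace NULL (None)
# measurements with 0.0 and map species codes to species names.

SPECIES = {"1": "setosa", "2": "virginica", "3": "versicolor"}

def cleanse(row):
    """row: (id, sepal_length, sepal_width, petal_length, petal_width, code)"""
    rid, *measures, code = row
    measures = [0.0 if m is None else float(m) for m in measures]
    return (rid, *measures, SPECIES.get(code, code))

raw = (7, 5.1, None, 1.4, 0.2, "1")
clean = cleanse(raw)  # (7, 5.1, 0.0, 1.4, 0.2, 'setosa')
```

The Striim components that follow implement the same two steps as continuous queries on the change stream.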

Now that we have a clear understanding of the overall use case, we can proceed to creating our data pipeline within Striim.

Component 1: PostgreSQL Reader

Start by creating a PostgreSQL Reader in Striim. This component establishes a connection to the source PostgreSQL database, capturing real-time data as it’s generated using Striim’s log-based Change Data Capture (CDC) technology.

Component 2: Continuous Query – Replacing NULL Values

Attach a Continuous Query to the PostgreSQL Reader. This step involves writing a query that replaces any NULL values in the data with ‘0’.

SELECT * FROM pg_output_ml 
MODIFY(
   data[1] = CASE WHEN data[1] IS NULL THEN TO_FLOAT(0.0) ELSE data[1] END, 
   data[2] = CASE WHEN data[2] IS NULL THEN TO_FLOAT(0.0) ELSE data[2] END, 
   data[3] = CASE WHEN data[3] IS NULL THEN TO_FLOAT(0.0) ELSE data[3] END, 
   data[4] = CASE WHEN data[4] IS NULL THEN TO_FLOAT(0.0) ELSE data[4] END);

This code retrieves raw data from the “pg_output_ml” output stream and replaces any NULL values in the specified columns (sepal_length, sepal_width, petal_length, petal_width) with 0, while retaining the original values for non-NULL entries, using Striim’s MODIFY function. More information is available in Striim’s documentation for the MODIFY function.

Component 3: Label Transformation

After transforming our data as explained earlier, we create an additional Continuous Query. This query is pivotal: it replaces the numeric labels 1, 2, and 3 in the dataset with their corresponding iris species names (setosa, virginica, and versicolor, per the mapping in the query below). The names “setosa,” “versicolor,” and “virginica” denote different iris flower types. This change serves two essential purposes. First, it makes the dataset easier to understand, helping users and stakeholders intuitively comprehend the data and engage with model outputs. Second, the human-readable labels carry through to the trained model, so its predictions are reported as familiar species names rather than opaque numeric codes.

SELECT replaceString(replaceString(
replaceString(t,'1','setosa'),'2','virginica'),'3','versicolor')
FROM pg_ml_data_output t;

Within this query, we leverage Striim’s replaceString function to seamlessly replace any iris code with its corresponding actual name. More information: https://www.striim.com/docs/en/modifying-the-waevent-data-array-using-replace-functions.html 
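
One caveat worth noting: chained string replacement operates on substrings anywhere in the value it is given, so it is only safe when the codes cannot occur inside other parts of the data. In a general-purpose language the same mapping is usually written as an exact lookup on the species field alone. A small Python sketch of that safer variant (the record shape here is hypothetical):

```python
# Map species codes to names by exact lookup on the species field only,
# avoiding accidental substring replacement elsewhere in the record.

SPECIES = {"1": "setosa", "2": "virginica", "3": "versicolor"}

def map_species(record):
    """record: dict with a 'species' field holding the code as a string."""
    out = dict(record)
    out["species"] = SPECIES.get(record["species"], record["species"])
    return out

mapped = map_species({"id": 12, "species": "2"})  # species becomes 'virginica'
```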

Component 4: Storing in GCS

Lastly, attach a GCS Writer to the previous step’s output/stream. Configure this component to store the transformed data as a DSV file in your designated GCS bucket. What’s more, the UPLOAD POLICY ensures that a new DSV file is generated either after capturing 10,000 events or every five seconds.
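
The rotation behavior behind such an upload policy (flush a batch when either an event count or an age threshold is reached, whichever comes first) can be sketched in plain Python. The thresholds and the `flush` callback here are illustrative, not Striim configuration:

```python
import time

class RollingBuffer:
    """Batch events and flush when a count or an age threshold is hit first,
    mirroring an 'upload after 10,000 events or 5 seconds' style policy."""

    def __init__(self, flush, max_events=10_000, max_age_s=5.0,
                 clock=time.monotonic):
        self.flush = flush            # callback receiving a list of events
        self.max_events = max_events
        self.max_age_s = max_age_s
        self.clock = clock
        self.events = []
        self.opened_at = None

    def add(self, event):
        if not self.events:
            self.opened_at = self.clock()  # batch starts with its first event
        self.events.append(event)
        if (len(self.events) >= self.max_events
                or self.clock() - self.opened_at >= self.max_age_s):
            self.flush(self.events)
            self.events = []

batches = []
buf = RollingBuffer(batches.append, max_events=3, max_age_s=60.0)
for e in range(7):
    buf.add(e)
# batches == [[0, 1, 2], [3, 4, 5]]; event 6 is still buffered
```

A real writer would also flush on a timer even when no new events arrive; this sketch only checks the age threshold when an event comes in.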

After creating the pipeline, we can proceed to deploy and start it.

Once the pipeline is deployed and started, Striim begins capturing new data in real time and transforming it on the fly. Previewing the pipeline output shows NULL values converted to ‘0’ and the iris species codes converted to their respective names.

When the Total Input and Total Output values match, Striim has written all processed events out to the GCS bucket (striim-ml-bucket). Now, let’s proceed to the Google Cloud Storage account and locate the bucket.

Verification and Visualization

Within the bucket, you’ll find the DSV files created by the GCS Writer. 

To verify the contents, we’ll use Vertex AI and the Pandas Python library. Upload a DSV file to a JupyterLab instance, load it using Pandas, and explore its contents. This verification step ensures that the transformations have been carried out successfully, paving the way for subsequent machine learning training and analysis.
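
A minimal sketch of that verification step might look like the following. The in-memory sample stands in for a DSV file downloaded from the bucket, and the column names are taken from the `iris_dataset` table shown earlier:

```python
import io
import pandas as pd

# Stand-in for a DSV file downloaded from the GCS bucket; a real file would
# be loaded with pd.read_csv(path, header=None, names=COLS) instead.
COLS = ["id", "sepal_length", "sepal_width",
        "petal_length", "petal_width", "species"]
dsv = io.StringIO(
    "1,5.1,3.5,1.4,0.2,setosa\n"
    "2,7.0,3.2,4.7,1.4,versicolor\n"
    "3,6.3,0.0,6.0,2.5,virginica\n"
)
df = pd.read_csv(dsv, header=None, names=COLS)

# Verify the upstream transformations: no NULLs survived, and every species
# value is a name rather than a numeric code.
assert not df.isnull().any().any()
assert set(df["species"]) <= {"setosa", "versicolor", "virginica"}
```

A clean pass here confirms the pipeline’s cleansing before any model training begins.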

Conclusion: Transforming Possibilities with Striim and GCS

Striim’s real-time capabilities open doors to limitless possibilities in data transformation. Constructing a streamlined pipeline that captures, cleanses, and enriches data paves the way for Generative AI and machine learning. For additional details regarding Striim and its data processing capabilities, please refer to:

Striim Cloud product information page: https://www.striim.com/product/striim-cloud/

Striim Continuous Query documentation: https://www.striim.com/docs/en/create-cq–query-.html

Striim Open Processor documentation: https://www.striim.com/docs/platform/en/using-striim-open-processors.html 

 


Striim Achieves Google Cloud Ready — Cloud SQL Designation

We are proud to announce that Striim has successfully achieved the Google Cloud Ready – Cloud SQL designation for Google Cloud’s fully managed relational database service for MySQL, PostgreSQL, and SQL Server. This exciting new designation recognizes Striim’s unwavering partnership efforts with Google Cloud and the joint commitment to be part of a customer’s cloud adoption and app modernization journey and become instrumental in their business innovations.

Alok Pareek, the Co-founder and Executive Vice President of Products and Engineering at Striim, shared: “Striim is excited to be part of the Google Cloud Ready — Cloud SQL designation. Major enterprise customers leverage Striim to continuously move data from on-premise and cloud-based mission-critical databases into Google Cloud SQL for digital transformation. Striim seamlessly connects to Cloud SQL and enables operational data to be synced via snapshot and incremental CDC workloads in real time. This helps our joint customers innovate, for example, by feeding ML models in real time and leveraging Cloud SQL’s generative AI capabilities, such as using the new pgvector PostgreSQL extension for storing vector embeddings.”

The Google Cloud Ready – Cloud SQL designation is designed to help businesses get started quickly with their cloud-based projects. Through this program, customers can deploy applications on the cloud with confidence knowing that they are backed by a trusted partner who has been through rigorous testing and certification processes. Our team is excited about this opportunity to continue to work closely with Google Cloud —and we’re eager to help customers leverage their existing investments in cloud technologies while leveraging our expertise in data streaming to Cloud SQL targets.

Being part of the program, Striim continues to collaborate closely with Google Cloud partner engineering and Cloud SQL teams to develop joint roadmaps and provide Google-approved and industry-standard solutions for integration use cases.

Striim is committed to providing comprehensive support for Google Cloud services across all industries. Our team of experienced engineers will work closely with customers to ensure successful deployments on Google Cloud while preserving their current data architecture. We are thrilled about this new partnership with Google Cloud and look forward to helping our customers take advantage of all its features for efficient database management.

If you’re interested in learning more about Striim’s launch partnership with Google Cloud Ready — Cloud SQL designation, please visit us at booth 532 during Google Next 2023 from August 29-31 in San Francisco!

 
