Unleash the power of real-time data streaming to Google BigQuery

 

To address the demand for a simple way to build, manage, and monitor streaming data pipelines, Striim is introducing a series of purpose-built data pipeline solutions for popular data warehouses. We’re pleased to introduce the first of these: Striim for BigQuery, which enables Google BigQuery users to load data with maximum performance, simplicity, and operational ease.

We’d like to invite you to join us for a live webinar showcasing Striim for BigQuery and its capabilities including:

  • How Striim for BigQuery enables you to extract and load data from various sources at high scale and low latency, while ensuring security and maintaining compliance
  • A simple, no-code UX that requires minimal user intervention and eliminates complexities associated with setting up streaming data pipelines
  • Intuitive monitoring and dashboards that provide transparency into the health and status of your pipelines, so you can understand what’s happening and fine-tune your data pipelines

Presented by:

John Kutay
Director of Product, Striim

How to Build Decentralized Data Products with Data Streaming

In this period of economic uncertainty, data leaders are under increasing pressure to supply useful data to their teams so they can make quick decisions and provide exceptional customer experiences.

Data streaming and Change Data Capture can help decentralize data management and provide the necessary flexibility to meet the needs of various stakeholders.

In this live webinar, we will demonstrate how to source data from operational systems (OLTP databases, sensors, API data) and transform it into actionable Data Products, while meeting SLAs for uptime and delivery speeds. The presentation will be followed by a Q&A session. Save your spot and join us live!

 

Presented by:

John Kutay
Director of Product, Striim

How Change Data Capture Enables Real-Time Data Streaming From Oracle Databases

 


All businesses rely on data. Historically, this data resided in monolithic, on-premises databases, and for many enterprises, Oracle was the database of choice. As businesses modernize, they look to the cloud for analytics and strive for real-time data insights. Legacy databases are often difficult to replace entirely, but the data and transactions within them remain essential for analytics.

To unlock the full value of their data, companies need to stream critical transactions from their Oracle databases to their cloud provider in real time. An easy way to do this is with Change Data Capture (CDC). But not all CDC is created equal, and not all CDC solutions can handle mission-critical workloads.
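
The core CDC pattern can be sketched in a few lines: a reader tails the database's change log and emits insert/update/delete events, which a writer replays against the target. The event shape and helper names below are purely illustrative assumptions, not Striim's API.

```python
# Illustrative sketch of replaying change-data-capture (CDC) events against
# a target. The event format and function names are hypothetical, chosen
# only to show the general pattern of log-based replication.

def apply_cdc_event(target: dict, event: dict) -> None:
    """Apply one CDC event (keyed rows) to an in-memory 'target table'."""
    op, key, row = event["op"], event["key"], event.get("row")
    if op == "INSERT":
        target[key] = row
    elif op == "UPDATE":
        target[key] = {**target.get(key, {}), **row}  # merge changed columns
    elif op == "DELETE":
        target.pop(key, None)

# Replay a small change stream, as a log-based CDC reader would emit it.
target_table = {}
events = [
    {"op": "INSERT", "key": 1, "row": {"name": "Ada", "city": "London"}},
    {"op": "UPDATE", "key": 1, "row": {"city": "Paris"}},
    {"op": "INSERT", "key": 2, "row": {"name": "Bob", "city": "Oslo"}},
    {"op": "DELETE", "key": 2},
]
for e in events:
    apply_cdc_event(target_table, e)

print(target_table)  # {1: {'name': 'Ada', 'city': 'Paris'}}
```

Because the reader tails the transaction log rather than querying tables, this approach is non-intrusive to the production database, which is what makes it viable for mission-critical workloads.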

Watch our on-demand technical webinar and demo, where we provide you with an overview of Striim’s Oracle CDC capabilities. We cover topics including:

  • Introduction to OJet, Striim’s new Oracle reader, which can read more than 150 gigabytes of data per hour from Oracle (up to version 21c)
  • Demo: how to set up zero-downtime migrations and replications from Oracle to Google BigQuery, from initial load to ongoing real-time sync. We also demo unique capabilities of Striim’s BigQuery writer, including partition pruning and a streaming API.
  • An overview of Striim’s features that support high-volume, mission-critical enterprise environments

Presented by:

Sai Natarajan
VP Chief Field Technologist, Striim

 

How to Stream Data to Azure Synapse Analytics with Striim

 


There is significant demand for zero downtime database migrations and continuous data replication as workloads shift to the cloud. Modernizing databases by offloading workloads to the cloud requires building real-time data pipelines from legacy systems.

The Striim® platform is an enterprise-grade cloud data integration solution that continuously ingests, processes, and delivers high volumes of streaming data from diverse sources, on-premises or in the cloud.

In this joint session with Striim partner Microsoft, we discuss why and how cloud and data architects and engineers use Striim to move data into Azure in a consumable form, quickly and with sub-second latency, so you can easily run critical transactional and analytical workloads.

We also demonstrate how to use Striim to migrate or continuously replicate enterprise databases to Azure Synapse Analytics with no downtime:

  • Prepare Azure data targets for data integration with table creation that mirrors the source database
  • Set up in-flight transformations right in the GUI to minimize end-to-end latency and enable real-time analytics & operational reporting
  • Deploy zero-downtime migrations to Azure from existing enterprise databases anywhere

Presented by:

Edward Bell
Senior Solutions Architect, Striim

 

Karlien Vanden Eynde
Director of Product Marketing Cloud Analytics, Microsoft

How to Stream Data to Snowflake Using Change Data Capture


Data is the new oil, but it’s only useful if you can move, analyze, and act on it quickly. A Nucleus Research study shows that tactical data loses half its value 30 minutes after it’s generated, while operational data loses half its value after eight hours.
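
The study’s figures can be read as half-lives. Under a simple exponential-decay model (our assumption for illustration, not the study’s stated model), the remaining value of data after a given delay is:

```python
# A simple exponential-decay model of data value: we assume (this is an
# illustrative assumption, not Nucleus Research's model) that value halves
# every `half_life_minutes`.

def data_value(minutes_elapsed: float, half_life_minutes: float) -> float:
    """Fraction of original value remaining after `minutes_elapsed`."""
    return 0.5 ** (minutes_elapsed / half_life_minutes)

# Tactical data: half-life of 30 minutes.
print(data_value(30, 30))            # 0.5
print(round(data_value(60, 30), 3))  # 0.25 -- two half-lives
# Operational data: half-life of 8 hours (480 minutes).
print(round(data_value(480, 480), 2))  # 0.5
```

On this model, tactical data an hour old has already lost three-quarters of its value, which is the argument for moving it with sub-second rather than batch latency.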

Change data capture (CDC) plays a vital role in the efforts to ensure that data in IT systems is quickly ingested, transformed, and used by analytics and other types of platforms. Striim is a unified data streaming and integration platform that offers non-intrusive, high-performance CDC from production databases to a wide range of targets.

In this live technical demo, we walk you through a use case where data is replicated from PostgreSQL to Snowflake in real time, using CDC. We also show examples of more complex use cases, including a data mesh with multiple data consumers.

How to Stream Data to Google Cloud with Striim

 


The move to Google Cloud is an attractive path for data modernization and for achieving a solid foundation for digital transformation. Real-time data integration allows you to run high-value workloads in the cloud and reap the full benefits of your cloud environment to improve your business operations and embrace innovation. As with adopting any new technology, there is complexity in the move and a number of things to consider, especially when dealing with mission-critical systems.

In this on-demand technical demo, Fahad Ansari and Srdan Dvanajscak show you how to stream data from an Oracle database to Google BigQuery and other Google Cloud targets with Striim. They demonstrate how Striim enables you to:

  • Ingest data from in-production sources with negligible impact
  • Make your operational data available immediately for applications and services on the Google Cloud
  • Process and analyze in-flight data using SQL queries and UI-based operators

How to Stream Data to Kafka and Snowflake with Striim


Adopting a data warehouse in the cloud with Snowflake requires a modern approach to moving enterprise data. This data is often generated by diverse sources deployed in many locations, including on-premises data centers, major public clouds, and devices.

In this technical demo, Fahad Ansari and Srdan Dvanajscak show you two ways to stream data from an Oracle database to Snowflake:
  • Directly, with Striim’s native integration with Snowflake that gives users granular control over how their data is uploaded to Snowflake
  • Via Kafka, using Striim to stream data to a Kafka topic and load it to Snowflake
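
The "via Kafka" path decouples the CDC reader from the warehouse loader: the reader publishes change events to a topic, and an independent consumer drains the topic into Snowflake in batches. The sketch below uses a deque to stand in for a Kafka topic; the names are illustrative assumptions, not Striim's or Kafka's actual API.

```python
# Sketch of the decoupled producer/consumer pattern behind the Kafka path.
# A deque stands in for the Kafka topic; produce/consume names are
# illustrative, not a real client library.
from collections import deque

topic: deque = deque()  # stands in for a Kafka topic

def produce(event: dict) -> None:
    """CDC reader side: publish one change event to the topic."""
    topic.append(event)

def consume_batch(max_events: int) -> list:
    """Warehouse loader side: drain up to `max_events` from the topic."""
    batch = []
    while topic and len(batch) < max_events:
        batch.append(topic.popleft())
    return batch

# The reader publishes faster than the loader drains -- the topic buffers.
for i in range(5):
    produce({"op": "INSERT", "id": i})

batch = consume_batch(3)
print(len(batch), len(topic))  # 3 2
```

The buffering topic is what lets the source and target scale and fail independently, at the cost of operating the extra Kafka hop; the direct path trades that flexibility for simplicity and granular control over the Snowflake upload.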

How to Build Streaming Data Pipelines for Real-Time Analytics


Whether you’re a traveler waiting for your ride-share, or a large retailer keeping an eye on potential supply chain disruptions, hours-old or days-old data is obsolete. For real-time insights and experiences you need real-time analytics, powered by streaming data pipelines. But how can you build your first streaming data pipeline, as quickly and seamlessly as possible?

Join us for a live webinar with Steve Wilkes (Striim Co-Founder and CTO), where he will demystify streaming data pipelines and cover topics including:

  • What’s behind the explosive growth in real-time analytics (and why market-leading companies have adopted real time as the status quo)
  • How to build real-time data streaming pipelines quickly, reliably, and at unlimited scale
  • Why real-time data integration is an essential component of a streaming data pipeline
  • Customer examples showing how streaming data pipelines enable companies to make informed decisions in real time

Enterprise Data Streaming Patterns: Integration, Governance and Data Mesh


Watch our on-demand webinar on data streaming patterns that enable enterprises to analyze and act on data in real time.

Alok Pareek and Sanjeev Mohan cover topics including:

  • How to deliver real-time data and insights in complex enterprise environments
  • The latest opinions and best practices for enterprise streaming data patterns, including data mesh
  • A real-world example of a mesh architecture in retail

Speakers

Alok Pareek

Founder and EVP of Products, Striim; former VP of Engineering at Oracle and GoldenGate

 

Sanjeev Mohan

Principal, SanjMo & Former Gartner Research VP, Big Data and Advanced Analytics

Building a Multi-Cloud Data Fabric for Real-Time Analytics


Data is increasingly siloed, making it harder for companies to extract the most value from their data. This is compounded by the fact that over 90% of companies planned to have hybrid-cloud or multi-cloud operations by 2022.

Watch this on-demand webinar with James Serra (Data Platform Architecture Lead at EY) where we demystify building a multi-cloud data environment for operations and real-time analytics. We cover the following topics:

  • Pros and cons of multi-cloud vs doubling down on a single cloud
  • Enterprise data patterns such as Data Fabric, Data Mesh, and The Modern Data Stack
  • Data ingestion and data transformation in a multi-cloud/hybrid cloud environment
  • Comparison of data warehouses (Snowflake, Synapse, Redshift, BigQuery) for real-time workloads