Parcel Protection: Inside UPS Capital’s Defensive Strategy with Striim & Google

Amidst the pandemic-fueled surge in online shopping, porch piracy emerged as a prevalent concern, with over one in 10 adults falling victim to package theft within the previous year, according to a 2021 Consumer Reports survey. This modern-day menace, epitomized by the term “porch pirate,” underscores the vulnerability of unattended packages to opportunistic thieves. UPS Capital recognizes the challenges faced by its customers in securing their package delivery ecosystem and is harnessing digital capabilities and data access to redefine traditional approaches, ensuring improved customer experiences and combating shipping loss.

About UPS Capital

UPS Capital, a subsidiary of UPS, specializes in providing financial and insurance solutions tailored to businesses engaged in shipping and logistics. Established in 1999, it offers risk management services including cargo insurance and trade credit insurance, along with trade finance solutions such as supply chain finance and export financing. UPS Capital provides customs brokerage services to navigate import/export processes, supply chain optimization tools like supply chain analytics and inventory management, and technology solutions like the UPS Capital Merchant Services platform and UPS Capital Cargo Finance platform. These offerings collectively support businesses in mitigating risks, optimizing operations, and facilitating smoother transactions within the global trade and logistics landscape.

Challenges

The surge in online shopping has led to an unprecedented rise in package deliveries and, correspondingly, package theft. This upsurge has dramatically outpaced traditional security measures, exposing significant operational vulnerabilities within UPS Capital. The sheer volume of data generated from the increasing package deliveries overwhelmed existing data management systems, underscoring a critical need for more advanced data handling capabilities. The absence of real-time data processing capabilities hindered UPS Capital’s risk management and rapid response efforts. This deficiency affected not only operational efficiency but also eroded consumer trust and impacted the financial performance of the company. These multifaceted challenges highlighted the urgent need for a sophisticated solution capable of addressing the complexities of modern package delivery and logistics.

Solution

In response, UPS Capital integrated Striim’s real-time data streaming technology with Google BigQuery’s analytics capabilities to enhance delivery security. Striim’s platform enabled the immediate ingestion and integration of data from various sources, facilitating real-time risk assessments and proactive decision-making. This seamless data flow into Google BigQuery allowed for advanced analytics, leveraging AI and machine learning to predict potential delivery risks and optimize logistics strategies effectively. Additionally, the innovative DeliveryDefense™ Address Confidence system utilized this integrated data to assign confidence scores to each delivery location based on real-time and historical data, enhancing predictive accuracy. This system empowered businesses to proactively manage delivery risks by rerouting packages or adjusting delivery protocols based on the calculated confidence scores, thereby streamlining operations and enhancing security.
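
To make the scoring idea concrete, here is a minimal, purely hypothetical sketch of how a confidence score might blend historical and real-time signals. UPS Capital’s actual DeliveryDefense model, feature set, weights, and score range are proprietary and not described in this case study; every name and number below is illustrative.

```python
# Hypothetical illustration only: not UPS Capital's actual scoring model.
def address_confidence(historical_delivery_success: float,
                       recent_theft_reports: int,
                       secure_dropoff_available: bool) -> int:
    """Blend historical and real-time signals into an illustrative 0-1000 score."""
    score = historical_delivery_success * 1000       # base rate from delivery history
    score -= min(recent_theft_reports, 10) * 40      # penalize live theft signals
    if secure_dropoff_available:
        score += 50                                  # reward lockers / front desks
    return max(0, min(1000, round(score)))

# A score under a chosen threshold triggers the rerouting described above.
if address_confidence(0.92, 6, False) < 700:
    print("Reroute to a secure pickup location")
```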

The UPS DeliveryDefense program utilizes a sophisticated technical setup, starting with the direct upload of varied datasets into BigQuery, which acts as the primary structured data repository in Google Cloud. Concurrently, SQL Server data is thoroughly cleaned in Link Data, which also extracts images and email attachments from different systems, ensuring data integrity and availability. These enriched datasets are merged in BigQuery for seamless Google Cloud integration. Vertex AI then becomes pivotal, running advanced machine learning models such as route anomaly detection and fraud detection for shipping transactions. With Vertex AI’s extensive tooling, these models are trained, refined, and deployed to surface insights, predict trends, and extract valuable information. Firestore, a flexible database suitable for various development environments, stores the resulting insights, confidence scores, and analytical details, all accessible via the Looker API.
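
As an illustration of the last hop in that pipeline, the sketch below reads model outputs from BigQuery and writes them to Firestore using the standard google-cloud-bigquery and google-cloud-firestore client libraries. The dataset, table, field, and collection names are invented for the example; the real schema is not public.

```python
from google.cloud import bigquery, firestore

# Illustrative names only; the real project, dataset, and collection are not public.
bq = bigquery.Client()
db = firestore.Client()

# Pull model outputs (e.g., confidence scores) that Vertex AI wrote back to BigQuery.
rows = bq.query(
    "SELECT address_id, confidence_score FROM `delivery_defense.scores` LIMIT 100"
).result()

# Store each score in Firestore, where the Looker API can read it for dashboards.
# Assumes address_id is a string suitable for use as a document ID.
for row in rows:
    db.collection("confidence_scores").document(row.address_id).set(
        {"score": row.confidence_score}
    )
```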

Results

  • Improved Customer Experience: The integration of Striim not only secures deliveries but also optimizes routing and delivery strategies, resulting in heightened reliability. This reliability, in turn, boosts customer trust and satisfaction, as customers receive their packages safely and on time.
  • Cost-Savings: UPS achieved significant cost reductions by implementing advanced strategies to minimize losses from theft and optimize delivery routes, employing proactive risk management alongside sophisticated analytics and route optimization algorithms.
  • Advanced AI and ML Implementations: Utilizing Striim in conjunction with Google Cloud technologies like BigQuery and Vertex AI, UPS can deploy complex machine learning models. These models are crucial for detecting routing anomalies and preventing shipping fraud, thereby enhancing the security and efficiency of the delivery network.
  • Improved Data Processing and Analytical Accuracy: Striim’s implementation of AI-driven innovations, such as embedding vectors into streaming data (see the sketch after this list), markedly improves the efficiency and accuracy of data processing. This technology allows UPS to perform real-time analytics, yielding quicker and more accurate decision-making in logistics.
  • Upgraded Protection Against Evolving Threats: Striim enables UPS to continuously adapt and enhance its defense models through ongoing analysis of real-time data and dynamic vector generation. This approach significantly strengthens UPS’s capabilities to mitigate evolving threats such as package theft and delivery fraud.
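
The “embedding vectors into streaming data” point is easiest to see in code. The sketch below is a generic illustration, not Striim’s internal implementation: it uses the open-source sentence-transformers library (an assumption for this example) to attach an embedding to each delivery event as it flows through a pipeline, so downstream models can run similarity search over live events.

```python
from sentence_transformers import SentenceTransformer

# Assumed setup: any embedding model works; Striim's actual pipeline is not public.
model = SentenceTransformer("all-MiniLM-L6-v2")

def enrich_event(event: dict) -> dict:
    """Attach an embedding vector to a streaming delivery event."""
    text = f"{event['address']} {event['delivery_notes']}"
    event["embedding"] = model.encode(text).tolist()  # vector for similarity search
    return event

event = enrich_event({"address": "123 Main St", "delivery_notes": "left at porch"})
print(len(event["embedding"]))  # 384-dimensional vector for this particular model
```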

https://vimeo.com/954036063?share=copy

Elevating Logistics Solutions: UPS Capital’s Strategic Partnership with Striim and Google BigQuery

UPS Capital’s adoption of Striim and Google BigQuery represents a proactive and strategic approach to managing the complexities of modern logistics. Through this technological integration, UPS Capital has enhanced its ability to secure packages, optimize delivery routes, and maintain a competitive edge in the logistics industry. The initiative demonstrates how leveraging cutting-edge technology can address the challenges of modern package delivery, ensuring safety and reliability for customers globally.

Discover how Striim on Google Cloud can empower real-time intelligence for AI, just like UPS’s DeliveryDefense Address Confidence. 

Sign up for a free trial today!



Causal Artificial Intelligence, potential AI pitfalls, getting executive buy-in

John K Thompson is co-author of “Causal Artificial Intelligence: The Next Step in Effective Business AI” and Global Head of Artificial Intelligence (AI) at EY.

John’s career path has taken him from assembler programmer, to creator of the first neural network utility at IBM, to running the AI group at Ernst & Young. We’ll unfold the pages of his acclaimed book, Causal Artificial Intelligence, and gain insights into his fascinating writing process. A relentless seeker of the ‘why’ behind data and analytics, John is sure to fuel your curiosity.

Fasten your seat belts as we navigate the multifaceted world of artificial intelligence. With the rise of AI, we are looking at a portfolio approach, focusing on several types such as generative and causal AI. Understand how these AI types generate context-specific responses, and the role of retrieval-augmented generation in enhancing AI models. We’ll also uncover how John masterfully built a production generative AI infrastructure at EY, and some smart ways to sidestep pitfalls while implementing AI.

We examine how AI can be a game changer for businesses. John delivers invaluable advice on team collaboration, secure data management, and the crucial link between data, analytics, and measurable business outcomes. In an era where AI is revolutionizing industries, John’s practical insights are the compass you need to chart a successful course.

Check out John’s book on Amazon: Causal Artificial Intelligence: The Next Step in Effective Business AI

Follow John K Thompson on LinkedIn

Reimagining Business Intelligence Through AI: A Conversation with Zenlytic’s CEO Ryan Janssen

Unlock the potential of AI in the world of data analytics with Zenlytic’s CEO, Ryan Janssen, as he takes us through a journey from collectible DataMons to the sophisticated integration of AI in business intelligence. Imagine transforming industry pros into trading cards – that’s the kind of innovation we chat about, highlighting the whimsical yet calculated steps towards making data not just informative but downright engaging. Ryan recounts the evolution of Zenlytic, from its machine learning beginnings to its current status as a conversational analytics platform, opening up new avenues for how we interact with data.

Data is the new gold, but only if you know how to mine it. This episode peels back the layers of complexity surrounding data modeling and the resurgence of semantic layers, unraveling the intricate dance of accessibility, maintenance, and user experience that businesses must perform. We discuss when your organization might be ready to embrace a semantic layer and the unmistakable signs that it’s time to elevate your BI tools for a self-serve experience. Ryan and I also tackle the importance of iteration and soft skills in delivering successful data projects that are not just functional but mission-critical.

As we wrap up, we cast an eye on the horizon of data analytics, where AI isn’t merely a trend but a series of incremental innovations shaping the future of data products. From the significance of trust and compliance in AI adoption to the debate between building versus buying AI solutions, we cover the strategic moves companies need to consider. Listen in for a candid discussion on the dynamic roles of data teams, the transformative power of AI like Zenlytic’s Zoe, and how different data structures can cater to the divergent engagement levels of users by 2025. After all, the future of data isn’t just about numbers—it’s about the stories they tell and the decisions they drive.

What’s New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What’s New In Data hosts industry practitioners to discuss the latest trends, common real-world data patterns, and analytics success stories.

A Cost Optimized Data Ecosystem with AI and FinOps Expert Kunal Agarwal

Embark on a journey with Kunal Agarwal, CEO of Unravel Data, as he unravels the complexities of managing escalating cloud costs with the sharp tools of FinOps and AI. If you’ve ever grappled with the challenge of scaling data operations without breaking the bank, this episode is your playbook for turning those daunting costs into a mastered art. Kunal, with his deep expertise in B2B enterprise technology, shares his insights on the inception of Unravel Data and the crucial role of AI in streamlining cloud data management. It’s not just about the tech; it’s about the smarts in employing it, and Kunal’s tales from the trenches of data observability will guide you through the labyrinth of efficiency and optimization.

Dive into the tech mosaic of today’s data platforms, where consistency is king despite the varied landscapes of Databricks, Snowflake, and others. We tackle the nitty-gritty of providing a seamless user experience and the prowess of Unravel’s AI-powered insights engine in standardizing performance across systems. The chess game of maximizing ROI from AI investments also takes center stage as we dissect the importance of a cost-conscious culture supported by FinOps. Listen as we weigh the fine balance between innovation and investment, and learn how to wield the double-edged sword of customization versus cost. Kunal’s strategic vision paves the way for a future where automation and economic savvy coalesce, propelling data-driven enterprises to new heights.

What’s New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What’s New In Data hosts industry practitioners to discuss the latest trends, common real-world data patterns, and analytics success stories.

5 Key Principles of Effective Data Modeling for AI

Artificial intelligence (AI) uses data to make important decisions across industries. Just as a tall building needs a strong plan and foundation, successful AI requires good data models. And while traditional business intelligence and reporting use cases allow some margin of error, an AI that hallucinates in customer-facing experiences can be costly for a brand. So how do we create data models for these complex algorithms and systems? How data is organized, stored, and accessed directly affects how well AI programs and applications work. Incorporating AI into data modeling relies on fundamental techniques and principles that strengthen the synergy between data and AI models.

These five key principles will show us how. Data modeling for AI involves making a structured framework that helps AI systems efficiently process, analyze, and understand data to make smart decisions:

The 5 Fundamentals

  • Data Cleansing and Validation: Ensure data accuracy and consistency by addressing errors, missing values, and inconsistencies. Techniques like outlier detection and imputation help make sure your data is reliable and ready for analysis.
  • Feature Engineering: Craft the right features – the building blocks of your model – by selecting the most relevant and informative ones. Techniques like correlation analysis and feature importance scores guide this crucial step.
  • Data Transformation: Prepare your data for analysis by handling scaling issues and addressing skewed distributions. Techniques like normalization and min-max scaling ensure all features contribute equally to model training (a minimal sketch follows this list). Striim’s intuitive platform streamlines data transformation processes, saving you time and resources.
  • Model Explainability: Build models that not only give you an answer but also explain how they arrived at it. Techniques like LIME and SHAP shed light on the decision-making process, fostering trust and ethical considerations.
  • Scalability and Performance Optimization: Design your model to handle growing datasets and evolving needs. Consider distributed computing frameworks and cloud solutions for efficient processing and future-proofing your AI project.
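
As a minimal sketch of the cleansing and transformation steps above, the scikit-learn pipeline below imputes a missing value and min-max scales both features. The data is a toy example invented for illustration.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Toy feature matrix with a missing value and wildly different scales.
X = np.array([[1.0, 200.0], [2.0, np.nan], [3.0, 600.0]])

prep = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # fill the NaN with the column mean
    ("scale", MinMaxScaler()),                   # map each feature to [0, 1]
])
print(prep.fit_transform(X))  # [[0. 0.], [0.5 0.5], [1. 1.]]
```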

One excellent source for delving deeper into data modeling is the podcast episode with Joe Reis and Ben Rogojan: “Data Modeling With Joe Reis – Understanding What Data Modeling Is And Where It’s Going.”

Key Principles in Action

  • Data Quality: According to a study by Deloitte, following sound data cleansing and validation practices can reduce costs and improve data accuracy. This, in turn, leads to enhanced model performance, unlocking the true potential of your AI investments. Additionally, addressing data biases and inconsistencies helps ensure ethical considerations are embedded within your AI development process.
  • Feature Selection: One advantage of concentrating on important features is that it enhances model efficiency. By using feature engineering tools, you can improve efficiency and accuracy by cutting unnecessary noise. Additionally, having fewer features results in better interpretability, making your AI models more transparent and reliable.
  • Normalization: A research paper published by the University of Utrecht explains how normalization improves model convergence, leading to faster and more stable model training. This ultimately translates to reduced bias, as normalization prevents features with larger scales from dominating the learning process.
  • Model Interpretability: Building trust with users is crucial for AI adoption. A survey by PwC found that 73% of respondents would be less likely to trust an AI system if they couldn’t understand how it makes decisions. Using methods like LIME and SHAP (see the sketch after this list) can help you create trust and transparency in your AI development, promoting ethical thinking and responsible AI practices.
  • Scalability: As your AI ambitions grow, your data models need to keep pace. A scalable and flexible architecture ensures your models can handle growing data volumes and adapt to evolving needs. This, combined with cloud-based and distributed technologies, future-proofs your AI project, guaranteeing it remains effective and efficient even as your data landscape expands.
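
To ground the interpretability point, here is a minimal SHAP sketch on a public scikit-learn dataset. The model and data are stand-ins chosen for the example, not anything from a production system.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Fit a small model on a public dataset, then attribute its predictions to features.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Each value attributes one prediction to one feature: positive values pushed the
# model toward the class, negative values pushed it away.
print(shap_values[0].shape)
```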

Business Value

By following these principles and employing the right capabilities, you tap into the full potential of your AI efforts and achieve:

  • Improved decision-making: Gain deeper insights from data-driven predictions, leading to informed business decisions that drive growth and competitive advantage across various domains.
  • Enhanced operational efficiency: Use reliable AI models to automate tasks, simplify procedures, and improve resource allocation, freeing people to focus on higher-value activities.
  • Increased customer satisfaction: Deliver personalized experiences and tailored offerings based on accurate AI-powered predictions, fostering deeper customer engagement and loyalty.
  • Reduced risks and improved compliance: Promote ethical AI development and address potential biases by implementing responsible data modeling techniques, protecting your organization’s reputation, and following changing regulatory standards.

Striim: Your Partner in Effective Data Modeling

Striim’s platform provides the means for integrating and analyzing data in real time, greatly improving the data modeling process when applying AI. Here are four ways Striim’s solution can aid the successful incorporation of AI into data modeling:

  • Real-Time Data Integration and Streaming: Striim enables the ongoing collection of data from a diverse range of sources, such as databases, logs, message queues, and cloud platforms. This ensures that the data used for AI modeling remains current, a critical element for applications that rely on real-time analytics or decision-making. With fresh data, AI models can make more precise forecasts and choices based on the most recent information, improving their dependability and efficiency.
  • Data Preprocessing and Transformation: Before creating models, data needs to be prepared by cleaning, transforming, or combining it. Striim provides real-time data processing to take care of these tasks as data flows into the system (a generic sketch follows this list). This makes the modeling process faster and enhances model accuracy by ensuring that high-quality, properly formatted data is used in AI models.
  • Scalability and Efficiency: AI and machine learning models need a scalable infrastructure for efficient data processing and analysis. Striim’s platform can handle large volumes of data in real time, keeping data modeling efficient and effective as data grows.
  • Enhanced Decision Making with Real-Time Analytics: Striim’s real-time analytics can inform decision-making with immediate insights and AI integration for adjusting strategies, optimizing operations, and predicting future trends. This allows for a dynamic and responsive approach to data modeling with continuous updates and improvements based on real-time data and analytics.
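
Striim expresses this kind of in-flight preprocessing within its own platform; the plain-Python generator below is only a generic sketch of the idea, not Striim’s API: each event is validated and normalized as it streams past, before it ever reaches a model. The field names are invented for the example.

```python
from typing import Iterator

def clean(events: Iterator[dict]) -> Iterator[dict]:
    """Validate and normalize events in flight, before they reach the model."""
    for event in events:
        if event.get("amount") is None:  # drop records missing a key field
            continue
        event["currency"] = event.get("currency", "USD").upper()  # normalize case
        event["amount"] = round(float(event["amount"]), 2)        # coerce and round
        yield event

stream = [{"amount": "19.99", "currency": "usd"}, {"amount": None}]
print(list(clean(iter(stream))))  # [{'amount': 19.99, 'currency': 'USD'}]
```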

With Striim’s real-time data integration, processing, and analysis capabilities, we’re basically giving AI a shot of espresso – making it faster, stronger, and more accurate than ever before. Say goodbye to slow and clunky data modeling, and hello to a sleek, efficient, and scalable AI-driven solution.

Ready to unlock the power of AI with effective data modeling?

Get a free demo of Striim today and experience the difference for yourself!

Crafting Intuitive AI Experiences for Everyday Life with Abi Aryan

Embark on a captivating voyage into the intricacies of AI with Abi Aryan, the financial wizard turned tech trailblazer, who unveils the transformative power of machine learning in our latest episode. Witness the metamorphosis of data pipelines and video classification as Abi elucidates her groundbreaking research at UCLA and her influential work within the entertainment industry. Her commitment to social impact and democratizing AI is palpable as she offers a glimpse into her mission of miniaturizing large language models to fit into the palm of your hand, ensuring the future of tech is not only responsive but delightfully anticipatory.

As we unravel the complexities of operationalizing Large Language Models, Abi’s insights illuminate the shifting landscape of product experiences, where AI is not just a component but the orchestrator. She expertly navigates the technical finesse required to tailor AI for IoT devices, merging the realms of luxury and practicality for the ultimate smart living experience. Tune in to discover how Abi’s pioneering work is crafting a future where technology doesn’t just blend into our lives; it enhances them with an intuitive touch, anticipating our every need with intelligence and grace.

What’s New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What’s New In Data hosts industry practitioners to discuss the latest trends, common real-world data patterns, and analytics success stories.

The Vanguard of AI and Data Strategies for Competitive Edge with Ryan Wexler

Prepare to unlock the secrets of successful data infrastructure investment with the guidance of Ryan Wexler, VP at Unusual Ventures. Transitioning from the meticulous realm of data engineering right into the heart of venture capitalism, Ryan offers an unparalleled perspective on pinpointing the most promising data companies. This episode is a treasure trove of insights, where we uncover the critical ingredients that elevate a startup from a mere niche player to a scalable powerhouse in the competitive data sector, all thanks to the strategic support Unusual Ventures provides.

As we navigate the intricate evolution of the modern data stack, it becomes clear that while data warehouses once lured enterprises with their cost-effectiveness, burgeoning scales of operation have led to some sleepless nights over soaring expenses. This is where our discussion takes a turn into the groundbreaking realm of data lakes and independent storage solutions – the silent disruptors offering a respite by decoupling storage from compute costs. Listen in to understand how businesses are strategizing to harness these technologies for optimized data management, marking a seismic shift in the tech landscape.

And then there’s the undeniable surge of AI – a tidal wave of innovation that’s transforming the face of industry after industry. This episode peeks behind the curtain of AI integration, highlighting how trailblazers like Druva and ThoughtSpot are embedding AI to revolutionize their offerings. As we dissect the proliferation of AI tools, our dialogue serves as a compass for startups and enterprises alike, emphasizing the importance of a laser focus on ROI and the wisdom of keeping those burn rates low amidst an ever-changing economic backdrop. Join us for a journey that not only demystifies the complexities of data ROI but also navigates the myriad choices in the expanding universe of AI adoption.

What’s New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What’s New In Data hosts industry practitioners to discuss the latest trends, common real-world data patterns, and analytics success stories.
