
Episode #052: AWS re:Invent 2025 - Data & AI Wrap-Up (Series Finale)

Duration: 24 minutes | Speakers: Jordan & Alex | Series: AWS re:Invent 2025 (Part 4 of 4 - Finale) | Target Audience: Platform Engineers, SREs, DevOps Engineers, Cloud Architects

📝 Read the complete AWS re:Invent 2025 guide: Comprehensive coverage of all four episodes with actionable takeaways.

Episode Overview

The finale of our AWS re:Invent 2025 series covers the data and AI services that tie everything together. S3 Tables with Apache Iceberg goes GA with Intelligent-Tiering and cross-region replication. Aurora DSQL delivers distributed SQL with GPS atomic clocks for global consistency. S3 Vectors supports 2 billion vectors per index at 90% lower cost than specialized databases. Clean Rooms ML enables privacy-enhanced synthetic datasets for collaborative ML. Plus a comprehensive wrap-up connecting 50+ announcements across all four episodes.


Key Announcements Covered

S3 Tables (Apache Iceberg)

  • Scale: 400,000+ tables created since launch
  • Intelligent-Tiering: Up to 80% storage cost savings
  • Replication: Cross-region and cross-account automatic replication
  • Performance: 3x faster queries, 10x higher TPS vs DIY Iceberg

Aurora DSQL

  • Availability: 99.999% multi-region SLA
  • Architecture: Disaggregated components (query processor, adjudicator, journal, crossbar)
  • Technology: Amazon Time Sync Service with GPS atomic clocks
  • Performance: 4x faster reads/writes vs other distributed SQL
  • Implementation: Started 100% JVM, shipped 100% Rust

S3 Vectors

  • Scale: 2 billion vectors per index (40x increase from preview)
  • Cost: Up to 90% savings vs Pinecone, Weaviate, Qdrant
  • Performance: 100ms query latency, 1,000 vectors/sec write
  • Growth: 40 billion vectors ingested during 4-month preview

Clean Rooms ML

  • Feature: Privacy-enhanced synthetic dataset generation
  • Technique: Model capacity reduction to prevent memorization
  • Control: Configurable noise levels and privacy parameters
  • Metrics: Fidelity and privacy guarantee measurements

Database Savings Plans

  • Savings: Up to 35% serverless, 20% provisioned, 18% DynamoDB on-demand
  • Flexibility: Auto-applies across engines, regions, instance types
  • Coverage: Aurora, RDS, DynamoDB, ElastiCache, DocumentDB, Neptune, Keyspaces, Timestream, DMS

Series Wrap-Up: Four Episodes, 50+ Announcements

Part 1: Agentic AI Revolution

  • DevOps Agent, Security Agent, Kiro IDE
  • Bedrock AgentCore, Nova Act
  • Werner Vogels' "verification debt" thesis

Part 2: Infrastructure & Developer Experience

  • Graviton5, Trainium3
  • Lambda Durable Functions
  • Renaissance Developer framework

Part 3: EKS & Cloud Operations

  • EKS Ultra Scale (100K nodes)
  • EKS Capabilities, MCP Server
  • CloudWatch Gen AI observability

Part 4: Data & AI (This Episode)

  • S3 Tables, Aurora DSQL
  • S3 Vectors, Clean Rooms ML
  • Database Savings Plans

Timestamps

  • 00:00 - Series Finale Introduction
  • 01:00 - News: Envoy CVE-2025-0913, Rust in Linux kernel
  • 03:30 - News: Let's Encrypt 10 years, GitHub Actions incident
  • 05:00 - S3 Tables: Intelligent-Tiering and Replication
  • 10:00 - Aurora DSQL: Distributed SQL with GPS Clocks
  • 14:30 - Zero-ETL and Database Savings Plans
  • 17:00 - S3 Vectors: 2B vectors at 90% lower cost
  • 19:30 - Clean Rooms ML: Privacy-Enhanced Synthetic Data
  • 21:00 - Series Wrap-Up: Connecting 50+ Announcements
  • 23:30 - 2026 Predictions and Action Items

Actionable Takeaways

  1. Start with S3 Tables - Iceberg is winning; automatic maintenance, Intelligent-Tiering, and replication beat DIY
  2. Evaluate Aurora DSQL for new global apps - Multi-region strong consistency without sharding logic
  3. Replace specialized vector DBs with S3 Vectors - 90% cost savings with same performance
  4. Commit to Database Savings Plans - Up to 35% savings, flexible across engines and regions
  5. Experiment with Clean Rooms ML - Unlock data partnerships blocked by privacy concerns

Full Transcript

Jordan: This is it. The finale.

Alex: Four episodes deep into AWS re:Invent 2025, and we're about to stick the landing.

Jordan: Over the past three weeks, we've covered agentic AI, infrastructure innovation, and Kubernetes at scale. Today? The data and AI services that tie it all together. Plus, the big question - what does all of this actually mean?

Alex: S3 Tables with Apache Iceberg, Aurora DSQL going distributed, S3 Vectors hitting general availability. And a comprehensive wrap-up that connects the dots across 50-plus announcements.

Jordan: But first, because infrastructure never sleeps - let's hit the news. Four quick items from this week that every platform engineer should know about.

Alex: Starting with security. Envoy v1.34.12 dropped on December 10th to patch CVE-2025-0913. This is a use-after-free vulnerability in the c-ares DNS dependency that can crash Envoy if you've got malfunctioning or compromised DNS infrastructure.

Jordan: Limited exploitability since an attacker would need control of your DNS, but still - patch immediately. Multiple versions got the fix: v1.35.8, v1.34.12, and v1.33.14. If you're running Envoy, especially in service mesh deployments, this is your wake-up call.

Alex: Next up - a milestone three years in the making. Rust in the Linux kernel is officially no longer experimental. The Maintainers Summit consensus? "The experiment is done, and it was a success."

Jordan: Three years after landing in Linux 6.1, Rust is now declared a core part of the kernel. Not "mature," not "widely adopted" - but officially endorsed as permanent infrastructure.

Alex: Linux 6.13 brought real progress: in-place modules, trace events, driver bindings for PCI and platform devices. Greg Kroah-Hartman said developers are "almost at the write a real driver in rust stage now." And real drivers already exist - Android's Binder IPC driver is being ported to Rust, with experimental drivers covering network PHYs, block devices, DRM panic screens.

Jordan: But here's the human challenge lurking underneath. Can Rust for Linux sustain a maintainer community when pioneers are leaving and some C veterans remain, let's say, less than welcoming?

Alex: The technical experiment succeeded. The social experiment? That's ongoing.

Jordan: Third - Let's Encrypt just hit 10 years. December 9th, 2025 marked a decade of free TLS certificates for everyone.

Alex: The growth curve is staggering. In 2018, they were issuing one million certificates per day. Today? Ten million per day. They're on track to reach one billion active sites in 2026.

Jordan: Impact check: when Let's Encrypt launched in 2015, only 39% of Firefox traffic was protected by HTTPS. Today? 80%. The tagline "Encryption for Everybody" actually delivered.

Alex: And finally, GitHub Actions had a brief incident on December 8th involving the Copilot GPT-4o model for completions. It was resolved within 30 minutes once mitigation was applied. There was also a larger incident back on October 29th affecting Actions and Codespaces - connection failures lasting about seven hours.

Jordan: Currently everything's operating normally, but it's a good reminder that even the infrastructure providers have infrastructure problems.

Alex: Four news items showing what we always say - infrastructure never sleeps. Speaking of which, let's talk about what AWS wants to do with your data infrastructure.

Jordan: S3 Tables. Let's start there, because this is AWS making a serious play for the lakehouse architecture market.

Alex: At re:Invent 2025, AWS announced S3 Tables has hit general availability with two major new capabilities: Intelligent-Tiering for tables and automatic replication across regions and accounts. But first, the growth metrics. Since launch, S3 Tables has scaled to over 400,000 tables. They've shipped 15-plus features in the last 12 months.

Jordan: And what is S3 Tables, for those who haven't been tracking this? It's AWS's managed Apache Iceberg implementation built directly into S3. First cloud object store with built-in support for Iceberg table formats.

Alex: The original announcement promised up to three times faster query performance and ten times higher transactions per second compared to running Iceberg yourself on vanilla S3. Automated table maintenance, optimized storage layout, all managed.

Jordan: Now comes Intelligent-Tiering. This is the same technology that's saved S3 customers over 6 billion dollars in storage costs. And they're bringing it to Iceberg tables.

Alex: Three access tiers: Frequent Access, Infrequent Access, and Archive Instant Access. Data automatically moves between tiers based on access patterns. AWS claims up to 80% storage cost savings without performance impact or operational overhead.

Jordan: Here's why this matters. When you're running a data lake with petabytes of historical data, storage costs become existential. But traditional tiering solutions treat everything as blobs - they don't understand table structure, partitions, metadata.

Alex: S3 Tables Intelligent-Tiering understands Iceberg semantics. It knows which files are hot query paths versus cold archive. It moves data based on actual table access patterns, not just object-level metrics.

Jordan: That's the difference between storage tiering and lakehouse tiering. One saves you money on blobs. The other saves you money while preserving query performance guarantees.
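
For orientation, here is a minimal boto3 sketch of bootstrapping a table bucket and an Iceberg table. The bucket, namespace, and table names are hypothetical, the request shapes should be verified against the current s3tables SDK documentation, and the Intelligent-Tiering and replication settings discussed above are configured separately and aren't shown.

```python
import boto3

# Hypothetical names; the s3tables parameter shapes below reflect the API at launch
# and should be checked against current boto3 documentation.
s3tables = boto3.client("s3tables", region_name="us-east-1")

# Create the table bucket that will hold managed Iceberg tables.
bucket = s3tables.create_table_bucket(name="analytics-lakehouse")
bucket_arn = bucket["arn"]

# Namespaces group tables, much like schemas in a database.
s3tables.create_namespace(tableBucketARN=bucket_arn, namespace=["events"])

# S3 Tables manages Iceberg maintenance (compaction, snapshot expiration) for this table.
s3tables.create_table(
    tableBucketARN=bucket_arn,
    namespace="events",
    name="page_views",
    format="ICEBERG",
)
```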

Alex: The second big announcement? Automatic replication for Apache Iceberg tables across AWS regions and accounts. And this isn't just S3 cross-region replication with Iceberg bolted on top. AWS built this from the ground up to understand how Iceberg works.

Jordan: What does "built from the ground up for Iceberg" actually mean?

Alex: Three things. First, you can have your primary tables in standard storage class, but replicas use Intelligent-Tiering for additional cost savings. Second, separate and independent retention policies per replica. Your source tables might be retained for seven days, compliance backups for ninety days, archival replicas for up to seven years. Snapshot expiration works independently for each replica.

Jordan: Third?

Alex: Separate encryption keys per replica. Improved security posture without managing key replication yourself.

Jordan: So this is disaster recovery for data lakes that actually understands data semantics. Not just "copy these files over there," but "replicate this complete table structure with all its snapshots and metadata, maintain consistency, and let me configure different policies per destination."

Alex: Exactly. And it reduces query latency for global analytics workloads. Distributed teams query local replicas for faster performance while maintaining consistency across regions and accounts.

Jordan: Platform engineering angle - what's the integration story?

Alex: S3 Tables integrates with Amazon Bedrock Knowledge Base for RAG applications at production scale. OpenSearch integration is now generally available, so you can use S3 Tables as your vector storage layer while using OpenSearch for search and analytics capabilities.

Jordan: Wait, vector storage in S3 Tables or S3 Vectors? Those are two different things.

Alex: Two different things, we'll get to S3 Vectors in a minute. But customers like BMW Group, MIXI, Precisely, Qlik, and Twilio are already using S3 Tables for data lake foundations.

Jordan: The pattern I'm seeing: AWS is making S3 your lakehouse platform, not just your blob store. Tables become first-class citizens with table-aware tiering, replication, and maintenance. Let's talk Aurora DSQL.

Alex: Aurora DSQL - the distributed SQL database that got a standing ovation at re:Invent. Announced as a preview at re:Invent 2024, hit general availability in May 2025.

Jordan: People actually cheered for a database?

Alex: They cheered for Database Savings Plans even louder, but yeah - Aurora DSQL was the crowd favorite. And here's why. It's AWS's answer to: what if we built distributed SQL from scratch in 2024, knowing everything we know about cloud-native architecture?

Jordan: PostgreSQL-compatible, serverless, 99.999% multi-region availability. Those are the marketing bullets. What's the architecture story?

Alex: Aurora DSQL is disaggregated into multiple independent components: query processor, adjudicator, journal, and crossbar. These components have high cohesion, communicate through well-specified APIs, and scale independently based on your workloads.

Jordan: Most databases are monoliths. You scale the whole thing. DSQL lets you scale query processing separately from storage, separately from coordination?

Alex: Correct. And here's the technical foundation that makes this work - Amazon Time Sync Service. They're using atomic clocks synchronized via GPS satellites to provide globally consistent time to every EC2 instance.

Jordan: Wait, they're using GPS atomic clocks to solve distributed systems problems?

Alex: To solve the ordering problem specifically. In a multi-region distributed SQL system, you need to know the order of commits across regions. Traditional approaches use consensus protocols, which add latency. Aurora DSQL uses globally synchronized time to order transactions without heavy coordination overhead.

Jordan: That's genuinely clever. Expensive to implement, but solves a fundamental distributed systems challenge.

Alex: And they're using Optimistic Concurrency Control instead of traditional locking. Better throughput, higher system efficiency, no resource locking during transaction execution. Works well in a distributed architecture where lock coordination would kill performance.
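
The practical consequence of OCC for application code is that conflicting commits surface as serialization failures you retry, rather than locks you wait on. Here is a minimal sketch of that retry pattern against a PostgreSQL-compatible endpoint; the cluster host, table, and credentials are hypothetical placeholders, and note that Aurora DSQL connections normally use IAM auth tokens rather than a static password.

```python
import time
import psycopg2
from psycopg2 import errors

# Hypothetical endpoint; Aurora DSQL typically authenticates with IAM-generated
# tokens, so treat this DSN purely as a placeholder.
DSN = "host=my-cluster.dsql.us-east-1.on.aws dbname=postgres user=admin sslmode=require"

def transfer(amount, src, dst, max_retries=5):
    """Run a transaction with retry: under optimistic concurrency control,
    conflicting commits raise serialization failures and the whole transaction
    should be retried."""
    for attempt in range(max_retries):
        conn = psycopg2.connect(DSN)
        try:
            with conn, conn.cursor() as cur:  # commits on clean exit
                cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s", (amount, src))
                cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s", (amount, dst))
            return  # commit succeeded
        except errors.SerializationFailure:
            time.sleep(0.05 * 2 ** attempt)  # back off, then retry from the top
        finally:
            conn.close()
    raise RuntimeError("transaction did not commit after retries")
```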

Jordan: Implementation detail that caught my eye - the project started 100% on the JVM and shipped 100% in Rust.

Alex: Complete rewrite mid-project. That tells you they were serious about performance and safety guarantees.

Jordan: Performance claims - four times faster reads and writes compared to other popular distributed SQL databases. Compared to what? Under which workload?

Alex: AWS doesn't specify competitors in the benchmark. And "popular distributed SQL databases" is doing some heavy lifting there. CockroachDB? YugabyteDB? Spanner? We don't know.

Jordan: Early adopters: Autodesk, Electronic Arts, Klarna, QRT, Razorpay. They're exploring Aurora DSQL, not running it at production scale yet from what I can tell.

Alex: Use cases make sense though - multi-tenant SaaS applications, payment processing, gaming platforms, social media apps that need multi-region scalability and resilience. Industries like banking, e-commerce, travel, retail.

Jordan: Here's my platform engineering question. When does distributed SQL actually beat well-architected sharding? Because I've seen teams adopt distributed databases to avoid thinking about data topology, and they end up with complexity in a different place.

Alex: Fair question. Aurora DSQL wins when you need strong consistency across regions, high availability with automatic failover, and you don't want to manage sharding logic in your application code. It loses when you have workloads that naturally partition geographically or by tenant, where eventual consistency is acceptable.

Jordan: So, not a silver bullet. But for new global applications where you'd otherwise be implementing custom sharding and failover logic? That's the sweet spot.

Alex: And the multi-region design is thoughtful. Two regional endpoints, both supporting read and write operations. A third region acting as a log-only witness. Single logical database, strong data consistency.

Jordan: Operational trade-offs: you get 99.999% availability, no single point of failure, automated recovery. You give up control over data placement, query optimization, cost predictability under spike load.

Alex: Serverless always has that trade-off. Simplicity versus control.

Jordan: Let's shift to Zero-ETL. Because "Zero-ETL" has been an AWS marketing slogan for three years now. What actually shipped?

Alex: At re:Invent 2022, Adam Selipsky promised "a Zero-ETL future." The first integration was Aurora to Redshift for near real-time analytics. Three years later at re:Invent 2025, AWS announced AWS Glue Zero-ETL for self-managed database sources.

Jordan: Self-managed meaning on-premises or EC2 databases?

Alex: Correct. Oracle, SQL Server, MySQL, PostgreSQL - all replicating to Redshift through a simple no-code interface. Ongoing replication, not one-time migration.

Jordan: So "Zero-ETL" is AWS-speak for "we'll handle the plumbing."

Alex: Pretty much. You still need to think about schema mapping, data types, transformation logic. But the infrastructure for continuous replication? AWS manages that.

Jordan: Platform engineering value proposition: reduce bespoke integration code. Stop maintaining custom ETL pipelines. Let AWS handle the operational burden of keeping data in sync.

Alex: And it pairs with Database Savings Plans, which got announced at the same re:Invent. Up to 35% savings on database costs when you commit to a one-year term.

Jordan: Database Savings Plans got the loudest cheer at Matt Garman's keynote. Tell me why accountants and engineers both got excited about this.

Alex: Because it's flexible in all the ways Reserved Instances are rigid. Auto-applies across database engines, instance families, sizes, deployment options, and AWS regions. You can change from Aurora db.r7g to db.r8g instances, shift workloads from EU Ireland to US Ohio, modernize from RDS for Oracle to Aurora PostgreSQL or from RDS to DynamoDB - and your savings commitment follows you.

Jordan: Serverless deployments get up to 35% savings. Provisioned instances get up to 20%. DynamoDB and Keyspaces on-demand throughput gets up to 18%, provisioned capacity gets up to 12%.
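
As a back-of-the-envelope illustration (not AWS pricing math), committing at the published discount against a steady on-demand baseline looks roughly like this:

```python
# Rough illustration only: plug in your own eligible on-demand baseline and the
# published discount for your usage type (e.g. up to 35% for serverless).
hourly_on_demand = 12.00        # USD/hour of eligible database usage
discount = 0.35                 # serverless rate from the announcement
hours_per_year = 24 * 365

commitment = hourly_on_demand * (1 - discount)          # hourly commitment at the Savings Plans rate
annual_savings = (hourly_on_demand - commitment) * hours_per_year
print(f"Commit ${commitment:.2f}/hr, save roughly ${annual_savings:,.0f}/yr")
```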

Alex: Supported services: Aurora, RDS, DynamoDB, ElastiCache, DocumentDB, Neptune, Keyspaces, Timestream, and AWS Database Migration Service.

Jordan: One-year commitment, no upfront payment, flexibility to change technology underneath. This is the boring announcement that saves millions.

Alex: AWS CEO Matt Garman unveiled it with just two seconds remaining on his keynote "lightning round" shot clock. The crowd went wild.

Jordan: Because platform engineers know - cost optimization that doesn't require Reserved Instance Tetris is genuinely valuable. Okay, let's talk about the dark horse announcement. S3 Vectors.

Alex: S3 Vectors hit general availability at re:Invent 2025. First cloud object storage with native support to store and query vector data. And the growth from preview to GA is staggering.

Jordan: Numbers?

Alex: During preview, which only lasted about four months, users created over 250,000 vector indexes and ingested more than 40 billion vectors, performing over 1 billion queries. That's as of November 28th.

Jordan: And the scale jump from preview to GA?

Alex: Preview supported up to 50 million vectors per index. GA supports up to 2 billion vectors per index. That's a 40x increase. And the bucket limit? Up to 20 trillion vectors.

Jordan: Twenty trillion vectors in a single vector bucket. That's, uh, that's a lot of embeddings.

Alex: Performance improved too. Infrequent queries return results in under one second. Frequent queries now hit around 100 milliseconds or less. Write throughput: 1,000 vectors per second when streaming single-vector updates into indexes. Retrieve up to 100 search results per query. Store up to 50 metadata keys alongside each vector for fine-grained filtering.
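
A minimal sketch of the write-then-query flow with the boto3 s3vectors client: the bucket and index names are hypothetical, the embedding is truncated to three dimensions for readability, and the request and response shapes follow the preview-era API, so verify them against the GA SDK before relying on them.

```python
import boto3

# Hypothetical bucket/index names; request shapes are assumptions based on the
# preview-era s3vectors API and should be checked against current documentation.
s3v = boto3.client("s3vectors", region_name="us-east-1")

# Stream a single vector into an existing index, with filterable metadata.
s3v.put_vectors(
    vectorBucketName="product-embeddings",
    indexName="catalog",
    vectors=[{
        "key": "sku-1234",
        "data": {"float32": [0.12, -0.08, 0.33]},      # real embeddings have hundreds of dims
        "metadata": {"category": "shoes", "price": 89},  # up to 50 keys for fine-grained filtering
    }],
)

# Nearest-neighbor query against the same index.
resp = s3v.query_vectors(
    vectorBucketName="product-embeddings",
    indexName="catalog",
    queryVector={"float32": [0.11, -0.07, 0.30]},
    topK=10,
    returnMetadata=True,
)
for match in resp["vectors"]:
    print(match["key"], match.get("metadata"))
```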

Jordan: Cost story?

Alex: AWS claims up to 90% cost reduction when compared to specialized vector database solutions. That's Pinecone, Weaviate, Qdrant in the crosshairs.

Jordan: This is AWS doing to vector databases what RDS did to Oracle. Take a specialized workload, integrate it into commodity infrastructure, undercut on price, win on operational simplicity.

Alex: Regional expansion: 5 AWS regions during preview, 14 regions at GA. And the integrations are where this gets platform-engineering interesting.

Jordan: Bedrock Knowledge Base integration for RAG applications?

Alex: Yep. You can use S3 Vectors as your vector storage engine for production-grade RAG. And OpenSearch integration is now generally available, so you use S3 Vectors as your vector storage layer while using OpenSearch for search and analytics.

Jordan: Customers: BMW Group, MIXI, Precisely, Qlik, Twilio - using S3 Vectors to accelerate AI search and power recommendation systems at scale without the complexity or cost of managing dedicated vector infrastructure.

Alex: When vector search becomes infrastructure instead of a specialized database product.

Jordan: Platform engineering shift. Three years ago, you'd evaluate Pinecone versus Weaviate versus Qdrant, stand up infrastructure, manage indexing, tune performance. Now? You enable S3 Vectors, start writing vectors, query them. Done.

Alex: Same pattern across all these announcements. Aurora DSQL - distributed SQL without managing sharding. S3 Tables - lakehouse without managing compaction. S3 Vectors - semantic search without managing vector databases.

Jordan: AWS learned the EC2, RDS, S3 playbook. Win by making infrastructure disappear. Let's talk privacy-enhanced AI. Clean Rooms ML.

Alex: AWS Clean Rooms launched privacy-enhancing synthetic dataset generation for ML model training at re:Invent 2025. This is for organizations that want to train ML models on collaborative data without exposing raw data.

Jordan: How do you train on data you can't legally see?

Alex: You generate a synthetic version of the dataset that preserves statistical patterns while de-identifying subjects. AWS uses a model capacity reduction technique to prevent the model from memorizing information about individuals in the training data.

Jordan: So not just perturbing or copying the original data?

Alex: Correct. It's generating new datasets that maintain statistical properties of the original while quantifiably reducing the risk of re-identification.

Alex: Organizations have control over privacy parameters: amount of noise applied, level of protection against membership inference attacks - where an adversary tries to determine if a specific individual's data was in the training set.

Jordan: After generation, you get metrics to understand dataset quality across two dimensions: fidelity to the original data and privacy guarantees.
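
To make the noise/fidelity trade-off concrete, here is a toy sketch that perturbs a categorical marginal with Laplace noise before sampling synthetic rows, then scores fidelity with a simple distance metric. This is purely illustrative of the concept; it is not how Clean Rooms ML actually generates synthetic data, and AWS's capacity-reduction technique is not reproduced here.

```python
import numpy as np

# Toy illustration of the noise vs. fidelity trade-off only -- NOT the Clean Rooms ML method.
rng = np.random.default_rng(42)
real = rng.choice(["air+hotel", "air_only", "hotel_only"], p=[0.2, 0.5, 0.3], size=10_000)

# Estimate the real marginal, then add Laplace noise to the counts.
values, counts = np.unique(real, return_counts=True)
epsilon = 1.0                                    # smaller epsilon = more noise, more privacy
noisy = counts + rng.laplace(scale=1 / epsilon, size=counts.size)
probs = np.clip(noisy, 0, None)
probs /= probs.sum()

# Sample a synthetic dataset from the noised distribution.
synthetic = rng.choice(values, p=probs, size=10_000)

# Crude "fidelity" metric: total variation distance between real and synthetic marginals.
real_p = counts / counts.sum()
synth_p = np.array([(synthetic == v).mean() for v in values])
print("TV distance:", 0.5 * np.abs(real_p - synth_p).sum())
```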

Alex: Use cases: campaign optimization, fraud detection, medical research. Any ML training scenario previously restricted by privacy concerns.

Jordan: Example - an airline with a proprietary algorithm wants to collaborate with a hotel brand to offer joint promotions to high-value customers. Neither wants to share sensitive consumer data.

Alex: Using Clean Rooms ML, they generate a synthetic version of their collective dataset, train the model without exposing raw data. Both parties benefit from the insights, neither party exposes customer information.

Jordan: IDC predicts that by 2028, 60% of enterprises will collaborate on data through private exchanges or data clean rooms.

Alex: This is AWS adding a second layer of privacy protection. Synthetic data was already a technique organizations used. AWS is adding privacy-enhancing generation on top, with quantifiable guarantees.

Jordan: For platform engineers building data partnerships - this unlocks use cases that compliance would've vetoed. Alright. We've covered the data and AI services. S3 Tables, Aurora DSQL, Zero-ETL, S3 Vectors, Clean Rooms ML, Database Savings Plans. Time for the big wrap-up. Four episodes, what's the pattern?

Alex: Let's connect the dots across 50-plus announcements. Part 1: agents that work for days autonomously. DevOps Agent, Security Agent, Kiro, Bedrock AgentCore, Nova Act, and Werner Vogels introducing "verification debt."

Jordan: Part 2: chips and runtimes that make AI economically viable. Graviton5, Trainium3, Lambda Durable Functions, the Renaissance Developer framework.

Alex: Part 3: Kubernetes infrastructure that scales to 100,000 nodes. EKS Ultra Scale, EKS Capabilities, MCP Server for EKS, CloudWatch AI observability.

Jordan: Part 4, today: the data layer that makes AI training and inference boring. S3 Tables, Aurora DSQL, S3 Vectors, Clean Rooms ML.

Alex: AWS isn't selling AI features. They're selling AI infrastructure as a solved problem.

Jordan: And there's a hidden thread running through every announcement. Every single one is about reducing operational complexity.

Alex: S3 Tables - automatic table maintenance, managed Iceberg compaction. Aurora DSQL - no sharding logic, automatic failover. S3 Vectors - no vector database to operate. Clean Rooms ML - privacy controls without custom cryptography. Database Savings Plans - cost optimization without Reserved Instance Tetris.

Jordan: AWS learned from the EC2, RDS, S3 playbook. The way you win is by making infrastructure disappear. The best platform is invisible.

Alex: Let's talk about Werner Vogels' "verification debt" thesis from Part 1. Traditional QA doesn't scale to agentic systems. You need new verification and validation primitives. Platform engineering teams must build trust infrastructure.

Jordan: That was the most important conceptual framing from the entire conference. We've spent two decades building CI/CD pipelines, test automation, observability. Now agents are writing code, making decisions, executing tasks autonomously for days.

Alex: How do you verify an agent did what you wanted when you didn't specify exactly what to do?

Jordan: You can't unit test intent. You can't integration test autonomy. Verification debt is the gap between "the agent completed the task" and "the agent did what I actually needed."

Alex: And AWS didn't just name the problem - they shipped primitives to address it. Bedrock AgentCore quality evaluations, policy controls for deploying trusted agents, improved memory, natural conversation abilities.

Jordan: The Renaissance Developer framework from Part 2 is AWS trying to win developers with full-stack AI infrastructure. Lambda Durable Functions give you stateful workflows without operational overhead. Graviton5 and Trainium3 deliver price-performance that enables new use cases.

Alex: And EKS Ultra Scale from Part 3 - 100,000 node clusters, 2.5 million pods. Not for everyone, but it signals AWS is serious about enterprise Kubernetes at scale.

Jordan: MCP Server for EKS means Claude can now manage your clusters. That's the integration story coming together - agents, infrastructure, orchestration, data.

Alex: What does this mean for 2026? For platform teams, less time operating databases, more time designing systems. AI agents as team members, not experiments. Data infrastructure that scales without heroics.

Jordan: Your job shifts from "keep it running" to "design the experience." The infrastructure becomes boring in the best possible way. It just works. You focus on developer experience, system design, business outcomes.

Alex: For the industry - commoditization of AI infrastructure is accelerating. Specialized vendors for vector databases, ETL tools, ML platforms are facing existential pressure. Open source projects like Apache Iceberg and Kubernetes are winning, but with cloud provider integration.

Jordan: 2026 is the year AI infrastructure becomes table stakes. If you're not offering managed vector search, automatic lakehouse optimization, distributed SQL, privacy-enhanced ML - you're behind.

Alex: But there are uncomfortable questions we need to ask.

Jordan: Alright, let's get uncomfortable. How much vendor lock-in comes with these convenience layers?

Alex: Multi-cloud is expensive. Most teams pick one cloud and go deep. The lock-in isn't really the APIs - it's the operational knowledge. Your team knows how to debug Aurora DSQL performance issues, tune S3 Tables for your workload, optimize costs with Database Savings Plans. That knowledge doesn't transfer to GCP or Azure.

Jordan: When does "managed" mean "AWS knows your data better than you do"? Because S3 Tables manages compaction, Aurora DSQL manages sharding, S3 Vectors manages indexing. You're giving up visibility into how your data is actually stored and processed.

Alex: Trade-off: you get simplicity and automatic optimization. You give up deep understanding of the storage layer. For most teams, that's the right trade. For teams with extreme scale or unusual workload patterns, it might not be.

Jordan: Are we building on AWS infrastructure or AWS dependency?

Alex: Choose your dependencies based on bet-the-company trust. If you trust AWS to be around in 10 years, to keep prices competitive, to prioritize your use cases in their roadmap - then deep integration makes sense. If you don't, maintain abstraction layers and portability.

Jordan: That's the honest answer. There's no free lunch. Convenience has a cost.

Alex: Actionable takeaways. Five concrete things platform teams should do in 2026.

Jordan: One - start with S3 Tables if you're doing analytics. Iceberg is the winning table format for data lakes. S3 Tables is the easiest path. Automatic maintenance, Intelligent-Tiering, cross-region replication. Stop managing Iceberg yourself.

Alex: Two - evaluate Aurora DSQL for new global applications. Don't migrate existing systems. Design new services with distributed SQL in mind. Multi-region strong consistency, automatic failover, serverless scaling.

Jordan: Three - replace specialized vector databases with S3 Vectors. 90% cost savings, same performance, way less operational overhead. If you're running Pinecone, Weaviate, or Qdrant on AWS, do the math.

Alex: Four - commit to Database Savings Plans if you have predictable database usage. Up to 35% savings, one-year commitment, no upfront payment. Auto-applies across engines and regions. This is free money.

Jordan: Five - experiment with Clean Rooms ML to unlock data partnerships blocked by privacy concerns. If you've had use cases vetoed by compliance because of data sharing restrictions, synthetic dataset generation might solve it.

Alex: Those are concrete, actionable, low-risk starting points.

Jordan: Four episodes. Eighteen hours of research. Dozens of announcements. And the punchline?

Alex: AWS wants to make your infrastructure boring.

Jordan: Not boring like "nothing happening" - boring like "it just works."

Alex: The highest compliment in infrastructure engineering.

Jordan: Next week we're back to non-AWS content, I promise.

Alex: But keep watching these announcements materialize through 2026. Aurora DSQL adoption stories, EKS Ultra Scale war stories, Nova Act agents in production.

Jordan: And whether Werner's "verification debt" becomes a real framework or just a keynote soundbite.

Alex: For senior platform engineers, SREs, DevOps folks listening - this is your roadmap. Not every announcement applies to you. But the patterns do.

Jordan: Build boring infrastructure. Enable magical experiences.

Alex: That's the job.

Jordan: That's the job. Until next time.