Episode #074: Agentic AI Foundation - MCP and the Future of AI-Native Platform Engineering
Duration: 16 minutes | Speakers: Jordan & Alex | Target Audience: Platform engineers, DevOps professionals, infrastructure architects
📰 News Segment: This episode covers 4 platform engineering news items before the main topic.
News Segment
| Story | Source | Why It Matters |
|---|---|---|
| Docker Makes Hardened Images Free | InfoQ | Container security shift - 1000 most popular images now hardened at no cost (previously $11/image) |
| MongoBleed CVE-2025-14847 | — | Critical unauthenticated MongoDB exploit with public PoC; patch immediately |
| Cloudflare "Fail Small" Resilience Plan | Cloudflare | Post-incident resilience improvements: isolation zones, blast radius reduction |
| DORA Metrics with Process Behavior Charts | InfoQ | Statistical process control for CI/CD metrics - beyond raw DORA numbers |
The Linux Foundation announced the Agentic AI Foundation (AAIF) on December 9, 2025, bringing together AWS, Anthropic, Google, Microsoft, OpenAI, Block, Cloudflare, and Bloomberg to standardize how AI agents connect to tools and data. This episode breaks down MCP (Model Context Protocol), the "HTTP for AI" that's already at 97 million monthly downloads, and what it means for platform engineering.
Key Statistics
| Metric | Value | Source |
|---|---|---|
| MCP SDK downloads | 97M+ monthly | npm registry |
| Public MCP servers | 10,000+ | Anthropic |
| AAIF platinum members | 8 | Linux Foundation |
| Foundation announcement | December 9, 2025 | Linux Foundation |
The Three Founding Projects
| Project | Origin | Purpose |
|---|---|---|
| MCP | Anthropic | Universal protocol for AI-to-tool communication |
| goose | Block | Developer-focused AI agent framework |
| AGENTS.md | OpenAI | Standardized agent configuration format |
MCP Architecture
MCP consists of three components:
- Hosts: AI applications (Claude, ChatGPT, Cursor)
- Clients: MCP client libraries in hosts
- Servers: Tools and data exposed through MCP protocol
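To make the host/client/server flow concrete, here is a hedged sketch of the message shape involved. MCP messages are JSON-RPC 2.0; the `tools/call` method name follows the MCP specification, but the tool name and arguments below are hypothetical, and this is plain Python rather than an MCP SDK:

```python
import json

# Hedged sketch: the JSON-RPC 2.0 request an MCP client (inside a host)
# would send to an MCP server to invoke a tool. The tool "get_pod_status"
# and its arguments are illustrative, not from a real server.
def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = json.loads(build_tool_call(1, "get_pod_status", {"namespace": "default"}))
```

Because the wire format is plain JSON, a platform team can inspect, log, and replay these requests with ordinary tooling.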
Key Features
- JSON Schema interfaces for type-safe tool definitions
- Built-in OAuth flows for secure authentication
- Long-running task APIs for operations that take minutes
- Stateless architecture for scalability
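To show what the type-safe interface looks like in practice, here is a hedged sketch of a tool descriptor in the JSON Schema style MCP uses, with a minimal parameter check. The tool name `provision_database` and its fields are hypothetical, and a real server would use a full JSON Schema validator rather than this hand-rolled check:

```python
# Illustrative only: a tool descriptor in the JSON Schema style MCP uses.
# The tool name and fields are hypothetical, not from a real MCP server.
PROVISION_DB_TOOL = {
    "name": "provision_database",
    "description": "Provision a managed database instance",
    "inputSchema": {
        "type": "object",
        "properties": {
            "engine": {"type": "string", "enum": ["postgres", "mysql"]},
            "size_gb": {"type": "integer", "minimum": 10},
        },
        "required": ["engine", "size_gb"],
    },
}

def validate_params(schema: dict, params: dict) -> list[str]:
    """Minimal required/type check -- a real server would use a full validator."""
    errors = []
    props = schema.get("properties", {})
    for field in schema.get("required", []):
        if field not in params:
            errors.append(f"missing required field: {field}")
    type_map = {"string": str, "integer": int, "object": dict}
    for key, value in params.items():
        expected = props.get(key, {}).get("type")
        py_type = type_map.get(expected)
        if py_type and not isinstance(value, py_type):
            errors.append(f"{key}: expected {expected}")
    return errors
```

Because every tool carries a schema like this, the AI host knows exactly which parameters are legal before it ever calls the server.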
The N×M → N+M Simplification

Before MCP: N AI tools × M integrations = N×M custom code.
With MCP: N AI tools + M MCP servers = N+M standard interfaces.
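The arithmetic behind this claim is simple enough to sketch directly:

```python
def integration_count(n_ai_tools: int, m_platform_tools: int, with_mcp: bool) -> int:
    """Integrations to build and maintain: one per pair without MCP,
    one adapter per side with MCP."""
    if with_mcp:
        return n_ai_tools + m_platform_tools
    return n_ai_tools * m_platform_tools

# e.g. 5 AI clients and 20 internal platform tools:
# without MCP -> 100 bespoke integrations; with MCP -> 25 standard interfaces
```

The gap widens as either side grows, which is why the reduction matters most for large platform estates.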
Adoption
MCP is already supported by:
- ChatGPT
- Cursor
- Gemini
- VS Code Copilot
- Microsoft Copilot
Platform Engineering Implications
Immediate Actions
- Experiment: Install existing MCP servers, see how they work with Cursor or Claude
- Identify: Find one internal tool that would benefit from AI accessibility
- Watch: Monitor AAIF governance formation (Spring 2026)
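As a concrete starting point for the experiment step, MCP hosts are typically pointed at servers through a JSON config. Below is a hedged sketch of a Claude Desktop-style `mcpServers` entry using the reference filesystem server; the server name `docs-search` and the path are illustrative, so check your host's documentation for the exact file location and format:

```json
{
  "mcpServers": {
    "docs-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/docs"]
    }
  }
}
```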
Security Considerations
- Start with read-only MCP servers (visibility before mutation)
- Use OAuth scopes to limit client access
- Implement audit logging for all MCP calls
- Apply the 60/30/10 framework from Episode #067
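The first three points above can be combined into a thin policy layer in front of an MCP server. This is a hedged sketch, not part of the MCP spec: the tool names and scope labels are hypothetical, and it shows the read-before-write pattern with an audit trail on every call, allowed or not:

```python
import json
import time

# Illustrative policy layer in front of an MCP server (not part of the MCP
# spec). Tool names and scope labels here are hypothetical.
TOOL_SCOPES = {
    "get_pod_status": "read",   # visibility only: safe first step
    "delete_pod": "write",      # mutation: requires an explicit grant
}

def authorize_and_log(client_scopes: set, tool: str, params: dict,
                      audit_log: list) -> bool:
    """Return True if the client may call the tool; record every attempt."""
    required = TOOL_SCOPES.get(tool)
    allowed = required is not None and required in client_scopes
    audit_log.append(json.dumps({
        "ts": time.time(),
        "tool": tool,
        "params": params,
        "allowed": allowed,
    }))
    return allowed
```

Granting clients only the `read` scope implements the "visibility before mutation" rule: an agent can inspect state but every mutating call is denied and logged.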
Key Takeaways
- AAIF is real industry consolidation - 8 platinum members including all major AI/cloud players collaborating on standards
- MCP already won adoption - 97M downloads, 10K servers, major tool support before foundation announcement
- N×M → N+M reduction - One protocol instead of custom integrations per AI tool
- Platform teams should start experimenting - Build MCP servers for internal tools now
- Security is solvable - OAuth, permission scopes, and audit logging are part of the spec
Resources
- Linux Foundation Agentic AI Foundation
- Model Context Protocol Documentation
- MCP GitHub Repository
- goose AI Agent Framework
Related Episodes
- Episode #067: Agentic AI Platform Operations 2026
- Episode #071: Platform Engineering 2026 Predictions
- Episode #073: FinOps 2026 for Platform Engineers
Full Transcript
Jordan: Today we're diving into one of the most significant announcements in AI infrastructure this year. On December ninth, the Linux Foundation announced the Agentic AI Foundation, and the companies involved read like a who's who of tech: AWS, Anthropic, Google, Microsoft, OpenAI, Block, Cloudflare, Bloomberg. All of them. Working together on a shared standard.
Alex: Before we get into AAIF, let's hit some quick news from this week. First up, Docker just made hardened images free. They previously cost eleven dollars per image; now the thousand most popular Docker Hub images are hardened at no cost. That's a significant container security shift.
Jordan: Free security improvements are always welcome. What else?
Alex: MongoBleed. CVE-2025-14847. This is a critical unauthenticated MongoDB vulnerability with a public proof of concept already circulating. If you're running MongoDB, patch immediately.
Jordan: Unauthenticated exploits with public PoCs. That's the worst combination. What else?
Alex: Cloudflare published their "Fail Small" resilience plan following their recent outages. They're implementing isolation zones and reducing blast radius. It's a good post-mortem on how to architect for partial failure.
Jordan: We covered their outages in a previous episode. Good to see them following through on improvements.
Alex: And finally, InfoQ has a piece on using DORA metrics with Process Behavior Charts. The gist is that raw DORA numbers without statistical process control can mislead you. Worth reading if you're measuring deployment frequency or change failure rate.
Jordan: Alright, let's get into the main topic. The Agentic AI Foundation. Alex, what exactly is this thing?
Alex: So the Linux Foundation announced AAIF on December ninth as what they're calling a "directed fund." It's a governance structure for three founding projects that are meant to standardize how AI agents connect to tools and data. The three projects are MCP from Anthropic, goose from Block, and AGENTS.md from OpenAI.
Jordan: Wait, Anthropic and OpenAI collaborating on the same standard? These are direct competitors.
Alex: Exactly. That's what makes this significant. The platinum members include AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI. When competitors agree on infrastructure standards, it usually means the standard is inevitable and they'd rather shape it than fight it.
Jordan: Okay, so what is MCP specifically? I keep hearing "Model Context Protocol" but what does it actually do?
Alex: MCP is a universal protocol for connecting AI models to tools, data sources, and services. Think of it like... okay, the analogy people keep using is "HTTP for AI." Just like HTTP standardized how web browsers talk to web servers, MCP standardizes how AI applications talk to tools.
Jordan: That's a big claim. HTTP enabled the entire web. You're saying MCP could enable... what exactly?
Alex: AI-native infrastructure. Here's the technical picture. You have three components. Hosts, which are AI applications like Claude or ChatGPT. Clients, which are the MCP client libraries in those hosts. And Servers, which expose tools and data through the MCP protocol.
Jordan: So if I'm building a platform, where do I fit in this picture?
Alex: You'd build MCP servers for your platform capabilities. Say you have an internal developer portal that can provision databases. Today, to make that AI-accessible, you'd need to build custom integrations for every AI tool your developers use. ChatGPT plugin, Cursor extension, Claude integration, and so on.
Jordan: Right, the N times M problem. N AI tools times M integrations equals a lot of custom code.
Alex: Exactly. With MCP, you build one server that exposes your database provisioning capability, and any MCP-compatible AI tool can use it. The math becomes N plus M instead of N times M.
Jordan: What does an MCP server actually look like technically?
Alex: It's surprisingly simple. An MCP server exposes three main primitives. Tools, which are functions the AI can call. Resources, which are data sources the AI can read. And Prompts, which are templates the AI can use. Each tool is defined with a JSON Schema interface, so the AI knows exactly what parameters it accepts and what it returns.
Jordan: JSON Schema. So you get type safety and validation built into the protocol.
Alex: Right. And there's more. OAuth flows are part of the spec, so you get secure authentication without building it yourself. There are APIs for long-running tasks, so operations that take minutes instead of milliseconds are handled properly. And the architecture is stateless, so servers don't maintain session state.
Jordan: How mature is this? Is it vaporware or production-ready?
Alex: This is where it gets interesting. MCP has ninety-seven million monthly SDK downloads on npm. There are over ten thousand public MCP servers already. ChatGPT, Cursor, Gemini, VS Code Copilot... they all support MCP or have announced support. The standard won adoption before the foundation was even announced.
Jordan: Ninety-seven million monthly downloads. That's not experimental.
Alex: No, it's not. And that adoption is why the foundation makes sense. When something is already winning in the market, standardizing it through a neutral governance body protects everyone's investment.
Jordan: Let's talk about the other two founding projects. What's goose?
Alex: goose is an open-source developer agent framework from Block, formerly Square. It's designed for building AI agents that can complete multi-step tasks. Think of it as scaffolding for building agents that coordinate across tools and maintain context over complex workflows.
Jordan: So MCP is the protocol and goose is the framework for building agents that use the protocol?
Alex: Exactly. And AGENTS.md from OpenAI is the third piece. It's a standardized configuration format for describing AI agents. What capabilities they have, what permissions they need, how they should behave.
Jordan: So MCP defines how agents talk to tools, goose provides a framework for building agents, and AGENTS.md defines how agents describe themselves. That's a complete stack.
Alex: That's the vision. All three under neutral governance so no single company controls the standard.
Jordan: Alright, let's get practical. I'm a platform engineer. My team runs Kubernetes clusters, maintains CI/CD pipelines, manages internal developer portals. What does AAIF mean for me?
Alex: Short term, you can start building MCP servers for your existing platform capabilities. Your APIs become AI-accessible through a standard interface. Developers can ask Claude or ChatGPT to provision a database, and the AI can call your MCP server to do it.
Jordan: That sounds powerful but also terrifying. AI agents calling production APIs?
Alex: This is where the sixty-thirty-ten framework from episode sixty-seven applies. Sixty percent of the value is visibility: AI agents reading from your systems to understand state. Thirty percent is AI-assisted actions with human approval. And ten percent is fully autonomous operations.
Jordan: So start with read-only MCP servers. Let AI agents query your systems before they can modify them.
Alex: Exactly. An MCP server that exposes Kubernetes pod status is low risk. An MCP server that can delete pods requires careful permission design.
Jordan: What about security? OAuth is built in, you said, but what else?
Alex: The spec includes permission scopes, so you can limit what any given client can access. Audit logging is expected in production implementations. And because the protocol is JSON-based, you can inspect exactly what's being requested.
Jordan: What should platform teams be doing right now? What's the action item?
Alex: Three things. First, experiment with existing MCP servers. Install a few, see how they work with Cursor or Claude. Build intuition for the developer experience.
Jordan: That's read-only exploration.
Alex: Right. Second, identify one internal tool that would benefit from AI accessibility. Something low risk but high value. Maybe your documentation search or your runbook lookup.
Jordan: Start with something that's already public or low stakes.
Alex: Exactly. Third, watch the AAIF governance formation. They're setting up the official structure through Spring 2026. The decisions made there will shape how the ecosystem evolves.
Jordan: What are the risks? What could go wrong with this standard?
Alex: Fragmentation is the biggest risk. If major vendors implement MCP slightly differently, you get the browser wars all over again. The foundation governance is meant to prevent that, but governance is only as strong as member commitment.
Jordan: And if companies defect from the standard?
Alex: Then platform teams are back to building custom integrations. But the incentives favor collaboration here. Nobody wants to maintain N-times-M integrations when N-plus-M is possible.
Jordan: Let's talk about what this means longer term. If MCP succeeds, if it really becomes the HTTP of AI, what does platform engineering look like in three years?
Alex: Your internal developer portal becomes an AI-native interface. Developers don't click through forms to provision infrastructure. They describe what they need in natural language, and AI agents orchestrate the provisioning through your MCP servers.
Jordan: That's a fundamental shift in how developers interact with platforms.
Alex: It is. And it changes what platform teams build. Instead of designing UI flows for every operation, you design tool interfaces that AI agents can use effectively.
Jordan: What about observability? If AI agents are calling MCP servers, how do you debug issues?
Alex: This is an open area. The protocol supports some introspection, but production-grade observability for agent interactions is still evolving. You'll want distributed tracing through your MCP server calls, correlation IDs that span agent sessions, and clear audit trails.
Jordan: Sounds like OpenTelemetry instrumentation for MCP servers should be table stakes.
Alex: It should be. And I expect we'll see MCP-aware observability tooling emerge in 2026.
Jordan: What about the competitive dynamics? OpenAI, Anthropic, Google, they're all in AAIF. Does that mean they're giving up differentiation on AI infrastructure?
Alex: They're differentiating on models, not on how models connect to tools. It's similar to how browser vendors competed on rendering engines but collaborated on HTTP and HTML. The protocol layer becomes shared infrastructure. The intelligence layer remains proprietary.
Jordan: So platform engineers should think of MCP like they think of HTTP. Foundational protocol, not a competitive feature.
Alex: Exactly. You don't choose between REST and GraphQL based on who invented them. You choose based on what fits your use case. Same will be true for AI agent protocols.
Jordan: Let's land this. What are the key takeaways for our listeners?
Alex: First, AAIF is real industry consolidation. Eight platinum members including all the major AI and cloud players. This isn't a paper standard.
Jordan: Second?
Alex: MCP already won adoption. Ninety-seven million downloads. Ten thousand servers. ChatGPT, Cursor, Gemini, all on board. The standard is here.
Jordan: Third?
Alex: The N-times-M to N-plus-M reduction is real. One protocol instead of custom integrations per tool. That's genuine operational simplification.
Jordan: Fourth?
Alex: Platform teams should start experimenting now. Build MCP servers for internal tools. Get ahead of the curve before this becomes expected.
Jordan: And fifth?
Alex: Security is solvable within the spec. OAuth, permission scopes, audit logging. The protocol was designed with production use cases in mind.
Jordan: Final thought?
Alex: When HTTP was standardized in the early nineties, nobody predicted e-commerce or social media. We're at that moment for AI. The question isn't whether AI agents will interact with your infrastructure. It's whether you'll be ready with the standard interface when they do.
Jordan: The standard interface is MCP. The governance is AAIF. And the time to start learning is now.