
📚 Agent/Sources Pack (2026-02-27)

Overview

This issue of the Agent/Sources Pack covers four main threads:

  1. MCP standardization continues to serve as the base protocol for connecting agents to the data layer (Anthropic).
  2. A2A moves from concept to an enterprise-deployable interface (Google Cloud A2A v0.3).
  3. Agent engineering toolchains keep moving up the stack toward "visual orchestration + evaluation + governance" (OpenAI AgentKit).
  4. Developers still need a lightweight, controllable SDK as the underlying skeleton (OpenAI Agents SDK).

One-line takeaway: building agents in 2026 is no longer about "can you call a model," but about "can you wire protocols, toolchains, deployment, and governance into a closed loop."

Highlights

1) Anthropic: Model Context Protocol (MCP)

  • Highlight: standardizes how models connect to business systems, cutting the cost of writing a new connector for every system you integrate.
  • Why it matters: for heterogeneous data sources such as internal networks, knowledge bases, and collaboration systems, MCP offers a unified-interface approach.
  • Next step: MCP-enable your high-frequency internal tools first (e.g. document stores, ticketing systems) and get one or two key workflows working end to end.
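To make "one protocol instead of N connectors" concrete, here is a minimal sketch in plain Python. It does not use the real MCP SDK; all class and method names (ContextSource, DocStore, TicketSystem) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Resource:
    uri: str
    text: str

class ContextSource(Protocol):
    """Stand-in for an MCP server: anything that can list and read resources."""
    def list_resources(self) -> list[Resource]: ...
    def read(self, uri: str) -> str: ...

class DocStore:
    """Toy 'document library' backend."""
    def __init__(self) -> None:
        self._docs = {"doc://onboarding": "How to onboard a new teammate."}
    def list_resources(self) -> list[Resource]:
        return [Resource(uri, text) for uri, text in self._docs.items()]
    def read(self, uri: str) -> str:
        return self._docs[uri]

class TicketSystem:
    """Toy 'ticketing system' backend."""
    def list_resources(self) -> list[Resource]:
        return [Resource("ticket://42", "Printer on floor 3 is down.")]
    def read(self, uri: str) -> str:
        return "Printer on floor 3 is down."

def gather_context(sources: list[ContextSource]) -> list[str]:
    # Assistant-side code is written once, against the shared interface,
    # not once per backend system.
    return [r.text for src in sources for r in src.list_resources()]

print(gather_context([DocStore(), TicketSystem()]))
```

The point of the sketch: adding a third data source means implementing the interface once, while `gather_context` stays unchanged.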

2) Google Cloud: A2A v0.3 upgrade

  • Highlight: adds gRPC support, security-card signing, and extended Python SDK client support, a clear push toward production readiness.
  • Why it matters: multi-agent collaboration moves from "scheduling within one framework" to "interoperable across platforms," much closer to real enterprise stacks.
  • Next step: define Agent Cards and the task lifecycle first, then move on to cross-team agent orchestration.
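As a starting point for "define the Agent Card first," here is an illustrative minimal card shape. The field names are assumptions loosely modeled on A2A's discovery document, not the normative v0.3 schema, and `validate_card` is a hypothetical helper.

```python
import json

# Illustrative Agent Card; field names are assumptions, not the v0.3 schema.
agent_card = {
    "name": "invoice-reconciler",
    "description": "Matches purchase orders against supplier invoices.",
    "url": "https://agents.example.com/invoice-reconciler",
    "version": "0.3.0",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "reconcile", "description": "Reconcile one PO/invoice pair."}
    ],
}

REQUIRED = {"name", "url", "version", "skills"}

def validate_card(card: dict) -> bool:
    """Cheap pre-flight check before registering a peer agent."""
    return REQUIRED.issubset(card)

print(validate_card(agent_card))          # expect True
print(json.dumps(agent_card, indent=2))   # what a peer would fetch at discovery time
```

Agreeing on this document shape across teams is the prerequisite for any cross-team orchestration, since it is what remote agents fetch to decide whether and how to delegate.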

3) OpenAI: AgentKit launch

  • Highlight: bundles Agent Builder, the Connector Registry, ChatKit, and enhanced Evals into one engineering toolkit.
  • Why it matters: compresses work previously scattered across orchestration, frontend, evaluation, and governance into a single workflow.
  • Next step: make "visual orchestration + automated evals + safety guardrails" the default development path rather than an afterthought.

4) OpenAI Agents SDK (Python)

  • Highlight: minimal primitives (Agent / Handoff / Guardrails) plus tracing and MCP tool calling.
  • Why it matters: well suited as a low-level skeleton that keeps complexity under control and avoids premature lock-in to a heavyweight platform.
  • Next step: get core tasks running with the SDK first, then layer on visual orchestration or commercial components as needed.

Source archive (full text)

Source 1

Anthropic — Introducing the Model Context Protocol

Source: https://www.anthropic.com/news/model-context-protocol

Today, we’re open-sourcing the Model Context Protocol (MCP), a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses. As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale.

MCP addresses this challenge. It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need. The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers. Today, we’re introducing three major components of the Model Context Protocol for developers:

  • The Model Context Protocol specification and SDKs
  • Local MCP server support in the Claude Desktop apps
  • An open-source repository of MCP servers

Claude 3.5 Sonnet is adept at quickly building MCP server implementations, making it easy for organizations and individuals to rapidly connect their most important datasets with a range of AI-powered tools. To help developers start exploring, we’re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer. Early adopters like Block and Apollo have integrated MCP into their systems, while development tools companies including Zed, Replit, Codeium, and Sourcegraph are working with MCP to enhance their platforms—enabling AI agents to better retrieve relevant information to further understand the context around a coding task and produce more nuanced and functional code with fewer attempts.

“At Block, open source is more than a development model—it’s the foundation of our work and a commitment to creating technology that drives meaningful change and serves as a public good for all,” said Dhanji R. Prasanna, Chief Technology Officer at Block. “Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications, ensuring innovation is accessible, transparent, and rooted in collaboration. We are excited to partner on a protocol and use it to build agentic systems, which remove the burden of the mechanical so people can focus on the creative.”

Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today’s fragmented integrations with a more sustainable architecture.

Getting started

Developers can start building and testing MCP connectors today. All Claude.ai plans support connecting MCP servers to the Claude Desktop app. Claude for Work customers can begin testing MCP servers locally, connecting Claude to internal systems and datasets. We’ll soon provide developer toolkits for deploying remote production MCP servers that can serve your entire Claude for Work organization. To start building:

An open community

We’re committed to building MCP as a collaborative, open-source project and ecosystem, and we’re eager to hear your feedback. Whether you’re an AI tool developer, an enterprise looking to leverage existing data, or an early adopter exploring the frontier, we invite you to build the future of context-aware AI together.


Source 2

Google Cloud — Agent2Agent protocol (A2A) is getting an upgrade

Source: https://cloud.google.com/blog/products/ai-machine-learning/agent2agent-protocol-is-getting-an-upgrade

Editor’s note: Agentspace is now part of Gemini Enterprise. The agent creation and orchestration technology behind Agentspace is now powering the core functionalities of the Gemini Enterprise platform. Learn more.

AI is evolving beyond single, task-specific agents into an interconnected ecosystem, where autonomous agents collaborate to solve complex problems, regardless of their underlying platform. To make this transition easier for developers, we are announcing a comprehensive suite of tools that will empower developers to build, deploy, evaluate, and sell Agent2Agent (A2A) agents with Google Cloud.

Today, we’re excited to announce the release of version 0.3 of the A2A protocol, which brings a more stable interface to build against and is critical to accelerating enterprise adoption. This version introduces several key capabilities, including gRPC support, the ability to sign security cards, and extended client-side support in the Python SDK, which provide more flexible use, better security, and easier integration.

The A2A protocol is quickly gaining momentum, with support from a growing ecosystem of over 150 organizations that spans every major hyperscaler, leading technology providers, and multinational customers using Google Cloud. Businesses are already building powerful capabilities for their organizations. For example, Tyson Foods and Gordon Food Service are pioneering collaborative A2A systems to drive sales and reduce supply chain friction, creating a real-time channel for their agents to share product data and leads that enhance the food supply chain.

Build: Native support for A2A in the Agent Development Kit (ADK)

We’re releasing native support for A2A in Agent Development Kit (ADK), a powerful open source agent framework released by Google. This makes it easy to build A2A agents if you are already using ADK and is built upon our previously-released A2A SDKs. For example, with a simple “Hello, World!” style code snippet, developers can now use ADK to:

  • Use an A2A agent with an Agent Card and use it as a sub-agent.
  • Expose an existing ADK agent to make it discoverable as an A2A agent.

Developers can start building collaborative agents with ADK today.
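The two ADK bullets above can be sketched roughly as follows. This is a plain-Python stand-in, not the real google.adk API; `RemoteAgent` and `LocalAgent` are hypothetical names, and a real client would actually POST tasks over HTTP.

```python
# Direction 1: wrap a remote A2A agent, discovered via its Agent Card,
# so it can be used as a sub-agent.
class RemoteAgent:
    def __init__(self, card: dict):
        self.name = card["name"]
        self.endpoint = card["url"]

    def run(self, task: str) -> str:
        # A real client would send the task to self.endpoint over A2A.
        return f"{self.name} handled: {task}"

# Direction 2: expose a local agent so peers can discover it as an
# A2A agent via its published card.
class LocalAgent:
    def __init__(self, name: str):
        self.name = name

    def agent_card(self) -> dict:
        return {"name": self.name,
                "url": f"https://agents.example.com/{self.name}"}

remote = RemoteAgent({"name": "translator",
                      "url": "https://agents.example.com/translator"})
print(remote.run("translate 'hello'"))
print(LocalAgent("summarizer").agent_card()["url"])
```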

Deploy: Flexible deployment options with Agent Engine, Cloud Run, and GKE

Once agents are built, they need a robust and scalable home to exist within. We are providing three powerful deployment paths for customers to scale agents into production:

  • Deploy to Agent Engine: For a managed, agent-optimized environment, Agent Engine is the choice for many agent builders. We’re adding support for A2A to Agent Engine in the coming weeks so that you can easily deploy an agent written in any framework to Agent Engine and get a production ready, Google-scale, A2A agent.
  • Deploy to Cloud Run: For increased flexibility, you can containerize and deploy your A2A agents to Cloud Run, leveraging Google’s serverless infrastructure for massive scale and reliability. Follow the published guide.
  • Deploy to Google Kubernetes Engine (GKE): For maximum control, you can deploy agents to GKE, providing the full power of Kubernetes to manage A2A systems at scale.

With support for A2A arriving in the coming weeks, developers will be able to use the agent-starter-pack CLI tool to complete CI/CD setup in just one line:

Integrate: Bring your A2A agents to users with Agentspace

Agents need safe and accessible environments to be useful. That’s why we built Agentspace, the destination where agents meet end users. In the coming weeks, partners will be able to make any A2A agent available in Agentspace, transforming it from a standalone tool into a valuable service that people can consume. This includes partner-built agents that are built on partner platforms, giving customers the flexibility to access these A2A agents in multiple locations.

More than just a hub, Agentspace provides the critical governance, safety, and control features needed for an enterprise-ready agent platform, ensuring that interactions are secure and reliable.

Evaluate and commercialize your A2A systems

Building and deploying agents is just the beginning. To create truly enterprise-grade systems, you need robust evaluation capabilities, which is why we’re extending the Vertex GenAI Evaluation Service to support A2A agent evaluations. See our hands-on guidance.

Discover and sell partner-built A2A agents in AI Agent Marketplace

Partners can now sell their A2A agents directly to customers in the AI Agent Marketplace. This will allow Google Cloud customers to discover and purchase agents published by ISVs, GSIs, and other technology providers. The AI Agent Marketplace provides an important path to market for partners looking to monetize their AI Agents.

Partners can request more information here.

The A2A ecosystem is growing

We announced the A2A protocol in April to lead the industry toward interoperable agent systems, and in June, we advanced that commitment by contributing it to the Linux Foundation. The industry’s response continues to grow, reflecting a shared belief in vendor-neutral, community-driven standards. Many of Google Cloud’s partners have previously offered agents to joint customers, and they are now enabling these agents with A2A to help future-proof investments for customers.

  • Adobe: A leader in generative AI, Adobe is leveraging the A2A protocol to make its rapidly-growing number of distributed agents interoperable with agents in Google Cloud’s ecosystem. The A2A protocol enables Adobe agents to collaborate in the enterprise to create powerful new digital experiences, streamline workflows that optimize the content creation process, and automate multi-system processes and data integrations.
  • S&P Global Market Intelligence: S&P, a provider of information services and solutions to global markets, has adopted A2A as a protocol for inter-agent communication. This strategic alignment enhances interoperability, scalability, and future-readiness across the organization’s agent ecosystem.
  • ServiceNow: As a founding partner of A2A, ServiceNow empowers customers with its AI Agent Fabric, a multi-agent communication layer that connects ServiceNow, customer, and partner-built agents. This provides enterprises with the greater choice and flexibility needed to unlock the full potential of agentic AI, resulting in faster decisions, fewer handoffs, and more scalable solutions.
  • Twilio: Twilio is using A2A protocol for implementing Latency Aware Agent Selection. By extending the A2A protocol, individual agents now broadcast their latency, enabling the system to intelligently route tasks to the most responsive agent available and also adapt gracefully – for example, playing a filler prompt or adding typing sounds, if a high-latency agent is the only option.
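Twilio's latency-aware selection, as described above, can be sketched like this. The agent list, threshold, and `route` function are illustrative assumptions, not Twilio's implementation.

```python
# Agents broadcast a latency figure; the router picks the most
# responsive one and flags when the caller should play a filler
# prompt because only a slow agent is available.
AGENTS = [
    {"name": "billing-fast", "latency_ms": 120},
    {"name": "billing-slow", "latency_ms": 1800},
]

FILLER_THRESHOLD_MS = 1000  # assumed cutoff for "play a filler prompt"

def route(agents):
    """Pick the lowest-latency agent and flag whether a filler is needed."""
    best = min(agents, key=lambda a: a["latency_ms"])
    return best["name"], best["latency_ms"] > FILLER_THRESHOLD_MS

print(route(AGENTS))        # ('billing-fast', False)
print(route([AGENTS[1]]))   # only a slow agent left: ('billing-slow', True)
```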

Developers can review more about past releases in the release notes, learn about what’s coming in the future in our roadmap, and join the community to help evolve the protocol moving forward. The community has also released great tooling around A2A with the launch of A2A Inspector and Technology Compatibility Kit.

Get started

We’re excited to partner across the industry to build the future of artificial intelligence. Here’s how you can start:

We can’t wait to see what you build.


Source 3

OpenAI — Introducing AgentKit

Source: https://openai.com/index/introducing-agentkit/

Today we’re launching AgentKit, a complete set of tools for developers and enterprises to build, deploy, and optimize agents. Until now, building agents meant juggling fragmented tools—complex orchestration with no versioning, custom connectors, manual eval pipelines, prompt tuning, and weeks of frontend work before launch. With AgentKit, developers can now design workflows visually and embed agentic UIs faster using new building blocks like:

  • Agent Builder: a visual canvas for creating and versioning multi-agent workflows
  • Connector Registry: a central place for admins to manage how data and tools connect across OpenAI products
  • ChatKit: a toolkit for embedding customizable chat-based agent experiences in your product

We’re also expanding evaluation capabilities with new features like datasets, trace grading, automated prompt optimization, and third-party model support to measure and improve agent performance.

Since releasing the Responses API and Agents SDK in March, we’ve seen developers and enterprises build end-to-end agentic workflows for deep research, customer support, and more. Klarna built a support agent that handles two-thirds of all tickets and Clay 10x’ed growth with a sales agent. AgentKit builds on the Responses API to help developers build agents more efficiently and reliably.

As agent workflows grow more complex, developers need clearer visibility into how they work. Agent Builder provides a visual canvas for composing logic with drag-and-drop nodes, connecting tools, and configuring custom guardrails. It supports preview runs, inline eval configuration, and full versioning—ideal for fast iteration. Builders can get started with a blank canvas or with prebuilt templates.

At Ramp, the team went from a blank canvas to a buyer agent in just a few hours: “Agent Builder transformed what once took months of complex orchestration, custom code, and manual optimizations into just a couple of hours. The visual canvas keeps product, legal, and engineering on the same page, slashing iteration cycles by 70% and getting an agent live in two sprints rather than two quarters.”— Ramp

Similarly, LY Corporation—a leading Japanese technology and internet services company—built a work assistant agent with Agent Builder in less than two hours. “Agent Builder allowed us to orchestrate agents in a whole new way, with engineers and subject matter experts collaborating all in one interface. We built our first multi-agentic workflow and ran it in less than two hours, dramatically accelerating the time to create and deploy agents.”— LY Corporation

We’re also launching a Connector Registry for enterprises to govern and maintain data across multiple workspaces and organizations. The Connector Registry consolidates data sources into a single admin panel across ChatGPT and the API. The registry includes all pre-built connectors like Dropbox, Google Drive, Sharepoint, and Microsoft Teams, as well as third-party MCPs.

Developers can also enable Guardrails in Agent Builder—an open-source, modular safety layer that helps protect agents against unintended or malicious behavior. Guardrails can mask or flag PII, detect jailbreaks, and apply other safeguards, making it easier to build and deploy reliable, safe agents. Guardrails can be deployed standalone or via the guardrails library for Python and JavaScript.

Deploying chat UIs for agents can be surprisingly complex—handling streaming responses, managing threads, showing the model thinking, and designing engaging in-chat experiences. ChatKit makes it simple to embed chat-based agents that feel native to your product. It can be embedded into apps or websites and customized to match your theme or brand. ChatKit already powers a range of use cases, from internal knowledge assistants and onboarding guides to customer support and research agents. HubSpot’s customer support agent is one example:

Building reliable, production-ready agents requires rigorous performance evaluations. Last year, we launched Evals to help developers test prompts and measure model behavior. We’re now adding four new capabilities that make it even easier to build evals:

  • Datasets–rapidly build agent evals from scratch and expand them over time with automated graders and human annotations.
  • Trace grading–run end-to-end assessments of agentic workflows and automate grading to pinpoint shortcomings.
  • Automated prompt optimization–generate improved prompts based on human annotations and grader outputs.
  • Third-party model support–evaluate models from other providers within the OpenAI Evals platform.
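A toy version of the trace-grading idea above might look like the following. The trace format and grader functions are illustrative assumptions, not the Evals platform's actual schema.

```python
# Run automated graders over an end-to-end workflow trace to pinpoint
# which step of an agentic run fell short.
trace = [
    {"step": "retrieve", "output": "Found 3 relevant tickets."},
    {"step": "draft_reply", "output": "Hello! Your refund is on its way."},
    {"step": "tool_call", "output": "ERROR: refund API timeout"},
]

def grade_no_errors(step):
    return "ERROR" not in step["output"]

def grade_nonempty(step):
    return bool(step["output"].strip())

GRADERS = [grade_no_errors, grade_nonempty]

def grade_trace(trace):
    """Return (score, failing (step, grader) pairs) across all graders."""
    failures = [
        (step["step"], grader.__name__)
        for step in trace
        for grader in GRADERS
        if not grader(step)
    ]
    total = len(trace) * len(GRADERS)
    return (total - len(failures)) / total, failures

score, failures = grade_trace(trace)
print(round(score, 2), failures)   # 0.83 [('tool_call', 'grade_no_errors')]
```

The useful property is that failures are attributed to a specific step and grader, rather than a single pass/fail on the final answer.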

We’ve already seen major performance gains from customers using Evals. Reinforcement fine-tuning (RFT) lets developers customize our reasoning models. It is generally available on OpenAI o4-mini and in private beta for GPT‑5. We are working closely with dozens of customers to refine RFT for GPT‑5 before wider release. Today, we’re introducing two new features in that RFT beta designed to push agent performance even further:

  • Custom tool calls–train models to call the right tools at the right time for better reasoning
  • Custom graders–set custom evaluation criteria for what matters most in your use case

Starting today, ChatKit and the new Evals capabilities are generally available to all developers. Agent Builder is available in beta, and Connector Registry is beginning its beta rollout to some API, ChatGPT Enterprise, and Edu customers with a Global Admin Console (where Global Owners can manage domains, SSO, and multiple API orgs). The Global Admin Console is a prerequisite to enabling Connector Registry.

All of these tools are included with standard API model pricing. We plan to add a standalone Workflows API and agent deployment options to ChatGPT soon. We can’t wait to see what you build.

Source 4

OpenAI — Agents SDK (Python) homepage documentation

Source: https://openai.github.io/openai-agents-python/

The OpenAI Agents SDK enables you to build agentic AI apps in a lightweight, easy-to-use package with very few abstractions. It’s a production-ready upgrade of our previous experimentation for agents, Swarm. The Agents SDK has a very small set of primitives:

  • Agents, which are LLMs equipped with instructions and tools
  • Agents as tools / Handoffs, which allow agents to delegate to other agents for specific tasks
  • Guardrails, which enable validation of agent inputs and outputs

In combination with Python, these primitives are powerful enough to express complex relationships between tools and agents, and allow you to build real-world applications without a steep learning curve. In addition, the SDK comes with built-in tracing that lets you visualize and debug your agentic flows, as well as evaluate them and even fine-tune models for your application.

Why use the Agents SDK

The SDK has two driving design principles:

  • Enough features to be worth using, but few enough primitives to make it quick to learn.
  • Works great out of the box, but you can customize exactly what happens.

Here are the main features of the SDK:

  • Agent loop: A built-in agent loop that handles tool invocation, sends results back to the LLM, and continues until the task is complete.
  • Python-first: Use built-in language features to orchestrate and chain agents, rather than needing to learn new abstractions.
  • Agents as tools / Handoffs: A powerful mechanism for coordinating and delegating work across multiple agents.
  • Guardrails: Run input validation and safety checks in parallel with agent execution, and fail fast when checks do not pass.
  • Function tools: Turn any Python function into a tool with automatic schema generation and Pydantic-powered validation.
  • MCP server tool calling: Built-in MCP server tool integration that works the same way as function tools.
  • Sessions: A persistent memory layer for maintaining working context within an agent loop.
  • Human in the loop: Built-in mechanisms for involving humans across agent runs.
  • Tracing: Built-in tracing for visualizing, debugging, and monitoring workflows, with support for the OpenAI suite of evaluation, fine-tuning, and distillation tools.
  • Realtime Agents: Build powerful voice agents with features such as automatic interruption detection, context management, guardrails, and more.
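The "agent loop" bullet can be illustrated with a stripped-down sketch: call the model, execute any requested tool, feed the result back, and stop on a final answer. The `fake_model` function is a stand-in for a real LLM call; none of this uses the actual SDK.

```python
def fake_model(messages):
    """Pretend LLM: requests the add tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"final": f"The sum is {messages[-1]['content']}"}

TOOLS = {"add": lambda a, b: a + b}

def run_agent_loop(user_input: str) -> str:
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = fake_model(messages)
        if "final" in reply:                      # task complete: stop looping
            return reply["final"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent_loop("What is 2 + 3?"))           # The sum is 5
```

This is the loop the SDK runs for you; the value of the built-in version is that tool dispatch, result forwarding, and the stop condition are handled without hand-written plumbing.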

Installation

pip install openai-agents

Hello world example

from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.

(If running this, ensure you set the OPENAI_API_KEY environment variable)

export OPENAI_API_KEY=sk-…

This post is licensed by the author under CC BY 4.0.