🤖 AI 每日简报(2026-03-01)
总览
今天这波信息挺“同一个故事的不同侧面”:一边是模型能力继续往“能干活”的方向长(移动端多步自动化、企业插件/连接器),另一边是围绕这些能力的“边界”变得更硬(国防合同的安全条款、欧盟 AI Act 的执法节点)。再加上硬件侧的推力(面向推理 inference 的专用平台),基本就是 2026 年 AI 产业的主旋律:更像基础设施,也更像政治。
重点条目
1) OpenAI 宣布与美国国防部(Department of War)达成协议:强调两条安全原则(禁止国内大规模监控、对武力使用的“人类责任”)会写进合同,并会做“技术护栏”。
2) Claude 在 App Store 冲到生产力榜首:围绕“AI 用于国防/监控/武器”的舆论与用户迁移正在真实影响产品格局。
3) Anthropic 推企业级 agents / plug-ins:把“公司内部可控的插件市场、受控的数据流、可定制插件”当作企业落地的关键(对 SaaS 也是直接威胁)。
4) Google Gemini 上线 Android 多步自动化(beta):先从外卖/打车/杂货等场景做起来,并通过“安全虚拟窗口 + 可视化进度 + 可随时中止”来降低自动化翻车成本。
5) Google 推出 Nano Banana 2(Gemini 3.1 Flash Image):更快、更高分辨率、更强一致性,并默认覆盖 Gemini App / Search / Lens / AI Mode 等入口,同时继续用 SynthID + C2PA 做溯源。
6) 欧盟 AI Act 进入首批执法阶段:先打“不可接受风险”的禁区(社交评分、公共场所实时生物识别监控、利用脆弱性操纵等),但市场情绪已经开始传导到创业公司与融资叙事。
7) Reuters:Nvidia 被曝将推出面向 inference 的新平台(传将整合 Groq 相关芯片),并可能在 GTC 发布。
解读
我自己的读法偏现实一点:今年开始,大家别再把“agent”当作炫技 demo 了。真正的分水岭是两件事——能不能在复杂系统里稳定地“接入数据/执行动作”,以及出了事有没有人敢用、敢背锅。
像 Google 这种把自动化圈在“安全虚拟窗口”里、Anthropic 把插件/连接器做成“IT 部门能接受的投放方式”,都是在解决同一个问题:把能力装进笼子里。
如果你在做产品/团队落地,我觉得今天最实用的动作是:
- 盘点你们的“动作权限面”:哪些操作必须人类确认、哪些能自动执行、日志怎么留、回滚怎么做。
- 尽早做“合规叙事”:不只是欧盟 AI Act,未来任何大客户(尤其政企)都会把这套当门槛。
- 关注 inference 侧硬件的变化:训练很贵,但推理才是要长期付费的那部分;硬件和平台一变,成本/延迟/可用性会直接改写产品策略。
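上面第一条“动作权限面”的盘点,可以用一个极简的守门器草图来示意(纯属假设性示例:动作名、策略表都是虚构的,并非任何厂商的真实 API):

```python
# 假设性示意:一个极简的"动作权限面"守门器。
# 思路:每类动作显式声明风险级别;高风险动作必须人工确认;未知动作默认拒绝;全部留审计日志。
import json
import time

POLICY = {
    "read_calendar": "auto",      # 低风险:可自动执行
    "send_email": "confirm",      # 高风险:需人工确认
    "delete_records": "deny",     # 禁区:一律拒绝
}

AUDIT_LOG = []  # 真实系统应写入不可篡改的存储

def gate(action: str, payload: dict, confirmed: bool = False) -> str:
    """返回 'executed' / 'needs_confirmation' / 'denied',并记录审计日志。"""
    decision = POLICY.get(action, "deny")  # 默认拒绝未知动作
    if decision == "deny":
        outcome = "denied"
    elif decision == "confirm" and not confirmed:
        outcome = "needs_confirmation"
    else:
        outcome = "executed"
    AUDIT_LOG.append({"ts": time.time(), "action": action,
                      "payload": json.dumps(payload), "outcome": outcome})
    return outcome
```

真实落地时,策略表应由 IT/安全团队集中维护、日志要支持回放与回滚;这里只演示“默认拒绝 + 高风险需确认 + 全量留痕”的骨架。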
原文
TechCrunch|OpenAI’s Sam Altman announces Pentagon deal with ‘technical safeguards’(原文留档)
来源: https://techcrunch.com/2026/02/28/openais-sam-altman-announces-pentagon-deal-with-technical-safeguards/
OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models in the department’s classified network.
This follows a high-profile standoff between the DoD — also known under the Trump administration as the Department of War — and OpenAI’s rival Anthropic. The Pentagon pushed AI companies, including Anthropic, to allow their models to be used for “all lawful purposes,” while Anthropic sought to draw a red line around mass domestic surveillance and fully autonomous weapons.
In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” but he argued that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic’s position.
After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized the “Leftwing nut jobs at Anthropic” in a social media post that also directed federal agencies to stop using the company’s products after a six-month phase-out period.
In a separate post, Secretary of Defense Pete Hegseth claimed Anthropic was trying to “seize veto power over the operational decisions of the United States military.” Hegseth also said he is designating Anthropic as a supply-chain risk: “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
On Friday, Anthropic said it had “not yet received direct communication from the Department of War or the White House on the status of our negotiations,” but insisted it would “challenge any supply chain risk designation in court.”
Techcrunch event
San Francisco, CA | October 13-15, 2026
Surprisingly, Altman claimed in a post on X that OpenAI’s new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman said. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Altman said OpenAI “will build technical safeguards to ensure our models behave as they should, which the DoW also wanted,” and it will deploy engineers with the Pentagon “to help with our models and to ensure their safety.”
“We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept,” Altman added. “We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.”
Fortune’s Sharon Goldman reports that Altman told OpenAI employees at an all-hands meeting that the government will allow the company to build its own “safety stack” to prevent misuse and that “if the model refuses to do a task, then the government would not force OpenAI to make it do that task.”
Altman’s post came shortly before news broke that the U.S. and Israeli governments have begun bombing Iran, with Trump calling for the overthrow of the Iranian government.
Anthony Ha is TechCrunch’s weekend editor. Previously, he worked as a tech reporter at Adweek, a senior editor at VentureBeat, a local government reporter at the Hollister Free Lance, and vice president of content at a VC firm. He lives in New York City.
DNYUZ / Business Insider 转载|Claude hits No. 1 on App Store...(原文留档)
来源: https://dnyuz.com/2026/03/01/claude-hits-no-1-on-app-store-as-chatgpt-users-defect-in-show-of-support-for-anthropics-pentagon-stance/
Anthropic’s Claude has seen an influx of users defecting from ChatGPT.
While OpenAI locks down Washington, Anthropic is locking down users and rocketing to the top of the App Store.
Anthropic has been sidelined in Washington following a public dispute with the Department of Defense over how its AI models would be deployed. President Donald Trump ordered federal agencies to phase out its technology.
Meanwhile, OpenAI has secured new ground, with CEO Sam Altman announcing in a Friday night post on X that it had reached an agreement with the Department of War to deploy AI models in its classified network.
OpenAI’s agreement has left some loyal ChatGPT users uneasy about OpenAI’s ambitions, prompting online debates about the ethical implications — and some saying they were defecting to its rival Claude.
As of 6:38 p.m. ET on Saturday, Claude ranked number one among the most downloaded productivity apps on Apple’s App Store, having overtaken ChatGPT.
Converts have taken to social media to share screenshots documenting their switch.
Pop musician Katy Perry wrote that she was “done” on X, alongside a screenshot of Claude’s pricing page, with a red heart around the $20-per-month “Pro” plan.
Another X user, Adam Lyttle, wrote “Made the switch,” alongside a screenshot of his email inbox with a receipt from Anthropic and cancellation confirmation from OpenAI.
On Reddit’s ChatGPT subreddit, dozens of users say they’ve deleted their accounts and are urging others to do the same.
“Cancel ChatGPT” has become a common refrain online, while some users have taken a more personal tone, saying Altman’s move “crossed the line.”
The agreement hasn’t polarized all AI users, however.
In one Reddit thread, several commenters said the news does not affect their choice of AI model, arguing that Anthropic’s work with Palantir raises similar concerns. In November 2024, Anthropic, Palantir, and Amazon Web Services struck an agreement to provide US intelligence and defense agencies access to Claude models.
After Secretary of War Pete Hegseth said he would designate Anthropic as a “supply chain risk to national security,” Anthropic said it would “challenge any supply chain risk designation in court.”
In his Friday post, Altman said the Department of War had agreed with two of OpenAI’s safety principles.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
By Saturday afternoon, OpenAI published a more detailed description of its contract with the DoW, including the specific language it used surrounding the use of its models for surveillance and autonomous weapons.
While some chatbot users suggested it’s all fair in business, war, and federal procurement, others suggested the Pentagon’s stance may have handed Anthropic a public relations win.
X user Tae Kim joked that Hegseth might need a new title: “Secretary Hegseth Chief of Claude Marketing.”
TechCrunch|Anthropic launches new push for enterprise agents...(原文留档)
来源: https://techcrunch.com/2026/02/24/anthropic-launches-new-push-for-enterprise-agents-with-plugins-for-finance-engineering-and-design/
On Tuesday, Anthropic unveiled its new enterprise agents program, its most aggressive push yet to integrate agentic AI into everyday workplaces.
In an official briefing, Anthropic’s head of Americas, Kate Jensen, told reporters that the new system would finally deliver on the promise of agentic AI. “2025 was meant to be the year agents transformed the enterprise, but the hype turned out to be mostly premature,” Jensen said. “It wasn’t a failure of effort. It was a failure of approach.”
Under the new program, companies can use the plug-in system to deploy pre-built agents to help with common enterprise tasks, including financial research and engineering specifications. The result is a major opportunity to grow Anthropic’s enterprise client base — and a significant threat to SaaS products currently performing those functions.
“We believe that the future of work means everybody having their own custom agent,” Anthropic product officer Matt Piccolella told TechCrunch.
Much of the enterprise agents program draws on previously announced technology, particularly Claude Cowork and the plug-in system, which was announced in research preview on January 30th. The systems launched today are largely focused on making those tools easier to deploy within a company, including private software marketplaces, controlled data flows, and customized plug-ins. The result is a system for deploying Claude-powered agents with the same controls a corporate IT department would expect when deploying software.
“Admins want to be able to have really, really, really tailored workflows and skills for their specific organization,” Piccolella said. “And this allows the admin of a Claude Cowork organization to be able to do this in a very centralized way.”
The stock plug-ins included at launch take aim at particular departments present within most companies, including agents designed for finance, legal, and HR departments. Each plug-in includes basic skills common across different companies, although Anthropic expects that companies will modify each plug-in to bring it in line with unique needs and customs.
In the case of finance, the stock plug-in gives Claude the basic information and data flows necessary to perform market and competitive research, financial modeling, and other common tasks for finance teams. The HR plug-in includes skills for generating job descriptions, onboarding materials, and offer letters, among others.
The launch also includes a number of new enterprise connectors, including integrations for Gmail, DocuSign, and Clay, among others. Previously unavailable, these connectors will allow agents to pull in data and context directly from the linked system.
Russell Brandom has been covering the tech industry since 2012, with a focus on platform policy and emerging technologies. He previously worked at The Verge and Rest of World, and has written for Wired, The Awl and MIT’s Technology Review.
TechCrunch|Gemini can now automate some multi-step tasks on Android(原文留档)
来源: https://techcrunch.com/2026/02/25/gemini-can-now-automate-some-multi-step-tasks-on-android/
Google on Wednesday announced a series of updates to its Gemini AI-powered features on the Android operating system, the most notable being a new way to use the AI to handle multi-step tasks like ordering an Uber or food delivery. These automations join other Gemini improvements shipping today, including an expansion of scam detection for phone calls and Circle to Search updates that now let you identify all the items on your phone’s screen.
The automations, explains Google, allow users to essentially offload their to-do list to Gemini. In practice, however, the types of things that Gemini can manage are still limited.
The company says that the feature, which is in beta, will initially support select apps in the food, grocery, and rideshare categories. And it will initially be available only in the U.S. and Korea.
In the U.S. and Korea, this includes apps like DoorDash, Grubhub, Instacart, Lyft, McDonald’s, Starbucks, Uber, and Uber Eats. In Korea, Baemin and Kakao T will also be supported.
At launch, the feature will be limited to the Gemini app on certain devices, including the Pixel 10, Pixel 10 Pro, and Samsung Galaxy S26 series.
AI-powered automations could potentially go wrong, of course, so Google has added some protections. For starters, the automations can’t be kicked off without an explicit command from the device’s owner. As they run, you can watch their progress in real time and stop the task if it’s making a mistake or getting stuck. Google notes also that the automations take place in a secure, virtual window on your phone where they can only access limited apps, not the rest of the data on your device.
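The protections described above reduce to a simple control loop: explicit start, visible per-step progress, and an abort check between steps. A hypothetical sketch (the names below are illustrative, not Google's actual API):

```python
from typing import Callable, Iterable, List

def run_automation(steps: Iterable[Callable[[], str]],
                   user_initiated: bool,
                   should_abort: Callable[[], bool]) -> List[str]:
    """Run an automation that starts only on an explicit user command,
    surfaces each step as it completes, and can be stopped between steps."""
    if not user_initiated:
        # Mirrors "can't be kicked off without an explicit command".
        raise PermissionError("automation requires an explicit user command")
    progress: List[str] = []
    for step in steps:
        if should_abort():
            # Mirrors "stop the task if it's making a mistake or getting stuck".
            progress.append("aborted")
            break
        progress.append(step())  # visible, real-time progress
    return progress
```

In the real feature the steps would also run inside a sandboxed virtual window with access limited to a few allowed apps; this sketch only captures the start/watch/stop contract.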
The feature ties into the growing trend of using AI to automate more tasks in users’ personal lives. ChatGPT, for instance, lets users create tasks that can be run on schedules or at specific times, as well as offering an agent that can complete a variety of computer-based tasks like navigating a calendar, generating a slideshow, or running code. Anthropic’s Cowork, meanwhile, brings the capabilities of its Claude AI to non-coding tasks, letting non-developers automate everyday file and task management. And, of course, an AI tool called OpenClaw recently went viral for its ability to manage everyday tasks like sending emails, managing calendars, checking into flights, and more.
Another Gemini update arriving now is the expansion of a Scam Detection feature for phone calls, which is becoming available on Samsung Galaxy S26 series devices in the U.S. (The feature is already offered on Pixel phones in the U.S., Australia, Canada, India, Ireland, and the U.K.) Google is also using its Gemini on-device model to detect scam texts in the U.S., Canada, and the U.K. on Pixel 10 series devices, and soon on the Galaxy S26 series phones, as well.
Finally, Google says its Circle to Search feature, which lets you use gestures like scribbles and circling to initiate searches, can now search for everything you’re seeing on the phone screen, not just a single object. That means you can search every item of clothing and every accessory in an outfit you like, or learn more about a group of things and the related topic on the screen.
Google has been steadily releasing Gemini updates to its Android ecosystem at regular intervals through new operating system updates and updates targeted toward its flagship phone, the Google Pixel, via its frequent updates known as Pixel Drops. Meanwhile, Apple has been struggling to release a more comprehensive AI feature set, which is set to include an AI-powered Siri — a launch that was recently pushed back again to later in the year.
Sarah has worked as a reporter for TechCrunch since August 2011. She joined the company after having previously spent over three years at ReadWriteWeb. Prior to her work as a reporter, Sarah worked in I.T. across a number of industries, including banking, retail and software.
TechCrunch|Google launches Nano Banana 2 model with faster image generation(原文留档)
来源: https://techcrunch.com/2026/02/26/google-launches-nano-banana-2-model-with-faster-image-generation/
Google today announced the latest version of its popular image generation model, Nano Banana 2. The new model, which is technically Gemini 3.1 Flash Image, can create more realistic images than its predecessor. The model will also now become the default in the Gemini app for its Fast, Thinking, and Pro modes.
The company first released Nano Banana in August 2025, prompting people to generate millions of images in the Gemini app, especially in countries like India. In November, the company released Nano Banana Pro, which allows users to create more detailed and high-quality images.
The new Nano Banana 2 retains some of the high-fidelity characteristics of the Pro model but produces images faster. The company says you can create images with a resolution ranging from 512px to 4K, in different aspect ratios.
Nano Banana 2 can maintain character consistency for up to five characters and fidelity of up to 14 objects in one workflow for better storytelling. Users can also issue complex requests with detailed nuances for image generation, Google says. In addition, users can create media with more vibrant lighting, richer textures, and sharper detail.
With the launch, Nano Banana 2 will become the default model for image generation across all modes in the Gemini app. The company is also making it the default model for image generation in its video editing tool, Flow.
In Search, Nano Banana 2 will become the default for Google Search results via Google Lens and in AI Mode across 141 countries on the Google app and on the web across desktop and mobile.
On Google’s higher-end plans, Google AI Pro and Ultra, subscribers can continue to use Nano Banana Pro for specialized tasks by regenerating images via the three-dot menu.
For developers, Nano Banana 2 will be available in preview through the Gemini API, Gemini CLI, and the Vertex API. It will also be available through AI Studio and the company’s development tool Antigravity, which was released last November.
The company said that all images created through the new model will have a SynthID watermark, which is Google’s mark to denote AI-generated images. The images are also interoperable with C2PA Content Credentials, created by an industry body consisting of companies like Adobe, Microsoft, Google, OpenAI, and Meta. Google said that since launching the SynthID verification in the Gemini app in November, people have used it over 20 million times.
Ivan covers global consumer tech developments at TechCrunch. He is based out of India and has previously worked at publications including Huffington Post and The Next Web.
Silicon Canals|EU's new AI Act enforcement begins today...(原文留档)
来源: https://siliconcanals.com/sc-n-eus-new-ai-act-enforcement-begins-today-and-most-startups-say-they-arent-ready/
February 2, 2025 was circled on every European tech founder’s calendar. Today, the first enforcement provisions of the EU’s sweeping AI Act officially go into force — and the mood across the continent’s startup ecosystem is less celebration, more scramble.
The initial phase targets what the regulation classifies as “unacceptable risk” AI systems — including social scoring, real-time biometric surveillance in public spaces, and manipulative AI designed to exploit vulnerabilities. Penalties for violations can reach €35 million or 7% of global annual turnover, whichever is higher. For a seed-stage startup, that’s not a fine. That’s an extinction event.
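The “whichever is higher” cap above is straightforward to compute. A quick illustration (the turnover figures are made up):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Cap for prohibited-practice violations under the AI Act:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, global_annual_turnover_eur * 7 / 100)
```

For a company with EUR 600M turnover the 7% rule dominates (EUR 42M); below EUR 500M turnover the flat EUR 35M floor applies, which is why the article calls it an extinction event for a seed-stage startup.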
What actually changes today
The AI Act uses a tiered risk framework. Today’s enforcement covers only the top tier — prohibited practices. The heavier obligations around “high-risk” AI systems (think hiring tools, credit scoring, medical diagnostics) won’t kick in until August 2026. General-purpose AI model rules land in August 2025.
But the psychological weight of today’s date extends far beyond the banned categories. According to a survey from the European Digital SME Alliance, more than 60% of small and medium-sized tech companies say they are not adequately prepared for compliance with any phase of the AI Act. Nearly half reported that they hadn’t yet conducted a risk classification of their own AI systems — a foundational first step.
The numbers suggest a gap that isn’t just technical. It’s cognitive. Many founders built their companies in an environment where European AI regulation was theoretical. Today, it’s operational.
Why most startups say they aren’t ready
The readiness problem has three roots, and none of them are simple.
1. Regulatory ambiguity
Despite years of legislative debate, critical details remain unclear. The EU’s AI Office is still developing guidelines, codes of practice, and technical standards that will define what compliance actually looks like in practice. Startups building AI-powered products are, in many cases, trying to hit a target that’s still being drawn.
“We know what’s prohibited in theory,” one Amsterdam-based AI founder told me. “But the grey areas are enormous. Is our recommendation engine ‘manipulative’? Depends who you ask.”
2. Resource constraints
Large enterprises like SAP and Siemens have already stood up dedicated AI compliance teams. Early-stage startups don’t have that luxury. Legal counsel specialising in AI regulation is expensive and scarce. For a team of twelve burning through runway, hiring a compliance officer feels like a contradiction — you need revenue to survive, but you need compliance to operate.
3. A misaligned timeline
Startup development cycles and regulatory timelines operate on fundamentally different clocks. Products pivot quarterly. Regulations take years to draft and then arrive all at once. Several founders I spoke with described a kind of regulatory whiplash — building fast to meet market demand while simultaneously trying to decipher 144 pages of legislation that references dozens of yet-to-be-published standards.
The broader competitive anxiety
This enforcement moment arrives amid growing unease about Europe’s position in the global AI race. While EU regulators have been finalising compliance frameworks, Chinese AI companies have been shipping products at a staggering pace — the recent launch of DeepSeek’s R1 model rattled markets and prompted fresh debate about whether Europe is regulating itself into irrelevance.
That anxiety is real, but it can also be overstated. Regulation and innovation aren’t necessarily zero-sum. GDPR was predicted to cripple European tech; instead, it created a global standard and a cottage industry of privacy-tech companies. The AI Act could follow a similar trajectory — painful in the short term, strategically advantageous over time.
Still, the timing matters. With stock markets jittery amid trade tensions and geopolitical uncertainty, European AI startups face a fundraising environment where investors are already cautious. Adding regulatory uncertainty on top doesn’t help the pitch deck.
What founders can actually do right now
The situation isn’t hopeless. In fact, founders who act decisively in the next six months — before the general-purpose AI rules land in August — could turn compliance into a genuine competitive advantage. Here’s where to start.
Classify your risk tier
Before anything else, determine where your product sits in the AI Act’s risk framework. If you’re nowhere near the prohibited categories, today’s enforcement date is mostly symbolic for you — but August 2025 and August 2026 are not. The AI Act Explorer maintained by the Future of Life Institute is a practical starting point.
Document everything
The AI Act places heavy emphasis on transparency, documentation, and human oversight. Start building audit trails now — training data provenance, model decision logs, risk assessments. This isn’t just about compliance. Investors increasingly want to see governance maturity.
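One lightweight way to start the audit trail suggested above is an append-only log where each record references a hash of its predecessor, making after-the-fact edits detectable. A hypothetical sketch (the field names are illustrative, not mandated by the AI Act):

```python
import datetime
import hashlib
import json

def audit_record(event: str, details: dict, prev_hash: str = "") -> dict:
    """Build one tamper-evident log entry, chained to the previous entry."""
    body = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,        # e.g. "training_data_added", "risk_assessed"
        "details": details,    # provenance, decision rationale, risk notes
        "prev": prev_hash,     # hash of the previous record; "" for the first
    }
    # Hash the canonical JSON of the record so any later edit changes the chain.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body
```

Appending these records as JSON lines to a write-once store gives a minimal provenance-and-decisions trail that is cheap to keep from day one and easy to show to auditors or investors later.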
Join a code of practice
The EU’s AI Office is actively developing codes of practice with industry input. Participating in these consultations — or at minimum tracking their output — gives startups early visibility into what “good” looks like before the rules are finalised.
Talk to your investors
Smart VCs are already factoring regulatory readiness into due diligence. Frame compliance work not as overhead, but as de-risking. In a tightening market, that narrative matters.
The road ahead
Today’s enforcement is the beginning, not the climax. The most consequential provisions — governing high-risk systems, foundation models, and general-purpose AI — are still months away. The startups that treat this moment as a wake-up call rather than a crisis will be the ones best positioned when the full regulatory weight lands.
Europe chose to regulate AI early and comprehensively. Whether that proves to be visionary or self-defeating depends less on the law itself and more on whether the ecosystem — founders, investors, and regulators — can build the infrastructure of trust that makes the whole framework functional.
The clock started today. For most of Europe’s AI startups, the real work starts tomorrow morning.
Reuters|Nvidia plans new chip to speed AI processing, WSJ reports(原文留档)
来源: https://www.reuters.com/business/nvidia-plans-new-chip-speed-ai-processing-wsj-reports-2026-02-28/
Feb 27 (Reuters) - Nvidia (NVDA.O) plans to launch a new processor designed to help OpenAI and other customers build faster, more efficient AI systems, the Wall Street Journal reported on Friday, citing people familiar with the matter.
Nvidia is developing a new system for “inference” computing, a form of processing that allows AI models to respond to queries, the report said.
The new platform is set to be unveiled at Nvidia’s GTC developer conference in San Jose next month and will incorporate a chip designed by startup Groq, the report added, citing people familiar with the matter.
Reuters could not immediately verify the report. Nvidia and OpenAI did not immediately respond to Reuters’ requests for comment.
Reuters earlier this month reported OpenAI is unsatisfied with the speed at which Nvidia’s hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software.
OpenAI wants new hardware that would eventually provide about 10% of its inference computing needs, one of the sources told Reuters.
The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI’s talks, one of the sources told Reuters.
In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips.
Reporting by Mihika Sharma in Bengaluru; Editing by Tom Hogue