SoftBank Plans to Develop Homegrown AI Servers
Catch up on the top artificial intelligence news and commentary by Wall Street analysts on publicly traded companies in the space with this daily recap compiled by The Fly.

MADE-IN-JAPAN AI SERVERS: SoftBank looks to develop and produce homegrown AI servers, weighing plans to start designing and assembling components by the end of the decade with the help of major players like Nvidia and Foxconn, Nikkei's Natsuki Yamamoto reports.

AI MODEL ACCESS: Google is in talks with Blackstone, KKR, and EQT to let their portfolio companies access its models, after OpenAI and Anthropic announced joint ventures with private equity firms, The Information's Erin Woo and Julia Hornstein report.

CLAUDE FOR MICROSOFT 365: Anthropic announced that starting today, Claude for Excel, PowerPoint, and Word are generally available, and Claude for Outlook is now in public beta for all paid plans. "Claude for Outlook brings Claude into your inbox. Ask Claude to triage your inbox and it sorts messages by what needs your response, what it can draft for you, and what's noise. Replies land as drafts in Outlook's compose pane with recipients, subject, and body filled in. Calendar invites check attendee availability and open in Outlook's native event form," the company said. "Claude for Excel, PowerPoint, and Word are now generally available, with the controls IT admins and organizations need. One AppSource listing covers Excel, PowerPoint, and Word, and a separate listing adds Outlook in beta. Admins can deploy both from the Microsoft admin center. Enterprise admins can configure OpenTelemetry to stream prompts, tool calls, and document references to their own collector, so security teams see exactly what Claude does across every app. The Analytics API breaks out activity per user, per app, per day.
Organizations can access all four add-ins with a Claude account, or route traffic through an existing LLM gateway to Claude models running on Amazon Bedrock, Google Cloud's Vertex AI, or Microsoft Foundry."

ENTERPRISE AI INFRASTRUCTURE: Rackspace and AMD announced the signing of a memorandum of understanding establishing a framework for a multiyear strategic partnership to create an Enterprise AI Cloud purpose-built for regulated enterprises and sovereign workloads. The companies said, "Today's dominant model requires enterprises to rent GPU capacity by the hour and carry the operational burden themselves, including integration, security, and accountability. This collaboration proposes to invert that model by integrating AMD Instinct GPUs and EPYC CPUs into a fully managed, governed stack. Through this understanding, the companies aim to establish a new category of managed enterprise AI infrastructure where dedicated AMD compute is embedded inside a governed managed operating model, with Rackspace owning the stack from silicon to outcomes. The AMD collaboration is intended to position Rackspace to complete its curated enterprise AI stack and introduce four integrated capabilities. Together, these capabilities are designed to form a complete, integrated stack from bare metal compute and developer-ready inference tooling through a fully operated inference runtime with defined SLAs to a governed Enterprise AI Cloud. The aim is to give enterprises a single operator accountable for every layer, calibrated to the sovereignty, performance, and compliance requirements of each workload."