OpenAI raises $122 billion, Qwen3.5-Omni and 15 hidden features of Claude Code

Three announcements dominate the end of the month: OpenAI closes the largest private tech funding round in history with $122 billion raised at an $852 billion valuation, Qwen reaches a milestone with a native omnimodal model capable of seeing, hearing and coding simultaneously, and the head of Claude Code posts a viral thread revealing 15 little-known features of the tool. The week also saw the launch of Perplexity’s Secure Intelligence Institute, new GitHub Copilot tools, and infrastructure initiatives from Runway and NVIDIA.


OpenAI raises $122 billion

March 31 — OpenAI announced the closing of its latest financing round with $122 billion in committed capital, for a post-money valuation of $852 billion. It’s one of the largest private financing rounds in tech history.

The round is co-led by SoftBank and a16z, with strategic participation from Amazon, NVIDIA and Microsoft. For the first time, OpenAI extended participation to individual investors through banks, raising over $3 billion from individuals. ARK Invest will also include OpenAI in several listed index funds (exchange-traded funds, ETFs).

Supporting the raise, OpenAI published growth metrics:

| Indicator | Value |
| --- | --- |
| Weekly active ChatGPT users | 900 million |
| ChatGPT paid subscribers | 50 million |
| Monthly revenue | $2 billion |
| Tokens processed by the API (per minute) | 15 billion |
| Weekly Codex users | 2 million (+5× in 3 months) |
| Month-over-month Codex growth | +70% |

The company outlines a roadmap centered on an “AI superapp”: a unified interface combining ChatGPT, Codex, web search and AI agents. The stated goal is to exceed one billion weekly active users. Enterprises already account for 40% of revenue.

GPT-5.4, OpenAI’s latest model, is described as delivering gains in reasoning, coding and agentic workflows. OpenAI’s growth is presented as four times faster than Google and Meta at comparable stages.

🔗 Official OpenAI announcement


Qwen3.5-Omni: native omnimodal model

March 29 — Alibaba Qwen launched Qwen3.5-Omni, a model natively designed to process text, images, audio and video in a single unified model. Unlike classic multimodal approaches that add modalities in layers, this model processes those inputs simultaneously.

Raw capabilities are significant: up to 10 hours of audio or 400 seconds of 720p video natively, trained on over 100 million hours of data, speech recognition in 113 languages and expression in 36 languages.

Flagship feature: Audio-Visual Vibe Coding

The most directly usable feature is “Audio-Visual Vibe Coding”: the user describes their project aloud in front of a camera, and Qwen3.5-Omni-Plus generates a functional website or game. It’s an application of the vibe coding concept extended to audio and video in real time.

Comparative performance

| Category | Qwen3.5-Omni-Plus | Gemini 3.1 Pro |
| --- | --- | --- |
| DailyOmni (audio/vision) | 84.6 | 82.7 |
| WorldScene | 62.8 | 65.5 |
| QualocommInteractive | 68.5 | 52.3 |
| OmniClear | 64.8 | 55.5 |
| IFEval (text) | 89.7 | 93.5 |
| MMLU-Redux | 94.2 | 90.0 |

The model outperforms Gemini 3.1 Pro on audio benchmarks and is comparable on audio-visual understanding.

Voice capabilities

  • Fine-grained voice control: adjust emotion, pace and volume in real time
  • Voice Cloning from a short sample (engineering deployment announced soon)
  • Semantic Interruption that understands actual intent and ignores ambient noise
  • Integrated web search and complex function calls

Model family

| Variant | Positioning |
| --- | --- |
| Qwen3.5-Omni-Plus | SOTA performance, detailed audio-visual captioning |
| Qwen3.5-Omni-Plus-Realtime | Voice Control, WebSearch, Voice Clone, Semantic Interruption |
| Qwen3.5-Omni-Flash | Speed |
| Qwen3.5-Omni-Light | Lightweight |

Access via chat.qwen.ai (VoiceChat/VideoChat button) and the Alibaba Cloud API.

Note: Qwen 3.6 Plus Preview is available for free on OpenRouter for a limited time — exchanges are collected during this period to improve the model.

🔗 Tweet @Alibaba_Qwen


15 hidden features of Claude Code

March 30 — Boris Cherny, head of Claude Code at Anthropic, posted a thread revealing 15 little-documented features of the tool. The thread reached 3.6 million views, 2,000 reposts and 22,000 likes.

“I wanted to share a bunch of my favorite hidden and under-utilized features in Claude Code. I’ll focus on the ones I use the most. Here goes.” — @bcherny on X

Mobility and remote sessions

  • The Claude app on iOS and Android includes a Code tab allowing coding from your phone
  • --teleport (or /teleport) lets you switch a cloud session to a local machine; /remote-control lets you control a local session from any device
  • Cowork Dispatch: secure remote control of the Claude Desktop App from mobile, with access to MCP (Model Context Protocol) servers, the browser, etc.

Automation

  • /loop and /schedule allow launching Claude automatically at set intervals, up to a week — Cherny uses /loop 5m /babysit for continuous automated code review and rebase
  • Hooks (SessionStart, PreToolUse, etc.) allow injecting deterministic logic into the agent cycle, for example to route permission requests to WhatsApp
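Hooks live in Claude Code’s settings file. As a minimal sketch of the pattern described above (the `PreToolUse` event name comes from the thread; the `Bash` matcher and the `./notify.sh` script that would forward the request to WhatsApp are hypothetical placeholders):

```shell
# Hedged sketch: register a PreToolUse hook in .claude/settings.json.
# The JSON layout follows Claude Code's hooks settings schema; the
# ./notify.sh script is an assumed placeholder for WhatsApp routing.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./notify.sh" }
        ]
      }
    ]
  }
}
EOF
```

Because the hook command is an arbitrary executable, any deterministic policy (logging, notification, rejection) can be injected at that point of the agent loop.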

Parallelization

  • /batch distributes work to dozens, hundreds or even thousands of agents in parallel — useful for large-scale code migrations
  • claude -w starts parallel sessions in separate git worktrees
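For context, a worktree gives each session its own checkout of the same repository, so parallel agents never clobber each other’s edits. A rough manual equivalent of the layout `claude -w` would run in (repository, branch and directory names are illustrative):

```shell
# Hedged sketch: two isolated checkouts of one repo via git worktrees,
# the layout parallel sessions would run in. Names are illustrative.
git init -q demo
cd demo
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "init"
git worktree add -q ../demo-feature-a -b feature-a   # checkout 1
git worktree add -q ../demo-feature-b -b feature-b   # checkout 2
git worktree list   # main checkout plus the two feature checkouts
```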

Daily productivity

  • /btw lets you ask a quick question while an agent is working, without interrupting the current task
  • /branch allows forking a session; or via CLI: claude --resume <session-id> --fork-session
  • --agent lets you define custom agents in .claude/agents/ with a prompt system and configurable tools
  • --add-dir / /add-dir gives Claude access to multiple folders or repos simultaneously
  • --bare speeds up SDK startup by up to 10× (avoids loading CLAUDE.md, settings and MCP servers)
  • /voice enables voice input (spacebar in CLI, dedicated button on Desktop, iOS dictation)
  • Chrome extension (beta): Claude Code + Chrome to test web apps, debug console logs and automate the browser
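Custom agent definitions referenced by `--agent` are markdown files with a frontmatter header. A minimal sketch (the `reviewer` name, description, tool list and prompt are invented for illustration; only the `.claude/agents/` location and the frontmatter pattern come from the thread and Claude Code’s conventions):

```shell
# Hedged sketch: define a custom "reviewer" agent for use with --agent.
# Frontmatter fields follow Claude Code's agent-file convention;
# the name, description, tools and prompt are illustrative.
mkdir -p .claude/agents
cat > .claude/agents/reviewer.md <<'EOF'
---
name: reviewer
description: Reviews diffs for style issues and obvious bugs
tools: Read, Grep, Glob
---
You are a code reviewer. Examine the requested files and report
issues concisely, most severe first.
EOF
```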

🔗 Full thread @bcherny


Claude Code: auto mode extended to Enterprise and API

March 30 — Claude Code’s auto mode, launched March 24 for Pro and Max users, is now available on the Enterprise plan and for developers accessing the API. This feature allows Claude to make its own approval decisions for actions (writing files, running bash commands) instead of prompting the user at every step.

To enable it in an Enterprise or API environment:

claude --enable-auto-mode

Auto mode relies on internal classifiers that assess the risk of each action before executing it, a middle ground between the fully permissive mode (--dangerously-skip-permissions) and manual approvals.

March 30 — Cowork Dispatch can now start coding tasks with a specific model, mentioned directly in natural language in the instruction.

🔗 Tweet @claudeai


Perplexity launches the Secure Intelligence Institute

March 31 — Perplexity launched the Secure Intelligence Institute (SII), a research lab dedicated to the security, privacy and safety of advanced AI systems. The Institute is led by Dr. Ninghui Li — Samuel D. Conte Professor at Purdue University, ACM and IEEE Fellow, former chair of ACM SIGSAC — with academic partnerships including Dan Boneh’s applied cryptography group and Neil Gong’s Gong Lab.

The SII published three initial works:

| Publication | Type | Description |
| --- | --- | --- |
| BrowseSafe | Open-source benchmark | 14,700+ real attack scenarios, 14 risk categories for AI browsing |
| Securing Agents NIST/CAISI | Policy | Response to the RFI (Request for Information) on securing autonomous agents |
| Building Security Into Comet | Architecture | Defense-in-depth for the Comet AI browser |

The SII translates its research into concrete improvements for Perplexity systems and shares its work with the AI ecosystem.

🔗 Secure Intelligence Institute


Cohere and Ensemble: LLM specialized in healthcare revenue cycle management

March 31 — Cohere and Ensemble announced the construction of the first industry-native large language model (LLM) specialized in Revenue Cycle Management (RCM) for U.S. healthcare.

Ensemble offers an end-to-end solution for hospitals and medical groups, from appointment scheduling to final billing. Unlike competitors that wrap general LLMs in specialized prompts, this model is fully customized on Cohere’s Command family.

| Domain | Capability |
| --- | --- |
| Financial | Denial prediction before submission, continuous billing quality control |
| Clinical | Point-of-care documentation guidance, assembly of appeal dossiers |
| Agentic | Multi-step orchestration of the revenue cycle |

The model was trained on Cohere’s pretraining data, Ensemble’s operational logs, public RCM knowledge sources and expert annotations. A domain-specific benchmark co-developed with partners will measure performance against general LLMs on real RCM tasks.

🔗 Cohere Blog


GitHub Copilot: agent-first development and Slack integration

March 31 — Tyler McGoffin, senior researcher on GitHub’s Copilot Applied Science team, shared a write-up on building an internal tool with Copilot as the primary coding agent. The tool automates analysis of agent trajectories on benchmarks like TerminalBench2 and SWEBench-Pro.

Practices described: using /plan before coding, creating “contract tests” that only a human can modify, detailed prompts instead of terse ones, and weekly automated maintenance via /plan Review the code for any missing tests.... The conclusion: the qualities of a good engineer (planning, context, communication) are the same when collaborating effectively with an AI agent.

March 30 — The GitHub app for Slack now integrates Copilot to create GitHub issues directly from Slack using natural language. Just mention @GitHub in any channel and describe the work.

| Feature | Detail |
| --- | --- |
| Natural language creation | Description → structured issues (title, body, assignees, labels, milestones) |
| Sub-issues | Break work into parent/child issues from a single message |
| Conversation mode | Iterate on issues before creating them |

March 31 — GitHub presented the Copilot SDK enabling agentic workflows in third-party applications according to three architectural patterns.

🔗 GitHub Blog - Agent-driven development 🔗 GitHub Changelog - Create issues from Slack


Runway: investment fund and startup program

March 31 — Runway launched two simultaneous initiatives.

The Runway Fund is an investment fund for early-stage startups in AI, media and world simulation. Initial commitment is up to $10 million, with investments of up to $500,000 at pre-seed/seed. Focus areas: AI research (world models and generative AI), new applications (application layer on LLMs), and new media and content. Investments have already been made in Cartesia, LanceDB and Tamarind Bio.

Runway Builders is an accelerator program for startups from seed to Series C building products with generative video and real-time conversational AI. Participants receive API credits, the highest rate limits and access to a private community.

🔗 Runway Fund 🔗 Runway Builders


NVIDIA and Emerald AI: flexible AI factories on the power grid

March 31 — NVIDIA and Emerald AI presented at CERAWeek a new approach for AI factories: treating them as flexible assets on the power grid rather than static loads. The architecture is built on NVIDIA Vera Rubin DSX and Emerald AI’s Conductor platform.

Announced energy partners: AES, Constellation, Invenergy, NextEra Energy, Nscale Energy and Vistra. Related announcements:

  • Maximo: 100 MW robotic solar AI installation operational at Bellefield with NVIDIA Isaac Sim
  • TerraPower + SoftServe: NVIDIA Omniverse digital twin to reduce Natrium nuclear plant design lead times
  • Adaptive Construction Solutions: national training program for AI factory construction
  • GE Vernova, Schneider Electric, Vertiv: validated reference designs for Vera Rubin

Jensen Huang described energy as the foundational layer of a “five-layer AI cake.”

🔗 NVIDIA Blog - AI Factories


In brief

Gemini Live on Gemini 3.1 Flash Live — March 30 — Google confirmed the deployment of the Gemini 3.1 Flash Live model in the Gemini Live app, available to all users. This transition (announced March 26) brings more natural audio conversations and improved accuracy in noisy environments. 🔗 Tweet @GeminiApp

Manus: phone control for Desktop — March 30 — Manus adds the ability to control the Desktop application from your smartphone: start tasks, access files, and launch workflows without touching the computer. 🔗 Tweet @ManusAI

Midjourney V8 teaser — March 29 — David Holz (founder of Midjourney) announced a “radically different” V8, “arriving very soon”. No date given. 🔗 Tweet @DavidSHolz

Claude Code v2.1.87 — Fixed a bug in Cowork Dispatch where messages were not being delivered. 🔗 CHANGELOG GitHub


What this means

OpenAI’s fundraising at an $852 billion valuation marks an inflection point: at these numbers, the gap between leading players and the rest of the industry widens structurally. With 900 million weekly users and a target of one billion, ChatGPT is establishing itself as mass infrastructure, not just a technology product.

The launch of Qwen3.5-Omni illustrates the growing competition around omnimodal models. Audio-Visual Vibe Coding represents a concrete evolution of intention-based coding (vibe coding) — moving from text to voice and video as the primary interface to generative AI.

On the developer tools side, Boris Cherny’s thread reveals that Claude Code has accumulated advanced features (massive parallelization with /batch, automation via hooks, distributed sessions) that remained little known due to lack of visible documentation. The extension of auto mode to Enterprise plans follows a classic trajectory: validation in preview, then gradual rollout.

Finally, Perplexity’s creation of the Secure Intelligence Institute and Cohere’s initiatives in healthcare signal a trend: second-tier players are looking to differentiate themselves in specialized verticals (AI security, regulated healthcare) rather than compete head-on on general-purpose models.


Sources

This document was translated from the French version into English using the gpt-5-mini model. For more information on the translation process, consult https://gitlab.com/jls42/ai-powered-markdown-translator