CTO Mode

By CTOs, for CTOs

Editor’s Primer

NVIDIA ships Vera Rubin with Groq baked in, OpenAI drops multi-agent costs to $0.20/M tokens with GPT-5.4 Nano, and Meta delays Avocado after it trails rivals despite $135B in planned capex. Plus, in today's brief: what NVIDIA is actually selling at GTC - and it's not a chip.

 

Today’s Signal

01

NVIDIA Unveils Vera Rubin Platform at GTC, Forecasts $1T in AI Chip Orders Through 2027

NVIDIA's bet is that the next era is inference economics, not training. Vera Rubin + Groq 3 LPX claims 35x tokens-per-watt over GPU-only setups. If you're capacity-planning AI workloads for 2027, the cost-per-token math just changed significantly.

Infrastructure

02

OpenAI Ships GPT-5.4 Mini and Nano for High-Volume Coding and Sub-Agent Workloads

Mini approaches full GPT-5.4 on SWE-bench at $0.75/M input tokens. Nano at $0.20/M is purpose-built for sub-agents. If you're running multi-agent architectures, the cost floor for small orchestrated models just dropped hard.

AI / ML
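To put those price points in context, here's a back-of-envelope cost model. The per-million-token prices come from the item above; the fleet size and daily token volumes are illustrative assumptions, not benchmarks, and output-token pricing (not quoted here) would add to the totals.

```python
# Back-of-envelope cost model for a multi-agent workload.
# Prices ($ per 1M input tokens) are from the announcement above;
# fleet size and token volumes are illustrative assumptions.
PRICE_PER_M = {"gpt-5.4-mini": 0.75, "gpt-5.4-nano": 0.20}

def monthly_cost(model: str, input_tokens_per_day: float, days: int = 30) -> float:
    """Input-token cost only; output-token pricing would add to this."""
    return PRICE_PER_M[model] * input_tokens_per_day / 1_000_000 * days

# Example: an orchestrator fanning out to 20 sub-agents,
# each consuming ~5M input tokens per day.
fleet_tokens_per_day = 20 * 5_000_000

print(f"Nano fleet: ${monthly_cost('gpt-5.4-nano', fleet_tokens_per_day):,.0f}/mo")
print(f"Mini fleet: ${monthly_cost('gpt-5.4-mini', fleet_tokens_per_day):,.0f}/mo")
```

At these assumed volumes the spread between Mini and Nano is nearly 4x per month, which is why routing sub-agent traffic to the cheapest adequate model matters more as agent counts grow.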

03

Replit Raises $400M at $9B Valuation, Launches Agent 4 Canvas Interface

Tripled valuation in six months, targeting $1B ARR by year-end. The bet: vibe coding for non-programmers is bigger than better tools for existing devs. Worth tracking whether this is a durable category or a hype peak.

Funding

04

Meta Delays Avocado AI Model to May After Internal Tests Show It Trailing Rivals

$135B in planned capex and a $14.3B leadership hire, yet Avocado trails Gemini 3.0 and GPT-5.4. Meta leadership is reportedly discussing licensing Gemini as a stopgap. Real questions about whether brute-force spending without cloud distribution is a viable model strategy.

Platform

05

GlassWorm Campaign Force-Pushes Malware Into Hundreds of GitHub Python Repos

Stolen tokens from compromised IDE extensions used to inject malware via force-push, leaving no PR trail or visible audit log. If your teams use VS Code or Cursor extensions, audit them now and enforce branch protection blocking force-pushes.

Security
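As a starting point for that audit, here's a sketch that checks whether a branch blocks force-pushes via GitHub's branch-protection REST API. The repo names and the `GITHUB_TOKEN` environment variable are assumptions you'd swap for your own; note the endpoint returns HTTP 404 if the branch has no protection rule at all.

```python
# Sketch: audit whether a branch's protection rule blocks force-pushes,
# using GitHub's branch-protection REST API. Repo names and the
# GITHUB_TOKEN env var are placeholders for your own setup.
import json
import os
import urllib.request

def force_push_allowed(protection: dict) -> bool:
    """GitHub returns {"allow_force_pushes": {"enabled": bool}, ...};
    a missing key means the setting is off (force-pushes blocked)."""
    return protection.get("allow_force_pushes", {}).get("enabled", False)

def fetch_protection(owner: str, repo: str, branch: str = "main") -> dict:
    # Raises HTTP 404 if the branch has no protection rule at all --
    # treat that as a finding too, not just an error.
    url = f"https://api.github.com/repos/{owner}/{repo}/branches/{branch}/protection"
    req = urllib.request.Request(url, headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires network access and a GITHUB_TOKEN with repo scope):
#   prot = fetch_protection("your-org", "your-repo")
#   print("ALLOWS force-push" if force_push_allowed(prot) else "blocks force-push")
```

Loop this over your org's repos and flag anything where protection is absent or `allow_force_pushes` is enabled; pairing it with an extension inventory covers both halves of the GlassWorm attack path.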

06

Mistral Releases Leanstral: Open-Source Formal Verification Agent for Lean 4

First open-source agent for Lean 4 proof engineering. 6B active params under Apache 2.0, beats Claude Sonnet on formal proof benchmarks at 93% lower cost. Niche today, but if formal verification of AI-generated code becomes standard, this is early infrastructure.

Open Source

The Brief

NVIDIA Is Selling You an Operating System

By Adam Placker  ·  2 min read  ·  OPINION

Jensen Huang unveiled seven chips at GTC this week. The most important one is the chip NVIDIA didn't design. Three months after closing the $20B Groq acqui-hire, Groq's SRAM-based LPU is already integrated into Vera Rubin, claiming 35x better tokens-per-watt. That integration speed tells you everything about what NVIDIA is actually building.

The keynote pitch was inference efficiency - Vera Rubin delivering 10x throughput per watt at one-tenth the cost per token versus Blackwell. Great numbers. But the real product isn't a chip. It's a stack. Custom silicon (Groq LPUs), inference orchestration (Dynamo), an agent runtime (NemoClaw) - each layer designed and tuned to work best with the others. NVIDIA isn't selling GPUs anymore. It's selling an operating system for inference.

This is the Mellanox playbook running a second time. Find the layer where the next bottleneck lives, buy it, integrate it, make the full stack the product. And the timing tracks. Inference now accounts for 85% of enterprise AI budgets, driven by agentic loops, RAG pipelines, and always-on deployments. It went from a line item to the line item, and NVIDIA is planting itself at the center of where the money actually gets spent.

 

The question for CTOs isn't whether Vera Rubin is impressive. It's whether you're buying infrastructure or subscribing to someone else's roadmap.

But here's what the keynote glossed over. Custom ASIC shipments from cloud providers are projected to grow 44.6% in 2026 - nearly triple the growth rate of GPUs. Google, Meta, Amazon, and Anthropic are all building inference-specific silicon. Midjourney moved from H100s to TPU v6e and cut monthly inference spend by 67%. Suddenly the Groq deal looks less like offense and more like defense - absorbing the best inference-specific hardware before it became someone else's moat.

The open-source framing of Dynamo is the clever bit. It gives NVIDIA plausible "we're not locking you in" cover, but the hardware optimization runs so deep that open-source-on-NVIDIA-hardware is still NVIDIA-dependent. Cursor, Perplexity, PayPal, and Pinterest are already running production workloads on it. Those are real switching costs accumulating in real time.

If you're evaluating inference this quarter, NVIDIA's integrated stack will almost certainly benchmark best. Just know what you're signing up for. Every layer you adopt makes the exit harder - and that's not a bug in their strategy, it's the entire point.

Hidden Gem


Thanks for reading today’s edition of CTO Mode. If you’d like to advertise to our readers, please reach out.
