NVIDIA

The Pick and Shovel of AI

v1
April 17, 2026

Founded: 1993 | HQ: Santa Clara, CA | Key People: Jensen Huang (CEO/Founder), Chris Malachowsky (Co-Founder), Colette Kress (CFO) | Market Cap: ~$4.5 trillion | FY2026 Revenue: ~$216 billion | Key Products: Blackwell B200/B300 GPUs, Vera Rubin architecture, NVIDIA AI Enterprise, CUDA

The Origin Story

NVIDIA was founded in April 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem at a Denny's diner in San Jose, California. The trio set out to build graphics chips for the nascent PC gaming market, and NVIDIA's early success was built on GeForce GPUs that made 3D gaming accessible to consumers. The company went public in 1999, and in 2000 it won the contract to supply the graphics chip for Microsoft's original Xbox, which shipped in 2001.

The pivotal strategic insight came in the mid-2000s, when NVIDIA realized that the parallel processing architecture that made GPUs excellent at rendering graphics also made them uniquely suited to general-purpose computation. The 2006 launch of CUDA, a software platform that let developers use GPUs for non-graphics tasks, was the foundational bet that would take nearly two decades to fully pay off. For years, CUDA was a niche tool for scientific researchers and quantitative analysts. Then, in 2012, deep neural networks such as AlexNet demonstrated that GPU-accelerated training could produce AI systems of unprecedented capability. NVIDIA had, almost by accident, positioned itself at the center of the most transformative technology wave since the internet.

Key Milestones

The 2019 acquisition of Mellanox Technologies for $6.9 billion gave NVIDIA the high-performance networking essential for connecting thousands of GPUs in data center clusters. A planned $40 billion acquisition of ARM was abandoned in 2022 in the face of regulatory opposition, forcing NVIDIA to refocus on organic growth at exactly the moment the AI revolution erupted.

The H100 GPU, based on the Hopper architecture and launched in 2022, became the most sought-after semiconductor in history. Every major AI lab needed thousands of H100s, and demand far outstripped supply, driving gross margins above 75%. Revenue surged from $26.9 billion in FY2023 to $130.5 billion in FY2025, then to an estimated $216 billion in FY2026. Quarterly data center revenue reached $51.2 billion by Q3 FY2026, roughly six times the combined data center revenue of Intel and AMD.

The Blackwell architecture, launched in late 2024, represented a generational leap in AI compute density. At GTC 2026 in March, Jensen Huang announced the Vera Rubin architecture and projected combined lifetime sales of $1 trillion for Blackwell and Vera Rubin through 2027, underpinned by $500 billion in GPU demand booked in the prior year alone. NVIDIA's market capitalization reached approximately $4.5 trillion.

Competitive pressure is mounting, however. Microsoft, Meta, Amazon, and Google are all developing custom silicon to reduce their dependency on NVIDIA; custom chips already account for an estimated $20+ billion of annual revenue at Amazon alone. NVIDIA's share of the data center GPU market has declined from a peak of 92% toward the 80–86% range. The company also faces geopolitical risk: 100% of its high-end GPUs are manufactured by TSMC in Taiwan.

Current Position

NVIDIA is the indispensable infrastructure provider of the AI era, occupying a position analogous to Intel during the PC revolution—but with significantly higher margins and a more defensible software moat through CUDA. The CUDA ecosystem, developed over 20 years, has over 5 million registered developers and represents the industry standard for GPU programming, creating massive switching costs. NVIDIA's full-stack strategy spans hardware (GPUs, networking, DGX systems), software (CUDA, NVIDIA AI Enterprise, NIM microservices), and vertical solutions (healthcare, automotive, robotics). The company invested $26 billion in its software ecosystem to ensure that NVIDIA-native AI remains the default deployment target. The emerging Sovereign AI market—nations building domestic AI clouds—represents a major growth vector, with Japan, France, and Saudi Arabia among early customers.
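Those switching costs are concrete: accelerated applications are typically written directly against CUDA C++ and its thread/block execution model, so moving them to rival hardware means rewriting and revalidating the code. A minimal illustrative kernel, using only standard CUDA syntax (it requires NVIDIA's nvcc compiler and a CUDA-capable GPU to build and run), gives a flavor of the idiom:

```cuda
#include <cuda_runtime.h>

// Each GPU thread computes one element of the output array.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

// Host-side launch (device pointers d_a, d_b, d_c assumed already allocated
// and populated via cudaMalloc/cudaMemcpy): 256 threads per block, with
// enough blocks to cover all n elements.
//   vector_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```

Even this toy example uses NVIDIA-specific constructs (`__global__`, the `<<<...>>>` launch syntax, the CUDA runtime API), which is why a codebase of any size accumulates deep dependence on NVIDIA's toolchain.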

What Leaders Should Know

NVIDIA is not a vendor you choose—it is a vendor you work with by default, because every major AI model was trained on its hardware and every major framework is optimized for CUDA. The GPU scarcity that inflated pricing is easing as TSMC expands capacity and custom silicon matures, which should moderate costs over the next 18 to 24 months. Leaders planning large-scale deployments should evaluate NIM microservices for inference optimization and explore multi-vendor strategies. The Taiwan geopolitical risk—100% of high-end NVIDIA chips are fabricated by TSMC—warrants supply chain contingency planning. Watch the Vera Rubin launch in late 2026 as the next inflection point in AI compute capability.

This entry is part of the CXO Academy AI Encyclopedia — updated weekly.