Amazon Web Services
AI Infrastructure for Everyone
The Origin Story
Amazon Web Services launched in 2006 as a radical experiment: selling Amazon's internal infrastructure as a service to external developers. By the time the AI era arrived, AWS had become the world's largest cloud provider, generating over $100 billion in annual revenue with a 31% market share. But when OpenAI's ChatGPT ignited the generative AI revolution in late 2022, AWS found itself in an unfamiliar position—playing catch-up. Microsoft's exclusive partnership with OpenAI gave Azure a perceived AI advantage, and Google's internal DeepMind capabilities gave Google Cloud a strong narrative. Amazon's response was characteristically pragmatic: rather than building a single frontier model, AWS would build the infrastructure layer on which all AI models run, positioning itself as the Switzerland of AI computing. The strategy centered on Amazon Bedrock, launched in April 2023, which provides managed access to models from multiple providers—including Anthropic, Mistral, Cohere, and Amazon's own Titan models—through a single API.
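The "single API" claim is concrete: Bedrock's Converse API accepts the same request shape regardless of which provider's model is behind it, so swapping models is a parameter change rather than an application rewrite. A minimal sketch of that pattern follows; the model IDs are illustrative (current IDs should be checked in the Bedrock catalog), and the boto3 call is shown but not executed:

```python
# Sketch of Bedrock's model-agnostic request pattern.
# Model IDs below are illustrative examples, not guaranteed current.

def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Assemble the provider-neutral request body the Bedrock Converse API accepts."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# Switching providers is a one-argument change; the rest of the app is untouched.
claude_req = build_converse_request(
    "anthropic.claude-3-5-sonnet-20240620-v1:0", "Summarize our Q4 supplier risks.")
mistral_req = build_converse_request(
    "mistral.mistral-large-2402-v1:0", "Summarize our Q4 supplier risks.")

# With boto3 installed and AWS credentials configured, either request is sent
# identically (not executed here):
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   resp = client.converse(**claude_req)
#   text = resp["output"]["message"]["content"][0]["text"]
```

This is the optionality the multi-model pitch rests on: the application code above never names a provider outside of one string.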
Key Milestones
The September 2023 investment of $1.25 billion in Anthropic, expanded to $4 billion by year-end and ultimately to $8 billion in total, was Amazon's signature AI move. Structured as convertible notes, the investment has appreciated spectacularly: Amazon's Anthropic stake grew from $8 billion invested to $60.6 billion in value by Q4 2025, an unrealized gain of more than $52 billion. Anthropic selected AWS as its primary cloud provider, using Amazon's custom Trainium and Inferentia chips for training and inference. Amazon Bedrock became the centerpiece of AWS's AI strategy, offering enterprises a model-agnostic platform that avoids lock-in to any single AI provider. By early 2026, AWS AI revenue was running above $15 billion annually, nearly 260 times AWS's revenue at the same stage of its cloud lifecycle. Analysts project Anthropic's contribution to AWS revenue will grow from $3.9 billion in 2025 to $25 billion by 2027.

Amazon's custom silicon strategy also matured significantly. The annual revenue run rate for Amazon's chip business, spanning Graviton (general-purpose compute), Trainium (AI training), Inferentia (AI inference), and Nitro (virtualization and networking offload), exceeded $20 billion, growing at triple-digit percentages year over year. Trainium2, generally available in late 2024, offered cost savings that Amazon claims provide several hundred basis points of operating-margin advantage versus relying solely on NVIDIA GPUs.

Amazon committed $200 billion in AI capital expenditure for 2026, including $50 billion earmarked for expanding AI infrastructure for U.S. federal agencies. AWS revenue growth reaccelerated to 24% year-over-year in Q4 2025, driven partly by AI workload demand. The Anthropic partnership has become a core competitive differentiator, with Claude models available as first-class citizens on Bedrock.
Current Position
AWS's AI strategy is fundamentally about infrastructure and choice. Rather than competing with frontier model labs, AWS provides the compute, storage, and managed services that make AI deployment possible at scale. The Bedrock platform's model-agnostic approach appeals to enterprises wary of single-vendor dependency, while custom silicon offers cost advantages for high-volume inference workloads. Amazon Q, the company's AI assistant for business intelligence and software development, competes with Microsoft Copilot but has achieved lower market penetration. AWS remains the cloud market leader by a significant margin, and its AI revenue growth rate suggests it is successfully converting that leadership into AI-specific revenue.
What Leaders Should Know
If your organization runs on AWS, the path to AI adoption runs through Bedrock, and that is a defensible choice. The multi-model approach means you can switch between Claude, Mistral, and other models without rewriting your application, providing optionality in a fast-moving market. Amazon cites 30 to 40% cost savings for its custom Trainium chips over NVIDIA GPUs on compatible workloads, which matters at scale. The Anthropic partnership gives AWS customers privileged access to Claude, now the enterprise AI model of choice for many regulated industries. Leaders should note that AWS's AI revenue is growing faster than its overall cloud revenue, signaling that the AI layer is becoming the primary driver of new cloud commitments. Negotiate long-term committed-spend agreements now, as AWS is pricing aggressively to capture AI workloads.