Tech analyst Beth Kindig forecasts a 258% surge in Nvidia's stock, projecting a $10 trillion valuation by 2030, driven by its advanced Blackwell GPU and CUDA software platform. Nvidia's current success, with a $2.8 trillion valuation, is bolstered by its dominant position in AI chips and a broad market reach that includes the automotive sector. Kindig emphasizes Nvidia's "impenetrable moat" in the GPU market, arguing that its unique software integration will secure its leadership against competitors like AMD and Intel.
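As a quick sanity check on the figures above (a 258% gain from the current $2.8 trillion valuation), the implied 2030 valuation can be computed directly; the numbers below come straight from the article's claims:

```python
# Sanity-check Kindig's projection: a 258% surge from today's valuation.
current_valuation_t = 2.8   # Nvidia's current valuation, in trillions of dollars
projected_gain = 2.58       # 258% surge, expressed as a fraction

projected_valuation_t = current_valuation_t * (1 + projected_gain)
print(f"Implied 2030 valuation: ${projected_valuation_t:.2f} trillion")
# → Implied 2030 valuation: $10.02 trillion
```

The result lands at roughly $10 trillion, consistent with the stated target.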
Amazon has faced several challenges in developing AI GPU chips that can compete with Nvidia, as reported by Business Insider. Some of these challenges include:
Low adoption rates: Amazon's AI chips, Trainium and Inferentia, have seen low adoption among large cloud customers. Last year, Trainium usage among AWS's largest customers was just 0.5% of their Nvidia GPU usage. Inferentia, another AWS chip designed for the AI task known as inference, fared only slightly better, at 2.7% of the Nvidia usage rate.
Compatibility gaps: AWS's AI chips still have "compatibility gaps" in certain open-source frameworks, making Nvidia GPUs a more popular option.
Strong appeal of Nvidia's CUDA platform: Customers have faced "challenges adopting" AWS's custom AI chips, in part due to the strong appeal of Nvidia's CUDA platform. Early attempts by customers have exposed friction points and stifled adoption.
GPU shortage: Explosive demand for Nvidia chips is causing a GPU shortage at Amazon. An obvious response to this would be to have cloud customers use Amazon's AI chips instead. However, some of the largest AWS customers have not been willing to use these homegrown alternatives.
Performance and cost efficiencies: Some customers have told Amazon that Inferentia chips fall behind Nvidia GPUs in performance and cost efficiency, and these performance issues have been escalated internally.
These challenges have put millions of dollars in cloud revenue at risk for Amazon and have made it difficult for the company to break Nvidia's grip on the AI chip market.
According to Beth Kindig's predictions, Nvidia's next-generation Blackwell GPU is expected to surpass its predecessor, the H100, in both performance and revenue generation. Kindig estimates that the Blackwell GPU will generate data center revenue of $200 billion by the end of Nvidia's fiscal year 2026. This would be driven by the massive growth in the adoption of Blackwell GPU chips, along with Nvidia's CUDA software platform and its exposure to the automotive market.
In terms of performance, the Blackwell GPU is expected to enable trillion-parameter-plus large language models, which is where big tech companies are heading. This would translate into a very large hardware data center segment for Nvidia. While specific performance metrics have not been cited, Kindig clearly expects the Blackwell GPU to significantly outperform the H100 and to sustain Nvidia's dominance in the AI and data center market.