
Summary

We think the easing of AI hardware supply chain constraints throughout 2025 will shift the investor debate toward whether demand for AI compute and software applications in the coming years will be sufficient to justify the current ramp in infrastructure build-out. In this Q-Series report, we have gathered feedback from across the UBS Research Department and incorporated UBS Evidence Lab data to assess whether demand from model providers, consumers, and enterprises can drive an attractive return on AI CapEx and further support the "AI trade" into 2026 and beyond. Bottom line: we conclude that it will, with enterprise demand the laggard that we examine in detail.

Three Pillars of AI Demand

We see three primary sources of AI demand: 1) training for LLM/model providers; 2) inference for consumer-facing products; and 3) the building and management of enterprise AI applications. We approach this question from several angles. First, we present a qualitative assessment of the durability of demand and where we may see disappointment. Second, we introduce a quantitative, model-based framework rooted in the compute capacity required by each AI use case. Third, we present a bottom-up assessment by canvassing the views of analysts across the UBS Research Department on how AI products are being deployed. Fourth, we lean on the UBS Evidence Lab to track conference call mentions of AI and related terms as a gauge of management's product development focus.
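The second angle above, a compute-capacity framework by use case, can be sketched in miniature as follows. This is an illustrative toy model only: the per-GPU throughput, utilization rate, query volumes, and per-query compute figures are hypothetical placeholders chosen for readability, not the report's actual estimates.

```python
# Toy sketch of a use-case-based compute-demand framework.
# All numeric inputs are hypothetical placeholders, not report estimates.

FLOPS_PER_GPU = 1e15   # assumed effective throughput per accelerator (FLOP/s)
UTILIZATION = 0.4      # assumed average fleet utilization

# Hypothetical daily inference volumes and per-query compute costs.
USE_CASES = {
    # name: (queries per day, FLOPs per query)
    "consumer_chat":   (1e9, 1e12),
    "enterprise_apps": (1e8, 5e11),
}

SECONDS_PER_DAY = 86_400

def required_gpus(queries_per_day: float, flops_per_query: float) -> float:
    """Accelerators needed to serve a workload at the assumed utilization."""
    sustained_flops = queries_per_day * flops_per_query / SECONDS_PER_DAY
    return sustained_flops / (FLOPS_PER_GPU * UTILIZATION)

if __name__ == "__main__":
    total = 0.0
    for name, (queries, flops) in USE_CASES.items():
        gpus = required_gpus(queries, flops)
        total += gpus
        print(f"{name}: ~{gpus:,.0f} GPUs")
    print(f"total: ~{total:,.0f} GPUs")
```

Scaling any input, such as the consumer query volume, propagates directly into the implied accelerator count, which is how a framework of this shape translates demand scenarios into infrastructure requirements.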

Key Conclusions

We remain constructive on AI demand trends. We conclude that demand to train new models, together with consumer inference workload growth on the back of ChatGPT's popularity and the continued rollout of products from Meta, Google, and Amazon, should sustain GPU demand for years to come. This should, in turn, drive demand for cloud infrastructure, hardware, and the other technology pulled along with this compute. We see enterprise AI spend as the primary source of risk: organizations are moving slowly, the ROI is less clear, and AI technology still needs to be architected to automate specific enterprise workflows and tasks. The risk is that the LLM and consumer demand engines cool before enterprise AI spend ramps, resulting in a temporary digestion phase. We attach a low probability to this scenario, non-zero but manageable.