The first movers in artificial intelligence (AI) have been the hyperscaler operators. This is partly because their businesses had progressed to the point where they needed AI: Google needed AI to optimize web search, Amazon to personalize its online retail offerings, and Facebook to enhance its activity feed, photo, and social media applications. The other reason is that the hyperscalers have the deep pockets to fund the high costs of AI research. These companies are now attempting to democratize AI technology and make it pervasive.

Data center infrastructure, specifically computing, memory, storage, and networking, is undergoing a reboot to support AI. Although AI represents only a small portion of a cloud data center's workload, and an even smaller portion of an enterprise's workload, it has a distinct application profile and therefore requires different architectures and components. Advances in technology have played a major part in enabling AI's expansion and market penetration. In turn, AI applications are driving the development of new silicon and system architectures, storage and networking options, and delivery models.

Meanwhile, Tractica's research indicates that enterprises are not abandoning on-premises computing. While the hyperscalers have been driving AI implementation in the cloud, there is corresponding demand for on-premises and colocated solutions from early adopter enterprises.

This Tractica report examines the business, consumer, and government AI applications that are driving requirements for AI infrastructure, especially the compute, storage, and networking functions in cloud and enterprise data centers. The report also catalogs the changing nature of the market, ecosystem, vendors, and technologies, including the underlying semiconductors powering the next generation of AI.
Market forecasts cover infrastructure hardware spending from 2018 to 2025, segmented by region, function, chipset, delivery model, and enterprise vertical.