The need for artificial intelligence (AI) acceleration is widely recognized as of 2020. AI acceleration chipsets have become a standard requirement for device manufacturers in both the enterprise (data center) and edge markets, and the volume and revenue of AI chipsets have grown drastically over the last two years. NVIDIA's latest A100, for example, offers petaOPS-scale compute performance under certain conditions, a tremendous jump from the DGX-1 server introduced just two years earlier.

Deep learning (DL) is moving past its hype cycle as proof-of-concept (PoC) AI applications developed over the past two years go into production. AI chipset customers have become more sophisticated about their acceleration needs and now ask vendors for specific benchmarks. These customer requirements are coming to the forefront, forcing chipset companies to rethink the applicability of their technology. All prominent chip companies, including Intel, NVIDIA, and Qualcomm, have invested heavily in AI, and cloud providers have started rolling out graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), giving developers a choice of AI accelerators. Omdia forecasts that global revenue for DL chipsets will increase from $11.4bn in 2019 to $71.2bn by 2025.

This Omdia Market Report assesses the industry dynamics, technology issues, and market opportunity surrounding DL chipsets, including CPUs, GPUs, FPGAs, ASICs, and SoC accelerators. As an update to Omdia's 2019 Deep Learning Chipsets report, it captures the state of this fast-moving chipset market. Global market forecasts, segmented by chipset type, compute capacity, power consumption, market sector, and training versus inference, extend through 2025. Omdia also provides profiles of 23 key industry players.
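The revenue forecast implies a compound annual growth rate (CAGR) that can be checked with a short calculation. The figures ($11.4bn in 2019, $71.2bn in 2025) come from the report; the CAGR itself is derived here only as an illustration, not quoted from Omdia:

```python
# Implied compound annual growth rate (CAGR) for the DL chipset forecast.
# Revenue figures are from the report; the CAGR is derived, not quoted.
start_revenue = 11.4   # $bn, 2019
end_revenue = 71.2     # $bn, 2025
years = 2025 - 2019    # 6-year span

# CAGR = (end / start)^(1 / years) - 1
cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 36% per year
```

This back-of-the-envelope growth rate is consistent with a market still in a rapid expansion phase.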