This Omdia report covers a new arena of competition in the enterprise information and communications technology (ICT) industry. Technologies such as Kubernetes have made hybrid and distributed cloud infrastructures more manageable and operationally agile. The emerging Internet of Things (IoT) has led to a proliferation of connected sensors, as well as connected effectors such as robots. Meanwhile, 5G wireless networks have begun to appear within enterprises. These networks provide dramatically lower latency, enabling new applications in industrial control systems, robotics, autonomous vehicles, and augmented/virtual reality (AR/VR). As a result, migrating applications closer to the end user to fully benefit from that lower latency has become a competitive necessity.
The need for artificial intelligence (AI) acceleration is widely recognized as of 2020. AI acceleration chipsets have become a standard feature requirement for device manufacturers within the enterprise (data center) and edge markets. As a result, the volume and revenue of AI chipsets have increased drastically in the last two years. NVIDIA's latest A100 offers petaOPS-scale compute performance under certain conditions, a tremendous jump from the DGX-1 server introduced only a few years earlier. Deep learning (DL) is slowly moving past its hype cycle as proof-of-concept (PoC) AI applications developed in the past two years go into production. AI chipset customers have become more sophisticated about their needs for AI application acceleration and are asking vendors for specific benchmarks. These customer requirements are coming to the forefront, forcing chipset companies to rethink the applicability of their technology. All prominent chip companies, such as Intel, NVIDIA, and Qualcomm, have invested heavily in AI. Cloud companies have started rolling out graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), giving developers a choice for AI acceleration. Omdia forecasts that global revenue for DL chipsets will increase from $11.4bn in 2019 to $71.2bn by 2025.

This Omdia Market Report assesses the industry dynamics, technology issues, and market opportunity surrounding DL chipsets, including CPUs, GPUs, FPGAs, ASICs, and SoC accelerators. As an update to Omdia's 2019 Deep Learning Chipsets report, it captures the state of this fast-moving chipset market. Global market forecasts, segmented by chipset type, compute capacity, power consumption, market sector, and training versus inference, extend through 2025. Omdia also provides profiles of 23 key industry players.
The first movers in artificial intelligence (AI) have been the hyperscale operators. This is partly because their businesses had progressed to the point where they needed AI: Google to optimize web searches, Amazon to customize its online retail offerings, and Facebook to enhance its activity feed, photo, and social media applications. The other reason is that the hyperscalers have the deep pockets to fund the high costs of AI research. These companies are now attempting to democratize AI technology and make it pervasive. Data center infrastructure, specifically computing, memory, storage, and networking, is going through a reboot to support AI. Though AI represents just a small portion of a cloud data center's workload, and an even smaller portion of an enterprise's workload, it drives a different type of application profile and thus requires different architectures and components. Advances in technology have played a major part in enabling AI expansion and market penetration. In turn, AI applications are driving the development of new silicon and system architectures, storage and networking options, and delivery models. Meanwhile, Tractica's research indicates that enterprises are not abandoning on-premises computing. While the hyperscalers have been driving AI implementation in the cloud, there is corresponding demand for on-premises and colocated solutions from early adopter enterprises.

This Tractica report examines the AI applications in business, consumer, and government that are driving requirements in AI infrastructure, especially the compute, storage, and networking functions in cloud and enterprise data centers. The report also catalogs the changing nature of the market, ecosystem, vendors, and technologies, including the underlying semiconductors powering the next generation in AI. Market forecasts include infrastructure hardware spend from 2018 to 2025 segmented by region, function, chipset, delivery model, and enterprise vertical.
The quantum computing market is small but rapidly maturing from a technology perspective. However, for quantum computing to succeed, the market for suitable applications must also mature. Quantum computing scientists will tell you that, despite the progress, the question remains one of if, not when, quantum computing will become mainstream. This outlook may make quantum seem more relevant to a 2050 Trends to Watch than a 2020 one. However, the technology and the application use cases are evolving so fast that Ovum considers this the right time to shine a spotlight on quantum technology. This Ovum report examines important market trends in quantum computing and offers recommendations for enterprises and vendors. The report discusses use cases, implications for security, and the interface between classical and quantum computing.