Omdia view

Summary

KubeCon Paris, which took place March 12–15, 2024, had a lot to say about artificial intelligence (AI) workloads on the cloud-native technology stack, such as how to exploit the latest GPU resource allocation, the formation of the CNCF AI Working Group, and the first Cloud Native AI Day colocated event at KubeCon. Developers want to run AI workloads on Kubernetes, and AI can be used to help run Kubernetes. We look at new-generation projects added to the CNCF; AI, in its many aspects, is clearly gaining mindshare within the foundation.

Analyst view

The KubeCon Paris opening keynote by CNCF executive director Priyanka Sharma highlighted a FinOps Foundation survey reporting that enterprises spending $100m or more on the cloud saw their AI/ML costs increase by 45%, a key concern. AI training on GPUs is a significant expense, and enterprises that choose to train their own models face rising costs. In addition, the push to automate more processes and imbue more services and products with AI, exploiting the latest generative AI (GenAI) technology, will drive AI-related costs higher still.
