The third and final instalment of this blog series, led by Omdia Chief Analyst Eden Zoller, focuses on AI governance and the growing emphasis on responsible AI. This blog highlights the frameworks, policies, and best practices organizations are adopting to ensure ethical AI deployment. Explore how enterprises are addressing accountability, transparency, and trust to balance innovation with responsibility.
Enterprises are facing significant challenges in keeping pace with the rapidly evolving landscape of AI regulations. According to Omdia’s 2024 AI Market Maturity Survey*, less than half of the enterprises in the survey are compliant with existing AI regulations or are actively working towards this. This gap is a concern, given that a growing body of AI regulations is already in place, notably the EU AI Act. Moreover, enterprise adoption of voluntary initiatives to support responsible AI (RAI) is patchy and inconsistent, with no single effort—such as industry standards, voluntary principles, continuous evaluations, or impact assessments—achieving over 50% implementation. This is disappointing given that most enterprises voice support for RAI.
AI regulatory developments are accelerating at a swift rate. The European Union’s AI Act came into force on 1 August 2024, and although full compliance is not due for two years, there are rolling obligations, with bans on unacceptable-risk AI systems taking effect within six months of the act coming into force (i.e., February 2025). Rules governing general-purpose AI models apply after 12 months, and obligations for AI embedded into regulated products must be met after 36 months. Just over a third of enterprises in Western Europe report being compliant with AI regulations or working towards it, considerably lower than in Eastern Europe (49%). AI regulations and compliance are clearly proving a challenge for enterprises, and 35% of respondents cite these issues as a major obstacle to adopting and scaling AI. Enterprises should treat regulatory compliance as a critical capability in supporting AI, one that needs to be established in tandem with other, more technical components. Companies with weak support for AI regulations make themselves vulnerable to operational risks, eroded customer trust, and reputational damage. Moreover, AI compliance failures hit the bottom line: fines and penalties, costs attached to court cases, and outlays associated with retrofitting deployed AI systems to align with regulations.
When it comes to voluntary initiatives for supporting responsible AI, adherence to industry standards is the most widely adopted approach by enterprises in the survey (49%). Industry standards are usually rigorous and respected, are important enablers of assurance for AI, and help engender trust. There is a groundswell of work on AI standards, including those from the Institute of Electrical and Electronics Engineers (IEEE) Standards Association via its Autonomous and Intelligent Systems (AIS) workstream, and from the International Organization for Standardization (ISO). The recently published (February 2023) ISO/IEC 23894 offers guidance for managing risks connected to the development and use of AI.
Voluntary AI principles are popular with enterprises, both for principles designed in-house (46%) and external principles (45%) provided by organizations such as the Organisation for Economic Co-operation and Development (OECD). Voluntary principles can be a useful way of complementing or supplementing regulations and statute law that are still in development or slow moving. Voluntary principles can also be flexible and responsive: voluntary codes can adapt to rapidly evolving AI developments and ethical concerns in a way the law often cannot.
However, voluntary AI principles can suffer from being high level and vague, making them difficult to operationalize and put into practice. There is limited accountability because voluntary principles are not legally enforceable, and in-house principles can be interpreted in ways that are self-serving.
Impact assessments are used by 41% of enterprises in the survey. AI impact assessments are meant to provide a systematic process to evaluate the potential impacts of AI systems on users and other defined stakeholders, and are typically focused on identifying potential harms, risks, and other negative impacts. Impact assessments are useful tools but have limits: for example, it can be challenging for them to account for all contexts and use cases. The capabilities and behavior of advanced machine learning (ML) models can change over time, sometimes in unpredictable ways, so an impact assessment conducted at a single point in time may not capture model evolution and changing impacts. These limitations mean that impact assessments should be used in conjunction with other mechanisms for responsible AI and AI governance. Besides those mechanisms already discussed, other approaches include value-sensitive design and off-the-shelf commercial tools to support responsible AI. However, the latter initiatives are supported by, respectively, only 41% and 38% of enterprises in the survey.
The survey makes it clear that enterprise efforts to ensure responsible AI and AI compliance need to improve. Many vendors already provide AI governance solutions, from long-standing enterprise partners like IBM through to responsible AI specialists like Credo AI. Vendors can fortify their AI governance propositions by providing solutions aligned with unique regulatory compliance requirements based on industry verticals, country-specific legislation, and even specific AI use cases. Vendors should also flex AI governance solutions to better support smaller enterprises, which the survey shows are finding AI regulatory compliance particularly challenging.
* The Omdia 2024 AI Market Maturity Survey was completed in August 2024 among 478 enterprises from across the major global regions, key industry verticals, and companies of different sizes based on revenue.
Further reading
AI Market Maturity 2024 Survey: Data Tool (October 2024)
2024 AI Market Maturity Survey Analysis: Budgets and ROI (November 2024)