
Omdia Research Highlights Global Consensus on Regulating High-Risk AI Scenarios

Jun 13, 2024


LONDON, June 13, 2024: Analysis from Omdia’s new report AI Regulation: Comparing Global Policies and Regulatory Frameworks shows that several countries have already initiated efforts to regulate AI through public consultations, market reviews, global discussions, and draft laws and policies; however, there is still some way to go before many of these are finalized and enforced. A consensus is beginning to emerge on the need to regulate the use of AI in high-risk situations, such as healthcare settings.

“New technology adoption often drives debate, but the AI debate has grown especially quickly, with discussions swiftly becoming mainstream among the general public,” said Sarah McBride, Principal Analyst, Regulation, at Omdia. AI technologies are proving so controversial that even AI companies themselves are calling for regulation to provide them with clear boundaries to work within. “Some form of regulation is inevitable, but in the absence of guidelines, companies are creating their own standards frameworks for developing, assessing, and deploying responsible AI. However, without knowing exactly what the regulations are going to look like in the future, there is a chance that these companies could be making the wrong choices now, which could be challenging to reverse in future,” McBride suggests.

Singapore has already released its second National AI Strategy, while the European Commission was the first regulator to publish a draft AI regulatory framework in April 2021. It adopted a risk-based approach that imposes prohibitions on AI systems based on security risk levels, while the regulatory action plans of some other countries, such as the UK, stipulate a sector-specific approach for AI regulations. Meanwhile, in the US, the initial approach to AI regulation is focused on specific AI use cases, and the National Institute of Standards and Technology has been developing standards for the design, testing, and deployment of AI technologies. China has also been making progress with its AI regulatory frameworks, prioritizing aspects such as generative AI and focusing on sovereignty when it comes to AI development, deployment, and security.

Seven key AI regulation challenges

Omdia’s report identifies seven key challenges that regulators must address to realize the many opportunities from AI development: safety, privacy, ethics, controllability, transparency and accountability, security, and copyright and IP laws. Regulators have begun to tackle some of these issues, and most of the guidelines issued so far focus heavily on the ethical and legal questions associated with AI implementation. This work is expected to accelerate in the coming year as regulators turn to sector-specific laws and to areas where AI applications may conflict with existing regulatory policies, such as data protection laws, copyright laws, outdated user-consent methods, and licensing agreements or permits.

“The first step should be to identify the sectors or use cases where AI adoption is significantly held back by inappropriate legislation, and to amend those existing or outdated regulations to avoid stifling innovation,” McBride said. Another challenge facing the sector is the potential tension between regulating AI directly, e.g., through the EU AI Act, and regulating it through other existing and new legislation, such as the EU’s DSA and DMA.

“The broad nature of AI presents a problem for regulation and for protecting end users from harm. Not only does it blur the traditional definition of markets, which challenges enforcement; it also transcends administrative boundaries internationally. It is also difficult for regulators to predict when a particular outcome of an AI service will be malicious. This has prompted governments and regulators around the world to accelerate efforts to assess what level of regulatory involvement is required,” said McBride.
