Omdia view

Summary

Omdia reports from the Responsible AI Leadership Summit 2023 by Credo AI, where thought leaders and business practitioners shared their perspectives and advice on a key topic: practical implementation of responsible AI in the enterprise.

Let’s make responsible AI real

“Making It Real” was the topic of this year’s Responsible AI Leadership Summit by Credo AI. This was an invite-only event that brought together more than 100 of Credo AI’s customers and partners. Participants included executives and practitioners involved in artificial intelligence (AI) policy, governance, and compliance across a range of industries, as well as members of the European Parliament, think tanks, standards development organizations, civil society groups, and select press. They came together to discuss topics ranging from responsible AI in government to global standards and regulations to the social impacts of AI. The key theme of the event, though, was AI governance infrastructure—the issues, challenges, tools, policies, and change management involved in operationalizing responsible AI (RAI) in the enterprise—especially in light of generative AI.

Credo AI’s Founder and CEO Navrina Singh set the tone: “As business leaders, we don’t have a choice,” she said. “Embracing AI is a business imperative. We do, however, have a choice in how to do it: responsibly, with appropriate guardrails in place. In fact, responsible AI is not an option but a must; moreover, it is a competitive advantage. The winners of the generative AI race will be the organizations that invest in AI safety and governance and in building solutions that engender customer and public trust and internal stakeholder confidence.”

Omdia’s research bears this out:

  • Our Generative AI Software Market Forecast – 2H23 anticipates that generative AI will add more than $6.7 billion to the AI software market in 2023 alone and over $56 billion in the following five years.
  • Omdia’s 2023 AI Market Maturity Survey confirmed that market adoption of AI is firmly in the early majority phase, when innovation becomes self-sustaining and accelerates rapidly. (AI reached critical mass last year.)
  • Enterprises have enthusiastically embraced generative AI: a whopping 38% of respondents in Omdia’s 2024 IT Enterprise Insights reported that they had either already fully adopted generative AI (13%) or were implementing it (25%). A further 26% are currently pilot testing generative AI applications. Only 10% said they had no interest in generative AI.
  • Meanwhile, consumers and the public are increasingly aware of AI risks and harms. In Omdia’s Consumer AI Survey: 2023, the top five concerns about generative AI are work displacement (52%), fraud (50%), misinformation (48%), threats to data privacy (44%), and negative impacts on education (39%).

Governments, of course, are working on regulatory frameworks to mitigate risks and harms from AI. (See further Omdia’s Artificial Intelligence (AI) Regulations, Policies, and Strategies: Case Studies – 2023 and The UK AI Safety Summit: What to expect and first impressions.)

But how? Early adopters show the way

Responsible AI practitioners from Accenture, AWS, Booz Allen Hamilton, Cisco, Mastercard, Northrop Grumman, and PwC (among others) offered their insights and substantial, informative, and practical advice, grounded in experience and lessons learned from implementing AI governance within their own and client organizations. Among the key takeaways they shared are:

  • Responsible AI implementation is in its infancy: few know what it is or how to do it, and awareness is low in most organizations.
  • The speed of scaling and rapid innovation in AI make it difficult to operationalize governance, but responsible AI is a competitive differentiator and an enabler of innovation. Seat belts offer a useful analogy: now a standard feature in all cars, they have a long history of adoption and save thousands of lives every year.
  • For responsible AI to succeed, it must be grounded in enterprise strategy and ultimately in business value that starts with use cases. Organizations should focus on business challenges and whether AI can help solve those, not on “how do we leverage this technology and make money from it.”
  • Use cases are also a critical context to keep in mind when comparing, evaluating, and managing AI risks, as risk profiles could differ greatly from one use case to another.
  • A responsible AI program should have a cross-functional perspective and engage diverse stakeholders, including compliance, analytics, risk management functions, and procurement.
  • AI governance increasingly looks like vendor due diligence. With AI being embedded in a wide range of enterprise and line-of-business applications, the onus of risk management has shifted: it now involves procurement teams, who unfortunately lack critical skills and knowledge. Broad availability of third-party generative AI models and enterprises’ appetite to experiment add further urgency. (One example: what does the contractual language say about which data your organization will be giving away during model training and/or via prompts?)
  • Agility and iteration are key in implementing a responsible AI program. It will take some time to get the program and its toolbox right. Policies, procedures, internal guidance, and checkpoints—as well as awareness—will continue evolving and will therefore need to be updated regularly. They should be treated as living documents and works in progress. “Don’t let the perfect be the enemy of the good” was one piece of advice. The important thing is to commit and get started.
  • Feedback from AI developers is particularly important: these workers apply policies, guidelines, and tools on a daily basis and hence need highly specific guidance. AI developers must be able to differentiate the “showstoppers” from the “nice to haves.” A responsible AI program must therefore actively solicit developer feedback and act on it promptly.
  • A responsible AI program is not “one and done.” Ongoing monitoring and third-party tools will be needed. Understanding and appreciation of risks from AI will continue to change and evolve, as will AI regulation and standards, so organizations must remain vigilant and proactive.

Lean in and leverage existing structures

Governance and compliance are nothing new in the world of business. While the number of governance structures and the degree of competence vary from one organization to another, a modern enterprise is familiar with several of them: corporate, IT, architecture, application development, data, analytics, and cybersecurity governance, among others. Then there are sector-specific frameworks, such as model risk management in financial institutions. In fact, so many governance models and frameworks have sprung up over the years that in some organizations the word “governance” has acquired a negative connotation.

Given this wealth of frameworks and teams (and, hopefully, experience), standing up yet another dedicated team to govern AI may be redundant. Instead, organizations should leverage existing programs, resources, knowledge, and awareness. By focusing these people and resources on AI, an enterprise can draw on their expertise, have them educate others, and turn them into vectors for responsible AI adoption. So lean in.

Appendix

Further reading

Generative AI Software Market Forecast – 2H23 Analysis (August 2023)

AI Market Maturity 2023 (April 2023)

IT Enterprise Insights: IoT, Cloud, AI, 5G, and Sustainability – 2024 (October 2024)

Consumer AI Survey: 2023 – Generative AI Analysis (August 2023)

Artificial Intelligence (AI) Regulations, Policies, and Strategies: Case studies – 2023 (August 2023)

The UK AI Safety Summit: What to expect and first impressions (October 2023)

Author

Natalia Nygren Modjeska, Research Director, AI & Intelligent Automation

[email protected]