Boards talking AI but not governing it: Investors face ‘unpriced risk’ ahead of proxy season

A lack of meaningful oversight of artificial intelligence at board level is creating a potentially “unpriced risk” in portfolios, according to new data from the AI Company Data Initiative (AICDI). 

As investors head into a proxy voting season in which AI governance is expected to come under sharper scrutiny, the AICDI released a report, Responsible AI in practice: 2025 global insights, which drew on data from almost 3,000 global companies across 11 sectors.

The findings suggested a widening gap between companies’ AI ambitions and their governance frameworks. While 40% of companies report board or committee-level oversight of AI, fewer than a third (31%) can evidence a dedicated team or resource responsible for AI governance.

See also: Growing demand for ‘responsible AI’ amid governance risk concerns

Governance gap behind the headlines

At first glance, corporate adoption of AI appears to be gathering pace, with 44% of companies disclosing they have an AI strategy. However, the quality of governance underpinning those strategies is less convincing, the report said.

Just 13% of companies publicly commit to a recognised AI governance framework or standard – a figure that points to a lack of alignment with emerging global norms and best practice. Further, only 2.7% report having a formal AI model registry, a basic tool for tracking and managing AI systems.

Commenting on the findings, Katie Fowler, director of responsible business at the Thomson Reuters Foundation, said the issue for investors has moved beyond awareness. “The findings suggest that the challenge of responsible AI for investors is no longer awareness but ensuring good governance in practice,” she said.

Workforce and operational risks under the radar

The report also highlighted a lack of preparedness at the workforce level, which could have knock-on effects for productivity and margins.

Only 31% of companies disclose offering AI-related training or reskilling to employees, suggesting many organisations may struggle to fully capitalise on their AI investments. Meanwhile, just 14% have policies in place to protect workers from AI-related risks such as bias or surveillance.

More concerning, only 2.3% of companies report having a formal complaints mechanism for AI-related issues – limiting visibility over potential problems before they escalate.

These gaps point to latent operational risks that may not yet be reflected in valuations.

See also: The ethics of AI 

‘Black box’ risk for investors

Transparency around AI remains a central issue, yet just 12% of companies disclose policies to ensure human oversight of AI systems, raising the prospect of “black box” decision-making with limited accountability.

Eva Cairns, head of responsible investment at Scottish Widows, warned this could have significant implications for investors. “While AI provides great opportunities for innovation and efficiency, it is also a major governance challenge, especially if AI decision-making is a ‘black box’ with little oversight and accountability,” she said.

“This can present considerable risks to companies and investors.”

The findings also suggest that responsible AI is still being interpreted narrowly through a compliance lens. Only 7% of companies report conducting human rights impact assessments, and just 5% undertake ethical impact assessments, indicating limited visibility over broader social risks.

See also: AI: Reality, risk and returns in the next tech revolution
