Governance Concerns Outstripped by AI Ambition
Investors face mounting technology risks as policymakers are urged to agree a global governance framework ahead of an AI summit in February.
Policymaker and investor efforts are failing to address the risks emerging from AI and ensure suitable governance of the technology, as safety concerns slip behind economic opportunities for many investing in the industry.
As AI becomes more widely adopted, the risks facing investors and their portfolios span the ESG spectrum. These include social harms such as bias and misinformation (including deepfakes), data privacy breaches, infringement of intellectual property, misuse for malicious purposes, and environmental concerns.
Both investors and policymakers understand the risks that AI poses, but in the eyes of many these are outweighed by the opportunities, including economic benefits and reduced ESG due diligence costs. According to PwC, AI could contribute US$15.7 trillion to the global economy by 2030.
“Between 2019 and 2023 there was a flurry of institutional investors such as Fidelity International, and proxy advisors like the Ethos Foundation, that developed expectations for issuers to comply with regards to digital ethics [and] there were even initiatives such as those from Federated Hermes and HSBC Asset Management which spoke directly to AI governance,” Charles Radclyffe, Partner at ESG data company EthicsGrade, told ESG Investor.
“While it felt during that time that momentum was building, the advent of generative AI, which OpenAI brought to the mainstream, has seemingly subdued investor concerns over the risks and instead fired up their enthusiasm for the opportunities. I think it’s far more likely to see institutional investors launch engagement on AI adoption than it is on AI governance [and] the private markets are falling over themselves to adopt AI.”
While some investors have prioritised opportunities over the myriad risks, shareholders at large tech companies have attempted to hold them to account on the development of AI.
Last year, a shareholder proposal at Apple requesting greater transparency on AI received 37.5% of votes cast, while proposals at Meta and Alphabet seeking to curtail generative AI’s role in amplifying misinformation also highlighted rising investor concerns about the technology’s rapid development.
While the scale and aims of AI-related proposals for the 2025 proxy season are yet to fully emerge, proxy advisor Glass Lewis has adopted a new policy on board oversight of the use or development of AI, which will affect its more than 1,300 clients, representing US$40 trillion in AUM.
Fragmented governance objectives
Tomas van der Heijden, CEO at AI-focused software company Briink, stressed that “clear” governance frameworks are “essential” to ensuring transparency, accountability, and fairness in AI systems.
“This includes mandating environmental impact assessments, bias mitigation strategies, and safeguards to ensure AI is used ethically and does not perpetuate harm,” he said. “Weak governance frameworks or divergent standards across regions could allow harmful applications of AI to proliferate, increasing environmental and ethical risks.”
In his first week as returning President, Donald Trump rescinded an executive order, signed during the Biden administration, that required developers of AI systems posing risks to US national security, the economy, or public health or safety to share the results of safety tests with the government before their public release.
Van der Heijden said that harmonised AI governance across regions, including sustainability requirements, was key to “foster[ing] innovation while ensuring that AI systems align with global climate and social objectives”, but the decision seemed to signal further fragmentation in global AI regulation and policy.
Meanwhile, the UK has looked to position itself as a leader in AI development. Last week, Prime Minister Keir Starmer introduced the AI Opportunities Action Plan as a “blueprint to turbocharge AI”, including dedicated AI Growth Zones to accelerate planning for AI infrastructure, while three tech companies committed £14 billion (US$17.5 billion) of AI infrastructure investment in the UK.
The UK government looked to foster further trust in AI in November with the launch of an AI assurance platform to help businesses across the country identify and mitigate the potential risks and harms posed by the technology. The UK’s AI assurance market currently employs more than 12,000 people and is worth more than £1 billion, and the government believes it could grow to £6.5 billion by 2035.
In November 2023, the UK hosted the AI Safety Summit bringing policymakers and tech firms together to address the potential harms emerging from AI. At the summit, 28 countries – including the UK and US – signed the Bletchley Declaration, which recognised the “urgent need” to understand and collectively manage potential AI risks.
Next month, France will host the AI Action Summit – an event building directly on the UK AI Safety Summit and the AI Seoul Summit, which the UK co-hosted last May – which will be attended by policymakers and tech companies. A key objective of the summit is to establish a global governance framework for AI, with the organisers describing the current governance of AI as “piecemeal”.
“The absence of coordination [and] multiple parallel initiatives creates a complex landscape that is often insufficiently inclusive and where frameworks sometimes compete,” the event’s website read. “The aim is clear: to build a consensus on the global governance framework for AI, with and for all parties.”
Just seven countries worldwide currently participate in all of the major international AI initiatives, while 119 are entirely absent from them, highlighting the scale of the challenge. The event also noted the need to closely involve private stakeholders and civil society to “define a common international AI governance architecture”.
“Diverging approaches between the US, UK, and EU create a lack of cohesion in AI governance, making it challenging to establish global standards for sustainable AI development [and] creating compliance risks and challenges for investors managing international portfolios,” said van der Heijden. “Harmonizing AI governance across regions, including sustainability requirements, will foster innovation while ensuring that AI systems align with global climate and social objectives.”
The evolving risks of AI are also a priority for the Association of Southeast Asian Nations, with the ten-member bloc this week launching an expanded guide to help member states navigate the risks associated with AI and generative AI.