
Shaping the future of ethical AI

The development and deployment of safe and responsible AI is one of the defining challenges of our time. These technologies offer transformative potential but also come with serious risks, which is why ethical AI has been one of our top stewardship priorities since 2022. From algorithmic bias and misinformation to cybercrime and loss of privacy, the implications for individuals, societies and markets are profound.

Google's Gemini, like other AI models, has become a target for nation-state hackers attempting to bypass safeguards and steal sensitive data. Incidents like this highlight how fast artificial intelligence (AI) risks are evolving and why responsible governance cannot wait.

Investors have a critical role to play in ensuring that AI is developed and used responsibly. Through active engagement, voting and coalition-building, we are pressing companies to improve transparency, adopt stronger safeguards and protect long-term value. Our focus is to ensure that governance systems and resulting guardrails keep pace with the rapid evolution of AI.

Through contributions to public consultations and the frameworks that shape global norms, we aim to move companies from words to action: from publishing AI principles to implementing meaningful safeguards that stand up to scrutiny.

Alphabet: A call for greater oversight

Alphabet is widely viewed as a front-runner in AI development. It has established ethics councils, published responsible AI frameworks, and co-founded the Frontier Model Forum to promote safe practices. Yet significant risks remain.

In May 2024, on behalf of the WBA Collective Impact Coalition, which we lead, we raised concerns about the malicious use of Alphabet’s Gemini model. Six months later, Google published a report showing how actors linked to China, Russia, Iran and North Korea had attempted to exploit Gemini for phishing, cybercrime and disinformation. While many of Google’s safeguards held up, the incident underlined the importance of ongoing risk assessment. We have since called for more systematic use of human rights impact assessments (HRIAs) and stronger oversight of YouTube’s content moderation practices.

As lead investors in the engagement with Alphabet, we followed up on our collective letter to senior executives, voted against multiple directors at the 2024 and 2025 AGMs, and supported AI-related shareholder proposals calling for improved governance and transparency. Alphabet responded with links to resources, including updates on its AI principles, privacy practices and red-teaming protocols. Several concerns nevertheless remain, and we are continuing our collective engagement.


Meta: A rollback of earlier reforms?

While Alphabet has shown a degree of transparency, our experience with Meta illustrates how quickly progress can unravel. In September 2024, we participated in a joint investor call with Meta’s governance team and, shortly afterwards, coordinated a letter, backed by $3.6 trillion in assets under management, urging greater clarity on content moderation, child safety, privacy and data training practices.

Meta initially responded constructively, but in early 2025 the tone shifted. Following changes at senior leadership and board levels, the company dismantled its fact-checking programme in the US and replaced it with Community Notes, a crowdsourced moderation tool with a mixed record. On X, independent analysis found that 74% of misleading posts remained uncorrected and that, even when notes were added, false content continued to receive far more views than the corrections.[i]

So, do Meta’s changes represent a rollback of earlier reforms? The lack of formal HRIAs and reduced transparency around data use have heightened our concerns. We are assessing whether the company remains committed to responsible AI governance and advocating for stronger protections against disinformation, bias and privacy breaches.

In May 2025, we pre-declared our votes at the Meta AGM: we voted against five of its directors on accountability grounds, as well as against the executive remuneration proposal and the auditor. We supported a one-year Say on Pay frequency and eight shareholder resolutions, including five calling for stronger commitments to ethical AI.

ServiceNow: Advancing shared responsibility

In contrast to the challenges encountered with some consumer-facing platforms, our engagement with enterprise software provider ServiceNow has been constructive and encouraging.

As lead investors in the WBA’s engagement on ethical AI, we raised concerns about how ServiceNow’s generative AI tools could be used in ways that introduce bias or leak user data. Following our outreach, the company published its Human-Centred Responsible AI Guidelines and shared details of its governance approach.

ServiceNow’s model emphasises shared responsibility, with mechanisms for customers to participate in AI risk assessment and mitigation. These include detailed model cards, human oversight systems, differentiated access controls and risk evaluation tools. We recommended greater clarity on board-level governance and encouraged the company to share real-world examples and independent evaluations.

ServiceNow was receptive to our feedback and committed to further improvements. We will continue to monitor progress and support its efforts to embed responsible AI practices.

From words to action

As AI continues to evolve rapidly, one emerging focus for our engagement is the rise of autonomous AI agents – systems capable of making decisions and acting independently. These technologies promise new efficiencies while introducing major risks. We will be pressing companies to put in place robust guardrails, including anti-hijack safeguards, transparency requirements and clearer lines of accountability.

Investors have both the responsibility and the influence to shape a safer AI future. Through persistent engagement, coalition-building and public advocacy, we continue to push for ethical AI, ensuring technology serves society, not the other way around.
