
AI Proposals Ask Tech Giants to go “Beyond Platitudes”

Shareholders send strong signal on misinformation risks arising from generative AI, targeting increased accountability and transparency at Meta and Alphabet.

Shareholder proposals filed at big tech companies Meta and Alphabet, centred on generative AI’s (GAI) role in amplifying misinformation and disinformation, have echoed rising investor concerns around the fast-developing technology.

The proposal at Meta was led by ESG activist investor Arjuna Capital, with non-profit Open Media and Information Companies Initiative (Open MIC) as a co-filer. It demanded the board issue a report within a year of the AGM assessing the risks that GAI presents to the firm’s operations, finances, and public welfare.

The proposed report would also outline steps that Meta intends to take to remediate those harms, and explain how the effectiveness of such efforts would be gauged.

“Because of [Mark] Zuckerberg’s 61% voting power, it’s impossible for any shareholder proposal to receive a majority vote – yet, a 17% vote for the proposal requesting better disclosure of GAI risks and mitigation efforts is significant when excluding the insider votes,” Julia Cedarholm, Senior Associate for ESG Research and Shareholder Engagement at Arjuna Capital, told ESG Investor. “This vote sends a strong signal to Meta and the larger community that investors are concerned about GAI risks, and want more transparency and accountability.”

Meta, and other tech giants including Alphabet, use dual-class share structures that give different voting rights to holders of Class A and Class B shares. At Meta, Class B shares carry ten votes each, while Class A shares carry only one. As a result, founder Zuckerberg holds an estimated 13.5% of the company’s total stock but, thanks to his large holding of Class B shares, commands more than 60% of shareholder voting power.
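To see how that arithmetic plays out, consider a purely illustrative sketch – the share counts below are assumptions chosen to reproduce the reported percentages, not Meta’s actual figures. Suppose a single holder owns all b Class B shares (ten votes each) and none of the a Class A shares (one vote each). That holder’s share of the vote is:

    voting power = 10b / (a + 10b)

Assuming, say, 2.25 billion Class A shares and 350 million Class B shares, the holder would own 0.35bn / 2.6bn ≈ 13.5% of the total stock yet control 3.5bn / 5.75bn ≈ 61% of the votes.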

Open MIC’s Advocacy Director Jessica Dheere said the strong minority vote told management “these are issues that need to be dealt with”.

The full results of the shareholder meeting, including the Arjuna-led proposal, are yet to be published.

Escalating risk

Arjuna Capital is also the lead filer of a proposal requesting a similar report from Alphabet, due to be voted on at the company’s upcoming shareholder meeting on 7 June.

Earlier this year, Arjuna and fellow investor Follow This were served with a lawsuit by oil major ExxonMobil in response to a shareholder proposal requesting medium-term decarbonisation targets covering Scope 1-3 emissions. The case is now proceeding against Arjuna Capital alone, despite the group promising never to file such a proposal at the company again.

Both Meta and Alphabet advised shareholders to vote against the Arjuna Capital-led resolutions. The proposal at Meta was one of two requesting greater disclosure of GAI impacts, while the one at Alphabet will be one of three – underscoring GAI risks as an emerging issue investors want companies to address.

A similar proposal at Microsoft’s AGM last December garnered 21% of shareholder votes, while in February a proposal requesting that Apple publish a transparency report on the use of AI in its business operations received 37.5% support – despite the tech giant recommending that shareholders vote against it.

“As mis- and disinformation become more believable and easier to produce, companies face increasing regulatory, legal, and reputational risks – and impacts are even more profound over the long term,” said Cedarholm. “There is a real risk that the vast spread of mis- and disinformation will sow significant mistrust in the institutions that uphold our democracy and economy.”

Google’s Gemini recently created historically inaccurate images, including ‘Black Nazis’ – which eventually saw the image generator taken down – while Meta’s Llama 2 was found to have a so-called hallucination rate of 48%. Earlier this week, technology publication Wired launched a tracker to help identify election-related images created via GAI, and to evaluate how the technology is influencing the political and information landscape.

“Responsible AI commitments and qualitative assurances aren’t enough, which is why investors are asking for quantitative metrics that assess the company’s performance in preventing mis- and disinformation,” said Cedarholm. “Up until this point, we’ve only seen platitudes from technology companies regarding ethical AI. We’re asking for a report that outlines the specific risks of mis- and disinformation and quantifies the companies’ success in mitigating those.”

Open MIC’s Dheere agreed that GAI products can “exponentially amplify” deceptive or inaccurate content – and can do so in a faster and more targeted way than ever before. “Yet, [companies] still haven’t been able to assure the public that they can moderate content in the first place,” she added.

Here to stay

Open MIC previously branded unconstrained GAI a “risky investment”, saying that its development and deployment without risk assessments, human rights impact assessments, or other policy guardrails in place could put companies at financial, legal, and reputational risk.

Despite these forewarnings, the GAI market is projected to far surpass US$1 trillion by the early 2030s. In addition to investor action on GAI, regulators are looking to address risks arising from the technology. Singapore’s Infocomm Media Development Authority has launched a toolkit for testing the safety of GAI models. The authority has also finalised guidelines on its ‘Model AI Governance Framework for Generative AI’. The framework outlines nine dimensions to bolster trust in GAI, including accountability, data, trusted development and deployment, and AI for public good.

Separately, the European Securities and Markets Authority (ESMA) has issued guidance to investment firms on the use of AI in the provision of retail investment services.

Open MIC has urged asset managers and investors to use their influence to push companies to align their AI development policies and practices with proposed regulatory guardrails to help secure the integrity of information ecosystems.

“These AI resolutions are just the beginning – this is not something that we see as a one-off,” said Dheere. “AI is here to stay and it is in the interest of civil society, investors, companies, and government policymakers to maintain focus on how we integrate it constructively into society while protecting human rights – and, frankly, the companies that are creating it.”

Businesses should think about AI critically, as opposed to prioritising speed of development and capturing market share at all costs, she argued. While the technology holds considerable transformative potential to solve some of the world’s biggest problems, it could also create equally significant challenges.

“It is crucial that companies do everything they can to predict GAI’s vulnerabilities and implement guardrails to prevent devastating consequences to society, investors, and the companies themselves,” Cedarholm added.

