Racist Robots and the Rise of AI Activism
Since ChatGPT was launched in 2022, investors have been scratching their heads about how to approach the risks related to the technology.
In 2023, Alphabet Inc., Google’s parent company, did something that spooked investors.
A few months earlier, OpenAI – the artificial intelligence (AI) company backed by Microsoft – had unleashed ChatGPT, an AI chatbot that used a large language model to generate eerily intelligent responses to complex questions.
After years in which AI had failed to deliver on its futuristic promises, this version was unlike anything the world had seen, sparking excitement in some and, in others, reigniting The Terminator-era fears of super-intelligent robots vanquishing humanity.
OpenAI’s success was a wake-up call for other big tech companies, and Alphabet immediately got to work preparing its own AI chatbot, ‘Bard’, unveiled in February of that year.
But things did not go to plan. In the online launch, Alphabet showed Bard responding to questions, and eagle-eyed fact-checkers quickly noticed it was getting answers wrong. Bard evidently was not ready, and around US$100 billion was wiped off Alphabet’s market value.
For Jonas Kron, Chief Advocacy Officer at Trillium Asset Management, this was cause for concern. “Alphabet was clearly rushing into the AI space,” he tells ESG Investor. “We had spent the previous few years filing slightly different shareholder proposals at the company that were focused on algorithm disclosure and more transparency. So after the mess with Bard, we made the switch to AI.”
Kron feared that, in the rush to keep up with OpenAI, Google was failing to put in place the safeguards needed for such a powerful, and potentially dangerous, tool.
So, when attempts to engage Alphabet were ignored, he decided to file a shareholder resolution in December 2023, calling on the company to amend the charter of its board’s Audit and Compliance Committee to give it explicit oversight of the group’s AI activities.
“We believe the critical nature of AI to the company and its shareholders calls for expressly articulated coverage,” the resolution stated.
Trillium was not alone. This year, the Interfaith Centre on Corporate Responsibility (ICCR) recorded 14 AI-related resolutions, mostly filed with the big US tech companies like Meta, Apple, Amazon and Alphabet – covering governance, human and worker rights, misinformation, and diversity.
In the event, only 7.5% of votes were cast in favour of Trillium’s resolution. Nevertheless, Kron and others like him believe AI-focused shareholder action may just be getting started – especially given the US government has shown little willingness to regulate the sector itself.
Human rights concerns
One of the first things investors and shareholder advocates will say when asked about engagement on AI risks is that it’s virgin territory.
“We’re relatively new to the space,” says Nadira Narine, Senior Director of Strategic Initiatives at the ICCR. “We’re still doing our homework, talking to all the tech experts, so that we can figure out what the investor angle is here.”
While the most apocalyptic fears – such as robots dominating humanity – may or may not materialise, there are some immediate human rights issues that must be addressed. One of those, says Narine, is the employment conditions of data workers.
“Data workers are the ones behind your ChatGPT,” Narine explains. “They’re scrubbing the internet for data, labelling data appropriately so that it gets into those technologies, and so that when you or I get into ChatGPT we’re stunned by its ability to reproduce things that we ask it for. It can only do that because of the data that’s input.”
An example is US company Sama, which OpenAI contracted to label its data from centres in Kenya. Last year, reports emerged of poor working conditions and pay as low as US$1.32 an hour. A key issue is the sort of content these workers must review, says Narine, which includes violent or sexually disturbing images.
A better understood but no less real risk to workers is automation. “A key question investors should ask is, ‘How do you upskill workers if they are losing out on job opportunities from automation?’” she says.
A third risk, Narine adds, is that large language models have been shown to exhibit sexism, racism, and manifold other prejudices.
Racist robots
Type ‘Does generative AI have a problem with racial bias?’ into ChatGPT, and this is the answer you will get:
“Yes, generative AI can exhibit racial bias. This bias arises from several factors related to how these models are developed and trained.” The precocious chatbot then goes on to explain that because large language models are trained on historical data, and much of that data is intrinsically discriminatory towards various social groups, the bot itself learns to be discriminatory.
The most famous example of this came in 2018, when it emerged that Amazon had been using AI to screen résumés and shortlist candidates for recruitment purposes.
“Based on the algorithms that looked back at all the people that Amazon had hired before, the AI came to the conclusion that Amazon hired primarily white men for these roles in the last 10 years,” explains Anita Dorett, Director, Investor Alliance for Human Rights.
When this became clear, Amazon had to ditch the whole program.
“You use AI that is built on data to identify trends,” says Dorett. “If those trends and that data reflect systemic racism, systemic bias that has been built into systems for decades and decades, it will continue to build that moving forward.”
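To illustrate the mechanism Dorett describes, consider a minimal, hypothetical sketch in Python. It uses entirely synthetic data and the open-source scikit-learn library – it does not reflect Amazon’s actual system or any real dataset – but it shows how a screening model trained on skewed historical hiring decisions reproduces that skew.

```python
# Hypothetical, simplified illustration only: synthetic data, not any
# company's real hiring records or screening system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000

# One genuine signal (skill) and one protected attribute (group: 0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased "historical" decisions: group 1 was favoured regardless of skill.
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

# Train a screening model on those historical outcomes, with the protected
# attribute (or any proxy for it) available as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill but different group membership:
# the model scores the group-1 applicant far higher, because it has
# learned the historical bias rather than anything about ability.
candidates = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(candidates)[:, 1])
```

In this toy setup, two applicants with identical skill receive very different scores purely because of the group attribute: the statistical version of the pattern the interviewees describe.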
Hasta la vista, baby?
These risks may not be as eye-catching as the more apocalyptic visions of robots taking over the earth and destroying humanity. That is a prediction made by serious people, such as AI expert and Machine Intelligence Research Institute founder Eliezer Yudkowsky, who wrote in Time magazine in March 2023: “The most likely result of building a superhumanly smart AI under anything remotely like the current circumstances, is that literally everyone on Earth will die.”
Nevertheless, the more mundane harms are real and must be taken seriously, says Lydia Kuykendall, Director of Shareholder Advocacy at Mercy Investment Services.
“We need to put guardrails around what is happening today, so that we’re protected against those long-term existential risks,” she argues. “If we start now and continue on that path, hopefully it solves a lot of problems.”
Kuykendall, who like Kron has filed shareholder resolutions on AI with big tech groups, agrees that ingrained prejudice is a huge issue, as are misinformation and disinformation.
Another is the lack of transparency over how companies like Meta, the parent of Facebook and Instagram, use their users’ data to feed their eye-poppingly profitable advertising businesses.
“We don’t know how [the algorithm] is managed, how it’s governed, and we don’t understand the AI behind it,” Kuykendall says. “It’s just a big black hole of money. If you’re an investor it’s hard to argue with that money. But investors are people too, and we use Meta, and our children use Meta. So we need to know what is going on.”
But thanks to the dual-class share structure in operation at companies like Meta and Alphabet, forcing Big Tech to play ball can be next to impossible. Meta CEO Mark Zuckerberg, for example, controls 57% of share votes, despite holding only 13.6% of the company’s equity.
“Basically, to a large extent with Big Tech, they have been able to ignore the investor voice, so they don’t even care to have constructive dialogue with us,” says Dorett. This frustration seems to be widespread across investors and advocates involved in the space.
Opacity is even more of a concern in private markets, where much of the early-stage research and development on AI takes place. Dorett urges investors in private equity and venture capital to make a greater effort to scrutinise AI companies from an ESG and human rights perspective.
Europe leads, America lags
Despite their efforts through engagement and shareholder resolutions, investors say there is only so much they can do to hold Big Tech accountable – and that the real regulatory power lies with government.
In March 2023, a group of high-profile academics, tech experts and entrepreneurs including Tesla CEO Elon Musk signed an open letter calling for a six-month hiatus in AI development to allow governments to assess the risks, such as “loss of control of our civilisation”.
Their call was not heeded, and well over a year on, one of the letter’s signatories, Gary Marcus, Emeritus Professor of Psychology and Neural Science at New York University, says many risks are still not being addressed.
“I don’t think AI is advancing as rapidly as many people seem to think, but there are already many risks even in its current state – particularly around disinformation, accidental misinformation (for example, in the form of defamation), cybercrime, and nonconsensual deepfake porn,” he tells ESG Investor.
Asset owners and managers have a responsibility to keep watch on how Big Tech companies manage these risks.
“Investors should wake up and realise that Silicon Valley is talking a good game about regulation, but doing everything in its power to duck any real regulation with teeth – and will continue to do so if it possibly can get away with it,” Marcus says.
Since the letter was signed, only one major jurisdiction has moved quickly to regulate AI: the EU. Earlier this year, the EU Artificial Intelligence Act was signed into law, imposing sweeping restrictions on tech companies operating within the bloc. The state of California – one of America’s most regulation-friendly states – is attempting to pass a similar bill, SB-1047, designed among other things to protect against the “hazardous capabilities” of AI.
At the federal level, President Joe Biden’s administration introduced a non-binding Blueprint for an AI Bill of Rights, but there has been no momentum to enshrine European-style regulations in US law. Meanwhile, Republican presidential candidate Donald Trump has vowed to wind back Biden’s modest reforms, turning AI into yet another polarising topic in the campaign.
Another impediment to AI regulation, according to Kron, is a series of recent rulings by the conservative-dominated US Supreme Court that have hobbled the ability of federal agencies to exercise regulatory powers.
All this puts more onus on investors and civil society to keep a close watch on the sector, Kuykendall says.
“AI has enormous potential to make the world better if it’s done right,” she says. “But what it has done so far is exacerbate existing inequalities, and we need to work hard to fix that.”