
Investors at War with Tech Firms on AI

The increasingly pervasive use of big tech-designed AI in weapons and surveillance poses risks to investors. 

The responsible use of AI in warfare is a topic of increasing debate among investors, with at least one asset manager filing a shareholder proposal calling for transparency on the matter this proxy season. 

In February, Google and its parent company Alphabet dropped their commitment not to develop AI applications that can be used in weapons or surveillance systems. The decision follows similar moves by other tech firms, including OpenAI, Anthropic and Meta, which have walked back their AI usage policies in favour of expansion into the weapons market. 

In response, Zevin Asset Management has filed a shareholder resolution at Alphabet requesting an evaluation of the company’s due diligence process for determining whether customer use of its products and services for surveillance, censorship and/or military purposes contributes to human rights harm in conflict-affected and high-risk areas (CAHRA). 

“In the case of Alphabet today, the use of technology in CAHRA presents material risks and impacts to our investment,” Marcela Pinilla, Director of Sustainable Investing at Zevin Asset Management, told ESG Investor. 

“As the tech sector and Alphabet compete for business, incorporating AI into military applications is blurring the lines between the tech and defence industries through contracts, partnerships, or technological integration.”  

AI is increasingly commonplace in warzones. The Ukrainian military has used AI-equipped drones mounted with explosives, while the Israel Defense Forces utilised an AI-enabled targeting system to label over 30,000 Palestinians as suspected militants during the first weeks of its war in Gaza. 

The US military alone has more than 800 active AI-related projects and requested almost US$2 billion in funding for AI in its 2024 budget. 

“Any companies providing technologies to militaries that could be violating human rights or the laws of war run the risk of criminal indictments, sanctions, civil suits and considerable reputational damage,” said Audrey Mocle, Deputy Director of non-profit Open MIC. 

“The risks for investors in these tech firms increase as they become more enmeshed with the [weapons] sector.” 

A study on the ethical implications of AI in warfare conducted by Queen Mary University of London found that the integration of AI-enabled weapon systems facilitates the objectification of human targets, while automation bias and technological mediation weaken moral agency among operators of AI-enabled targeting systems. 

AI arms race 

The increase in AI contracts with governments and militaries shows that big tech firms – including Amazon and Microsoft – are actively looking to pivot into the weapons sector, warned Michael Connor, Executive Director of Open MIC. 

“We are seeing a giant shift from tech companies. Previously they looked to provide products and services that could be used at home and in business, but now they are looking at the government and military market,” he said. 

The Chinese People’s Liberation Army has utilised Meta’s Llama model for potential military applications, including gathering and processing intelligence. 

In addition, despite its original ethical commitment, in 2023 Google secured a US$1.2 billion deal with the Israeli government and military to provide AI capabilities – also known as Project Nimbus. Employees who protested the deal were fired. 

“No one expects an Anduril or a Palantir to avoid developing AI tools for war – it would be antithetical to their mission,” said Jan Rydzak, Digital Transformation Lead at the World Benchmarking Alliance (WBA). 

“But when dominant tech giants like Google or Meta revamp their principles to enter the arms race, they send a signal that ethical foundations are fluid and nothing is off-limits.” 

Rydzak is also Lead of the Ranking Digital Rights Index (RDR Index), which has been examining tech companies’ use of AI for specific purposes and services. 

Some firms, like Microsoft, have emphasised the importance of AI for defence purposes, citing mounting geopolitical tensions and conflicts, and government ambitions to rearm.  

Following its ReArm Europe proposal, the European Commission recently published a white paper outlining the bloc’s roadmap to grow its defence sector over the next four years – including its plans to accelerate the transformation of the sector through “disruptive innovations” like AI and quantum technology. 

“As the lines between the defence and tech industries blur, the investment community must reassess what qualifies as a controversial weapon, its critical components, and what defines a weapons company,” said Samuel Jones, President and Co-founder of the Heartland Initiative, a US-based non-profit focused on the rights of people in conflict-affected and high-risk areas. “At present, there are more questions than answers.” 

Kept in the dark 

Investors are beginning to engage individually or in small groups on the issue of tech companies’ connections to warfare through AI.  

Last year, the Investor Alliance for Human Rights partnered with PeaceNexus and the Heartland Initiative to assess human rights due diligence in CAHRA. 

Meanwhile, the Investor Advocates for Social Justice (IASJ) has supported the filing of an AI-focused customer due diligence shareholder proposal at Amazon for each of the past five years. For the past three years, the proposal has also addressed Amazon’s connection to Project Nimbus. 

“The company never agreed to meet with us to discuss our concerns,” said Aaron Acosta, the IASJ’s Programme Director. 

Mocle at Open MIC said equity investors should now be asking tech companies to disclose their policies around the use of AI in a military context and what their red lines are – if any.  

“Investors within these companies should advocate for proper AI governance, meaning actual human oversight – military AI systems should never operate without human intervention,” added Antoine Argouges, CEO and Founder of impact investment platform Tulipshare. 

“We also believe in transparency and third-party audits to keep these companies accountable.” 

However, the opacity of government contracts is likely to make it incredibly challenging to secure transparency from big tech on AI applications used in warfare and surveillance, warned Jones at the Heartland Initiative.  

“[That opacity from] defence companies, the preference for a compliance-only approach over robust human rights due diligence, and intense defence industry lobbying already pose significant challenges for investors promoting responsible business practices,” he said.  

“As tech companies increasingly secure military contracts, we anticipate similar obstacles emerging in the sector.”

The post Investors at War with Tech Firms on AI appeared first on ESG Investor.