AI and ESG: Part Two: Risk and Governance
Institutional investors need to grasp AI's E, S and G risks quickly to use and invest in the technology effectively, says Lorenzo Saa, Chief Sustainability Officer of Clarity AI.
The potential rewards of AI are enormous, but they do not come risk-free. Just as smoke alarms safeguard our homes, the modern economy needs to install early warning systems and guardrails that will enable us to safely power ahead with the use of AI.
That means minimising the potential misuse of AI in the three key areas of environmental, social and governance risk. As is often the case with investors, let’s start with governance.
AI governance risks
There are many factors to consider in the governance of AI, but a simple way to capture them is under the following main headings:
Accountability and oversight: Assigning clear responsibility for the ownership and management of the AI an organisation is investing in or using. The answer should never be "it is the AI's fault!"
Disinformation and hallucinations: Ensuring that the AI model has the appropriate guardrails in place to avoid results that are misleading or outright fabrications.
Data privacy and security: Ensuring that neither the AI nor the user interacting with it is using data they are not authorised to access.
Fairness and non-discrimination: Ensuring that the AI model or algorithm is not trained on biased data, which can lead to unfair treatment or exclusion of certain individuals or groups.
Transparency and explainability: Ensuring that the workings of the AI model are disclosed and explained to an extent that makes it trustworthy and understandable.
Even in AI’s early stages, poor governance has emerged, as seen with Clearview AI. The US-based facial recognition company faced fines from regulators in the UK, Netherlands, and elsewhere for scraping billions of social media images without user consent.
To address these risks, investors need to apply governance principles and guidelines for managing AI. The key principles guiding the industry are the Organisation for Economic Co-operation and Development’s (OECD) AI Principles. Updated in May 2024 to capture the emerging risks introduced by generative AI tools like ChatGPT and Google’s Bard, they are the main reference point for anyone wanting to tackle AI responsibly.
In addition, a growing number of players offer risk management guidelines, including The Partnership on AI, AI4People, the Future of Life Institute, The Green Digital Finance Alliance (GDFA) and The Responsible AI Institute (RAI).
Investor-specific guidelines include the World Economic Forum's Responsible AI Playbook for Investors and the CFA Institute's Ethics and Artificial Intelligence in Investment Management, while RAI's Guiding Framework for Responsible AI Integration Into ESG Paradigms specifically addresses sustainable investing.
These principles and guidance aim to prevent AI from amplifying issues like bias and misinformation, or enabling harms such as mass surveillance and human rights abuses. They emphasise accountability, requiring transparency on who deploys AI and who is responsible for its outcomes.
We are also seeing emerging discussions on best practices for AI policy, especially following the adoption of the EU AI Act. More regulation aimed at governing AI is likely, and strong policy ideas are already emerging that focus on balancing innovation with risk management, creating sandboxes for testing, tailoring rules by sector, and ensuring cross-border interoperability aligned with the OECD AI Principles.
AI’s environmental footprint
AI has the potential to accelerate progress toward global climate and nature goals. However, AI also poses environmental challenges that must be managed.
Most prominent is its use of electricity and water.
Data centres, essential for AI infrastructure, account for 2-4% of electricity consumption in major economies like the US, China, and the EU – a figure expected to grow with rising demand for AI. AI systems also require significant water, with estimates suggesting withdrawals of up to 6.6 billion cubic metres globally by 2027 – more than half the UK's annual water use. And because of their investments in AI, tech players like Microsoft and Google have seen their greenhouse gas emissions grow by roughly 30-50%, despite their net zero commitments.
Investors play a key role in managing these risks. AI's electricity consumption should increasingly rely on renewable energy, which is why tech companies are investing heavily in carbon-free energy. Examples include Microsoft's partnership with Brookfield and recent investments by Amazon and Google in small modular nuclear reactors.
Water use should also be minimised through closed-loop systems and sustainable practices, ensuring it’s not diverted from essential human needs and that wastewater is safely managed and reused, for example, in local heating systems.
To reduce environmental impacts, AI model designs should also be resource-efficient and aligned with specific use cases – generative AI models aren't always necessary when simpler, less resource-intensive models suffice. Notably, AI is helping mitigate its own environmental footprint: Google DeepMind's AI, for example, optimises energy use in Google's data centres and has cut the energy used for cooling by 40%.
Lastly, responsible environmental management of AI requires sustainable sourcing of materials for data centres and hardware. Critical minerals such as lithium and cobalt are vital for AI but carry environmental and human rights risks if mined irresponsibly. Additionally, AI hardware often contains toxic chemicals and heavy metals such as lead, cadmium, and mercury. Without proper disposal, these can leach into soil and water.
The social side of AI
AI’s potential social benefits – ranging from new healthcare treatments to improved access to education – are significant, though often less discussed than environmental impacts.
A study published in Nature Communications in 2020 found that AI could positively contribute to 134 of the 169 targets (79%) underpinning the UN Sustainable Development Goals (SDGs).
However, it also warned that the social risks that come with AI might hinder progress on 59 of the 169 targets (35%) if the technology is not managed wisely. Some of these issues overlap with the governance issues outlined above. For example, there is a risk that AI could exacerbate biases or unfair treatment in recruitment processes.
One of the most prominent social concerns discussed in relation to AI is its impact on the labour market.
While AI will boost productivity and create new roles, McKinsey predicts that by 2030, activities accounting for up to 30% of hours currently worked across the US economy could be automated due to the accelerating use of generative AI. As with climate change, this shift will require a 'just transition', with governments and companies investing in training to help workers adapt.
While it is not the whole story, there is something to the much-quoted point that “AI is not going to take your job, but someone who knows how to use AI will”.
Encouragingly, AI ethics and governance are already being integrated into the curricula of relevant universities and technical institutes. We need to see more of this, as well as more public awareness campaigns by governments, NGOs, and educational institutions to equip citizens with the knowledge to navigate AI's risks.
The time to act on AI is now
What does this all mean for sustainable investors?
AI can enhance investment decisions across the portfolio cycle – from data collection to reporting – and transform the financial and sustainability outcomes of their investment activity. But it’s not a free lunch. Investors must fireproof their use of, and investment in, AI by designing their own approach to managing its risks.
The starting point is to establish accountability and governance to oversee AI strategy and implementation, using principles like the OECD AI Principles, aligning with new frameworks and regulations, and balancing risk management with innovation.
They must also address AI’s environmental and social risks by, for example, using energy-efficient models and chips, choosing AI that relies on renewable-powered data centres, and investing in AI training for both technical and non-technical staff, while supporting an AI ‘just transition’.
Delay is not an option. The technology is developing at speed, and institutional investors that do not take their first steps soon risk being left behind.
Part one of this article considers the possibilities when the power of long-term capital meets the power of technology.