Castlefield’s Ita McMahon: Engaging on AI
Even the Luddites among us have to concede that AI is changing the way we work. Across the economy there’ll be obvious winners and losers from the rise of machine learning, but the vast majority of businesses will be able to benefit in some way from the efficiencies that AI can bring.
As investors incorporate these new considerations into their analysis of prospective investments, those of us with an interest in stewardship are asking another set of questions: when it comes to corporate development and use of AI tools, what does ethical practice look like? What standards do we want the companies in our funds to adhere to? What does good governance look like in this new and evolving area?
This article explains how ethical investors and responsible business practitioners are starting to address the ethics of AI.
What are the ethical risks associated with AI?
Before assessing existing activity, it is useful to review the key concerns surrounding AI and ethics. These risks are well documented, and an OECD paper provides a helpful categorisation of the major areas of harm that AI can cause:
- Purposeful harm by design: eg deepfake videos
- Harm caused by inherent “side effects”: eg biased data leading to discrimination
- Harm caused by failure rates: eg false positives from facial recognition in policing
- Harm caused by intentional misuse: eg political disinformation
- Harm caused by security breach: eg hackers taking control of autonomous vehicles.
How can investors assess corporate activity?
The list of harms is wide-ranging and the whole topic of AI and ethics can be daunting to address. Thankfully, the responsible business community has a strong track record in dealing with tricky issues. In fact, every so often, a new ESG issue emerges requiring industry and its stakeholders to agree on new standards of responsible business practice. Often, the issue can seem quite esoteric: biodiversity is a case in point.
A decade ago, the idea of identifying and measuring a company’s reliance on, and damage to, something as complex as the global ecosystem seemed almost ridiculous. And yet, over time, governments, businesses and financial institutions have come together under the auspices of the Taskforce on Nature-related Financial Disclosures (TNFD) to address this very issue.
In fact, most of the major social and environmental issues that companies face now have some kind of framework, model or set of principles to guide corporate behaviour.
As investors, we use these very frameworks to engage companies and encourage better corporate performance. The good news is that we’re already starting to see initiatives emerge to help us look at AI, ethics and risk.
What’s happening already?
A number of well-respected organisations have published principles and frameworks targeting different audiences. The OECD AI Principles, for example, are aimed at governments and provide a specific set of guidelines for policymakers. The Alan Turing Institute has developed a guide for AI in the public sector.
Most interesting for us as investors is UNESCO’s Recommendation on the Ethics of AI. Comprising 10 core principles, covering safety, security, privacy and transparency, the Recommendation provides a human-centred approach to the ethics of AI and forms a key part of a new initiative that will hopefully transform how AI is used and reported on by companies.
The Thomson Reuters Foundation is home to the Workforce Disclosure Initiative, an investor collaboration that encourages companies to publish data on their employee base and workplace practices. The Foundation has partnered with UNESCO to develop this new AI Company Data Initiative (AICDi).
A key part of the Initiative is a survey, based on the UNESCO Recommendation, that the Foundation will send to companies. By collating corporate responses, the Foundation aims to build a dataset on corporate AI adoption. In turn, this will help to improve transparency and create a benchmark for best practice. The AICDi will be supported by investor signatories, which will use their shareholder influence to encourage companies to disclose.
Although still in its early stages of development, the AICDi has the potential to help investors build up a broad picture of AI in practice. It will help fund managers to assess how the companies in their portfolios are using AI and managing the risks that come with its application. Frameworks and surveys are tried-and-tested ways for investors to extract information from companies and use it to inform their company engagements. We hope the AICDi proves as valuable as the frameworks already in use to address other ESG issues.
Clearly, investors want to see companies embrace AI and use it to drive efficiencies and product development. Company management needs to reiterate that innovation using AI is welcome, but must not come at the expense of ethics. A strong corporate culture can help deliver that message internally, and good external corporate reporting can demonstrate those high standards to shareholders and other interested stakeholders.