
Keeping up with AI

With the technology moving fast and policymakers struggling to keep pace, investors can help to regulate the space.

It’s no secret that rulemaking is often slow compared to the speed at which markets evolve. But this is perhaps truer than ever in the case of AI.

As such, experts have been pointing out the key role that investors can play in holding companies accountable, ensuring that technological progress does not come at the expense of fundamental rights and protections.

“The regulatory piece is critical, and in an ideal world, we would get good, robust legislation,” suggests Jonas Kron, Senior Vice President and Chief Advocacy Officer at Trillium Asset Management. “The EU AI Act might be that, but we can’t just wait around for regulation. Investors in particular need to make sure they do their part to press the companies on ESG disclosures and best practices.”

Kron describes investor engagement in the space as being in its early days – a “let many flowers bloom, exploratory part”, he says. He reports being aware of three investor initiatives – none of which are public yet.

“It’s sort of exciting and good right now to see all the activity – it’s not dissimilar to other stages of investor interest on climate change, biodiversity, or water,” Kron says. “There is a proliferation of frameworks, regimes and guidelines. We’re likely to see a lot of that over the next year to 18 months. Some of them will catch on and consolidate, and some will go away.”

Again, the issue is the pace at which the industry is moving – which creates challenges for investors too.

“Investors who want to put together these frameworks [are] trying to play catch up – which is why there is a lot of interest at the governance level, as you can cover a lot of bases with good governance,” Kron argues. “You do want to dig down into the details of particular social and energy impacts, but governance is really important right now, as the industry takes off.”

In 2022, Trillium filed a shareholder proposal at tech giant Alphabet, requesting that the company go “above and beyond its existing disclosures and provide more quantitative and qualitative information on its algorithmic systems”. The proposal received 19% of votes in favour.

Proactive approach

Kron’s thoughts were echoed at this year’s PRI in Person conference in Toronto, during which a panel of experts discussed how to invest responsibly in AI.

Pointing to regulators’ ceaseless struggle in the face of technology’s speed and complexity, moderator Walter Viguiliouk – Managing Director for Sustainable Investing and Private Markets at Manulife Investment Management – questioned the panel on the role of investor engagement, including with policymakers.

“[There is] space for investor engagement with portfolio companies to try to set ethical framework and guidelines that will govern the implementation of AI in the workplace and in companies’ business operations – address[ing] some of the concerns through the private ordering process, as opposed to waiting for regulation,” said Brandon Rees, Deputy Director for Corporations and Capital Markets at the American Federation of Labor and Congress of Industrial Organizations. “As we’ve seen with climate change, we can’t wait for government to solve the problem.”

Investors can maximise the value of their portfolio companies by ensuring that they’re adopting AI in a thoughtful way that includes employee engagement and empowering workers, Rees argued.

Proactive engagement with regulatory developments can also offer a strategic advantage, Ty Francis, Chief Advisory Officer at compliance advisory firm LRN, tells ESG Investor – as staying ahead of regulatory shifts enables investors to better anticipate market trends, build trust with stakeholders, and promote transparency. “As AI continues to reshape industries, ESG investors will need to closely monitor regulatory updates to mitigate risks and foster sustainable returns,” he adds.

Designating ‘unsafe AI’ as the latest type of ESG risk for investment portfolios, index provider ISS STOXX recently recommended investors monitor where shortfalls can occur.

“New regulation and industry efforts point to sources of portfolio risk — particularly of the social type — for both developers and deployers of AI systems,” ISS STOXX said in a statement. “As the technology advances quickly, governments and companies are stepping up efforts to ensure it is not prone to misuse and abuse. New rules such as the EU AI Act and companies’ disclosures can provide guidance and key information to mitigate risks.”

Back at PRI in Person, Jessica Cairns, Head of ESG and Sustainability at Alphinity – a boutique active manager of Australian and global equity funds – argued that while AI is not a “risk in itself”, it can’t only be seen as an opportunity either – urging investors to address the technology’s potential to exacerbate established risk factors.

“We agree that [AI] is a transformational technology that’s going to change how we all work,” she said. “But the interface with existing ESG factors like privacy, cybersecurity, diversity and inclusion, human rights, emissions, is really the way we think through how best to understand that risk and opportunity, and how best to engage and analyse the risk from a company perspective.”

In a recent paper co-authored with Open MIC, an investor-backed initiative aiming to foster greater corporate accountability on digital technologies, the Interfaith Center on Corporate Responsibility (ICCR) drew similar parallels with existing human rights risks.

“It really mimics long-term areas of risk for investors and companies when dealing with supply-chain issues,” says Nadira Narine, Senior Director of ICCR’s Advancing Worker Justice Program. “One of the major risks deriving from the technology is discrimination, as there is inherent bias embedded within AI-based systems. Some suppliers say they are just providing the tool and their customers choose how to use it – but it’s actually the responsibility of both parties.”

It falls to investors, as much as companies, to ensure that human rights abuses are prevented and mitigated across supply chains, Narine insists.

With AI-related risks being inherently global in nature, the hope and anticipation is that there will be a suitably robust regulatory environment wherever the technology is deployed, Michael Connor, Executive Director at Open MIC, tells ESG Investor.

“Many companies have global activities, so we need to ensure that if they are applying a higher standard in Europe due to EU regulations, this should be the case wherever they operate to foster greater corporate accountability in the deployment and use of digital technologies,” he says.

Global landscape

In most jurisdictions, though, AI regulation is at a nascent stage, with efforts at fairly uneven levels of development across the globe.

“The EU is by far the most advanced, and Singapore/South East Asia are the opposite – taking a light touch approach for now, which is fairly sensible since the burdens that the speed of AI development and technologies place on lawmakers to keep up with them are onerous and expensive,” says Frank Meehan, Chair at Improvability AI, which offers sustainability reporting automation services.

In force since 1 August, the EU AI Act is widely perceived as the most advanced, albeit prescriptive, form of regulation in the space – drawing criticism, particularly from US peers.

“While the EU’s regulations show an admirable understanding of the technologies and implications – particularly in areas such as defence – they are also premature and could potentially stifle innovation,” Meehan argues.

His thoughts were also echoed at PRI in Person, with Cameron Schuler, Chief Commercialisation Officer at the Vector Institute for Artificial Intelligence, arguing that the EU’s General Data Protection Regulation already went a step too far. “The important piece for this is interoperability, making sure you’re focusing on things that organisations can do around the world,” he added.

Over in Asia, several countries including Vietnam, Thailand, Taiwan, South Korea, Singapore, Malaysia, Japan, Indonesia and India have published draft bills or intend to do so. Hong Kong is expected to release its first policy statement on AI use by the end of the month.

China is the only Asian country to have formally introduced AI legislation, with a first set of laws released in 2022-23, and further draft regulations on generative AI released earlier this year.

“Asia hasn’t had many actual laws passed – a lot of what we’re seeing is still at the guidance stage,” says Marian Waldmann Agarwal, Partner at law firm Morrison Foerster. “What we’re seeing across the board is similar concerns around the promotion of principles underlying the technology, which basically go back to responsible use, transparency, and inclusivity.”

In Australia, the Department of Industry, Science and Resources recently closed a consultation on mandatory guardrails for AI – stressing that previous consultations have shown the country’s current regulatory system isn’t fit to respond to the risks posed by the technology.

“It’s a real challenge for us in the Australian landscape, as we don’t have any regulation yet around this,” said Alphinity’s Cairns at PRI in Person. “[For companies], there is a lot of concern around moving too quickly when regulation hasn’t been put in place. If the regulatory environment is going to be very strict like what we’re seeing in the EU, they might have to redo things later. But there’s also a risk in being too slow and getting left behind.”

In the US, the absence of overarching federal legislation means individual states have taken the lead.

“The regulatory environment in the US is relatively weak and nascent,” says Narine. “There’s a growing field at the state level with a few bills that have been circulating, and more conversations are happening at the federal level about what is needed.”

But as ever with US regulation, those bills are contending with strong corporate lobbying, Narine explains.

“It is helpful to think of existing and future US legislation through a national security lens: in many ways, AI is viewed by our government as presenting simultaneously a race to the moon, an arms race, and a talent war,” suggests Joshua Klayman, US Head of Blockchain and Digital Assets at Linklaters. “At the federal level, Biden’s sweeping AI executive order should be viewed as a North Star directing various agencies to act, rather than setting forth binding regulations.”

Financial regulators such as the US Securities and Exchange Commission have begun to tackle AI-related risks, including through enforcement actions, while the Commerce Department is poised to play a key role in setting forth national standards – in line with the executive order.

Nineteen states have passed laws to address deepfakes, while some, including New York, have introduced laws concerning online harms and limiting the use of addictive algorithms by social media platforms. But the most proactive state has certainly been California, which last month signed 18 of 38 draft AI bills into law.

The most controversial bill, however – SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act – was vetoed by Governor Gavin Newsom. The proposal would have represented a major shift in how individual states have sought to regulate AI to date – including by requiring developers to implement a ‘kill switch’ and undergo third-party compliance audits.

“Governor Newsom felt the bill wasn’t well-suited to the way the market is moving, focusing on the need for regulation to be nimble to keep up with technological advancements,” says Trillium’s Kron. “This really gets at the fundamental challenge that all public policy regulators face right now, which is that the industry is moving incredibly quickly, and democracy moves much more slowly. That delta feels enormously difficult and frustrating for anybody who believes that [it is] a good thing [for] public policy in this field to succeed.”

Having to accommodate a variety of rules and regulations globally does create “a lot of problems” for AI developers, companies and investors alike, Meehan argues. The tripartite, legally binding international treaty on AI signed by the US, EU and UK during a Council of Europe conference last month was intended in part to ease those pains.

“These are early days in terms of identifying the right approach, with some conflict even among the experts,” says Open MIC’s Connor. “Before we write a lot of regulation or legislation, we should make sure we’re addressing the right issues and understand them correctly – and a lot more research has to be done for that.”
