
Is AI the Answer to Board-level Lack of ESG Knowledge?

Helle Bank Jørgensen, Founder of Competent Boards, and Dr Henning Stein, Finance Fellow at Cambridge Judge Business School, explain how the technology can empower boards to address sustainability challenges.

ESG considerations have been rising up corporate agendas for years, yet meaningful knowledge of them often remains poor at the highest levels. A recent study by Competent Boards and Copenhagen Business School provides alarming evidence of this problem.

According to an analysis of publicly available data on Fortune 500 companies in the US and Europe, the boards of many of the world’s largest businesses still lack sustainability expertise. Incredibly, some appear to have no relevant competences whatsoever.

Just 2% might be described as ‘sustainability superstars’, which is to say their boards possess substantial sustainability acumen. This represents a shockingly small minority in an age when financial performance is increasingly affected by not-yet-financial factors.

The most obvious way of dealing with this issue is for board members to undertake training explicitly focused on sustainability, governance and stewardship. This allows them to develop a necessary understanding of all the risks, opportunities, standards and regulations in this space.

But might technology also have a part to play? In light of the tremendous advances witnessed during the past few years, would a board’s approach to sustainability and ESG be enhanced if there were a seat at the table for artificial intelligence (AI)? Could progress in fields such as generative AI, natural language processing and machine learning assist in tackling major challenges such as climate change, social equity and ethical governance?

We believe so. With many boards knowing far too little about sustainability and ESG, there is a compelling case for utilising a form of tech that seems to know everything. It is vital to understand, though, that such an approach must be conditioned by a number of significant caveats.

Processing power fuels diversity of thought

No-one could reasonably deny that AI is able to draw on an epistemic base far greater than that of the average board member. It is unbeatable in terms of acquiring and processing information.

What may be less appreciated is that it is also capable of producing genuinely novel ideas. This supports diversity of thought, which is today rightly recognised as a fundamental characteristic of effective management.

We can illustrate this by briefly switching to another kind of board – the one on which Go is played. Go was invented around 2,500 years ago, but arguably the most dramatic moment in its long history came when AI entered the fray. In 2016, in a match against leading player Lee Sedol, Google’s AlphaGo program famously made a never-before-seen move that went on to transform how humans play the game.

It hardly need be said that AI has come a long way even since that encounter. If it could exhibit disruptive thinking with regard to Go eight years ago, we should not be surprised if it comes up with game-changing concepts in the sphere of sustainability today.

For example, what might happen if a board’s audit committee were to ask AI to highlight a full range of ESG-related financial risks? By analysing multiple datasets, AI could quickly uncover previously overlooked correlations between environmental practices and long-term performance. These findings, in turn, could shape more robust strategies for risk management.

Similarly, a board might ask AI to simulate various sustainability scenarios to help determine how best to reduce a business’s carbon footprint. Here, too, positive and far-reaching change could stem from innovative insights that might otherwise never have emerged.

The threat of GIGO

This is not to say AI will invariably be correct. Quite the contrary: it can supply disturbing proof of one of the most reliable adages in computing – garbage in, garbage out (GIGO).

Data scientist Cathy O’Neil, author of ‘Weapons of Math Destruction’, a New York Times best-seller, has described algorithms as “opinions embedded in code”. AI is far from immune to this failing, because it is a creation of humans – and humans tend to have biases.

Imagine, for instance, that every coder on the planet were to believe climate change is a hoax. This extreme predisposition would be reflected in an AI board member’s conclusions, rendering its contribution nigh on useless.

The prejudice does not even have to be so manifest. In the style of a butterfly effect, outputs might be skewed by a single byte of false data. Bias may also be unconscious or even accidental – a consequence of a fleeting lapse of judgement, a momentary oversight or insufficiently nuanced programming.

Take the researchers who used machine learning to try to teach a robot vacuum cleaner not to crash into things. The device was expected to grasp the importance of avoiding objects, but it instead began reversing into them – because its programming penalised it only for head-on collisions.

This underlines that someone has to tell an algorithm what is right. In other words, someone has to define ‘success’. This is comparatively easy in relation to a game of Go but altogether trickier in relation to a robot vacuum cleaner – so how hard might it be in the context of an arena as hugely complex as sustainability and ESG?

AI informs, humans decide

This brings us to trust. Just as it should underpin every engagement or transaction in the physical realm, trust is an essential bridge between AI and its users.

Unfortunately, it is by no means a given. Warning it could further polarise “perceptions of reality”, the World Economic Forum’s Global Risks Report 2024 identifies AI-generated misinformation/disinformation as the most serious threat to humanity over a two-year time horizon.

It follows that boards must treat AI outputs with a healthy scepticism. Applying critical scrutiny and bringing their own experience and intuition to bear, they need to remember not just what state-of-the-art tech does well but what they should do well themselves.

AI represents a resource of extraordinary and perhaps even unprecedented potency. It is a technology that absolutely excels at informing decisions. Yet its function is to empower rather than overpower, because the job of actually making decisions still falls to board members.

This is why blindly accepting AI’s interpretations and recommendations flies in the face of fiduciary duty. Such a responsibility cannot be delegated to a machine. Doing so amounts to an abrogation of accountability and could lead to outcomes that are unethical and even illegal.

It is therefore imperative to look on AI as a sort of co-pilot. It can act as a remarkable guide – in many ways probably the finest a board could ever wish for – but it is not an oracle. It is not a fount of irrefutable wisdom. It is certainly not the boss.

Omniscience – no; opportunity – yes

AI has near-instantaneous access to practically every shred of recorded knowledge. Its capacity to sift that vast accumulation of facts, figures and opinions is unparalleled. This alone could justify its presence on a board whose own awareness of a topic may be lacking.

While generic AI solutions such as off-the-shelf bots or a ‘standard’ version of ChatGPT might not suffice, the scope for customising AI tools to meet a board’s sustainability and ESG needs is considerable. Yet even if AI seems to know everything about sustainability and ESG – or any other subject we might care to name – it should not be feted as truly omniscient. There is an enormous difference between knowing about and knowing how, and that difference means human judgement remains indispensable.

AI should be used principally to carry out the ‘heavy lifting’ of board activity – the tasks Homo sapiens would require days, months or even years to accomplish. It can add valuable layers to short-term and long-term thinking. It can save time, energy and money. It may even generate views, proposals and innovations that its flesh-and-blood counterparts would never conceive.

To this extent, it really could revolutionise how boards address sustainability matters. It could help them ask the right questions and so arrive at the right answers. Even an enquiry as ostensibly simple as “What are our business’s biggest environmental risks?” or “How can we improve our sustainability practices?” could help unlock a company’s untapped ESG potential – provided the responses are used as building blocks for discussion, duly critiqued and cross-referenced with expert opinion.

And this is the most crucial point: AI must always be a servant rather than a master. Equally, it must never become an excuse for board members to keep wallowing in ignorance of sustainability and ESG – otherwise precisely that role reversal could all too easily occur.

Ultimately, as has long been the case with technology, the ideal course lies in blending the best of both worlds. By carefully combining their own talents with those of AI, boards can clearly further the cause of sustainability. At a time when continued high-level unfamiliarity with this field has never been more damaging, dangerous and indefensible, this is undoubtedly an opportunity that demands attention.

The post Is AI the Answer to Board-level Lack of ESG Knowledge? appeared first on ESG Investor.
