Growing demand for ‘responsible AI’ amid governance risk concerns

Investors should be prioritising discussions on AI governance, transparency and long-term resilience or face an ‘AI tax’ further down the line, according to Rita Wyshelesky, senior ESG analyst at Carmignac.

Wyshelesky explains in a note how ‘responsible AI’ has become a material issue for companies and investors, and therefore should be an engagement priority.

“AI has rapidly evolved from a niche technology to a central driver of corporate transformation, accelerated by the rise of Generative AI (GenAI),” she commented. “Its impact spans sectors: from financial firms using predictive analytics to optimise portfolios, to healthcare systems applying machine learning for early disease detection. Yet the same technologies that create value also introduce risks, including bias, misinformation, and systemic disruption.”

The 2025 Stanford AI Index reported a 56.4% increase in AI-related incidents in 2024, and deepfake content is projected to surge from 500,000 files in 2023 to more than 8 million in 2025, according to Carmignac research.

“With reportedly 71% of companies now regularly using GenAI, these trends underscore the urgency of adopting responsible AI frameworks that ensure innovation is aligned with safety, trust, and long-term value creation,” Wyshelesky added.

She also highlighted the “outsized reputational consequences” companies can face. For example, Google’s Bard demonstration error in 2023 wiped roughly $100bn from Alphabet’s market value within days, while Microsoft suffered similar issues in the same year when Bing’s chatbot sent users threatening or hostile messages, leading to a 4% share price drop.

“AI incidents can undermine customer trust and brand equity far quicker than traditional operational failures,” Wyshelesky noted.

Regulation is also having an impact, largely because it has been unable to keep pace with the dramatic developments in AI. The EU AI Act came into force in 2024. Wyshelesky said: “This stands out as one of the most comprehensive frameworks for AI governance, but remains largely reactive, emphasising risk classification and compliance over systemic accountability. And even that faces the possibility of dilution as a result of ‘big tech’ lobbying.”

Further, the US, UK and Asian markets have adopted their own voluntary frameworks, leading to inconsistent implementation.

“While regulations set minimum compliance standards, stakeholders increasingly expect organisations to uphold higher ethical benchmarks. This shift has led to growing demands for greater transparency, algorithmic fairness, and responsible environmental management.

“The question is no longer whether AI will reshape industries, but how responsibly that transformation will occur. In the event that large-scale unemployment emerges without effective economic reallocation, the introduction of an AI tax would become increasingly inevitable.”

Framework

In response to these advances in AI, and the attendant governance concerns, Carmignac has created a ‘Responsible AI’ framework to assess how companies’ use of AI affects risk and return. It analyses the design, deployment and oversight of AI systems, and how these respect human rights, ensure safety and build trust. The framework is divided into five pillars:

  • Fairness – AI systems should not systematically disadvantage individuals or groups (eg gender, race, age, or disability).
  • Explainability – Stakeholders should be able to understand, at the right level of detail, how an AI system arrives at its outputs.
  • Privacy – Systems should have robust data protection, constraints on the secondary use of data, and safeguards for individuals’ human rights.
  • Robustness, Security, and Safety – Mechanisms should be in place to manage risks, prevent undue harm, and allow systems to be overridden or decommissioned.
  • Accountability – Clear human accountability must sit above automated systems. Responsibilities for AI risk management, model validation and incident response need to be assigned at management and board level, and documented in policies, controls and escalation procedures.

“These values are integral to minimising operational and reputational risk. Poorly governed AI can result in biased credit decisions, unsafe medical recommendations or flawed trading signals, translating into an increase in legal exposure, remediation costs and regulatory penalties for companies,” Wyshelesky said.
