
Inclusivity Key to Minimising AI’s Social Harms

Experts underscore need for broad dialogue on regulation and governance, after developing countries were largely left out of the AI Safety Summit. 

Efforts to limit social risks from artificial intelligence (AI) at last week’s AI Safety Summit could fall short without a more inclusive approach to regulation and oversight. 

Convened by UK Prime Minister Rishi Sunak, the summit brought together large technology firms and governments to address concerns over a wide range of potential social harms from use of the fast-evolving technology.  

But investors and industry experts have argued that the protection of human rights and the prevention of potential biases and abuses will depend on a wider dialogue.  

Simone Andrews, Responsible Investment and Governance Specialist at APG Asset Management, said that the current lack of “sufficient” governance for AI-enabled services and platforms risks impacting users’ “fundamental human rights”, including freedom of expression, privacy and the right to non-discrimination.  

There were approximately 150 representatives at the two-day summit, held at Bletchley Park, but developing world representation was noticeably scant.  

Kenya, Nigeria and Rwanda were the only three African nations in attendance. Just Brazil and Chile attended from South America and there were no delegates from Central America. 

Tomas van der Heijden, CEO at Berlin-based software firm Briink, told ESG Investor that if there is no input or influence from the Global South on how AI models are trained they are ultimately “not going to be tailored” to their needs or priorities. 

“When we talk about reliable, safe ethical AI, we need to have more stakeholders in the room that really should have a stake in these discussions,” he added. “We need to not make it super elite or restrictive and open it up to a wider scope of society.” 

Importance of inclusive AI design 

Ahead of the event, UN Secretary-General Antonio Guterres cited research that said no African country is in the world’s top 50 for AI preparedness, and that 21 out of the 25 lowest scoring countries were African. 

Tess Buckley, AI Ethics Senior Analyst at AI-powered ESG disclosure platform EthicsAnswer, pointed to the scant representation of Africa at the summit as an example of “prioritising and valuing some opinions rather than others”. 

“It’s just not their priority and, to be frank, how would they fully understand all the harm being caused because no one who’s actually being harmed is in the room to discuss it?” Buckley added. 

Van der Heijden noted that the majority of big tech companies involved in AI are based in the West or China. AI firms in these regions train large AI models which are then “used by everybody”. 

Buckley said that if models are built without including developing regions or minorities, it risks “reinforcing [and] exacerbating negative social patterns”. 

Van der Heijden described the push towards safer AI and the need for more human oversight as “critical” issues. In July, Briink partnered with Berlin-based AI startup Nyonic to develop “safe and trustworthy” AI models that comply with Europe’s new privacy and safety standards. 

If AI systems are developed and used without appropriate levels of human involvement or oversight there is a risk of “lots of inaccurate outputs”, which would result in a “lack of trustworthiness and reliability” from the AI, he warned. 

APG Asset Management’s Andrews noted “growing concern” around the lack of sufficient governance for AI-enabled services and platforms and warned that the risk from the “unbridled deployment of AI is huge”.  

“Worst case, we could see parts of society struggle from loss of job opportunity, social stigmatisation, economic loss especially from communities already on the margins,” she added. 

Andrews said that companies should be encouraged to adopt a “clear approach” to ethical standards for the development and deployment of AI, based on policies that centre on human rights. 

Buckley underscored the need to make civil society aware of and involved in discussions around AI, rather than relying solely on individuals who claim to represent the most vulnerable. 

She also noted that due diligence by investors and other stakeholders is required to make sure AI is designed ethically and responsibly. “If it is not, it isn’t sustainable [or] future proof,” she added. 

In a recent article for ESG Investor, Buckley identified the development of frameworks and guidance aimed at promoting an ethically focused approach to designing AI-based solutions. 

Slow to surmount social risks 

The AI Safety Summit saw 28 countries sign the Bletchley Declaration, recognising the “urgent need” to understand and collectively manage potential AI risks. However, the initial set of signatories was dominated by Western countries, China and other developed nations. 

The statement noted “the importance of a pro-innovation and proportionate governance and regulatory approach”, as well as inter-country collaboration on “common principles and codes of conduct” and “increased transparency by private actors”. 

The summit also launched the AI Safety Institute, which will be responsible for testing new types of frontier AI, before and after they are released, to address risks, from social harms like bias and misinformation, to more extreme risks, such as humanity losing control of AI completely. 

The UK government agreed two partnerships, with the US AI Safety Institute and the Government of Singapore, to collaborate on AI safety testing.  

Joe Lamming, Industry Analyst at research and advisory firm Verdantix, said that the Bletchley Declaration and AI Safety Institute only moved the needle on policy to address social risks “very little”. 

“It is very hard to tell if they are taking social risks seriously,” he added. “I feel like they’re a little bit too focused on the technology side.” 

Buckley acknowledged that the summit saw the industry “getting on the same page” and called the Bletchley Declaration a “step forward in public agreement”.  

“I’m curious to see changes on the ground when we’re talking about social impact and harms,” she added. 

The Center for AI and Digital Policy also welcomed the Bletchley Declaration, but emphasised the need to balance AI safety discussions with the AI fairness agenda, focusing on human-centric and trustworthy AI that protects fundamental rights. 

Recent policy action includes the G7 Code of Conduct and an executive order from US President Joe Biden setting out standards for AI safety and requirements for developers and users of AI systems. This follows the EU’s Artificial Intelligence Act, which was passed by the European Parliament in June to harmonise the region’s rules on AI systems. 

Lamming suggested that there is a “huge gap” in regulation for smaller AI models. While he noted their potential for innovation, he also highlighted that they make it “much easier” to create misinformation and produce harmful content. 

“With the small foundation models, you can fine tune them in such a way that they will always give you an answer, even if it is harmful,” he added. 

Van der Heijden said a “big concern” was strict regulations on AI hindering the development and accessibility of open-source technology. 

“We do need more regulations, particularly to drive this push towards safer and more reliable and trustworthy AI,” van der Heijden added, but cautioned against a “one size fits all solution”. 

