Can standards save the world from ‘out-of-control’ AI?

International standards organizations insist that standards and conformity assessment can help ensure the responsible and safe development of artificial intelligence (AI).

AI has been in the news a great deal recently following the launch of AI-powered chatbots like ChatGPT.

While the cognitive abilities of ChatGPT have left many marveling, there have also been widespread concerns about how the technology could be misused.

In a recent open letter, the Future of Life Institute (FLI), a global non-profit, expressed concerns over the potential risks of advanced AI systems.

In its open letter, the FLI noted that AI could represent “a profound change in the history of life on Earth,” and as such needs to be “planned for and managed with commensurate care and resources.”

However, it said: “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

In response, the standards organizations IEC, ISO and ITU released a joint statement defending the role of standardization in helping to mitigate the dangers posed by AI.

According to the agencies, international standards can provide appropriate guidelines for responsible, safe and trustworthy AI development. In their response, the agencies said that their work helps mitigate many of the risks associated with AI systems and can underpin regulatory frameworks.

IEC, ISO and ITU together make up the World Standards Cooperation (WSC), a joint body set up 20 years ago to strengthen and advance their voluntary, consensus-based international standards systems.

In their statement, published on the WSC's website, the organizations pointed to some of their work in advancing standards around AI.

This includes ITU's AI for Good initiative, which helps stakeholders align AI innovation with the UN Sustainable Development Goals and involves the cooperation of 40 partner UN agencies.

IEC and ISO, meanwhile, have a joint technical committee developing standards on AI, known as SC 42. The committee covers the entire AI ecosystem, including governance, risk management, ethical considerations and terminology.