Landmark regulations seek to take control of AI

Two important regulatory milestones were reached this month with the publication of a new ISO/IEC standard and the EU’s new AI Act.

The growth in popularity of generative AI tools like ChatGPT has pushed regulators. Picture: Pixabay

The EU's AI Act, the first comprehensive AI regulation anywhere in the world, was finalized last week after a 36-hour negotiating marathon.

Under the terms of the new AI law, developers such as OpenAI will face transparency requirements, and the use of AI in facial recognition systems will be heavily restricted. Also this month, a new international standard, ISO/IEC 42001 (artificial intelligence management system, or AIMS), was published.

ISO/IEC 42001 provides a certifiable AIMS framework in which AI systems can be developed and deployed as part of an AI assurance ecosystem. According to ISO, the new standard is “intended to help the organization develop, provide or use AI systems responsibly in pursuing its objectives and meet applicable requirements, obligations related to interested parties and expectations from them.”

The huge growth in AI, particularly over the last year with the popularity of generative AI tools like ChatGPT, has pushed legislators and standards developers to look for ways to ensure the technology is developed and used responsibly.

The European Parliament will vote on the AI Act in early 2024, with the legislation set to come into effect in 2025. Meanwhile, the US, China and the UK are also looking to introduce their own AI legislation. EU Commissioner Thierry Breton called the EU AI Act "historic", saying it provides "clear rules for the use of AI".