18 nations push for global ‘secure by design’ AI development

27 Nov 2023


Tech giants such as Microsoft, Google and IBM all contributed to the guidelines, which focus on keeping AI systems secure as the sector continues to grow rapidly.

A consortium of national cybersecurity organisations has released guidelines to support the secure development of AI systems.

The guidelines aim to ensure that AI systems are built to function as intended and to work without revealing “sensitive data to unauthorised parties”. The document was published by the UK’s National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA) and organisations from 16 other countries.

The guidelines also include contributions from research centres and companies involved in the development of AI, such as Google, IBM, Amazon, Anthropic, Microsoft and OpenAI.

The document claims that AI has the potential to bring many benefits to society, but that these systems need to be deployed in a secure and responsible way. It also highlights the additional cybersecurity risks that AI systems face, which fall under the heading of “adversarial machine learning”.

The guidelines say examples of these vulnerabilities include prompt injection attacks and “data poisoning”, which involves deliberately corrupting either the training data or the user feedback behind large language models. These attacks could be used to make AI systems perform unintended actions or to extract sensitive data, according to the document.
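To make the data-poisoning risk concrete, the short Python sketch below (our own illustration, not an example taken from the guidelines) trains a classifier on a synthetic dataset after an attacker has flipped the labels of a fraction of the training rows. The dataset, the model and the flip_fraction parameter are all assumptions chosen purely for demonstration.

```python
# Minimal, hypothetical sketch of label-flipping data poisoning:
# an attacker who can corrupt part of the training set degrades
# the model that is later trained on it. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip the labels of a random fraction of training rows, then train."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flipped = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flipped, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # accuracy on clean test data

for frac in (0.0, 0.2, 0.4):
    print(f"{frac:.0%} of training labels flipped -> "
          f"test accuracy {accuracy_after_poisoning(frac):.3f}")
```

Running the sketch typically shows test accuracy falling as the poisoned fraction grows, the kind of silent degradation the guidelines urge developers to monitor for across the AI supply chain.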

Secure by design

The document says security needs to be a “core requirement” when developing AI systems and urges developers to adopt a “secure-by-design” approach. This means ensuring security is factored in across the entire development life cycle of a product.

The guidelines break their security recommendations down into four key areas: design, development, deployment, and operation and maintenance.

The advice is wide-ranging and includes recommendations such as raising staff awareness around cybersecurity risks, monitoring AI supply chains, ensuring continuous protection of AI models and releasing AI products in a responsible way.

Paul Brucciani, a cybersecurity advisor at WithSecure, said the cybersecurity organisations involved in these guidelines worked “with impressive speed” to assemble this list of signatories, and compared the early days of AI to “blowing glass”.

“While the glass is fluid it can be made into any shape, but once it has cooled, its shape is fixed,” Brucciani said. “Regulators are scrambling to influence AI regulation as it takes shape.”

“It is interesting to note that responsibility to develop secure AI lies with the ‘provider’ who is not only responsible for data curation, algorithmic development, design, deployment and maintenance, but also for the security outcomes of users further down the supply chain.”

The rush to secure the AI sector

Various countries have been working to ensure that AI development is conducted safely, as the technology has skyrocketed in popularity this year.

At the start of November, various countries and the EU attended the AI Safety Summit in the UK and signed the Bletchley Declaration, which aims to establish “shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety.”

However, some experts have criticised the declaration and warned that it won’t have any real impact on how AI is regulated globally. Meanwhile, other initiatives to regulate this rapidly developing sector have been gaining momentum.

“The strict rules of the EU’s AI Act will have a big global impact especially [considering] that the AI Liability Directive (distinct from the AI Act) will create a ‘presumption of causality’ against AI systems developers and users, which would significantly lower evidentiary hurdles for victims injured by AI-related products or services to bring civil liability claims,” Brucciani said.

“China has similar initiatives relating to AI governance, though the rules issued apply only to industry, not to government entities.”


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com