Security

ETSI issues standard for AI model security

By Ray Le Maistre

Jan 16, 2026

  • Every company wants to make use of AI’s capabilities
  • But security is an ongoing and major concern
  • Specifications body ETSI has published a globally applicable standard for AI model and system security, addressing threats such as data poisoning and model obfuscation

Building on its initial set of cybersecurity requirements for AI systems, ETSI has published a new, enhanced standard that “provides baseline cybersecurity requirements for AI models and systems” and which the industry specifications body claims is the “first globally applicable European Standard (EN) for AI cybersecurity”. 

The standard has been developed to provide a framework to “shield AI systems from growing and increasingly sophisticated cyber threats,” and “will be instrumental for stakeholders throughout the AI supply chain, from vendors to integrators and operators”, noted ETSI in its announcement.

Its scope covers AI systems “incorporating deep neural networks, including generative AI, and is developed for systems intended for real-world deployments.” And for telcos, 2026 is set to be a key year for real-world AI deployments that can advance the strategies that have been in development for the past few years but which are still in their very early stages in many cases – see Agentic AI “not even a teenager” – Orange exec.

The ETSI standard “guarantees a mature, structured and lifecycle-based set of baseline security requirements for AI models and systems… [it] defines 13 principles and requirements across five phases: Secure design, secure development, secure deployment, secure maintenance, and secure end of life.” 

ETSI notes that AI represents a “distinct cybersecurity challenge that traditional software has not offered. Traditional software introduced the world to the need for cybersecurity awareness. Today the risks emerging from AI require cyber defences that account for these new and unique characteristics. These risks include data poisoning, model obfuscation, indirect prompt injection, and vulnerabilities created by complex data management and operational practices,” it added. 
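To make one of those risks concrete, here is a minimal, hypothetical sketch of data poisoning — it is not drawn from the ETSI standard itself. A trivial nearest-centroid classifier is trained on clean 1-D data, then an attacker injects a handful of mislabelled samples, dragging a class centroid far enough to flip a prediction:

```python
# Illustrative sketch of label-flip data poisoning (hypothetical data,
# not from ETSI EN 304 223): a nearest-centroid classifier on 1-D points.

def centroid(points):
    return sum(points) / len(points)

def train(data):
    """data: list of (x, label) pairs; returns per-class centroids."""
    classes = {}
    for x, label in data:
        classes.setdefault(label, []).append(x)
    return {label: centroid(xs) for label, xs in classes.items()}

def predict(model, x):
    # Pick the class whose centroid is closest to x.
    return min(model, key=lambda label: abs(model[label] - x))

clean = [(0.0, "neg"), (1.0, "neg"), (9.0, "pos"), (10.0, "pos")]
model = train(clean)
print(predict(model, 7.0))    # "pos" — closest to the positive centroid

# The attacker injects a few points with flipped labels, pulling the
# "neg" centroid from 0.5 up to 5.4 and reversing the decision.
poisoned = clean + [(8.0, "neg"), (8.5, "neg"), (9.5, "neg")]
model_p = train(poisoned)
print(predict(model_p, 7.0))  # now "neg"
```

The example is deliberately toy-sized, but the mechanism — corrupting training data rather than attacking the deployed model — is exactly why the standard's lifecycle approach starts at secure design and development rather than at deployment.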

The standard, which sports the snappy name of ETSI EN 304 223, has been reviewed and approved by multiple national standards organisations, “giving it a broader international scope and strengthening its authority across global markets,” noted ETSI. 

The standard “represents an important step forward in establishing a common, rigorous foundation for securing AI systems”, stated Scott Cadzow, chair of ETSI’s Technical Committee for Securing Artificial Intelligence. “At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated. The work that went into delivering this framework is the result of extensive collaboration and it means that organisations can have full confidence in AI systems that are resilient, trustworthy and secure by design.” 

- Ray Le Maistre, Editorial Director, TelecomTV
