AI Security Baseline Playbook: My Take on ETSI TS 104 223
Cut through the 72-control checklist and operationalize ETSI TS 104 223
AI Security just moved from cocktail-party theory to a numbered checklist. ETSI TS 104 223 bundles thirteen life-cycle principles into seventy-two baseline requirements, giving every CISO and Chief AI Officer a benchmarking tool. Use it and you’ll show the board hard evidence that security keeps pace with your fastest LLM rollout. (SecurityBrief UK)
Why ETSI TS 104 223 Deserves Space on Your Roadmap
ETSI’s spec lands at a perfect inflection point as regulators tighten AI laws while engineering teams race to ship features. The standard covers design through retirement and calls out threats older guidance glossed over like prompt injection, data poisoning, model obfuscation, and more.
I like that ETSI kept jargon light yet demanded proof. Every principle (e.g., Secure the Supply Chain) maps to explicit control evidence. Think model SBOMs, red-team reports, logging formats so auditors don’t settle for jedi mind-tricks The committee also promised a small-enterprise implementation guide, hinting that adoption will stretch beyond large corporation.
Data point: The spec’s public draft drew more than 400 written comments in eight weeks, double the traffic received by ETSI’s last network-security release, signalling broad industry curiosity.
How Heavy Is the Lift? Compliance Math in Real Life
72 controls look scary until you stack them against what you already run. If you follow ISO 27001 or NIST CSF, half of the list maps straight across like identity, logging, change control. The net new asks live in three buckets:
Model provenance: dataset lineage, hyper-parameter history, signed artefacts.
Adversarial stress testing: concrete red-team evidence, not “we planned a test.”
Run-time policy enforcement: per-tenant guardrails with tamper-proof logs.
The biggest cost centers are the stress tests and run-time guardrails. Some companies I advise estimate up to an extra $350K dollars for year-one adversarial testing (external talent plus GPU time). SaaS firms already using attack-simulation tooling peg the uplift closer to $100 K dollars.
For smaller shops, ETSI throws them a bone: TR 104 128 walks through minimal-viable evidence collections and flags controls that can be deferred until revenue or risk grows.
My verdict: painful but doable. Any org that embraced CI/CD can fold most tasks into existing gates. The ones that still treat model files like dark magic will scream.
Where the Spec Stops Short and Why That Matters
I applaud ETSI for publishing, but gaps remain:
Agentic and self-modifying systems: The document speaks to static models. It stays silent on agents that write or fine-tune their own code overnight.
Third-party composite models: Principle 7 touches supply chain, but guidance ends at “evaluate external components,” with no weightings for closed-source APIs.
Continuous alignment monitoring: The spec mandates drift alerts yet leaves scoring thresholds to implementers, which means variance across industries.
Legal interplay: Traceability guidance in TR 104 032 dives into IP rights but never reconciles with open-source licensing or copyright chaos in generative training sets.
These omissions won’t tank the standard, but leaders must back-fill with complementary frameworks (NIST AI RMF for governance town planning, ISO 42001 for transparency, CARE RUN rules for day-to-day ops).
Practical Integration: Applying CARE Without Slowing Sprints
Create phase
Embed model cards as regulated artefacts. Train teams to log dataset lineage the same way they log code commits. Link those records to CARE “Create” deliverables.
Adapt phase
Feed adversarial-testing results into a risk queue scored by your AI risk-management engine. Pull high-risk items into sprint triage, not after-hours heroics.
Run phase
Adopt zero-trust for model inputs. Gate every API call through a policy engine that can inject sanitizers or deny malicious prompts. ETSI’s language is blunt: keep a trail for each invocation.
Evolve phase
Schedule red-teaming as code. Integrate purple-team attack libraries into CICD so you test with every model update. Align this with RISE “Evolve” loops to show executives closed-loop learning.
Tackle controls in bursts:
Tag existing artefacts that already satisfy a control.
Automate evidence collection where possible.
Run a quarterly gap audit until coverage hits eighty percent.
Take Action
Commission a 30-day gap analysis against ETSI TS 104 223. Use a cross-functional task force of security, data, and legal teams.
Update your vendor security schedule. Require new suppliers to show an ETSI alignment roadmap within ninety days of contract start.
Budget for adversarial testing. Treat it like penetration testing for models. Set a board-level OKR: “zero critical findings older than ninety days.”
Align with EU signals. ETSI plans to transpose the spec into a European Norm, making it a likely reference point for EU AI Act Article 15 enforcement.
Publish progress. Quarterly dashboards with control coverage percentages beat “we’re working on it” every time.
Key Takeaway: ETSI TS 104 223 gives leaders the first globally recognised AI Security scorecard; adopt it early and turn compliance cost into competitive speed.
Call to Action
Want an honest look at your readiness? Book a Complimentary Risk Review and let’s map ETSI’s to your reality without choking your release cycle.