Artificial Intelligence
12-12-2025

The NSA and CISA Just Confirmed Why Intelligent Trust Matters More Than Ever

Lakshmi Hanspal

On December 3, 2025, the NSA, CISA, Department of Energy, and several other international cyber agencies released new guidance on safely integrating AI into operational-technology (OT) environments. Their message is clear: As AI becomes embedded in industrial control systems, critical infrastructure, and automation pipelines, organizations must establish strong identity, robust verification, and continuous trust—or risk introducing vulnerabilities at unprecedented scale. 

At DigiCert, we welcome this guidance because it validates something we’ve been helping customers prepare for: AI can only be trusted with cryptographic identity at its core. 

AI in OT introduces a new class of identity and integrity risk

The new NSA/CISA guidance reinforces that AI introduces risks that are fundamentally different from traditional automation. These risks include: 

  • AI agents making autonomous decisions 
  • Exposure to model manipulation or spoofed inputs 
  • Incorrect or unsafe automated actions with direct physical consequences 
  • Difficulty validating the authenticity of AI-generated outputs 

These concerns mirror patterns already emerging across enterprise and industrial environments. AI isn’t just another workload—it’s becoming a decision-maker, interacting directly with machines, sensors, networks, and other agents. Without secure identity and verified authenticity, even a well-trained model can become an attack vector or a source of operational instability. 

AI is expanding the threat landscape, and trust must expand with it.

Identity is the root of trust for AI agents, devices, and OT systems

The guidance calls for strong authentication, data integrity protection, and auditability across AI components—requirements that map directly to longstanding trust principles in OT security. This is where DigiCert plays a key role. 

Public key infrastructure (PKI) provides: 

  • Cryptographic identity for AI agents, OT devices, and control systems 
  • Non-repudiation for autonomous or semi-autonomous actions 
  • Integrity assurance for models, data pipelines, and distributed decision flows 
  • Secure authentication across human, machine, application, and AI systems 

Whether AI is making micro-adjustments to manufacturing equipment, optimizing energy distribution, or interpreting sensor data, each action must originate from a trusted, verifiable agentic identity. 
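
To make that concrete, here's a minimal sketch, in Python using the open-source `cryptography` library, of what a certificate-backed, signed agent action can look like. The agent name, command payload, and in-memory key are illustrative assumptions for this sketch, not part of any DigiCert API:

```python
# A minimal sketch of a certificate-backed, signed agent action using the
# open-source `cryptography` library. The agent name, command payload, and
# in-memory key are illustrative assumptions, not part of any DigiCert API.
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# In production this key would live in an HSM or TPM and be bound to an
# X.509 certificate issued by the organization's PKI; generated here inline
# only to keep the sketch self-contained.
agent_key = ec.generate_private_key(ec.SECP256R1())

# A hypothetical action the AI agent wants to take against an OT device.
setpoint_command = json.dumps(
    {"agent": "line-3-optimizer", "device": "valve-17", "setpoint": 42.5}
).encode()

# Sign the command so downstream systems can attribute it to exactly one
# agent identity (non-repudiation).
signature = agent_key.sign(setpoint_command, ec.ECDSA(hashes.SHA256()))

# A controller verifies the signature with the public key from the agent's
# certificate before acting; verify() raises InvalidSignature on tampering.
agent_key.public_key().verify(signature, setpoint_command, ec.ECDSA(hashes.SHA256()))
print("command verified: signer identity and integrity check out")
```

Because the signature can be tied back to the agent's certificate, operators get authentication, integrity, and non-repudiation for every automated action.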

Automation and lifecycle management: Essential for safety and scalability

The NSA/CISA guidance also emphasizes the need to reduce human error, maintain oversight, and ensure continuous protection throughout the lifecycle of AI systems. These requirements become especially important in OT environments, where AI systems may operate faster than humans can respond and where mistakes can carry safety or availability consequences. 

DigiCert’s investments in automation, policy enforcement, and lifecycle management directly support these needs. Automated certificate issuance and renewal, centralized policy controls, and scalable lifecycle operations through DigiCert ONE and Trust Lifecycle Manager help organizations maintain trust at the speed and scale AI introduces—without adding operational burden. 

In environments where AI systems adapt, learn, and make decisions dynamically, trust can’t be established manually. It must be continuous, automated, and embedded across the lifecycle.
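
As a rough illustration of the kind of check such automation runs continuously, the sketch below flags a certificate once it enters its renewal window. The self-signed certificate, subject name, and 30-day window are stand-ins for this example, not the behavior of DigiCert ONE or Trust Lifecycle Manager:

```python
# A minimal sketch of the renewal-window check behind automated certificate
# lifecycle management, using the `cryptography` library (>= 42 for the
# not_valid_after_utc property). The self-signed certificate, subject name,
# and 30-day window are illustrative stand-ins.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "line-3-optimizer")])
now = datetime.datetime.now(datetime.timezone.utc)

# Self-signed here purely for the sketch; a real deployment would receive
# this certificate from an enterprise CA.
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))
    .sign(key, hashes.SHA256())
)

def needs_renewal(cert: x509.Certificate, window_days: int = 30) -> bool:
    """Return True once the certificate enters its renewal window."""
    remaining = cert.not_valid_after_utc - datetime.datetime.now(datetime.timezone.utc)
    return remaining <= datetime.timedelta(days=window_days)

# An automated lifecycle service would run this check continuously and
# trigger reissuance well before expiry, so trust never lapses mid-operation.
print("renew now" if needs_renewal(cert) else "certificate healthy")
```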

Automation isn’t just an efficiency gain—it’s a safety requirement.

AI-era trust: Extending security to the next generation of autonomous systems 

AI is no longer limited to analytics or cloud workloads. It’s interacting directly with physical systems, energy grids, transportation networks, and manufacturing lines. As organizations adopt AI-enabled operations, they must begin treating AI agents as first-class identities—just like people, devices, or applications. 

DigiCert is already helping customers: 

  • Assign certificates to AI models, agents, and decision engines 
  • Protect training data, model integrity, and update pipelines (see the sketch after this list) 
  • Validate every action through signed, auditable identity 
  • Build secure, post-quantum–ready cryptographic foundations
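
As a simple illustration of the model-integrity item above, the sketch below signs a model artifact at publish time and verifies it before load. The pipeline key and byte payload are hypothetical stand-ins, again using the open-source `cryptography` library:

```python
# A minimal sketch of signing a model artifact at publish time and verifying
# it before load. The pipeline key and byte payload are hypothetical
# stand-ins; Prehashed lets large artifacts be hashed in chunks first.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, utils

pipeline_key = ec.generate_private_key(ec.SECP256R1())
model_bytes = b"\x00" * 1024  # stand-in for serialized model weights

# Publisher side: hash the artifact, then sign the digest.
digest = hashes.Hash(hashes.SHA256())
digest.update(model_bytes)
signature = pipeline_key.sign(
    digest.finalize(), ec.ECDSA(utils.Prehashed(hashes.SHA256()))
)

# Consumer side: recompute the digest and verify before loading the model.
check = hashes.Hash(hashes.SHA256())
check.update(model_bytes)
try:
    pipeline_key.public_key().verify(
        signature, check.finalize(), ec.ECDSA(utils.Prehashed(hashes.SHA256()))
    )
    print("model integrity verified: safe to load")
except InvalidSignature:
    print("model rejected: signature does not match artifact")
```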

We believe the future of AI depends on trust—and trust depends on verifiable identity. 

A turning point for operational technology

The NSA/CISA guidance is more than a policy document; it marks a critical inflection point for how operators, manufacturers, utilities, and other OT-dependent organizations must approach intelligent automation. AI is expanding operational capability, but it’s also expanding the risk surface. Organizations that establish strong identity, maintain continuous trust, and embed integrity protections throughout the lifecycle will be positioned to deploy AI safely, confidently, and at scale. Those that delay may face widening gaps in resilience, compliance, and system reliability as AI adoption accelerates. 

At DigiCert, we’re committed to helping critical-infrastructure providers meet these expectations with solutions that strengthen identity, automate trust, verify authenticity, and maintain lifecycle protections, all built on cryptography that’s ready for the post-quantum era. This is the future the guidance anticipates, and it’s the future DigiCert is building with our customers today. 

Explore DigiCert ONE to see how we’re helping organizations establish the identity, automation, and lifecycle trust AI now demands. 
