← Insights

Securing Intelligence: AI’s Rise as a National Infrastructure Concern

Advisory to Enforceable Guidelines for the Deployment of AI at a State Level

AI is rapidly transitioning from an emerging innovation challenge to a central national security and regulatory concern. Governments are moving beyond ethical principles and voluntary codes of conduct toward enforceable guidelines, mandating standards, and a full-spectrum regulatory environment. The past several months, as tracked by WA’s Atlas Newsletter coverage, mark a decisive point in how nations approach AI deployment. This transition is occurring as recognition of AI system’s unprecedented capabilities and risks continue to evolve.

Effective regulation now requires translating ethical expectations into enforceable obligations, including enforceable governance mechanisms that mandate accountability, transparency, and oversight throughout the AI lifecycle. Developers must conduct appropriate risk assessments, implement model audits, maintain incident reporting pipelines, and provide technical documentation on model training and outputs. Standards such as the OECD AI Principles, EU AI Act, and NIST AI Risk Management Framework are increasingly shaping regulations whilst national AI strategies are including explicit references to cybersecurity controls, such as robust identity verification, encryption-by-default, real-time anomaly detection, and monitoring of model drift. There is also a growing consensus that an AI Bill of Material (AIBOM) is necessary to ensure the security risks associated with the AI supply chain are mitigated to protect the vast amount of data incorporated into open-source models and datasets.

Implementation necessitates a critical layer of enabling infrastructure. Governments are prioritising investment in sovereign AI capabilities, including national compute resources, large-scale model evaluation labs, and independent AI safety institutes, supporting the deployment of secure-by-design AI, integrating cybersecurity principles directly into model development pipelines. Ultimately, states are beginning to treat AI as a foundational technology, where security is mandated, and the appropriate infrastructure must be built to enforce compliance.

Securing the AI development lifecycle presents challenges, facing threats such as data poisoning and adversarial attacks that may compromise the model’s integrity. Therefore, the AI development process should be separated into distinct phases to counter vulnerabilities at each stage through specialised security checkpoints. Organisations may for example utilise emerging AI security solutions or simulate defence systems for their AI models utilising emerging tooling such as Gen-AI penetration testing or AI Red Teaming pre/post-deployment to identify vulnerabilities.

AI as a National Regulatory Priority

Many leading nations have been laying the groundwork for AI governance for years framing AI as an economic opportunity. Countries like the US, UK, France, and Germany launched national AI strategies and ethical framework guided by principles such as OECD. However, since October 2024, the strategic direction of government publications has changed surrounding AI, shifting the discussion from the responsible use of AI to constructing the legal, technical, and institutional frameworks. The following country analysis demonstrates AI standards gradually maturing into enforceable policies.

In the US, the pivot was marked by the October 2024 National Security Memorandum on AI, which formalised centrality to national defence and global tech competition. This memorandum and expanded oversight powers granted to agencies like NIST and the Department of Commerce, set the US on a path where AI regulation tied directly to strategic deterrence, supply chain protection, and public safety. Moreover, in 2024, nearly 700 AI-related legislative proposals were introduced by US states with 113 being enacted into law. These bills focus on high-risk AI uses, digital replicas, Deepfakes, as well as government AI applications and 2025 is set to see 45 states addressing AI legislations.

The UK followed a similar trajectory. By the end of 2024, the UK had announced plans to introduce legislation targeting developers of powerful AI models. The outcomes of the UK’s Frontier AI Taskforce and the Bletchley Park AI Safety Summit underpinned the shift, where the UK positioned itself as a global broker of international norms. On March 4, 2025, the Artificial Intelligence (Regulation) Private Members’ Bill was re-introduced into the House of Lords, if enacted the Bill would require the creation of an AI Authority that would become a new regulatory body for AI. Governance of AI is now embedded in the UK’s broader national security and foreign policy agenda.

European leaders, particularly France and Germany, deepened their commitment to structured AI regulation through their stewardship of the EU AI Act, which is moving toward enforcement. Pre-October, their focus remained on ethical AI development and industrial competitiveness. But post-October 2024, both nations began preparing national enforcement mechanisms for the Act and investing in AI oversight authorities and infrastructure. Germany for instance emphasised AI’s role in cybersecurity and data integrity through BSI and digital sovereignty programs.

The post-October 2024 period is defined by a convergence of rhetoric and regulation. AI is no longer viewed as a niche technological issue but considered a pillar of national policy. Risk mitigation is being operationalised through enforceable guidelines, institutional expansion, and increased public investment in regulatory infrastructure. These developments indicate a geopolitical shift that the governance of AI is no longer confined to advisory bodies or academic forums but is taking root in legislation, defence planning, bilateral treaties, and budgetary commitments. Countries are no longer asking if AI should be governed but rather determining how swiftly and how strictly that governance should be applied.

The Future Direction of AI Cybersecurity and its Supporting Infrastructure

AI is continuing to be deployed across critical sectors and systems, from energy grids to public services, expanding the need for resilient cybersecurity solutions. Traditional cybersecurity architectures are being re-engineered to secure AI-specific assets including model integrity protection, data poisoning defence, robust red-teaming protocols, and AI specific intrusion systems. Moreover, securing model supply chains, from training data and hardware to APIs and deployment endpoints is now a central policy concern.

Countries are expanding their AI infrastructure footprint. Central to this expansion is the growth of high-security, high-performance data centres. These facilities are not only needed to train and deploy large AI models but also host government-run model testing environments, regulatory sandboxes, and national AI observatories. The UK through the forthcoming introduction of the Cyber Security and Resilience Bill, has officially designated data centres as a Critical National Infrastructure, in part due to the increasing reliance on AI. Additionally, both Canada and the UK have committed public funds to build sovereign compute capacity reducing reliance on foreign hyper-scalers, and the EU has proposed a ‘trusted cloud’ framework to ensure that AI workloads comply with regional data protection and cybersecurity laws.

The elevation of AI to a national strategic priority demands a convergence of cybersecurity, data governance, and operational resilience. As AI systems are embedded across critical infrastructure - spanning energy, healthcare, transportation, and public services - the institutions responsible for data protection, AI compliance, and national cyber defence must collaborate more closely. This shift is driving the creation of joint oversight bodies, mandatory incident reporting regimes for high-risk AI systems, and real-time vulnerability disclosure mechanisms. Measures once limited to traditional critical infrastructure are now being extended to the AI ecosystem, including data centres and model deployment environments. With AI increasingly powering the systems that keep societies running, attacks on models and their supply chains are no longer theoretical—they represent tangible national-level risks. For both public and private sector leaders, securing AI must become a core operational concern, not a peripheral consideration.

Recommendations for executive teams and CISOs.

1. Align AI Security Strategy with National and Sectoral Regulations

Organisations should anticipate and prepare for compliance with enforceable AI regulations emerging across jurisdictions. This includes adopting AI-specific cybersecurity controls such as model integrity verification, monitoring for model drift, robust identity verification, and maintaining AI Bills of Materials (AIBOM). Executives and CISOs should map internal AI use cases to regulatory obligations in their region and sector (e.g., EU AI Act, UK AI legislation, U.S. federal and state-level laws), and ensure security-by-design is integrated from model development to deployment.

2. Treat Data Centres and AI Workloads as Critical Infrastructure

As AI compute becomes essential to both business operations and national resilience, organisations should evaluate their AI workloads and data centre dependencies through the lens of critical infrastructure protection. This includes investing in or partnering with high-security, regionally compliant data centre providers, enforcing encrypted data flows, and implementing model sandboxing or red teaming practices for high-risk use cases. Proactive engagement with regulators around the classification and treatment of AI infrastructure can position firms for long-term operational stability and compliance.

3. Establish Cross-Functional AI Risk Governance Boards

Given the convergence of cybersecurity, data governance, and AI compliance, organisations should establish or evolve governance boards that include CISOs, Chief Data Officers, compliance leaders, and legal counsel. These boards should oversee AI risk assessments, approve deployment of high-risk models, and manage incident response planning specific to AI threats. Cross-functional oversight will be essential to meet future mandatory reporting and disclosure requirements under emerging legislation.

Chat to us

*All fields required