Fortinet is shifting more data centre security into the infrastructure layer
As enterprises race to operationalise generative AI and scale high-performance computing (HPC), data centres are being redesigned around accelerated infrastructure—GPUs for training and inference, high-throughput networking, and increasingly, specialised processing units that offload networking and security. In that context, Fortinet’s move to bring its AI-driven data centre security capabilities onto NVIDIA DPUs (data processing units) signals a broader industry pivot: security controls are being pushed closer to where data moves, rather than bolted on after the fact.
The announcement centres on integrating Fortinet’s security and networking functions with NVIDIA’s DPU-based architecture—technology commonly used to offload and accelerate network, storage, and security tasks. The aim is to support modern workloads—especially AI clusters—while preserving performance, simplifying operations, and strengthening segmentation across increasingly complex environments.
Why DPUs matter for modern AI infrastructure
Traditional data centre security stacks often rely on host-based agents, in-line appliances, or virtualised network functions running on general-purpose CPUs. That model can become a bottleneck when organisations deploy AI workloads that demand low latency, east-west traffic visibility, and massive throughput between compute nodes.
DPUs are designed to handle infrastructure tasks that would otherwise consume CPU cycles, including packet processing, encryption, telemetry, and policy enforcement. By moving these functions onto a DPU, organisations can:
- Preserve CPU capacity for AI training, inference, and application workloads.
- Improve determinism by reducing contention between security services and application compute.
- Strengthen isolation because policy enforcement can occur outside the host OS, reducing the blast radius of a compromise.
- Scale more cleanly as clusters grow and traffic patterns shift.
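To make the enforcement point concrete: at its core, the per-packet decision a DPU makes at line rate can be thought of as a lookup against a flow policy table, with anything unmatched dropped by default. The sketch below is purely illustrative Python, not Fortinet or NVIDIA code; the node names, port, and rule format are hypothetical.

```python
# Toy sketch of flow-based policy enforcement, the kind of per-packet
# decision a DPU can make outside the host OS. All names and rules here
# are illustrative; real DPU policy tables live in hardware/firmware.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str
    dst: str
    port: int

# Explicit allow rules; any flow not matched is dropped
# (default-deny, consistent with zero trust-style segmentation).
ALLOW = {
    Flow("gpu-node-1", "gpu-node-2", 29500),  # hypothetical training traffic
    Flow("gpu-node-2", "gpu-node-1", 29500),
}

def enforce(flow: Flow) -> str:
    """Return the action for a flow: 'allow' if it matches a rule, else 'drop'."""
    return "allow" if flow in ALLOW else "drop"

print(enforce(Flow("gpu-node-1", "gpu-node-2", 29500)))  # allow
print(enforce(Flow("gpu-node-1", "storage-1", 445)))     # drop
```

Because this lookup runs on the DPU rather than in the host kernel or a security agent, the application keeps its CPU cycles and the policy survives even if the host OS is compromised.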
What Fortinet is trying to achieve with NVIDIA DPUs
Fortinet’s strategy aligns with a common pain point in large-scale AI and cloud environments: security must keep up with the speed and distribution of modern workloads. When AI clusters scale, traffic is no longer primarily “north-south” (in and out of the data centre). It becomes “east-west”—node-to-node communications that can overwhelm legacy inspection models.
By bringing security functions onto NVIDIA’s DPU platform, Fortinet is positioning its capabilities to run closer to the data plane. In practical terms, this approach is intended to enable high-performance segmentation, threat detection, and policy enforcement without forcing organisations to trade off performance for protection.
This also reflects a wider trend in cybersecurity and networking: the industry is moving toward distributed enforcement and zero trust-style segmentation, where controls are embedded into the fabric of the infrastructure rather than centralised in a few chokepoints.
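A familiar, concrete instance of this trend is the Kubernetes NetworkPolicy model, where default-deny segmentation is declared in the infrastructure rather than coded into each application. The snippet below is an illustrative sketch assuming a hypothetical "ai-training" namespace and a PyTorch-style rendezvous port; DPU-based enforcement operates at a lower layer than this, but the zero-trust policy model is analogous.

```yaml
# Illustrative default-deny policy: pods in the hypothetical "ai-training"
# namespace accept no ingress traffic unless explicitly allowed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ai-training
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Allow only east-west traffic between pods labelled as part of the same
# training job, on the port the job uses (label and port are hypothetical).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-training-peers
  namespace: ai-training
spec:
  podSelector:
    matchLabels:
      app: trainer
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: trainer
      ports:
        - protocol: TCP
          port: 29500
```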
Industry context: AI growth is changing security economics
The economics of AI infrastructure are different from traditional enterprise IT. GPU time is expensive, power is constrained, and data centre capacity is under pressure. As a result, organisations are increasingly motivated to reduce overhead and avoid a “security tax” that consumes compute resources or introduces latency.
At the same time, the risk profile is rising:
- AI training pipelines often involve large, sensitive datasets and intellectual property.
- Model supply chains introduce new concerns around data poisoning, model theft, and exfiltration.
- Distributed architectures expand the attack surface across containers, virtual machines, Kubernetes, and high-speed fabrics.
Security teams are therefore seeking approaches that are both high assurance and high performance. Offloading parts of the security stack to infrastructure accelerators like DPUs is a natural response to that dual requirement.
What this means for data centre and cloud operators
For organisations building or modernising AI-ready data centres, Fortinet’s DPU-focused direction reinforces several practical considerations:
- Security architecture is becoming “infrastructure-native”, designed into the fabric rather than appended.
- Procurement decisions are converging—networking, compute acceleration, and security are increasingly evaluated together.
- Operational models are shifting toward policy-driven automation that can keep pace with elastic, distributed workloads.
It also underscores the competitive landscape: as NVIDIA’s ecosystem grows, security vendors that integrate tightly with accelerated infrastructure may gain an advantage in AI-heavy environments where throughput and latency directly impact business outcomes.
Conclusion: security is moving to the data plane
Fortinet bringing AI data centre security onto NVIDIA DPUs is more than a product integration story—it reflects a structural change in how modern infrastructure is secured. As AI workloads scale, the industry is increasingly treating security as a performance-sensitive, always-on capability that must live close to the data plane. For enterprises investing in accelerated data centres, DPU-based security could become a key lever for achieving both stronger isolation and better utilisation of costly compute resources—without sacrificing the speed that AI demands.