Trend Micro unveils AI Factory to boost agentic AI security

Trend Micro has adopted the NVIDIA Agentic AI Safety blueprint, aiming to strengthen safety and security measures for agentic artificial intelligence systems throughout their lifecycle.

The company outlined its approach with the introduction of the "Trend Secure AI Factory," which is built on Trend Vision One and Trend Vision One – Sovereign Private Cloud platforms. This framework is aligned with the NVIDIA Agentic AI Safety blueprint and seeks to provide enterprises with comprehensive security from the initial adoption of AI models through to their deployment and ongoing usage.

Lifecycle focus

According to Trend Micro, effective security within AI factories requires controls and monitoring at multiple levels, covering everything from data and models to the supporting infrastructure and user endpoints. The Secure AI Factory includes integration with NVIDIA NeMo—a model assessment and customisation framework—to enable scalable and reliable model safety evaluation across enterprise deployments.
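Trend has not detailed which NeMo components the integration relies on. For a flavour of how programmable safety checks of this kind are typically expressed, the sketch below uses NVIDIA's open-source NeMo Guardrails toolkit with a placeholder model engine and a single illustrative policy; it is an assumption-laden example, not Trend Micro's actual configuration.

# Illustrative only: a minimal NeMo Guardrails policy (pip install nemoguardrails).
# The engine, model name, and rules are placeholders, not Trend Micro's setup.
from nemoguardrails import RailsConfig, LLMRails

yaml_content = """
models:
  - type: main
    engine: openai        # assumption: any OpenAI-compatible endpoint
    model: gpt-4o-mini    # placeholder model name
"""

colang_content = """
define user ask for credentials
  "What is the admin password?"

define bot refuse credential requests
  "I can't share credentials or other secrets."

define flow
  user ask for credentials
  bot refuse credential requests
"""

# Build the rails and route every exchange through the declared safety flow.
config = RailsConfig.from_content(yaml_content=yaml_content, colang_content=colang_content)
rails = LLMRails(config)
reply = rails.generate(messages=[{"role": "user", "content": "What is the admin password?"}])
print(reply["content"])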

Mick McCluney, ANZ Field CTO at Trend Micro, commented on the current state of AI system adoption and the corresponding security imperatives.

"Global organisations are racing to innovate with agentic AI systems, and there's a critical need to ensure the safety and security of these systems. The NVIDIA Agentic AI Safety blueprint provides an important enabling technology that works in conjunction with Trend's threat intelligence to support safety across all phases of the AI lifecycle – from model adoption, deployment, and runtime protection — allowing customers to innovate with AI faster."

To support the aim of providing AI system safety, Trend Micro is integrating its own large language model, Trend Cybertron, via NVIDIA NIM universal microservices. This enables scalable and secure inference that can be deployed in cloud, hybrid, or on-premises settings, with a specific focus on detecting and responding to threats in real time.
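Trend has not published endpoint details for Cybertron, but NIM microservices generally expose an OpenAI-compatible API, so a deployment of this kind can be queried with the standard openai Python client. The sketch below is illustrative: the base URL and model identifier are placeholder assumptions, not the product's published values.

# Illustrative sketch of querying a NIM-hosted model through its OpenAI-compatible API.
# The base URL and model name are placeholders, not Trend Cybertron's published identifiers.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumption: a NIM container running locally
    api_key="not-needed-locally",         # local NIM deployments typically ignore the key
)

response = client.chat.completions.create(
    model="trend/cybertron",              # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a security analyst assistant."},
        {"role": "user", "content": "Summarise the risk posed by repeated failed logins from a new location."},
    ],
)
print(response.choices[0].message.content)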

Technical integration

Trend Micro highlighted several technical components of the Secure AI Factory. Firstly, it tightens model safety by integrating with NVIDIA NeMo for continuous evaluation and improvement. Secondly, it offers safeguards against data poisoning and misuse during the AI training and evaluation phases. Thirdly, the firm's container security solution is used to secure deployment environments, such as NVIDIA NIM and other AI agents, against adversarial attacks and resource exploitation.

Additionally, sensitive data can be protected using Data Security Posture Management (DSPM), which utilises components of NVIDIA AI Enterprise, including NVIDIA Morpheus, NVIDIA RAPIDS, and the NVIDIA AI Safety Recipe, to help manage privacy and compliance in both the training and post-training stages.
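The announcement does not describe how DSPM uses these components internally. As a rough illustration of the kind of GPU-accelerated data-risk scan RAPIDS enables, the following sketch flags records containing common sensitive-data patterns using cuDF; the dataset and patterns are hypothetical, and this is not Trend Micro's implementation.

# Illustrative sketch of a data-risk scan using RAPIDS cuDF (pandas-like, GPU-accelerated).
# A generic example of the kind of check a DSPM pipeline might run over training data.
import cudf

# Hypothetical training corpus with free-text fields.
df = cudf.DataFrame({
    "record_id": [1, 2, 3],
    "text": [
        "Meeting notes for Q3 planning",
        "Customer card 4111-1111-1111-1111 on file",
        "Contact jane.doe@example.com for access",
    ],
})

# Regex patterns for common sensitive-data shapes (payment cards, email addresses).
patterns = {
    "payment_card": r"\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}",
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
}

# Flag any record whose text matches a sensitive pattern.
for label, pattern in patterns.items():
    flagged = df[df["text"].str.contains(pattern, regex=True)]
    print(f"{label}: {len(flagged)} record(s) flagged")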

Operational security for users and agent interactions employs Trend Zero Trust Secure Access (ZTSA) AI Service Access, aiming to provide guardrails and network protection for AI agents when interfacing with users. The Secure AI Factory also aims to fortify sovereign AI deployments with what it describes as trusted security controls through the Sovereign Private Cloud option.

Industry perspective and collaboration

Pat Lee, Vice President of Strategic Enterprise Partnerships at NVIDIA, commented on the value of integrating security measures into AI operational environments:

"Embedding real-time, autonomous threat detection into enterprise AI factories empowers organisations to confidently scale innovation without compromising on protection. By integrating advanced cybersecurity directly into AI factories with Trend Micro and NVIDIA Agentic AI blueprints, enterprise data, models, and workloads can remain resilient and trusted —unlocking the full potential of AI in a secure, accelerated environment."

The Secure AI Factory approach covers risk mitigation in all areas: model safety, infrastructure, workloads, data privacy, and user trust. The company's solution is designed for organisations looking to implement agentic AI systems at scale while maintaining compliance with various data protection and security requirements.

Trend Micro's announcement also received commentary from Justin Vaïsse, Director General at the Paris Peace Forum, who emphasised the role of cross-sector initiatives in establishing AI trust:

"As AI becomes increasingly embedded in critical systems, its safety and security must be treated as global priorities. We welcome the role of companies like Trend Micro in advancing responsible AI by contributing tangible, scalable solutions to multi-actor partnerships. This kind of cross-sector collaboration is essential to fostering trust and resilience in the technologies shaping our shared future."