SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers
Virtana adds AWS Bedrock Guardrails support to AI Factory

Fri, 1st May 2026
Sean Mitchell, Publisher

Virtana has added support for AWS Bedrock Guardrails to its AI Factory Observability product, extending its monitoring tools to enterprise large language model deployments running on AWS Bedrock.

The addition is intended to give security and operations teams more visibility into how Bedrock Guardrails behave in production, including patterns in blocked requests, token use, and failures linked to model activity.

Businesses have increased their use of generative AI in core workflows, but oversight has lagged behind deployment. Virtana cited its own research showing that 75% of enterprises report double-digit AI job failure rates, while more than half say operational strain is increasing security exposure as AI workloads grow.

AWS Bedrock Guardrails is designed to block harmful content, mask personally identifiable information, enforce denied topics, validate contextual grounding, and run automated reasoning checks. Virtana's software sits above that enforcement layer, tracking behaviour across Bedrock environments to help customers determine whether unusual activity stems from configuration problems, performance issues, or hostile probing.
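The split between the enforcement layer and the observation layer above it can be illustrated with a minimal sketch. The code below tallies outcomes from Guardrails evaluation responses, assuming the general shape returned by Bedrock's ApplyGuardrail API (an `action` of `GUARDRAIL_INTERVENED` or `NONE`, plus a list of policy assessments); the sample responses and helper function are hypothetical, not part of Virtana's product.

```python
from collections import Counter

def summarise_guardrail_responses(responses):
    """Tally Guardrails outcomes so operators can see how often,
    and under which policy types, the enforcement layer intervened."""
    actions = Counter(r.get("action", "NONE") for r in responses)
    # Count which policy types appear across all assessments.
    policies = Counter(
        policy_type
        for r in responses
        for assessment in r.get("assessments", [])
        for policy_type in assessment
    )
    return {"actions": dict(actions), "policies": dict(policies)}

# Hypothetical responses in a simplified ApplyGuardrail-like shape.
sample = [
    {"action": "NONE", "assessments": []},
    {"action": "GUARDRAIL_INTERVENED",
     "assessments": [{"topicPolicy": {"topics": [{"name": "denied-topic"}]}}]},
    {"action": "GUARDRAIL_INTERVENED",
     "assessments": [{"sensitiveInformationPolicy": {"piiEntities": []}}]},
]

summary = summarise_guardrail_responses(sample)
```

Aggregates like these are the raw material for the pattern analysis the article describes: a spike in one policy type may point at a misconfiguration, while a broad rise in interventions may suggest probing.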

Operational focus

The update is part of Virtana's broader push into AI observability, including a recent extension of its AI Factory Observability offering to Nutanix agentic AI environments. The company is positioning observability as a way to bridge the gap between governance policies and day-to-day AI operations.

The Bedrock integration lets customers monitor Guardrails intervention rates, blocked-topic patterns, and intervention trends by model. It also tracks prompt and completion token volumes, Time to First Token, and request throughput, helping identify anomalous consumption patterns or possible attempts to probe systems.
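As a rough illustration, per-model metrics of the kind listed above could be derived from a request log. The log shape and field names below are invented for the sketch, not Virtana's schema:

```python
from collections import defaultdict

# Hypothetical per-request records; field names are invented for illustration.
requests = [
    {"model": "model-a", "intervened": False, "prompt_tokens": 120,
     "completion_tokens": 300, "ttft_ms": 210},
    {"model": "model-a", "intervened": True,  "prompt_tokens": 90,
     "completion_tokens": 0,   "ttft_ms": 180},
    {"model": "model-b", "intervened": False, "prompt_tokens": 60,
     "completion_tokens": 150, "ttft_ms": 95},
]

def per_model_metrics(log):
    """Aggregate intervention rate, token volume, and average
    Time to First Token for each foundation model."""
    grouped = defaultdict(list)
    for req in log:
        grouped[req["model"]].append(req)
    metrics = {}
    for model, reqs in grouped.items():
        n = len(reqs)
        metrics[model] = {
            "requests": n,
            "intervention_rate": sum(r["intervened"] for r in reqs) / n,
            "tokens": sum(r["prompt_tokens"] + r["completion_tokens"]
                          for r in reqs),
            "avg_ttft_ms": sum(r["ttft_ms"] for r in reqs) / n,
        }
    return metrics

metrics = per_model_metrics(requests)
```

Comparing these aggregates across models and over time is what makes anomalous consumption stand out: a model whose token volume or intervention rate departs sharply from its peers warrants a closer look.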

The monitoring also covers request failure rates across foundation model deployments. Those signals may indicate credential misuse, attempts to evade controls, or other adversarial behaviour, and historical trend analysis is intended to help teams compare current spikes in activity with known events or flag unexplained anomalies.
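The historical comparison described here can be sketched as a simple baseline check: flag the current failure rate when it sits far above the recent trend. The z-score threshold and sample data below are arbitrary choices for the example, not Virtana's method:

```python
import statistics

def failure_rate_spike(history, current, z_threshold=3.0):
    """Flag the current failure rate if it deviates from the
    historical baseline by more than z_threshold standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current > mean
    return (current - mean) / stdev > z_threshold

# Hypothetical hourly failure rates for one foundation model deployment.
baseline = [0.01, 0.012, 0.009, 0.011, 0.010, 0.013, 0.008, 0.012]

assert not failure_rate_spike(baseline, 0.014)  # within normal variation
assert failure_rate_spike(baseline, 0.09)       # worth investigating
```

A real system would correlate such a flag with other signals (intervention rates, token volumes) before treating it as evidence of credential misuse or probing rather than, say, a deployment fault.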

The software also provides a single operational view across foundation models in a Bedrock environment. It supports on-premises deployment, tenant-level data segregation, and customer-managed language models for organisations with strict data sovereignty and compliance requirements.

Paul Appleby, Chief Executive Officer at Virtana, linked the announcement to broader changes in how companies are using AI in production.

"Enterprises are making significant investments in generative AI across an expanding range of environments, and the governance expectations around those investments are rising fast," said Appleby. "Running AI in production means being accountable for how it behaves wherever it is deployed. Virtana AIFO gives security and operations teams the operational intelligence to meet that standard across infrastructure, platforms, LLMs, and services like AWS Bedrock."

Security pressures

Virtana argues that content-level controls alone are not enough as more businesses adopt agentic AI systems and run several foundation models for different tasks. In that environment, maintaining common governance standards across models and workflows becomes a more complex operational issue, especially in regulated sectors such as healthcare, financial services, and government.

That reflects a wider shift in enterprise AI spending, as the question moves from whether a model can be deployed to whether it can be managed safely once live. For cloud customers using Bedrock, guardrails can enforce policy at the point of inference, but the practical challenge is understanding patterns over time and determining whether repeated interventions point to routine use or a coordinated attack.

Amitkumar Rathi, Chief Product Officer at Virtana, said the Bedrock support is intended to address that layer of analysis.

"Agentic AI systems introduce attack surfaces that content-level enforcement alone cannot address," said Rathi. "By extending AI Factory Observability into AWS Bedrock environments, we give organizations visibility into the behavioral layer that sits above content filtering, such as token consumption patterns, Guardrails intervention rates and request anomalies, so security and platform teams can identify active threat campaigns and understand the full operational context of their LLM estate in production."