SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers

Netskope One upgrades boost AI data protection & visibility


Netskope has announced new advancements to its Netskope One platform aimed at broadening AI security coverage, including enhancements to its data security posture management (DSPM) features and protections for private applications.

These updates come as enterprises continue to expand their use of artificial intelligence applications, generating a more intricate digital landscape that heightens the complexity of security challenges. While several security vendors have focused on facilitating safe user access to AI tools, Netskope said its approach is centred around understanding and managing the risks posed by the widespread adoption and development of AI applications. This includes tracking sensitive data entering large language models (LLMs) and assessing risks associated with AI models for informed policy decisions.

The Netskope One platform, powered by the company's SkopeAI technology, provides protection for a range of AI use cases. It focuses on safeguarding AI use by monitoring users, agents, data, and applications, providing complete visibility and real-time contextual controls across enterprise environments.

According to research from Netskope Threat Labs in its 2025 Generative AI Cloud and Threat Report, organisations saw a thirtyfold increase in the volume of data sent to generative AI (genAI) applications by internal users over the past year. The report noted that much of this increase can be attributed to "shadow AI" usage, where employees use personal accounts to access genAI tools at work. Findings show that 72% of genAI users continue to use personal accounts for workplace interaction with applications such as ChatGPT, Google Gemini, and Grammarly. The report underscored the need for a cohesive and comprehensive approach to securing all dimensions of AI within business operations.

Netskope's latest platform improvements include new DSPM capabilities, giving organisations expanded end-to-end oversight and control of data stores used for training both public and private LLMs. These enhancements allow organisations to prevent sensitive or regulated data from mistakenly being used in LLM training or fine-tuning, whether accessed directly or via Retrieval-Augmented Generation (RAG) techniques. DSPM plays a key role in highlighting at-risk structured and unstructured data across SaaS, IaaS, PaaS, and on-premises infrastructure.
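The guardrail this kind of DSPM capability provides can be illustrated with a minimal sketch. This is not Netskope's API — the record format, labels, and classifier below are hypothetical — but it shows the underlying idea: records carrying sensitive classifications (or matching a simple PII pattern) are filtered out of a corpus before it is used for fine-tuning or RAG ingestion.

```python
import re

# Hypothetical DSPM-style guardrail: exclude records with sensitive
# classifications or obvious PII from a corpus destined for LLM
# fine-tuning or RAG indexing. Labels and schema are illustrative only.
SENSITIVE_LABELS = {"pii", "pci", "phi", "confidential"}
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US SSN, as a toy example

def is_training_safe(record: dict) -> bool:
    """Return True only if the record carries no sensitive label or pattern."""
    if SENSITIVE_LABELS & {label.lower() for label in record.get("labels", [])}:
        return False
    return not SSN_PATTERN.search(record.get("text", ""))

corpus = [
    {"text": "Public product FAQ content.", "labels": ["public"]},
    {"text": "Customer SSN: 123-45-6789", "labels": []},
    {"text": "Quarterly revenue forecast.", "labels": ["Confidential"]},
]
safe = [r for r in corpus if is_training_safe(r)]
print(len(safe))  # 1 — only the public FAQ record survives
```

In practice the classification step would come from a DLP engine rather than a regex, but the filtering decision sits in the same place: before data reaches the training or retrieval pipeline.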

The strengthened DSPM also enables organisations to assess AI risk in the context of their data, leveraging classification capabilities powered by Netskope's data loss prevention (DLP) engine and exposure assessments. Security teams are then able to identify priority risks more efficiently and adopt policies that are better aligned with those risks.

Policy-driven AI governance is further facilitated by Netskope One, which now automates the detection and enforcement of rules about what data can be used in AI, dependent on data classification, source, or its specific use. When combined with inline enforcement controls, this provides greater assurance that only authorised data is involved in model training, inference, or responding to prompts.
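A rule set of this shape — decisions keyed on data classification, source, and intended AI use — can be sketched generically as follows. The field names and rules are hypothetical, not Netskope's policy schema:

```python
# Hypothetical sketch of policy-driven AI data governance: a flow is
# described by its data classification, source system, and intended AI
# use, and rules decide whether it is allowed. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    classification: str   # e.g. "public", "internal", "regulated"
    source: str           # e.g. "crm", "wiki", "hr-system"
    use: str              # e.g. "training", "inference", "rag"

def evaluate(flow: Flow) -> str:
    # Regulated data is never used for model training.
    if flow.classification == "regulated" and flow.use == "training":
        return "deny"
    # HR data may only serve inference, never training or RAG.
    if flow.source == "hr-system" and flow.use != "inference":
        return "deny"
    return "allow"

print(evaluate(Flow("regulated", "crm", "training")))   # deny
print(evaluate(Flow("internal", "wiki", "rag")))        # allow
```

The point of pairing such rules with inline enforcement, as the announcement describes, is that the decision is applied at the moment data moves toward a model, not audited after the fact.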

Sanjay Beri, Chief Executive Officer of Netskope, said, "Organisations need to know that the data feeding into any part of their AI ecosystem is safe throughout every phase of the interaction, recognizing how that data can be used in applications, accessed by users, and incorporated into AI agents. In conversations I've had with leaders throughout the world, I'm consistently answering the same question: 'How can my organisation fast track the development and deployment of AI applications to support the business without putting company data in harm's way at any point in the process?' Netskope One takes the mystery out of AI, helping organisations to take their AI journeys driven by the full context of AI interactions and protecting data throughout."

Customers are already using the Netskope One platform to enable business use of AI while maintaining security. With these updates, customers can secure AI across almost any scenario in their AI adoption journey.

Using the new capabilities, organisations can form a consistent basis for AI readiness by understanding what data is used to train LLMs, whether through public generative AI platforms or custom-built models. The platform supports security and trust by enabling discovery, classification, and labelling of data, and by enforcing DLP policies. This helps prevent data poisoning and ensures appropriate data governance throughout the lifecycle.

Netskope One also provides organisations with a comprehensive overview of AI activity within the enterprise. Security teams are able to monitor user behaviour, track both personal and enterprise-sanctioned application usage, and protect sensitive information across both managed and unmanaged environments. The Netskope Cloud Confidence Index (CCI) provides structured risk analyses across more than 370 genAI applications and over 82,000 SaaS applications, giving organisations better foresight on risks such as data use, third-party sharing, and model training practices.

Additionally, security teams can employ granular protection through adaptive risk context. This enables policy enforcement beyond simple permissions, implementing controls based on user behaviour and data sensitivity, and mitigating "shadow AI" by directing users toward approved platforms like Microsoft Copilot and ChatGPT Enterprise. Actions such as uploading, downloading, copying, and printing within AI applications can be controlled to lower the risk profile, and the advanced DLP can monitor both prompts and AI-generated responses to prevent unintentional exposure of sensitive or regulated data.
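Adaptive, action-level controls of this kind can be sketched in a few lines. The decision function below is purely illustrative (the risk threshold, action names, and "coach" verdict are assumptions, not Netskope's implementation): whether an action is permitted depends on both the data's sensitivity and a user-risk score, with risky users steered toward approved platforms rather than blocked outright.

```python
# Hypothetical sketch of adaptive risk-based controls for genAI apps.
# Verdicts: "allow", "block", or "coach" (redirect the user toward an
# approved enterprise AI platform). Thresholds are illustrative only.
def decide(action: str, sensitivity: str, user_risk: float) -> str:
    risky_actions = {"upload", "copy", "print"}
    # Regulated data never leaves via a risky action, regardless of user.
    if sensitivity == "regulated" and action in risky_actions:
        return "block"
    # High-risk users uploading anything get coached toward sanctioned apps.
    if user_risk > 0.7 and action == "upload":
        return "coach"
    return "allow"

print(decide("upload", "regulated", 0.2))  # block
print(decide("upload", "internal", 0.9))   # coach
print(decide("download", "public", 0.1))   # allow
```

The same pattern extends naturally to inspecting prompts and AI-generated responses: the DLP verdict simply becomes another input to the decision.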
