AI Safety stories
Security flaws in 17 AI companion apps used by 150m people could expose intimate chats, photos and voice messages to attackers.
Token Security launches intent-based controls to govern AI agents' access by purpose, aiming to curb over-privileged, autonomous system behaviour.
Coalfire's new DivisionHex service hunts shadow AI and rogue agents, as most firms report AI-driven security incidents stemming from inadequate oversight.
Rushing to embrace AI, most firms are easing identity controls despite visibility gaps around powerful non-human and AI-linked accounts.
RAIDS AI joins Drata and Prescient to deliver ISO 42001-based AI governance, blending automation, monitoring and independent certification.
Obin AI exits stealth with US$7 million to build auditable AI agents for heavily regulated financial workflows and asset managers.
Lineaje launches UnifAI, a security and governance layer to centralise control, discovery and policy for enterprise agentic AI deployments.
Mphasis tells CTOs to overhaul legacy cores as agentic AI scales, backing ontology-driven knowledge graphs to curb automated errors.
Agentic AI promises effortless digital delegation, but its admin-level access to data and systems creates profound privacy and security risks.
OutSystems named a Leader in G2's Spring 2026 AI Agent Builders Grid, after earning top scores for ease of admin, trust and governance.
HackerOne launches live Agentic Prompt Injection Testing to expose real-world AI exploit paths as prompt injection threats surge 540%.
JFrog launches an MCP registry to centralise and secure AI coding agents, extending software supply chain controls to agent workflows.
R Systems has unveiled EXIQO, an AI Studio to help enterprises scale governed agentic AI across engineering, operations and legacy systems.
TrendAI and Nvidia deepen collaboration to embed layered security and governance into OpenShell, protecting long-lived autonomous AI agents.
Island has rolled out a SASE design that shifts inspection to the endpoint, cutting proxy backhaul and avoiding default SSL/TLS break-and-inspect.
Seekr and GDIT team up to deliver secure, explainable agentic AI platforms for sensitive government operations across cloud and edge.
Polygraf unveils a desktop AI overlay that flags sensitive data in real time as staff type, aiming to curb leaks across workplace tools.
Menlo launches a browser-based platform to govern human users and AI agents with unified security controls as machine traffic surges.
Backslash adds cross-tool governance to discover, vet and monitor 'Skills' powering AI coding assistants like Cursor, Claude Code and Copilot.
AI rollouts are eroding identity controls at UK firms, which are easing safeguards for machine accounts despite glaring gaps in oversight and governance.