AI Safety stories
Most boards are using AI for routine tasks, but only 3% have woven it into risk oversight, leaving organisations exposed to fresh hazards.
Boards across software are seeking directors with AI and governance expertise as New Relic adds Wendi Sturgis to oversee its next phase of growth.
As AI moves into production, enterprises face gaps between data governance and runtime controls that can expose sensitive information and lead to policy breaches.
Without proper oversight, rapidly growing AI agent workforces could leave firms blind to who can access systems, data and privileges.
Researchers can now report AI misuse and harmful agent behaviour under a separate programme that could surface risks in ChatGPT Agent and Browser.
Enterprises racing to deploy AI tools are risking sensitive data leaks unless security moves from discovery to runtime protection, F5 and Forcepoint say.
Pressure to adopt AI is outpacing safeguards, with most firms saying governance and legal controls have lagged behind deployment.
Longer AI-generated tracks are now available to businesses, developers and subscribers as Google widens access to its music model across products.
Distributed sites will get tighter controls as HPE adds AI prompt filtering, recovery and encryption updates to guard against data leakage and attacks.
Pressure is outpacing governance in Australian companies, with many approving AI systems before legal, security and training gaps are closed.
Businesses risk biased outputs and compliance failures unless older data estates are rebuilt for AI, as the ODI and SAP launch research and governance work.
The platform aims to curb risks from AI agents accessing data and triggering workflows inside businesses, with runtime controls now in place.
Customers can now use AI tools to update live project records in Smartsheet, with early adoption topping 4,000 users and 1.74 million actions.
The move aims to curb access and trust risks as companies deploy autonomous AI agents across internal systems and third-party services.
The two-hour glitch exposed company and user data to unauthorised staff, fuelling calls for tighter controls over autonomous agents.
Most firms cannot tell AI agent activity from human use, leaving access controls strained as autonomous software spreads across production systems.
Security teams now have a beta tool to probe large language model apps for prompt injection, jailbreaks and data theft before attackers do.
Security teams are being given earlier warning of employee-built AI agents that could expose data, credentials and internal systems.
A shortage of approved classroom AI tools is leaving most Australian teachers eager for training yet unable to use AI with students.
Widespread use of AI in Irish offices is outpacing training and controls, with some staff handling contracts and confidential data unsafely.