AI Safety stories
Bias in AI systems could widen unless more women help shape the technology from the start, the Inde Women's Network warns.
Enterprise teams can now impose one policy layer across Zapier workflows, agents and SDK-built apps as AI use outpaces governance.
Many firms cannot pause AI systems quickly or explain failures to regulators, according to ISACA's European survey of 681 professionals.
The customer experience software provider is courting UK and European brands as it passes US$100 million in annual recurring revenue.
Boards face mounting pressure to fix AI-found code flaws faster, as CrowdStrike and partners launch a service to rank exploit risks.
Codex and ChatGPT users get a model that OpenAI says performs better on coding, research and office work while using fewer tokens.
Businesses testing AI in infrastructure management may gain tighter control over network data, compliance checks and change planning through the new server.
As cyber tools become more powerful, Anthropic is limiting access while OpenAI is widening it, raising fresh fears over misuse.
Most firms are still flying blind on AI-generated code, even as 89% say they can secure it and 86% have already adopted it.
Enterprises using autonomous AI agents could get tighter controls as the tie-up adds governance and live monitoring to Google Cloud deployments.
Unapproved AI agents are already exposing firms to hidden security gaps, with LevelBlue saying many are running tools without oversight.
Enterprises get tighter controls for autonomous AI agents and Cloud SQL backups as Rubrik expands its Google Cloud security stack.
Sensitive fusion research will stay inside First Light Fusion's air-gapped Oxford systems as Locai Labs rolls out a tailored AI coding assistant.
The Edinburgh conference will put AI trust and governance centre stage as speakers from OpenAI, OpenUK and academia address business risk.
AI moderation tools may treat abuse unevenly, with a Queensland study finding political personas shift judgments without hurting accuracy much.
Native checks will now flag prompt injection and data leakage across more of the AI agent stack as enterprises push systems into production.
Security chiefs say unauthorised access to Anthropic's Mythos model shows generative tools could speed phishing, scanning and exploit discovery.
The tie-up could speed secure AI adoption for regulated Japanese firms, with NEC set to roll out Claude to about 30,000 staff.
Singapore companies face rising cyber risk as AI agents and machine accounts gain access without proper oversight, Delinea research shows.
The three-year spend will expand local cloud capacity, boost cyber defences and train millions of workers as demand for AI grows.