AI safety stories
Organisations using AI in software development will get training on secure coding and governance as vulnerabilities and data risks mount.
The research aims to show developers why deterministic software is becoming crucial as AI-driven robots move into shared, safety-critical spaces.
Growing AI security fears are driving Proofpoint’s European expansion, with the Paris site aimed at helping customers meet local regulatory demands.
The move gives researchers and regulators a more neutral way to probe model deception and harmful behaviour as AI safety scrutiny intensifies.
Enterprises adopting AI in regulated sectors face fresh risks from model tampering and agent misuse, which Cognizant aims to address.
Developers can now let AI agents pay for paid content and services in real time, with support across US East, US West, Europe and Asia Pacific.
The shift to autonomous IT is stalling because teams will only let AI act when its decisions are transparent, explainable and kept under control.
Vetted security teams will get fewer refusals on authorised tasks as OpenAI tightens access around its most permissive cyber model.
Users may notice fewer errors as the chatbot’s default switches to GPT-5.5 Instant, which OpenAI says cuts hallucinations by 52.5%.
Developers get new ways to boost Claude agents’ accuracy and scale, as Anthropic rolls out memory, grading and parallel task handling.
Developers could soon build voice apps that handle tasks and translations in real time, as OpenAI adds three new audio models to its API.
The tie-up could help security teams cut false alarms and patch faster as automated attacks shrink defenders’ reaction time.
Adults using ChatGPT can now name a trusted contact, giving OpenAI a new way to alert someone in serious self-harm cases.
AI agent workflows are being targeted by a fake OpenClaw skill that installs Remcos RAT and GhostLoader on Windows, macOS and Linux.
Most firms are now running AI in production, with hybrid clouds and security controls becoming crucial as inference overtakes training.
NHS clinicians using the tool reclaimed more than four million hours of capacity, while paperwork time fell and burnout eased across pilots.
The new facility will link students and faculty to industry problems in healthcare, education technology and finance, as India pushes applied AI research.
Security teams could cut alert backlogs, while enterprises gain a way to inspect AI skills for hidden tampering and backdoors.
Indian firms are moving to tighten software controls as AI agents and code generation raise new security and auditability risks.
Organisations using AI-assisted development can now get specialist secure coding training as KnowBe4 expands its library for technical teams.