AI Safety Stories
More than 40 critical software groups will use Claude Mythos Preview to hunt flaws, as Anthropic commits US$100 million in credits.
Most Global 2000 companies are using AI without clear ownership, raising risks as systems increasingly shape hiring, spending and compliance decisions.
Most technology leaders are still finding their feet as companies race to deploy AI despite skills gaps, data problems and compliance pressure.
Enterprises struggling with fragmented files and AI governance now get a new platform aimed at giving staff and agents safer access to data.
Defenders may gain faster vulnerability discovery, but the same AI leap is also sharpening concerns that attackers will exploit flaws in minutes.
Security researchers say long automated jobs can cause Claude Code’s deny rules to fall back to prompting the user, weakening protections in CI/CD pipelines.
Geopolitical risk is clouding Gulf AI investment, after Iran named OpenAI’s Stargate campus in Abu Dhabi as a possible target.
The new platform targets regulated firms seeking auditable AI processes, after Felix raised US$1.7 million to expand beyond legal work.
The world may face faster job losses and cyber risks than many expect as OpenAI urges governments to debate AI rules before decisions turn urgent.
Access to advanced AI security tools will be limited to vetted groups as Anthropic backs open-source defenders with US$100 million in credits.
Workers using AI agents at work now have a vendor-neutral course to help them spot risks, manage oversight and distinguish them from chatbots.
The update could let customer success teams automate renewals and risk response with AI agents while keeping existing access controls intact.
The deal gives OpenAI a direct line to builders and users of artificial intelligence, while TBPN keeps editorial independence for its show.
Many firms are preparing to let software bargain and buy for them, even as consumers remain wary of giving AI free rein over spending.
The Birmingham deep-tech firm is raising £725,000 as demand grows for tools that govern AI behaviour in live settings.
Poor governance could expose Australian firms to legal, reputational and operational risks as they deploy autonomous AI agents at scale.
Most AI projects are missing their targets as 65% of Chief Information Security Officers lack confidence in data security controls, a study shows.
Enterprises using the platform will be able to test and monitor AI agents more closely as Sprinklr broadens automation across service, marketing and insights.
The pact secures 3.5 gigawatts of next-generation chip capacity from 2027 as enterprise demand for Claude surges past US$30 billion a year.
Authenticated AI payments in Singapore and Malaysia could set the standard for cross-border commerce, with banks weighing fraud and consent risks.