AI Safety stories
The tie-up aims to help firms scale AI agents with stronger governance, decision tracing and proof of business impact beyond pilot projects.
The hire comes as companies face mounting pressure to validate AI features and core software before release, boosting demand for Testlio's services.
Law firms can now automate more routine work as the platform adds off-the-shelf tools and customisation for specialist legal workflows.
Analyst recognition highlights rising demand for AI governance tools as banks and governments face growing compliance risks from poor data controls.
It aims to cut wasted search time for coding agents after tests found most of their work was reading files rather than editing code.
Web attacks are driving browser makers to bake security in by default, as Norton Neo adds VPN, phishing blocks and anti-fingerprinting tools.
Many security teams are deploying AI before proving it works, with readiness scores as low as 30% despite 78% confidence.
Governance concerns are pushing regulated firms to demand audit trails and human oversight as AI agents move into live operations.
Detection of malicious code can collapse when AI reviewers are fed large files packed with harmless text, Cloudflare's research shows.
Operational gaps are emerging as most large companies push AI agents into production before staff believe they are ready.
Privacy regulators in Canada say the chatbot maker failed to obtain valid consent for training data, prompting ongoing oversight and reform.
Banks could cut anti-money laundering case reviews from hours to minutes, as the new system keeps data and audit trails inside FIS's controlled environment.
A lack of visibility is leaving many European organisations unable to tell whether AI-powered attacks have already breached their systems.
Most Australian firms expect AI agents to outrun security controls within a year, with only 22 per cent saying they have full visibility of them.
Most Australians would adopt AI sooner if tougher safeguards were in place, yet only 1% say they completely trust the technology.
Despite widespread fears over trust and security, 15% of Singapore consumers have used autonomous AI in the past six months, EY found.
The year-long trial will test whether conversational commands can reliably direct autonomous marine vehicles in remote, low-connectivity conditions.
Only 10% of small firms train staff on AI security, leaving many exposed as adoption grows and cyber fears rise.
The French AI group is targeting sensitive public-sector and enterprise uses in Singapore, where stricter controls can slow deployment but boost credibility.
Charities are being urged to move beyond AI trial use as a new four-week course tackles governance, ethics and practical deployment.