AI Safety Stories
CIOs face rising risk as agentic AI moves into production faster than most data platforms can reliably govern, retrieve and act on data.
Security risks are rising as AI agents handle emails, code and financial tasks, prompting Gen to add new protections in Norton 360.
New Zealand firms face mounting identity fraud losses of NZ$2.2 million a year, as 90% fear AI-linked weaknesses in document checks.
The funding will help Vapi scale its voice AI platform as enterprise demand surges and more than 1 billion calls flow through its agents.
The deal aims to give enterprise AI a live view of operations, while also adding planning and forecasting tools to Celonis's platform.
IT teams could cut repair times as Phoenix47's new agent mines past incidents and internal documents to guide engineers live.
Developers using generative AI will get hands-on lessons on prompt injection and data leakage as AWS expands Bedrock adoption.
Security teams face a broader threat as criminals and state-backed actors use generative AI to speed hacks, phishing and malware.
The new platform aims to close a governance gap as autonomous software agents increasingly access sensitive systems and data without oversight.
The Sydney company is betting creators can monetise audience demand with paid AI personas across WhatsApp, SMS and web chat.
Users risk mistaking agreeable chatbot replies for understanding, as Smudge says commercial AI rewards flattery over accuracy.
The deal could speed up onboarding for banks and other regulated firms by automating identity checks while keeping an audit trail inside Claude.
Consulting firms urged to slow AI rollouts as Trend-Setters Consulting Chief Executive Officer Sam Shar warns of rising cyber risks and rushed deals.
CurricuLLM rolls out a school AI monitoring tool in Australia and New Zealand, flagging 21 harm types from academic offloading to personal revelations.
Salesforce survey finds Australia and New Zealand workers using AI agents daily, but accountability, privacy and trust remain the biggest concerns.
Worries over accuracy and human skills are tempering the rapid rise in personal use of generative AI, despite wider adoption across five markets.
Organisations using AI in software development will get training on secure coding and governance as vulnerabilities and data risks mount.
More companies will need dedicated monitoring as AI deployments mature and governance risks rise, Gartner says, with adoption of such monitoring forecast to reach 40% by 2028.
Argyll Data Development launches UK sovereign AI inference cloud with SambaNova, targeting regulated firms seeking local control over data and systems.
Worries over cyberattacks, bias and weak data systems are driving calls for AI rules that protect trust, jobs and security.