AI safety stories
Trust concerns are pausing nearly half of planned AI spending at medium and large firms, with explainability now outweighing regulatory uncertainty.
Most firms expect autonomous tools to outstrip guardrails within a year, leaving agent actions difficult to monitor, control and roll back.
Businesses risk disruption if they hand security decisions to AI, as experts argue human oversight is needed to keep responses in context.
Australian firms under productivity pressure can now offload routine work to an always-on agent that links Gmail, Slack and calendars.
By handling emails, calendars and routine requests in the background, the tool aims to cut admin for businesses wary of autonomous AI risks.
More than half of organisations have shipped AI tools, but quality problems and weak testing are leaving many projects stalled before production.
Human review remains central as 77% of security professionals back AI tools in operations, with 88% already adding guardrails.
Consumers are set to encounter AI in robots, transport and personalised shopping, as Forrester says business returns will arrive sooner than expected.
Security teams will get Claude tools inside TrendAI Vision One as the firms target AI-driven attacks and faster incident response.
Rising use of AI assistants is making software harder to understand, prompting teams to revive stricter testing, controls and oversight.
Thousands of vetted cybersecurity staff will gain broader access to OpenAI tools as the company loosens safeguards for defensive research.
A trust-backed board majority now gives Anthropic tighter oversight as it seeks to balance rapid AI growth with its public benefit mission.
Most firms lack the live, governed data needed for autonomous AI, with 66% of executives saying real-time access is non-negotiable.
The hire signals Applause’s push into AI-driven testing as enterprises seek tighter checks on software before customer releases.
Routine admin tasks can now be handled in the background, though Wingman will still ask before sending messages or altering key data.
The grant lets the London startup train an air-gapped coding model on UK infrastructure, bolstering supply for defence and other sensitive sectors.
UK regulators are racing to assess whether Anthropic’s Mythos model could speed up attacks on banks and unsettle financial stability.
Ransomware pressure on US firms is intensifying debate over whether broader AI hacking tools will help defenders or aid criminals.
Organisers say the two-day programme will tackle deepfake hiring, data sovereignty and the mounting risks of AI-driven cyber attacks.
The premium handset lands in local stores as HONOR seeks a stronger foothold against Apple and Samsung in Australia.