When AI goes rogue - a look into its possible futures

28 May 18

What happens when artificial intelligence (AI) goes bad? According to the Electronic Frontier Foundation, AI and machine learning will bring benefits in diverse areas such as transport, health, art and science, but we’ve already seen things go horribly wrong.

Today’s computers are inherently insecure, so they’re a poor choice for high-stakes machine learning and AI systems – and according to the Electronic Frontier Foundation, we need to consider the implications these new technologies may have for computer security.

Earlier this year the Electronic Frontier Foundation was one of six institutions that released a report called The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

Also involved in the report were the Future of Humanity Institute, the University of Oxford, the Centre for the Study of Existential Risk, the University of Cambridge, the Center for a New American Security, and OpenAI.

The report looks at AI’s potential impact on digital security, physical security, and political security.

It says AI has specific security-relevant properties, including its dual-use nature for civilian and military purposes; its scalability; the ease with which its algorithms can be distributed; and its ability to exceed human capabilities.

These properties mean AI can expand existing threats, introduce new threats, and alter the typical character of threats, allowing attacks to be more versatile, effective, and targeted.

In terms of digital security, AI could make email attacks such as spear phishing more automated – and it could even eliminate the need for the attacker to speak the same language as the target.

“Many important IT systems have evolved over time to be sprawling behemoths, cobbled together from multiple different systems, under-maintained and — as a consequence — insecure,” the report notes, adding that cybersecurity today is largely labour-constrained.

AI could also give malware autonomous behaviour, removing the need for manual control by a human operator. The Stuxnet malware is an early example: once it infected a computer, it operated without receiving any further commands from its creators.

In addition to automating social engineering attacks, AI could automatically discover vulnerabilities; automate hacking by evading detection and responding to the target’s behavioural changes; mimic human behaviour in denial-of-service attacks; and exploit legitimate AI systems themselves.
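
To make the vulnerability-discovery point concrete, here is a minimal, hypothetical sketch of the simplest form of the idea: a random fuzzer throwing inputs at a deliberately buggy toy parser until it crashes. Neither the parser nor the fuzzer comes from the report, and real tooling applies machine learning to guide this search far more efficiently.

```python
# Minimal sketch of automated vulnerability discovery via random fuzzing.
# Both toy_parser and the fuzz loop are hypothetical, for illustration only.
import random
import string

def toy_parser(data: str) -> None:
    # Deliberately buggy example target: crashes on one class of inputs.
    if data.startswith("!") and len(data) > 8:
        raise ValueError("parser crash")

def fuzz(iterations: int = 10_000) -> None:
    # Generate random printable strings and report the first crashing input.
    for i in range(iterations):
        candidate = "".join(
            random.choice(string.printable)
            for _ in range(random.randint(0, 16)))
        try:
            toy_parser(candidate)
        except Exception as exc:
            print(f"iteration {i}: crashing input {candidate!r} -> {exc}")
            return

if __name__ == "__main__":
    fuzz()
```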

Although offensive use of AI has so far only been publicly disclosed through experiments by white hat hackers, the report says it is only a matter of time before it is used maliciously – if it is not happening already.

AI could disrupt physical security through the repurposing of commercial AI systems for terrorism – for example, using autonomous vehicles to cause crashes. It could also enable distributed swarming attacks for surveillance, and it could increase the scale of attacks.

AI could also affect political security by allowing states to automate surveillance platforms.

“State surveillance powers of nations are extended by automating image and audio processing, permitting the collection, processing, and exploitation of intelligence information at massive scales for myriad purposes, including the suppression of debate,” the report says.
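To illustrate how commoditised the image-processing building blocks behind such automation have become, the sketch below batch-runs OpenCV’s bundled pretrained face detector over a folder of images. The folder name and parameters are assumptions for illustration; this is not code from the report.

```python
# Minimal sketch: batch face detection with OpenCV's pretrained Haar cascade.
# Illustrative only; the "images" folder and thresholds are assumptions.
import os
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(path: str) -> int:
    image = cv2.imread(path)
    if image is None:  # unreadable or non-image file
        return 0
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# Process every file in a hypothetical input folder and report counts.
for name in os.listdir("images"):
    print(name, count_faces(os.path.join("images", name)))
```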

AI could also be used to create highly realistic fake videos to support false news reports, to manipulate the availability of information, to automate influence campaigns, and to hyper-personalise disinformation campaigns.

The report recommends four approaches to responsible AI use:

1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.
