
When AI goes rogue - a look into its possible futures

28 May 2018

What happens when artificial intelligence (AI) goes bad? According to the Electronic Frontier Foundation, AI and machine learning will bring benefits in diverse areas such as transport, health, art and science, but we’ve already seen things go horribly wrong.

Today’s computers are inherently insecure, so they’re a poor choice for high-stakes machine learning systems and AI. According to the Electronic Frontier Foundation, we need to consider the implications these new technologies may have for computer security.

Earlier this year the Electronic Frontier Foundation was one of several institutions that released a report called The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

Also involved in the report were the Future of Humanity Institute, the University of Oxford, the Centre for the Study of Existential Risk, the University of Cambridge, the Center for a New American Security, and OpenAI.

The report looks at AI’s potential impact on digital security, physical security, and political security.

It says there are specific security-relevant properties of AI, including its dual use for civilian and military purposes; its scalability; the ability for its algorithms to be rapidly distributed; and its ability to exceed human capabilities.

These properties mean AI can expand existing threats, introduce new ones, and alter the typical character of threats, allowing attacks to be more versatile, effective, and targeted.

In terms of digital security, AI could make email attacks such as spear phishing more automated – it could even eliminate the need for the attacker to speak the same language as the target.

“Many important IT systems have evolved over time to be sprawling behemoths, cobbled together from multiple different systems, under-maintained and — as a consequence — insecure,” the report notes, adding that cybersecurity today is largely labour-constrained.

AI could also automate malware’s behaviour so that it no longer depends on manual human control. The Stuxnet malware is a clear example: once it infects a computer, it can no longer receive commands from its operators.

In addition to automating social engineering attacks, AI could automatically discover vulnerabilities; automate hacking processes by evading detection and responding to behavioural changes from the target; mimic human-like behaviour in denial-of-service attacks; and exploit legitimate AI systems themselves.

Although the offensive use of AI has only been publicly disclosed through experiments by white hat hackers, the report says it’s only a matter of time before it is used maliciously – if it is not already happening.

AI could disrupt physical security by repurposing commercial AI systems for terrorism – for example, using autonomous vehicles to cause crashes. It could also enable distributed swarming attacks for surveillance and increase the scale of attacks.

AI could also affect political security by allowing states to automate surveillance platforms.

“State surveillance powers of nations are extended by automating image and audio processing, permitting the collection, processing, and exploitation of intelligence information at massive scales for myriad purposes, including the suppression of debate,” the report says.

It could also produce highly realistic videos to support fake news reports; manipulate information availability; automate influence campaigns; and hyper-personalise disinformation campaigns.

The report recommends four approaches to responsible AI use:

1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. Stakeholders should actively seek to expand the range of participants and domain experts involved in discussions of these challenges.
