Q&A: Darktrace explains how AI influences cybersecurity
What does artificial intelligence mean for cybersecurity today, and how is this likely to change over the next 10 years? What are the implications for conventional cybersecurity?
The fundamental value of AI is that it enables computers to deal with the increasing complexity and subtlety of cyber attacks. This is a significant benefit to cybersecurity defenders who are fighting to overcome their business's own complexity, diversity and scale.
Meaningfully understanding and monitoring a business is already beyond the capacity of a typically sized security team. That's because every person and device behaves in a unique way, and they often number in the tens or hundreds of thousands, spread across a country or the globe.
Asking cybersecurity teams to recognise the repeated patterns of previously seen attacks, the kind that firewalls and antivirus tools are built to catch, is reasonable using conventional computing and software techniques. But asking these teams to identify the strange and out-of-character actions that might be the hallmarks of a novel attack or a disaffected employee is not so achievable, because of the scale involved and the impossibility of guessing everything that might go wrong.
Nor is it achievable using standard software programming approaches, given the overall complexity and subtlety of daily behaviours. This is why attacks can emerge within a business, having quietly incubated for months or years, and become a crisis without anyone knowing until the attackers decide to reveal their crime.
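For concreteness, conventional detection amounts to a lookup against previously catalogued indicators. A minimal, hypothetical sketch in Python (the hash values are invented placeholders, not real signatures) shows why this style of check can only ever recognise what has been seen before:

```python
# Minimal sketch of conventional, signature-style detection: a lookup against
# previously catalogued indicators. The hash values below are invented placeholders.
import hashlib

KNOWN_BAD_SHA256 = {
    "9f2c5a...",  # placeholder for a catalogued malware sample
    "b41d07...",  # placeholder for another known sample
}

def is_known_malware(file_bytes: bytes) -> bool:
    """True only if this exact file has been seen and catalogued before."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

# A brand-new or slightly modified attack produces a hash nobody has catalogued,
# so this kind of check says nothing about novel or out-of-character activity.
print(is_known_malware(b"some never-before-seen payload"))  # False
```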
Does AI facilitate new kinds of cyber attacks, and if so, what are they? Are these potentially more dangerous or threatening?
AI techniques will open up new opportunities for criminals to operate at greater scale and to pursue new models of criminality. For example, imagine a piece of malicious software on your laptop that can read your calendar, emails, and messages. Now, imagine that it is supported by AI that can understand all of that material and train itself to understand how you communicate with different people. It could then contextually contact your co-workers and customers, replicating individual communication styles to spread itself.
Perhaps you have a diary appointment with someone, and it sends them a map reminding them where to go, with malicious software hidden inside that map. Perhaps you are editing a document back and forth with a colleague, and the software replies with a small, innocuous edit that again carries the malicious software. Will your colleagues open those emails?
Absolutely they will, because the messages will sound like they are from you and be contextually relevant. Whether your relationship is formal or informal, whether you discuss football or a new restaurant opening, all of this can be learnt and replicated. Such an attack is likely to explode across supply chains. If you want to go after a hard target like an individual in a bank or a public figure, this may be the best way.
Then there are situations where someone attacks all the connected smart TVs and video conferencing systems installed in meeting rooms across an organisation, a law firm for example. Typically, these devices have substantially weaker security than a modern laptop. Say you then activate the microphones and stream the audio of meetings to an AI-driven translation and transcription service, the type already available from Google and Amazon.
Given the transcripts of these meetings, a simple AI model could automatically alert the criminal to topics of interest, such as unannounced deals or contracts or the details of preparations for a particular trial. Suddenly the criminal has an easily scalable approach to ambient surveillance of a company without having to listen to any meetings themselves.
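To illustrate how little sophistication that last step requires, here is a minimal, hypothetical sketch in Python; the topic lists, keywords and transcript are invented for illustration and not drawn from any real attack or product:

```python
# Hypothetical sketch: flag meeting transcripts that mention topics of interest.
# The keyword lists and transcript text are invented for illustration only.
import re

TOPICS = {
    "deals": ["acquisition", "merger", "term sheet", "due diligence"],
    "litigation": ["trial", "deposition", "settlement", "witness"],
}

def flag_transcript(text: str) -> dict[str, list[str]]:
    """Return the topics (and matching keywords) found in one transcript."""
    hits: dict[str, list[str]] = {}
    lowered = text.lower()
    for topic, keywords in TOPICS.items():
        matched = [kw for kw in keywords
                   if re.search(r"\b" + re.escape(kw) + r"\b", lowered)]
        if matched:
            hits[topic] = matched
    return hits

# One transcript out of thousands, scanned in milliseconds.
transcript = "Counsel confirmed the deposition schedule and the draft term sheet."
print(flag_transcript(transcript))  # {'deals': ['term sheet'], 'litigation': ['deposition']}
```

A few dozen lines like this, run over every transcript, is all it takes to turn raw audio capture into targeted, automated surveillance.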
Ambient surveillance has previously been the work of spies rather than criminals, not because the content it captures is uninteresting, but because it doesn't scale well. AI completely changes this, taking advantage of the fact that businesses are busily sprinkling our environments with cameras and microphones.
Third, data has become a proxy for what an organisation believes to be true. While the examples above are possible now, future attacks may involve deliberately altering that data.
This might mean an oil and gas firm's executives bidding for drilling and mining rights in the wrong location, or a series of random bank account balances being subtly and consistently tweaked in the bank's digital backups before being changed in operational systems, resulting in an inexplicable set of books and a major loss of consumer confidence.
Such attacks are more elaborate and would rely on smart software able to manipulate data in a way that is believable at first glance but disruptive at scale. It's not unreasonable to believe this will be achievable within the next decade through the application of AI.
To what extent can AI help to strengthen cybersecurity? Where are such approaches used in cybersecurity, and how might this change in the future?
AI offers the opportunity to supercharge cyber defence, and it has an enormous role to play in improving existing protective technologies such as antivirus and firewalls. It is also enabling fundamentally new approaches that learn the normal behaviour of everyone and everything in a business and respond to the subtle changes indicative of an emerging threat already inside the organisation, so it can be handled before it becomes a crisis.
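As a rough illustration of the underlying idea, rather than any vendor's actual technology, the sketch below learns a simple statistical baseline of each device's normal activity and flags out-of-character deviations; the device names, traffic counts and threshold are all invented:

```python
# Rough illustration of behavioural baselining, not any real product.
# Learn each device's normal daily outbound-connection count, then flag days
# that deviate sharply from that learned baseline. Data and threshold invented.
from statistics import mean, pstdev

def build_baseline(history: dict[str, list[int]]) -> dict[str, tuple[float, float]]:
    """Map each device to the (mean, std dev) of its historical daily counts."""
    return {dev: (mean(counts), pstdev(counts)) for dev, counts in history.items()}

def is_anomalous(device: str, today: int, baseline, z_threshold: float = 3.0) -> bool:
    """Flag the device if today's count is far outside its own normal range."""
    mu, sigma = baseline[device]
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

history = {
    "laptop-042": [110, 95, 120, 101, 99, 108, 115],  # ordinary workstation
    "meeting-room-tv": [3, 2, 4, 3, 2, 3, 4],         # normally very quiet
}
baseline = build_baseline(history)

print(is_anomalous("laptop-042", 118, baseline))       # False: within its own norm
print(is_anomalous("meeting-room-tv", 250, baseline))  # True: wildly out of character
```

The point of the sketch is that "normal" is defined per person and per device from observed behaviour rather than from a list of known attacks, which is what lets this style of defence notice the meeting-room TV suddenly behaving like an exfiltration channel.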
Realistically, there is no other scientific development on the horizon that can help cybersecurity adapt to the ever-increasing scale, diversity and complexity of digital businesses.