Interview: Microsoft's Steve Guggenheimer explores AI in security
Thu, 5th Apr 2018

Steve Guggenheimer is Microsoft's AI guru. As corporate vice president of Microsoft's AI business, he develops AI solutions, drives the broader conversation around AI, and connects with customers and partners to see how technology shapes the world.

SecurityBrief talked with Guggenheimer to discover what Microsoft is doing with AI in security and across its general business, and how attackers and defenders are using AI.

How do you see the state of AI in security right now and where does Microsoft sit in this space?

“We are still in the early stages of AI. It seems like an odd thing to say considering that we have been working on it for decades, but the foundational advances in cloud computing, big data and algorithms are just getting started.

“At Microsoft we have been doing AI research for over 20 years across all key areas, including computer vision, speech recognition and natural language processing. These advances make it possible to look at security end-to-end, from identifying network anomalies to processing input from CCTV cameras.

“We use AI for network threat detection, for securing Windows and for keeping our networks reliable. We are also starting to take those learnings and make them available to customers through security offerings that build on our AI work, such as our DDoS Protection service."
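Microsoft doesn't detail the models behind these services, but the usual technique for AI-driven network threat detection is unsupervised anomaly detection over traffic features. A minimal sketch in Python (the flow features and sample values here are invented for illustration, not Microsoft's):

```python
# Illustrative sketch only: unsupervised anomaly detection over network
# flow features, the general technique behind AI-based threat detection.
# The feature names and sample data are hypothetical, not Microsoft's.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline flows: [bytes_sent, packets_per_sec, distinct_dest_ports]
normal = rng.normal(loc=[5_000, 40.0, 3.0],
                    scale=[1_500, 10.0, 1.0],
                    size=(500, 3))

# Fit on traffic assumed to be mostly benign.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new observations: -1 flags an anomaly for analyst review.
new_flows = np.array([
    [5_200, 38, 2],      # looks like baseline traffic
    [90_000, 900, 60],   # burst to many ports: possible scan or DDoS
])
print(model.predict(new_flows))  # e.g. [ 1 -1 ]
```

The point of the technique is that the model learns what "normal" looks like from the baseline, so it can flag traffic patterns nobody has written a signature for.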

I understand Microsoft has launched beta AI technologies across Azure Security Center. How do they contribute to customers' overall server protection?

“Configuring servers to be secure while keeping them usable can be complicated, because it is hard to create the right policy without a lot of management overhead.

“Preview technologies like Adaptive Application Controls in Azure Security Center use machine learning (ML) to analyse your VMs, create a baseline of the applications they run, and recommend policy rules.

“In this case ML assists the security team by recommending a security policy, so the team can focus its efforts on adjusting the recommendation rather than trying to figure out how the VM is used and then creating the security policy from scratch."
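Azure Security Center doesn't expose how its recommendations are computed, but the core idea behind Adaptive Application Controls, baselining the executables a VM actually runs and recommending an allow-list, can be sketched in a few lines of Python. The event data and the frequency threshold below are hypothetical:

```python
# Illustrative sketch: derive an application allow-list recommendation
# from observed process-start events on a VM. The events and the 20%
# frequency threshold are hypothetical, not Azure Security Center's logic.
from collections import Counter

observed_events = [
    "C:\\Windows\\System32\\svchost.exe",
    "C:\\Program Files\\App\\service.exe",
    "C:\\Program Files\\App\\service.exe",
    "C:\\Users\\tmp\\dropper.exe",          # seen once: not baselined
    "C:\\Windows\\System32\\svchost.exe",
    "C:\\Windows\\System32\\svchost.exe",
]

def recommend_allow_list(events, min_share=0.2):
    """Recommend allowing any executable seen in at least min_share
    of observed process starts; everything else needs review."""
    counts = Counter(events)
    total = len(events)
    return sorted(exe for exe, n in counts.items() if n / total >= min_share)

for exe in recommend_allow_list(observed_events):
    print("ALLOW:", exe)
```

A rarely seen binary falls below the threshold and stays out of the recommendation, which is exactly the triage the security team would otherwise have to do by hand.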

Security vendors are offering AI and organisations are using it. On the flipside, cyber attackers are also harnessing AI to hone their attacks. Is it a case of one side winning the odd battle, but nobody winning the war?

“This scenario is not unique to AI. As new technologies are developed, they are used both for good and for bad. For example, in the early days of the internet, email was a great tool for communicating; then attackers learned that it could be used for phishing.

“Phishing forced email providers to create adaptive spam filters to combat evolving attacks, and users had to be educated on safe browsing habits. AI will play out the same way: security will rely on a combination of AI advances and user education."
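The "adaptive" part of those spam filters is, in today's terms, online learning: the classifier keeps updating as newly labelled phishing samples arrive, instead of being retrained from scratch. A minimal sketch, with invented messages:

```python
# Illustrative sketch: an adaptive spam filter that keeps learning as
# new labelled phishing samples arrive. All messages are invented.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vec = HashingVectorizer(n_features=2**16)
clf = SGDClassifier()  # a linear model that supports incremental updates

# Initial training batch: 1 = phishing, 0 = legitimate.
texts = ["verify your account now", "team lunch at noon",
         "your password expires, click here", "quarterly report attached"]
clf.partial_fit(vec.transform(texts), [1, 0, 1, 0], classes=[0, 1])

# As attackers change tactics, new labelled samples update the same
# model in place, which is what makes the filter "adaptive".
clf.partial_fit(vec.transform(["urgent invoice: confirm wire transfer"]), [1])

print(clf.predict(vec.transform(["click here to verify your password"])))
```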

There are a few downsides to using AI in security – for example, false positives. What other downsides are there to using AI and should security teams continue to have faith in the technology?

“The quality of the data used to train your models has a large impact on the results you get from AI. If you use biased data, you will get biased results. Being aware of this issue lets security teams be proactive and design systems that take it into account from the beginning.

“With security threats evolving, it is important that reinforcement learning techniques are used so that the system evolves over time and does not become stale. Like all new technologies, there will be a learning curve, so security teams that understand AI's current capabilities can have faith in the technology. It becomes a problem when a security product promises more than the state of AI can deliver."
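The biased-data point in particular is easy to demonstrate. In the sketch below (all data synthetic), a model trained on traffic that is overwhelmingly benign can look accurate overall while missing attacks, and reweighting the classes is one simple proactive design choice:

```python
# Illustrative sketch: on skewed training data a naive model can score
# high accuracy while missing most attacks. Reweighting classes is one
# simple mitigation. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
# 990 benign samples vs 10 attacks: a biased view of the real threat mix.
X = np.vstack([rng.normal(0, 1, (990, 5)), rng.normal(2, 1, (10, 5))])
y = np.array([0] * 990 + [1] * 10)

naive = LogisticRegression().fit(X, y)
balanced = LogisticRegression(class_weight="balanced").fit(X, y)

# Recall on the attack class is what a security team actually cares about.
print("naive attack recall:   ", recall_score(y, naive.predict(X)))
print("balanced attack recall:", recall_score(y, balanced.predict(X)))
```

Measuring recall on the attack class, rather than overall accuracy, is the kind of design-for-bias step Guggenheimer describes.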

What other security issues are involved in using AI?

“AI works best when it is assisting others. AI works today in a narrow context, but we don't have the general AI that we see in science fiction. End-to-end security with AI will require tasks that are executed by AI and tasks where AI is assisting people."

How will Microsoft use AI in its corporate strategy going forward – not just in security but across all areas of the business?

“Microsoft uses AI in three ways. The best known is the AI platform, where we make many of our AI technologies available for developers to build on. Cognitive Services and the Bot Framework are two examples of this.

“Infusing AI into our products is the second. We have been infusing AI into products for decades, and we are doing the right thing if users don't know it is included but the products simply work better. Solutions are our third area, where we combine our AI and product development expertise to create custom solutions for customers.

“At the core of this strategy is Microsoft Research, which has world-class AI researchers working on future innovations, like quantum computing, that will be the foundation of what comes next."

Guggenheimer says there is still plenty of hype around AI, but the key is to take the plunge in a single area.

“There is a lot of hype around AI and many people are uncertain how to get started. It is too early to do everything with AI and too late to do nothing. Find a security area to work on where you think AI can help, and get started."