AI in physical security: Opportunities, risks, and responsibility
Fri, 15th Mar 2024

The rise of artificial intelligence (AI) has ignited both excitement over the possibilities of the technology and concerns over its risks. As AI technology evolves, industries worldwide are stepping up their exploration. In its Worldwide Artificial Intelligence Spending Guide, IDC forecasts that global spending on AI-centric systems will reach $154B in 2023, an increase of nearly 27% over 2022. Legislation to regulate AI technology is evolving as well. Data from Stanford University's 2023 AI Index shows that 37 bills related to AI were passed into law throughout the world in 2022, with many more in development. 
 
AI technology, or more accurately its subsets of machine learning (ML) and deep learning (DL), stands to transform the physical security industry. Without a clear understanding of what these technologies can and cannot do, however, they may either fail to meet unrealistic expectations or fuel unnecessary fear and uncertainty. This brief primer explains how subsets of AI are used in physical security technology and looks at use cases, risks, and responsibilities to help security professionals assess the suitability of AI-based technologies. 
 
What is AI in a physical security context? 

In the research community, artificial intelligence (AI) refers to a fully functional artificial brain that is self-aware and intelligent, and that can learn, reason, and understand. That does not exist. What does exist is technology based on subsets of AI, developed to “learn” from data sets and use them to enable computers to perform tasks that normally require human intelligence.  
 
Machine learning (ML) and deep learning (DL) are the subsets of AI typically used in physical security systems. These algorithms use learned data to detect and classify objects accurately. When working with data collected by physical security devices such as cameras, doors, or other sensors, machine learning uses statistical techniques to solve problems, make predictions, or improve the efficiency of specific tasks. Deep learning analyzes the relationships between inputs and outputs to uncover new insights. Recognizing objects, vehicles, and people, or sending an alert when a barrier is breached, are examples of what this technology can do in a physical security context.  
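To make the classify-and-alert pattern concrete, here is a minimal, hypothetical Python sketch. The detection records, the confidence threshold, and the RESTRICTED_ZONE rectangle are illustrative assumptions, not any particular vendor's API.

```python
from dataclasses import dataclass

# Hypothetical detection record, as a deep-learning model might emit one per video frame.
@dataclass
class Detection:
    label: str         # e.g. "person", "vehicle"
    confidence: float  # model confidence score, 0.0-1.0
    x: float           # bounding-box centre, normalised 0-1
    y: float

# Illustrative restricted zone and confidence threshold (assumptions, tuned per site).
RESTRICTED_ZONE = (0.6, 0.0, 1.0, 0.4)  # (x_min, y_min, x_max, y_max)
MIN_CONFIDENCE = 0.8

def in_zone(det, zone):
    x_min, y_min, x_max, y_max = zone
    return x_min <= det.x <= x_max and y_min <= det.y <= y_max

def review_frame(detections):
    """Return alert messages for confident 'person' detections inside the restricted zone."""
    return [
        f"Perimeter alert: person at ({d.x:.2f}, {d.y:.2f}), confidence {d.confidence:.0%}"
        for d in detections
        if d.label == "person" and d.confidence >= MIN_CONFIDENCE and in_zone(d, RESTRICTED_ZONE)
    ]

# One simulated frame's worth of detections.
frame = [Detection("vehicle", 0.92, 0.20, 0.70), Detection("person", 0.88, 0.75, 0.20)]
for alert in review_frame(frame):
    print(alert)
```

The point of the sketch is the division of labour: the model supplies labels and confidence scores, while simple, auditable rules decide when a human operator is notified.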
 
Machines are exceptionally good at repetitive tasks and at analyzing large datasets (like video), and this is where the current state of AI can bring the biggest gains. The best use of machine and deep learning is as a tool to comb through large amounts of data and find patterns and trends that are difficult for humans to identify. The technology can also help people make predictions and draw conclusions. 
 
Physical security technology does not typically incorporate the subset of AI called large language models (LLMs), the model family behind ChatGPT and other generative AI. These models are designed first and foremost to satisfy the user, so the answers they give are not necessarily accurate or truthful, which is dangerous in a security context. Before it can be applied in security settings, LLM technology must first provide reliable output. Today, ChatGPT and similar tools are online services, and the prompts users submit can be used to train future versions. Security use cases would need approaches where models are trained and run offline, on-premises, on a contained, accurate dataset. So, while this technology is advancing fast, there is still a lot of work to be done before it can be used widely and safely in physical security applications. 
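For illustration only, the sketch below shows one plausible way to apply the offline, on-premises pattern described above: the model files live on local disk and the library is told never to reach the network. The model directory and prompt are hypothetical, and Hugging Face's transformers library is used simply as a familiar example, not as a recommendation.

```python
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # refuse any network download attempts

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/onprem-llm"  # hypothetical local snapshot of a model

# local_files_only=True ensures nothing is fetched from the internet.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Summarise overnight alarm activity for the morning handover:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```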
 
Physical security use cases for AI  

AI is being used to help security teams do what they already do, just faster and with greater accuracy, across huge datasets. Some examples are: 
 

  • “Watching” hundreds of hours of video to find a red car so a security operator can focus on other tasks.   
  • Automating people counting in retail, airports, and enterprises to manage occupancy, monitor queues, and alert staff when action is needed (a minimal sketch of this pattern follows this list). Retailers are using the data to improve conversions, stadiums to control crowds, and transit operators to understand and address peak travel times. 
  • Maintaining traffic flow at stadiums, hospitals, large venues, and city areas to detect backups, alert staff to issues, automatically re-route, change signage, and more. 
  • Recognizing license plates to aid in investigations, enable touchless parking payment, and more. 
  • Detecting objects: critical infrastructure facilities use this technology to secure perimeters, corrections facilities to detect contraband being smuggled in, and airports to identify left luggage and potential bomb threats. 
  • Integrating with other data sources such as airport baggage systems, retail point-of-sale systems, smart building systems, etc., to analyze, predict, and respond to a range of business-specific situations. 
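
As a rough illustration of the people-counting use case above, the following sketch (with invented event values and an arbitrary occupancy limit) shows how entry and exit events from counting sensors can drive an automatic staff alert:

```python
MAX_OCCUPANCY = 150  # hypothetical site limit

def monitor(events):
    """events: an iterable of +1 (entry) / -1 (exit) readings from counting sensors or cameras."""
    count = 0
    for change in events:
        count = max(0, count + change)
        if count > MAX_OCCUPANCY:
            yield f"Occupancy {count} exceeds limit of {MAX_OCCUPANCY}: notify staff"

# Simulated morning rush: 160 entries followed by 20 exits.
sample_events = [+1] * 160 + [-1] * 20
for alert in monitor(sample_events):
    print(alert)
    break  # a production system would throttle or de-duplicate alerts
```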

 
Physical security is one of the fastest-growing arenas for AI applications. IDC predicts that Threat Intelligence and Prevention Systems and Fraud Analysis and Investigation will be two of the areas seeing the largest AI spending this year.  
 
Myth-busting AI to set accurate expectations 

Here are a few of the biggest misconceptions about AI in physical security and what the reality actually is: 
 
MYTH: AI can replace human security personnel:  
What is really possible: AI technology can automate repetitive and mundane tasks, allowing human security personnel to focus on more complex and strategic activities. However, human judgment, intuition, and decision-making skills are still crucial in most security scenarios. AI can augment human capabilities and improve efficiency, but it requires human oversight, maintenance, and interpretation of results. 
 
MYTH: AI-powered surveillance systems are highly accurate and reliable:  
What is really possible: AI systems make mistakes. They are trained on historical data and patterns, and their accuracy depends heavily on the quality and diversity of the training data. Biases and limitations in the data can lead to biased or incorrect outcomes. Moreover, AI systems can be vulnerable to attacks in which malicious actors intentionally manipulate the system's inputs to deceive or disrupt its functioning. 
 
MYTH: AI can predict security incidents:  
What is really possible: AI can analyze large amounts of data and identify patterns that humans might miss, but it cannot reliably predict security incidents. AI systems rely on historical data and known patterns, and they may struggle to detect novel or evolving threats. Additionally, security incidents can involve complex social, cultural, and behavioural factors that are challenging for AI algorithms to fully understand and address. 
 
MYTH: AI technology is inherently secure:  
What is really possible: While AI can be used to enhance security measures, the technology itself is not immune to security risks. AI systems can be vulnerable to attacks, such as data poisoning, model evasion, or unauthorized access to sensitive information. It is crucial to implement robust security measures to protect AI systems and the data they rely on. 
 
Taking a responsible approach to AI    

Any manufacturer using AI in its offerings has a responsibility to ensure that the technology is developed and implemented in a responsible and ethical way. At Genetec, several principles guide us when creating, improving, and maintaining AI models:  
 

  • We use only datasets that respect local data protection regulations. 
  • Our solutions are designed to comply with current privacy regulations, and we will ensure they also comply with upcoming AI regulations as soon as they are ratified. 
  • We treat datasets with care and ensure access to that data is granted only to authorized users. 
  • Wherever possible, we use synthetic data, which does not contain any identifiable information, to protect data privacy. Synthetic data allows our data scientists to feed machine learning models with data that represents any situation. Synthetic test data can reflect 'what if' scenarios, making it an ideal way to test a hypothesis or model multiple outcomes (a minimal sketch of this idea appears after this list).  
  • We make sure that our AI models are not used to make critical decisions. We ensure that a human is always in the loop and that data is presented in a way that allows the human to make an informed decision.  
  • We continuously improve the confidence in our models by regularly adding more data and experimenting with the algorithms.  
  • We minimize bias in our models by continuously testing variables. We also use synthetic data to overcome data bias by generating a more diverse and representative dataset.  
  • We rigorously test our AI models before we ship.  
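
To illustrate the synthetic-data point above, here is a deliberately simplified sketch of topping up under-represented scenarios in a training set. The scenario names and counts are invented, and real synthetic data for vision models would be rendered imagery rather than rows like these:

```python
import random

# Invented scenario counts for a detection training set (illustrative only).
real_counts = {"daylight": 9000, "night": 600, "heavy_rain": 400}

def synthesize(scenario, n):
    """Generate n synthetic, non-identifiable samples for an under-represented scenario."""
    return [
        {"scenario": scenario, "object": random.choice(["person", "vehicle"]), "synthetic": True}
        for _ in range(n)
    ]

target = max(real_counts.values())
synthetic_records = []
for scenario, count in real_counts.items():
    if count < target:
        synthetic_records.extend(synthesize(scenario, target - count))

print(f"Added {len(synthetic_records)} synthetic samples to balance the training set")
```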

Striking a balance 

As with any new technology, acknowledging the risks of AI does not eliminate its potential benefits. With judicious application and proper oversight, AI can increase efficiency and security while minimizing negative impact. 
  
AI offers clear advantages in automating repetitive tasks that consume large amounts of human time. By sparing operators countless hours spent searching lengthy video footage for specific individuals, it allows them to redirect their effort towards adding value in their other responsibilities and in other areas of the organization. 
 
Ensuring this technology is used responsibly is everyone's job. Regulation can help maintain checks and balances on the road to a more connected and AI-enabled world, but technology providers also need to innovate responsibly and build those safeguards into the solutions they create.