SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers

Why your AI policy must fit the needs of your company

Tue, 13th Aug 2024

If you’re running a business, you should have an AI policy already in place or, at the very least, be seriously thinking of implementing one.  

A little over a year ago, company AI policies were few and far between. Today, thanks to the explosion of large language model tools like ChatGPT, responsible AI use is front of mind for business leaders and conscious consumers alike.

Even if you aren’t intentionally using AI in your business, it’s more likely than not that your staff, and even your suppliers, are already using it. Because the technology pushes up from the bottom of your organisation, keeping shadow AI out of your business is almost impossible. For example, Microsoft’s Cloud Access Security Broker (CASB), which can scan all your traffic to the internet, already categorises 477 tools as generative AI, meaning users have plenty of choice.

The European Parliament’s ‘Artificial Intelligence Act’ is the world’s first piece of comprehensive AI legislation and is soon to be adopted. Meanwhile, the UK is drafting regulations to govern the use of AI, and plenty of other countries are doing the same.

It’s only a matter of time before New Zealand adopts its own AI legislation, and it will look to overseas jurisdictions to inform its development, which means you should too.

Of course, other legal considerations apply here, too. The Office of the Privacy Commissioner confirmed in late 2023 that the Privacy Act applies to the use of AI tools, and issues surrounding copyright breaches, biased outputs and data loss arising from AI use present new and uncharted territory for businesses to navigate.

Despite this, AI is an exciting and revolutionary new tool to be embraced by businesses, not feared. The key is to develop an AI policy that meets the unique needs of your company while ensuring it’s legal, ethical and considers privacy and information security principles. 

Here are some things to consider when developing an AI policy for your business.

Scope & objectives 
The first step is to understand how and why AI might be used in your organisation – and by whom. Employees may use AI for a range of different tasks – from writing meeting minutes to generating code. Addressing as many scenarios and use cases as you can will ensure your policy is as comprehensive as possible.

Consider what data AI may be able to access and what data, particularly sensitive data, should be out of bounds for AI.

Responsibilities & expectations 
The next step is to set clear expectations for how your employees should use AI in the workplace. 

For example, you may want to set the expectation that the IT and data team should understand how any tools or services they bring into the business use AI and how that AI was trained. Or that users of AI tools should be aware of the potential to unintentionally bias queries in a way that favours the user’s objectives, or to inadvertently share copyrighted material or private client or customer information.

AI prompt training, and education around the dangers and benefits of AI, can complement the implementation of an AI policy by teaching employees how to use AI effectively and efficiently in their jobs.

Employee usage
Setting out guidelines that employees can follow when interacting with an AI or machine learning system (AI/ML) ensures everyone knows how they can and can’t use AI within a workplace. 

These guidelines may look like a list of dos and don’ts, which could include:

Do:

  • Review any Licence Agreement or Terms of Use before using any public or unapproved AI system, and only use systems in a way that abides by these agreements. 
  • Get advice from your manager or information security, data, legal or other domain owners if you aren’t sure about the risks involved in using AI systems at work. 
  • Treat AI-generated predictions (e.g. financial forecast data or behavioural risk scores) with scepticism – they’re indicative, not absolute fact.

Don’t:

  • Paste or copy any company, partner or customer source code into an AI tool.
  • Paste any confidential, sensitive or personally identifiable information into AI tools.
  • Paste business processes or strategic documents into AI tools.
  • Rely on AI/ML generated responses and content without first sanity checking or peer reviewing the response for reasonableness and accuracy.
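Some teams back up the “don’t paste sensitive data” rules with a lightweight check in front of outbound prompts. The sketch below is purely illustrative: the `check_prompt` helper and its patterns are assumptions, not part of any product or policy mentioned above, and a real deployment would lean on a DLP or CASB product rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- real data-loss-prevention tooling
# uses far more robust detection than these regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IRD-like number": re.compile(r"\b\d{3}-\d{3}-\d{3}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key marker": re.compile(r"(?i)\b(api[_-]?key|secret)\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Warn (or block) before the text ever reaches an external AI tool.
findings = check_prompt("Please summarise: contact jane@example.co.nz")
# findings -> ["email address"]
```

Even a crude filter like this gives staff a moment to reconsider before confidential material leaves the organisation.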

Organisational usage
AIs are powerful tools, but ultimately, they receive their information from humans, which means our underlying biases and limitations are baked in.

For organisations where employees can purchase or develop their own AI tools, organisational usage guidelines help ensure employees understand the limitations of the tools and how those tools may impact the business.

When implementing new AI tools, employees should conduct a Privacy Impact Assessment to determine whether the system will process personal data, and test the AI before, during and after implementation to understand its limitations and biases. When developing AI, ensure sensitive data in training sets is anonymised, and ensure the data is as representative of the business' customer base and as unbiased as possible.
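As one concrete illustration of preparing a training set, a team might strip direct identifiers and replace the customer ID with a one-way hash so records stay linkable without naming anyone. The field names here are invented for the example, and hashing an ID is strictly pseudonymisation; a full anonymisation pipeline would also consider re-identification risk.

```python
import hashlib

def anonymise_record(record: dict) -> dict:
    """Drop direct identifiers and pseudonymise the customer ID.

    Note: a hashed ID is pseudonymous, not fully anonymous -- records
    remain linkable, which is often the point for model training.
    """
    out = {k: v for k, v in record.items()
           if k not in {"name", "email", "phone", "address"}}
    out["customer_id"] = hashlib.sha256(
        str(record["customer_id"]).encode()).hexdigest()[:12]
    return out

raw = {"customer_id": 1042, "name": "Jane Doe",
       "email": "jane@example.co.nz",
       "region": "Wellington", "spend_band": "medium"}
clean = anonymise_record(raw)
# clean keeps region and spend_band, but no name or email
```

Keeping attributes like region in the cleaned records is also what lets you check the training set is representative of your customer base.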

You should also understand the difference between a private AI that only your organisation can access and a public AI that anyone can access. Any AI doing important work should be private. You should also establish whether the AI provider is training its models on your data, and whether you are happy with this.

Third-party considerations
Whatever processes you put in place for your business, you'll want to make sure your suppliers and partners who may also have access to your data and systems follow suit.

Ensure your procurement process includes AI-related questions, and inform your current partners of your AI policy so they understand what your guidelines entail.

Measurement & evaluation
Putting measurements in place to gauge the effectiveness of your AI usage and policy helps to understand where the gaps are and how AI is impacting your business. 

Consider implementing the following metrics: 

  • Number of times a month a staff member is found to be using an unsanctioned AI. 
  • Number of tools using AI that haven't been reviewed before use, each month.
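Both metrics are simple counts over time, so they can be derived from whatever log your CASB or software asset register produces. The sketch below assumes a hypothetical list of event records; the field names and event types are invented for illustration.

```python
from collections import Counter

# Hypothetical export from a CASB / software asset register.
events = [
    {"month": "2024-07", "type": "unsanctioned_ai_use"},
    {"month": "2024-07", "type": "unreviewed_ai_tool"},
    {"month": "2024-08", "type": "unsanctioned_ai_use"},
    {"month": "2024-08", "type": "unsanctioned_ai_use"},
]

def monthly_counts(events: list[dict], event_type: str) -> Counter:
    """Count events of one type per month."""
    return Counter(e["month"] for e in events if e["type"] == event_type)

worst = monthly_counts(events, "unsanctioned_ai_use").most_common(1)
# e.g. [("2024-08", 2)] -> August saw the most unsanctioned use
```

Trending these counts month on month shows whether the policy, and the training behind it, is actually changing behaviour.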

An AI policy that clearly sets out how AI can be used responsibly and productively is essential to ensuring your business is best placed to harness its capabilities without compromising your company.
