Artificial Intelligence

AI Governance Series: Keeping regulation and compliance concerns front of mind during AI project planning

As I write this, media in New Zealand have been covering the use of facial recognition technology in big retail. This follows closely on the heels of a very similar situation across the Tasman, where consumer protection groups and the retail sector hold very different opinions on what counts as an acceptable use of artificial intelligence. In the Australian case a complaint was filed with the regulator, leading one of the three retailers involved to voluntarily pause its use of the technology while the regulator's investigation is ongoing.

This week’s local story raises a similar set of concerns, but so far the debate is confined to the media, with no mention of a regulatory investigation. It is easy to see the motivation on both sides of this particular debate, especially as retailers seem to face more and more theft and personal security issues on a day-to-day basis.

This is a challenging balancing act for any organisation looking to use these advanced AI technologies to solve real-world challenges. The focus of today’s discussion, though, is on Compliance and its close twin, Regulation. Elsewhere in this series we discuss Ethics and Stakeholders in AI through a Governance lens.

In the case highlighted in Australia, the complaint to the Office of the Australian Information Commissioner (OAIC) is reported as based on concerns about ‘unreasonably intrusive use’ of technology and potential breaches of privacy laws. The OAIC has yet to reach a determination on the specific complaint, but it has issued guidance on compliance with privacy law and recommended that retailers consider customer and community expectations, along with the impact on customers’ privacy.

Another hot topic causing interest and debate at the moment is AI-generated content and the copyright issues it can raise. Whilst providers of generative AI models assert that care has been taken to avoid copyright infringement, their terms of use also usually include a disclaimer of liability. It seems more likely that someone using an AI-generated image that raises copyright issues will be the party answering any resulting action.

This certainly means taking a little care to verify originality or right-to-use before including AI-generated content in anything likely to be construed as commercial use.

Whilst legal compliance and softer obligations (like considering community expectations) are often mentioned together in this context, what does become obvious is that regulation is lagging well behind the development and deployment of AI technology in most markets.

For directors, it is then helpful to consider compliance in two parts:

  1. What are the legal obligations of the organisation intending to deploy an AI solution now?
  2. What are they likely to be within the next few years, as the anticipated successful return on a project is delivered? 

The first requires a little research and understanding; the second, more of a ‘crystal ball’ approach.

At present in New Zealand the most obvious risks of non-compliance or infraction stem from privacy issues. Many useful AI applications are built using data sets that include personal information or data that can be used to determine identity, albeit strictly for the purposes for which the AI solution is designed. A board has obligations to ensure that privacy laws have been adhered to during the design and build of the project. It also has obligations to ensure that any such data collected is kept secure from unauthorised access or use. These are the obvious obligations, the ‘low hanging fruit’, but there will be more depending on the type of AI and use case being considered.

A little less obvious, but by no means obscure, is the issue of discrimination. In New Zealand, compliance is largely governed by the Human Rights Act. Under this Act it is illegal to discriminate against people on several grounds, including gender identity, race, age and ethnicity. Consider the process of employment. The early stage of a recruitment process is a rich area for automation using AI. Training a model to read large quantities of CVs and reduce them to a shortlist for consideration by a human is a very compelling investment (especially to those who have had to read 50 or 100 CVs without such a model).

But imagine bias being subtly but surely ‘baked in’ to the model as it is designed and then trained. The result might be a sleek recruitment process, but with emerging (or even preserved) patterns of discrimination at the early filtering stage. If the model is automatically discriminating, then the organisation is in breach of the legislation - now just doing it faster and at a larger scale. On top of this, from the board’s perspective, a lack of diversity will grow in the organisation, undetected at first.
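One way a board might ask for evidence on this point is a simple comparison of shortlisting rates across applicant groups. The sketch below is purely illustrative (the function names, group labels and numbers are invented for the example, and a rate comparison like this is a monitoring signal, not a legal test under the Human Rights Act):

```python
# Hypothetical sketch: comparing shortlisting rates across groups to
# surface possible bias in an automated CV-filtering step.
from collections import Counter

def selection_rates(records):
    """records: list of (group, shortlisted) pairs -> shortlisting rate per group."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in records:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest; values well below 1.0
    suggest the filter is treating groups very differently."""
    return min(rates.values()) / max(rates.values())

# Illustrative data: 10 applicants from each of two (hypothetical) groups.
records = ([("A", True)] * 6 + [("A", False)] * 4
           + [("B", True)] * 3 + [("B", False)] * 7)
rates = selection_rates(records)
print(rates)                          # {'A': 0.6, 'B': 0.3}
print(disparate_impact_ratio(rates))  # 0.5
```

A low ratio like this would not prove discrimination, but it is exactly the kind of simple, ongoing measurement a board could request before and after an AI filter goes live.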

The crystal ball 

For any director concerned with potential future compliance issues, it would probably help to look into the legislation currently progressing through the EU. Named the AI Act, this is a body of legislative regulation intended to become law in the EU. I recommend that any director considering significant investment in AI look into it, on one simple premise: it is probably a reliable indicator of any legislative framework coming into force in New Zealand. When GDPR was signalled and then introduced in the EU, operating businesses there out of New Zealand meant I had to understand the new regulation and how best to comply. It certainly had teeth. But the compliance was a worthwhile investment that made it easier to meet NZ privacy laws as they were updated later.

This new Act (still a work in progress) contemplates fines as one means of enforcement, and they are potentially significant: tens of millions of Euros, sometimes based on a percentage of worldwide annual turnover. The Act will apply ‘extraterritorially’ to any application, product or service that reaches the EU market. It is risk-based and would explicitly ban specific uses of AI - for example, general-purpose social credit scoring or exploiting vulnerable community groups. It also contemplates high-risk uses like employment processes, law enforcement and border control. The Act itself is over 100 pages and a dense read, but there is a helpful summary published here which provides a quick explainer and advice on how to approach the requirements.

So where to start? 

For lovers of detail and deep dives, the same website that publishes the current version of the EU Act hosts various tools, including an assessment tool called Cap AI, developed by University of Oxford researchers and designed to help organisations assess their compliance with the developing Act. The tool follows an ethics-based auditing approach to AI Governance and has a simple way of illustrating an approach that links compliance, technical diligence and ethics together.

Cap AI's sand cone model of cumulative capabilities, as applied to AI trustworthiness

It references a separate study showing that ethical issues are easier to address once a solid legal compliance basis is set and a technically robust use of AI is designed on top of it.

For those who prefer a simple approach, asking compliance-focused questions of a proposed project, or even just during a general discussion about AI in an organisation, is a great start. Privacy requirements provide an obvious place to begin; further requirements like non-discrimination can follow. If an issue arises and is unclear, most legal advisory firms now have a partner who specialises in this area, and there are specialist advisory firms like Simply Privacy or Kindrick Partners available. You could also suggest a workshop session with AI specialists to help the board better understand these implications.

The compliance bar seems a little low here in NZ, as is probably the case in many other markets, because discussion and formation of public policy lags behind the development of the technology. But it does seem certain that this bar will rise as that changes. For some, getting ahead of this change by understanding the requirements will be a wise choice.

In addition to his position as Executive Chairman of ElementX, Richard McLean has over 20 years of experience helping New Zealand businesses tackle growth challenges and bring new products to market.

Richard's AI Governance series can be found here:

  1. An introduction: Does Artificial Intelligence deserve a place on the board agenda?
  2. Consideration of key stakeholders in the board decision making process
  3. Preparing for and avoiding roadblocks to adoption

Subscribe below to be the first to know when new posts are published, or follow Richard on LinkedIn.
