AI Governance Series: Preparing for and Avoiding Roadblocks to Adoption

This image is not a photo: it was generated with DALL·E 2, which creates realistic images and art from natural language descriptions. It's just one of many examples of disruptive AI becoming increasingly mainstream.

This is part three in our series on AI Governance. Visit part one or part two.

Artificial Intelligence is more of a frontier than a technology. That is to say, the term covers a wide range of applications built on many underlying types of machine learning models, other software, and hardware integrations that provide sources of data. Like any frontier of old, there will always be a few ‘pioneers’ who fall victim to the many hazards associated with new or uncharted territory.

With such rapid research and development, this frontier is expanding quickly. Whilst it is certain that AI will continue to improve the delivery of goods and services in marketplaces everywhere, what is much less certain is the success of individual AI projects, even where the technology itself is already proven.

For any organisation, it can make good sense to take a ‘proof of concept’ (POC) style approach to an Artificial Intelligence project. This is a well-proven way to test the value of a solution before investing significant time and money in its full production and integration. The theoretical progression is something like this:

idea → proof of concept → into production

Of course, this is a simplified view of what can be a complex series of decisions. For a Governance team though, it is helpful as a simple way to consider several key concepts, namely:

  1. That ‘into production’ is a general term for the phase after a POC, where the model is built out and integrated into the organisation’s workflow and existing technologies.
  2. That advances in AI development, especially where providers are large and well-funded, are making POC development faster and easier by giving access to AI platforms and reducing the need for R&D into building the models themselves. In many cases a POC can simply be a layer on top of an existing model that has already been trained on huge datasets (a minimal sketch of this follows the list below). Examples include GPT-3 from OpenAI, which has pretrained models that mostly work ‘out of the box’, and Google’s AutoML, where customers ‘bring their own data’. More locally, Arcanum have released a platform to help businesses accelerate their ML projects.
  3. That just because a particular application of Artificial Intelligence shows promise of delivering new competitive advantage or differentiation to the organisation, it doesn’t mean that advantage will materialise once a project is completed. There are a few potential roadblocks for boards to be aware of and ask about early on.
  4. That a concept may be technically feasible and proven to work from a pure technology point of view, but not be practical to implement for an organisation.
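
To make that ‘layer on top of an existing model’ idea concrete, here is a minimal sketch of what such a thin POC layer might look like in Python, using the OpenAI GPT-3 completion API as it existed around the time of writing. The ticket-summarisation task, model name and settings are illustrative assumptions only, and the SDK has since evolved, so treat this as a sketch rather than a recipe.

```python
# A minimal sketch of a "thin layer" POC on top of a pretrained model,
# here OpenAI's GPT-3 completion API (legacy openai SDK, pre-1.0).
# The task, model name and settings are illustrative assumptions only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # keep credentials out of code

def summarise_ticket(ticket_text: str) -> str:
    """Hypothetical POC task: summarise a customer support ticket."""
    response = openai.Completion.create(
        model="text-davinci-002",  # a pretrained GPT-3 model
        prompt=f"Summarise this support ticket in one sentence:\n{ticket_text}",
        max_tokens=60,
        temperature=0.2,
    )
    return response.choices[0].text.strip()

if __name__ == "__main__":
    print(summarise_ticket("Customer reports the invoice PDF fails to download on mobile."))
```

The point is the proportions: almost all of the intelligence sits in the provider’s pretrained model, and the POC itself is a small amount of glue code around the organisation’s own task and data.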

The point of this article is not to warn boards off AI projects; quite the opposite. The point is to be aware of the potential roadblocks, internal and external, as early as possible, and so significantly increase the chance of a successful POC and, perhaps more importantly, the chance of the project making it into production. Current statistics suggest nearly half don’t make it, so time spent de-risking early is time well spent.

Preparedness

Preparedness really matters for AI projects. They often ingest large amounts of data (especially in the training phase) and they require access to specific types of data. If the data needed is not readily available and accessible, this can become an early roadblock that slows or stops a project, and it happens more often than might be obvious. To generalise terribly, large organisations are more likely to have reached a stage of digital maturity and therefore to have more useful data for a project; but if that data is distributed across many systems and formats, and therefore difficult to access and use, there will be a significant amount of preparation required. Understanding the downstream data requirements before commencing a POC is healthy practice, as is establishing sound data governance (a separate subject in its own right).

Questions worth asking on this front are:

  • Where does the data (training data or data feeds for the project itself) needed for the POC reside and do we have easy access to it?
  • If the project progresses to production, which types of data will it need and again, is it accessible?
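
To illustrate the kind of early check these questions point to, here is a rough data-readiness sketch in Python. The file names and expected columns are hypothetical placeholders for whatever sources the POC would actually draw on, and a pandas-based audit is only one simple way to do it.

```python
# A rough sketch of an early data-readiness check, assuming the POC data
# lives in a few CSV exports. The file names and expected columns below
# are hypothetical placeholders for your own sources.
import pandas as pd

SOURCES = {
    "crm_export.csv": ["customer_id", "created_at", "segment"],
    "support_tickets.csv": ["ticket_id", "customer_id", "body", "resolved_at"],
}

for path, expected_cols in SOURCES.items():
    try:
        df = pd.read_csv(path)
    except FileNotFoundError:
        print(f"{path}: not accessible - flag this before the POC starts")
        continue

    missing_cols = [c for c in expected_cols if c not in df.columns]
    null_share = df.isna().mean().mean()  # overall share of missing values

    print(f"{path}: {len(df)} rows, "
          f"missing columns: {missing_cols or 'none'}, "
          f"~{null_share:.0%} of values empty")
```

Even a crude audit like this surfaces whether the data exists, whether it is accessible, and roughly how much preparation it will need before a POC can start.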

Culture

Whilst the benefits of a successful venture into AI may be obvious to sponsors and a board, the impact on people and culture may not be. Considering stakeholders and the organisation’s purpose is an important step, but so too is understanding the views and beliefs people within the organisation hold about the technology.

If there are personally held concerns about the introduction of AI technologies, or wider beliefs that AI is simply not a good thing, this will impede the success of the project and can become a significant barrier to adoption once it reaches production. The complete opposite can be an impediment too: unrealistic expectations of AI, such as expecting 99% accuracy from an out-of-the-box model applied to a new process, are a poor start for any project. An internal survey or informal team discussions can help a board understand where attitudes lie.
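
One practical way to ground those expectations is to score the candidate out-of-the-box model against a small labelled sample of the organisation’s own data before any accuracy figure is promised. The sketch below assumes a hypothetical labelled CSV and a stand-in prediction function; both would be replaced by the real model and data being trialled.

```python
# A small sketch for grounding accuracy expectations: score the
# out-of-the-box model on a labelled sample of your own data.
# `labelled_sample.csv` and `pretrained_model_predict` are hypothetical
# stand-ins for the real data and model being trialled.
import pandas as pd
from sklearn.metrics import accuracy_score

def pretrained_model_predict(text: str) -> str:
    """Stand-in for the vendor model or API call being evaluated."""
    return "other"  # replace with the real prediction

sample = pd.read_csv("labelled_sample.csv")  # columns: text, label
predictions = [pretrained_model_predict(t) for t in sample["text"]]

accuracy = accuracy_score(sample["label"], predictions)
print(f"Out-of-the-box accuracy on our own examples: {accuracy:.1%}")
```

A realistic number from an exercise like this is a far better starting point for internal conversations than a vendor’s headline accuracy claim.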

Change management

In a similar vein, change management as it applies to embedding new technology is important. For directors, it matters to understand how workflows and the systems people use are likely to change, and how people will be supported through those changes. Existing technology systems and structures create inherent boundaries and constraints, which will need to be overcome.

Good questions to ask around this include:

  • Once this anticipated approach to AI reaches the ‘production’ stage, which of our existing technology systems will it need to integrate with?
  • How will the project change the way our people use these systems and how they work?
  • Has any work been done to understand these facets of the project?

If the board is looking at a proposal simply on the merits of the technology and doesn’t have access to at least some investigation and understanding of these potential roadblocks to success, it is very likely worth asking for that work to be done before committing.

In addition to his position as Executive Chairman of ElementX, Richard McLean has over 20 years of experience helping New Zealand businesses tackle growth challenges and bring new products to market.

Richard's AI Governance series can be found here:

  1. An introduction: Does Artificial Intelligence deserve a place on the board agenda?
  2. Consideration of key stakeholders in the board decision making process
  3. Preparing for and avoiding roadblocks to adoption

Subscribe below to be the first to know when new posts are published, or follow Richard on LinkedIn.
