
At a time when it seems uncertainty could hardly be higher, organisations (or their boards) are being asked to make decisions about investing in artificial intelligence projects without the degree of certainty they would normally expect for significant capital allocation decisions.
2026 has fast shaped up to be the 'Year of Agent-Assisted Coding' (commonly referred to as vibe coding), among other things. This is a meaningful shift with significant implications for labour markets, established software companies and consumers of software technologies, and there is no shortage of articles and posts speculating on where it leads. I work as a board chair, so I am less interested in being perfectly right about these implications, and much more interested in understanding how boards can still make great decisions in these uncertain periods.
When it comes to adopting and implementing these new agentic tools, the downside risks are mostly obvious. Reckless innovation can see an agent go 'off the rails', carrying reputational and even operational risks for an organisation. Locking into particular AI applications or technology stacks might prove a dead-end choice in the current race for model and inference dominance. Even a clear leap forward and the advantage it brings could be lost almost overnight, as major vendors release new tools that render your newly built application practically obsolete.
These downside risks of investing in agentic AI are all real and need to be carefully considered. They need to be weighed against expected and measurable gains, outcomes, and improvements for the organisation.
But what about the risk of doing nothing?
At a time of such uncertainty and volatility, the safest choice often appears to be waiting on the sidelines until the dust settles, expecting the best path to be revealed by the pioneers who have gone before (not to mention the bodies of those that didn't make it).
Whilst this will sometimes, without a doubt, be the best choice for an organisation, the risk of doing nothing must be measured and assessed in order to reach that conclusion. For clarity, 'doing nothing' should be defined. In this context, I take it to mean putting any planned agentic or AI initiatives on hold, or stalling them at inception by asking for more information to support a proposal. For some organisations, it might mean locking down or locking out the use of generative AI tools altogether.
A widely cited MIT paper late last year suggested that a high proportion of organisations that have invested in AI failed to see a meaningful return. The methodology and underlying data have since been challenged, but the implications drawn generated a lot of debate. At the same time, there are many examples of organisations with foresight and sufficient access to AI capability choosing to significantly reduce their headcounts and cost bases, relying substantially on these tools to make the difference. Regardless of the proportions, some organisations clearly expect significant gains from these investments.
In this context, the choice to do nothing is not actually a zero-cost path. While you wait, your relative productivity declines against competitors who have invested and executed successfully. Your organisation may have an operational 'moat' built over time by finding the best way to serve customers; if a competitor uses agents to successfully automate complex workflows, they may soon achieve a cost structure that you cannot match.
Time spent doing nothing is not just about shifts in relative productivity; the time dimension itself matters in this kind of decision-making. It takes time to reach a successful outcome with AI tools, especially when it comes to your workflows and complex operational processes. Finding and organising your data supply, tuning and training your models, and developing your organisation's unique context all take time.
If you believe your people and organisational culture are your significant advantage, then consider that the best talent in the market will likely gravitate to AI-forward businesses, where they can learn skills and develop their careers to suit the direction of travel we can all see. Failing that, they will adopt the tools at home on their own time, or perhaps create shadow AI projects in your workplace, which of course introduce risks of their own.
So whether you are starting a general board discussion about your organisation's position on the adoption and use of AI tools, or considering a specific business case for a project, make sure you have teased out the main risks of doing nothing for your unique situation, and that those risks are included in the discussion and the decision-making process.