I sit on a couple of boards and I’ve spent a fair bit of time recently talking to other board members and to CEOs who are actively working to expand their own understanding of AI. They want to know what’s changed and what’s changing. How do they need to update their thinking and their practices in order to be better leaders for their organisations?
Here are some of my current thoughts. I hope they prove useful as jumping off points for your own journey of discovery.
Will AI take cost out and/or increase productivity?
We are in the very early days of generative AI-based software and services, so there are absolutely no givens. It seems a reasonable guess that prices for inference (querying a model and receiving an answer) will trend down for 3-5 years, both because of fierce competition for enterprise dollars and because there is a lot of engineering effort being spent on making the required computation more efficient. After that timeframe, of course, we may well see practices that exploit vendor lock-in and drive prices back up.
But over the medium term, I’d bet against significant price increases. So you can and should expect to see a solid back-of-the-envelope estimate of usage costs. (Don’t let your teams forget to scale these up to what full adoption would look like. I know that sounds obvious but I’ve seen it happen!)
Now have a robust conversation about what net costs you would actually be able to remove by adopting the AI service.
Will the licensing / usage costs of the third-party service be the only cost you add in? Is there a need to improve the quality of the third-party service by ‘fine tuning’ model responses on your own data? (Highly likely, yes.) What will the upfront integration and data-cleaning work for that fine tuning look like? What are the ongoing costs to keep the relevance high? What level of internal or contract development resources will you need to fund to support that work?
For example, consider using a Gen AI-based solution to shift more customers to self-service issue resolution, with the upside of taking headcount out of a 24x7 call centre. A common mistake in these early days of adoption is failing to account for the fact that you could be trading call centre staff costs for development costs. That’s no problem at all if it’s done with eyes open and with a view to raising the literacy and competency of your workforce, but really challenging if these costs are overlooked in the first round of the business case and then discovered after funds and time have been committed.
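If it helps to make that conversation concrete, here’s a minimal sketch of the kind of back-of-the-envelope model I’d want to see. Every name and figure in it is invented for the illustration, not a benchmark from any vendor or study; the point is simply that licences, upfront integration and fine tuning, and ongoing engineering all belong on the cost side before you net off the projected call centre savings, and that the licence line should be scaled to full adoption.

```python
# Illustrative back-of-the-envelope model. All numbers are placeholders for the
# board conversation, not benchmarks.

def annual_cost_of_adoption(
    seats: int,
    licence_per_seat_per_month: float,
    upfront_integration: float,          # data cleaning, fine tuning, integration build
    ongoing_engineering_per_year: float, # keeping relevance high, tool and model upkeep
    amortisation_years: int = 3,
) -> float:
    """Yearly cost of the AI service once usage is scaled to full adoption."""
    licences = seats * licence_per_seat_per_month * 12
    return licences + upfront_integration / amortisation_years + ongoing_engineering_per_year


def annual_saving(calls_deflected_per_year: int, cost_per_handled_call: float) -> float:
    """Gross saving from calls the AI service resolves instead of staff."""
    return calls_deflected_per_year * cost_per_handled_call


if __name__ == "__main__":
    cost = annual_cost_of_adoption(
        seats=2_000,
        licence_per_seat_per_month=30.0,
        upfront_integration=600_000.0,
        ongoing_engineering_per_year=250_000.0,
    )
    saving = annual_saving(calls_deflected_per_year=150_000, cost_per_handled_call=6.0)
    print(f"Annual cost at full adoption: ${cost:,.0f}")
    print(f"Annual gross saving:          ${saving:,.0f}")
    print(f"Net position:                 ${saving - cost:,.0f}")
```

Even at this crude level, the exercise forces the integration and ongoing engineering lines into the conversation, rather than leaving licences as the only visible cost.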
It’s becoming commonplace for AI commentators to say that there are demonstrable productivity gains from using generative AI tools in the workplace. However, when you follow the trail of references, the cited studies are small, short term, and often measure the ‘perceived productivity lift’ reported by participants.
My bet would be that there are gains in productivity to be made in many knowledge-work jobs. And in a nice twist, the less well organised and efficient your company is today, the bigger the ‘low hanging fruit’ gains may be. But, and it’s a big but, with the current level of tooling I don’t see this being a scalable productivity gain that lifts the workforce as a whole and endures through staff turnover.
In my mid-term forecast, Gen AI assistants will largely remain ‘single user’ productivity boosters as savvy and impatient knowledge workers dive into the new tech and craft their own ‘glue work busters’. As those who remember the age of Lotus Notes will recall, it’s really easy to build a rat’s nest of customised timesavers with powerful single-user tools.
My bet is we will see productivity boosts from Gen AI-based tools over the longer term, but in the interim we will see a lot of apparent time saving which dissipates. As a board member, I wouldn’t be betting future solvency on Gen AI delivering sustainable cost savings through general productivity increases.
Does the use of AI align with your brand promise and what reputational risk does it pose?
Just as we have seen a rise in activist investors challenging companies who are slow to get on the climate change train, my guess is that the use of generative AI within a company will come under scrutiny from customers, investors, regulators and the community at large.
Active cases are working their way through the courts around copyright and the fair use of publicly available data. Expect to see heightened scrutiny of the work practices of AI vendors with respect to their least advantaged workers. There are already some deeply disturbing accounts of marginalised people, including children, working as contingent labour in sweatshop-esque conditions, labelling data and training AI models to avoid giving inappropriate responses. We know that there is CSAM mixed into the training data of the frontier foundation models, and we know that the original publicly released models could wander into some pretty dark spaces pretty quickly. It’s easy to see how much nastier that would have been for those exposed to the raw output.
Then there is the question of providing a living wage, jobs that engage rather than dehumanise, and worker retention versus disposable workforces. There will be all sorts of ‘right answers’ depending on the organisation, but the prospect of pervasive adoption of generative AI means engaging with these questions in a way many boards won’t have needed to do in the past. It’s no coincidence that we’re seeing renewed discussion of universal basic income alongside the developments of the past two years. What is your company’s stance?
I expect to see AI supply chain legislation (similar to the carbon reporting that is coming online now) emerge in reasonably short order, because as a ‘transparency booster’ it’s a relatively easy halfway house for governments caught between a rock and a hard place. Boards and execs should be considering now how damaging being aligned with socially unpleasant practices would be for them. There is no ‘one size fits all’ answer to this question, but going in with eyes open will be critical to navigating what could turn into regulatory chaos, particularly for companies operating in multiple countries.
Will your business model be eroded by the widespread adoption of AI?
Most board members today will have lived through the widespread adoption of the internet as a tool for daily life. Change was rapid in some areas, slower in others, but looking back over the past three decades the change has been extraordinary. At present my belief is that the changes kicked off by the productisation of powerful generative AI will be similarly transformative, in a similarly patchy and drawn out way.
As a specific example, for specialist professions with traditionally high barriers to entry and time-based billing models, I see disruption on the horizon. I’m not one of those who thinks all lawyers, doctors and accountants will be out of business, replaced by an app. But it’s getting to be a pretty safe bet that some of the tasks within some of the workflows within some of those roles will be commoditised and parcelled out to an ever patient and attentive AI process. While that might initially feel like a positive, it’s going to wreak havoc with a credential-gated, time-based revenue model. How do you build consulting teams in the traditional pyramid of highly skilled partners with a light touch over many less skilled and (much) lower paid graduates, when substantial parts of the discovery and synthesis work traditionally done by graduates are performed in essentially zero time by a fine-tuned, task-specific LLM? Averaging cheap graduate hours into a blended rate goes out the window. Forward-thinking consulting and legal firms are all over this change right now because the challenge is clear in those cases. How might this play out inside your own business model?
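To see why the blended rate is the pressure point, here’s a tiny, deliberately simplified calculation. The team shape, hours and rates are all made-up numbers for illustration; the only point is what happens to billable hours, revenue and the blended rate when most of the graduate hours disappear into a near-instant LLM step.

```python
# Illustrative only: how removing graduate hours erodes the pyramid's blended rate.
# Team shapes, hours and rates are invented for the example.

def engagement_revenue(team: dict[str, tuple[float, float]]) -> float:
    """Total billings: sum of (hours * hourly rate) across the team."""
    return sum(hours * rate for hours, rate in team.values())

# Traditional pyramid: a partner's light touch over a lot of graduate hours.
before = {
    "partner":  (20, 900.0),
    "senior":   (60, 450.0),
    "graduate": (300, 180.0),
}

# Same engagement when most discovery and synthesis work is done by a
# task-specific LLM in near-zero time: the graduate hours largely vanish.
after = {
    "partner":  (20, 900.0),
    "senior":   (60, 450.0),
    "graduate": (30, 180.0),
}

for label, team in (("before", before), ("after", after)):
    revenue = engagement_revenue(team)
    hours = sum(h for h, _ in team.values())
    print(f"{label}: {hours:.0f} hours, ${revenue:,.0f} revenue, "
          f"blended rate ${revenue / hours:,.0f}/hour")
```

The hours and revenue the graduate layer used to carry are exactly the part that evaporates, which is the load the pyramid and its blended rate were built to absorb.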
A lot of people are talking about the unbundling and rebundling of tasks in a traditional workflow. We’ve seen this type of change before: Uber, AirBnB, Amazon. How can this type of rethink impact your business? While it will not be cheap to build past the current alpha-release quality to something robust and maintainable, with sufficient scale really expensive engineering can be a great investment.
While I’m not on the same page as Sangeet Choudary about how soon AI agents will be commercially useful, there’s a lot to think about in his recent white paper on unbundling, and I recommend taking the time to read and ponder it.
How will your workforce adapt?
This change will be cultural as much as technological. Will you be working with an aligned human workforce that has bought into a mutually beneficial AI adoption journey, or an oppositional, potentially unionised workforce pushing back against apparent or actual enshittification of their working day?
Recent surveys of public opinion about AI, including this one from the Human-Centered AI group at Stanford, make interesting reading. It isn’t hard to construct a narrative that explains the demographic variations in enthusiasm about AI, where “individuals with higher incomes and education levels are more optimistic about AI’s positive impacts on entertainment, health, and the economy than their lower-income and less-educated counterparts”. Not everyone stands to benefit from AI agents booking overseas holidays and organising social calendars for busy working parents, and it shows in the metrics.
I am extremely curious about how this will all play out. We will definitely see profit driven narratives which seek to downplay the negative externalities and social impacts. I’m betting we will also see the rise of ‘artisan’ businesses which champion their continued use of humans as a socially responsible point of difference.
This is and will continue to be a muddy space and regional variations are close to a certainty. Savvy boards will be discussing approaches to adoption now, investing in literacy uplift and building change management muscles.
Short term, medium term, long term
While it’s easy to get focussed on quarterly earnings, a critical part of any board role is to take the longer term view as well. Absent any crystal balls, this is demanding at the best of times, and so much more demanding right now with truly impactful releases and technologies overlaid with world-class marketing, spin and hype.
Balancing a short and long term view means being an early adopter while engaging thoughtfully with less buoyant views of the AI future. I’ve found a lot of food for thought in the conversation about the social media journey over the past decade and the evolution of search from expansive traffic director to walled garden. This somewhat dense but very thought-provoking paper from UCL’s Institute for Innovation and Public Purpose is worth a read as, absent a time machine, you try to make sense of the possible economic paths ahead and steer your company through the choppy waters and squalls.
Till next time
Wherever your week takes you, I hope you find a moment to step outside and participate in the changing seasons. Here in Australia, we’re moving through a crisp autumn into a chilly winter, and the rosellas are out in force in my neighbourhood, brightening any walk around the block.
Photo by Stas Kulesh on Unsplash