This is part two of a multi-part series on the many dimensions of AI inside product development and product management. You can read this one out of sequence, but if you want the full picture, check out the first instalment too.
AI regulation is going to be like quicksand
The huge surge of interest in AI this year, driven by the commercial release of gobsmackingly good generative AI models, means everyone is thinking and talking about regulation of AI. Whatever you think about the motivations of certain players in the space (and for my money, commercial gain via monopolisation is never far from the surface), clearer regulation seems both worthwhile and fraught with challenges.
As a practitioner working with and around AI-powered products, you will need to stay aware of where regulatory conversations stand and how they are moving. We will almost certainly see regulation islands emerge, with different jurisdictions taking substantially different approaches over different timelines.
If you operate in multiple countries, this can have major consequences, and it is certainly something to keep in mind as you design your training and inference architecture if building your own, or your partnership agreements when using third party APIs. It’s also a good reason to be parsimonious in your use of AI: use it where it makes sense, but not without good reason, as it can create both heavyweight technical debt and compliance obligations.
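To make that concrete, here is a minimal sketch of what jurisdiction-aware architecture might look like, assuming you route inference requests through a per-region policy. Every endpoint, region, and rule below is hypothetical and illustrative, not a statement of any real regulatory requirement.

```python
# Hypothetical sketch: routing inference requests by jurisdiction so that
# data residency and retention rules can differ per region as regulation moves.
# All endpoints, regions, and rules here are illustrative only.

from dataclasses import dataclass

@dataclass
class RegionPolicy:
    endpoint: str               # where inference requests may be sent
    allow_training_reuse: bool  # may prompts be retained for model training?
    log_retention_days: int     # how long request logs may be kept

# Each jurisdiction gets its own policy, updated as the rules change.
POLICIES = {
    "EU": RegionPolicy("https://eu.example-llm.com/v1", False, 30),
    "AU": RegionPolicy("https://au.example-llm.com/v1", False, 90),
    "US": RegionPolicy("https://us.example-llm.com/v1", True, 365),
}

def route_request(user_region: str) -> RegionPolicy:
    """Return the policy for a user's region, failing closed if unknown."""
    policy = POLICIES.get(user_region)
    if policy is None:
        raise ValueError(f"No compliance policy for region {user_region!r}")
    return policy
```

The specific fields don’t matter; the point is that treating jurisdiction as a first-class input to your architecture is far cheaper now than retrofitting it once the regulation islands have formed.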
One issue that I think needs more attention than it gets is that compliance with regulation is hard, and auditing of compliance is likely to be slim to non-existent. Note that this is NOT a suggestion to ignore your compliance obligations (I personally believe in, and practise, staying ahead of regulation because it is very often the right thing to do by the ‘front page test’). If anything it’s a cry for caution, and an entreaty to any regulators reading this post to think even harder than they do today about how to make the language of regulation accessible and executable.
Creepy lines are going to move around
Looking for certainties in a newly commercial subfield of research isn’t usually a fruitful exercise, but one certainty in this space is that what we’re willing to see AI do is going to change over time. That’s just our species; we change our minds about social mores a lot.
Bikinis may have been illegal in 1900, but they were all the rage in ancient Rome.
Emily Spivack, Smithsonian Magazine
For AI in general, and for generative AI specifically at this moment, changes in acceptance will also be driven by greater familiarity and by the evolution of model capabilities.
Unsurprisingly, the acceptance of AI varies by application and by country. Again, the implications for those of us building AI-powered products that will be deployed across multiple countries are significant.
As a species we are still figuring out what is acceptable and what isn’t. That understanding will evolve, and it will certainly be driven by large, public failures and breaches of trust, as happened with Facebook and Cambridge Analytica from 2014 to 2018 and with the Robodebt tragedy in Australia from 2016 to 2019.
Your users will blame you … so your responsibility is pretty broad
I’ve heard a lot of conversation lately about where responsibility lies for all the pieces of an AI product puzzle. If you incorporate a third party product into your AI pipeline, do you take on responsibility for the ethical construction of that product? What about responsibility for the errors it makes?
I have no idea where the legal answer to that question will land, but I’m pretty sure I know how your users will respond: your product, your responsibility.
So think about where you plan to use third party AI in your user workflows. Workshop the potential harms in the event of ambiguous or incorrect responses. Start in areas where bad responses are annoying at worst.
This is one of the reasons I think we will see a lot of early applications for large language models in customer assistance and support, and also why augmenting human users is going to be big. ‘Agent assist’ has been a focus of automation and semi-automation for many years for exactly this reason (along with the cost-saving potential where human agents struggle to learn a patchwork of backend systems built up over decades), and LLMs have already taken off in this space.
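As a rough illustration of that pattern, here is a minimal agent-assist sketch. The function names are hypothetical and the model call is stubbed out, since the shape of the loop is what matters.

```python
# A minimal sketch of the 'agent assist' pattern: the model drafts, the human decides.
# generate_draft is a hypothetical stand-in for whatever LLM call you use.

def generate_draft(customer_message: str, context: str) -> str:
    # In a real system this would call your LLM of choice with the message
    # plus retrieved context; here it just returns a canned placeholder.
    return f"Thanks for getting in touch about: {customer_message[:40]}..."

def handle_ticket(customer_message: str, context: str) -> str:
    draft = generate_draft(customer_message, context)
    # The draft is only a suggestion; the human agent can accept, edit,
    # or discard it, keeping a person in the loop for bad responses.
    print("Suggested reply:\n" + draft)
    edited = input("Press Enter to accept, or type a replacement: ")
    return edited or draft

if __name__ == "__main__":
    print(handle_ticket("My invoice total looks wrong", ""))
```

The key design choice is that the model never answers the customer directly: it drafts, and a human accepts, edits, or discards, which keeps the worst case at ‘annoying’ rather than harmful.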
For next time
If you missed it, you can catch up on the first post in this series below.
Next time I’ll look at:
Implications of the Gen AI rush to market
Collect the data for tomorrow’s products today
Costs to build and run