Kobi Leins on data & AI in 2024 and beyond
"The push for nuclear power to feed the seemingly insatiable AI hunger is also an interesting advancement that will shape global geopolitics long term"
Kobi describes herself as a ‘reformed lawyer’, which always makes me chuckle, as for many years I referred to myself as a ‘lapsed physicist’. She has experience in digital ethics, disarmament and human rights, and acts as a technical expert for Standards Australia. Intriguingly, she loves both dachshunds and container ships!
Kobi, 2024 has been another busy year for data & AI. What’s one development / milestone / news story that really caught your eye?
Although I advise Non-Exec Boards and corporates now on data and AI management and governance with my business partner, Kate Carruthers, my gaze is international as my background is in international affairs, diplomacy and peace work.
I am loving that Boards are now asking questions about AI and engaging experts to uplift their capabilities and organisational management and governance - and connecting it to other risks and strategic goals.
Also interesting has been the recent UnitedHealth story about algorithms used to deny care, and how similar practices are playing out across many industries and causing immense harm.
As much as I read, I also listen carefully for what is not being reported. What topics are not being covered? Who is funding what conversations?
One shift this year has been Open Philanthropy's move into Australia to fund AI governance conversations and narrow them to safety alone, which is of some concern. If anyone says that they are fielding ‘existential risk’, check their funding and motives. More about why this is of deep concern here.
Another silence that is starting to lift is around the environmental cost of AI - a conversation that is becoming more socially acceptable, and one that needs to inform how these tools are used. The push for nuclear power to feed the seemingly insatiable AI hunger is also an interesting advancement that will shape global geopolitics long term.
You’ve been working in and around data & AI for a while now. Many things have changed! But tell us about something that was true when you started out in this space and is still important today.
AI is not new, and the concerns that the creator of the very first chatbot had about human responses to technology remain as true today as when he raised them:
What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.
Joseph Weizenbaum, creator of ELIZA, the first chatbot (Computer Power and Human Reason, 1976)
The impact of these tools on humans is still not often deeply considered - in all kinds of ways - and we are seeing that play out in situations that could have been managed better with a little training and expertise to guide the introduction of the new wave of ‘chatbots’. Rolling out Copilot with a corporate client was a fantastic opportunity to see how to uplift curiosity and care in its use: creating places for staff to raise concerns or issues, and to understand the possibilities - and limitations - of the tools.
It’s been a heady couple of years with 2024 almost as frothy as 2023. What's one common misconception about AI that you wish would go away?
That innovation and regulation are in opposition - the belief that aligning the use of your tools (including AI) with Board risk appetite, strategy and values will somehow slow you down. When you have the policies, processes and people aligned, you can avoid the wrath of regulators and the cost of litigation - AND work magic in corporate contexts to move much more quickly and safely. I know this is so because I’ve seen it in action.
The festive season is almost upon us, so many readers will have a bit of extra time to read / learn / reflect. Who do you follow to stay up to date with what’s changing in the world of data & AI?
Gary Marcus, Baldur Bjarnason (with great pictures of Iceland), Kate Carruthers, Marco Almada (with great pictures of otters), Geomastery Advisory, Miah Hammond-Errey
Leaning into your dystopian side for a moment, what’s your biggest fear for/with/from AI in 2025?
I am watching current global politics and increased hostilities with deep unease, as are many others. The use of data, data brokers, and technology to hunt and harm, in violation of human rights and international law, concerns me greatly.
This situation has been a long time in the making. Defence and weapons have been deliberately carved out from international governance in the Council of Europe treaty, the EU AI Act, the standards, and more. IEEE has engaged with a recent white paper, but global agreement is needed, shaped by experts who understand how these systems work. The political will isn’t there now, but it needs to be.
And now channeling your inner optimist, what’s one thing you hope to see for/with/from AI in 2025?
I am looking forward to helping more organisations get ahead using the right tools and uplifting their policies, processes and people - engagement in test projects using AI has made many more people curious and discerning.
You can read more from Kobi at her website, on LinkedIn and on Bluesky.