Kate Bower on data & AI in 2024 and beyond
"My biggest fear is that we fail to act ethically in pursuit of profit, that we will continue to do things we know are wrong in the name of innovation."
Kate is awesome and if you can possibly talk to her, you should. We met at a workshop some years ago. I managed to get her for a one-on-one chat, sitting outside on a balmy Sydney afternoon, and I remember drinking my beer really slowly so I could listen for longer.
As the founder of the consumer data team at CHOICE, Australia's largest advocacy organisation, Kate spearheaded campaigns on privacy reform, facial recognition technology, and AI regulation. A leading consumer and digital rights advocate dedicated to ensuring fair and safe AI and comprehensive privacy protections for Australian consumers, Kate currently champions these causes at Digital Rights Watch, a nonprofit organisation committed to upholding fairness, freedoms, and fundamental rights for all people in the digital world.
Kate, 2024 was another busy year for data & AI. What’s one development / milestone / news story that really caught your eye?
It sure has been another busy year! I’m going to break the rules and highlight three news stories that I think point to an emerging development: social upheaval and disruption due to the spread of AI across all areas of our lives, from our workplaces to our homes to our families and our health. In previous years, we’ve seen discrete or one-off failures or incidents involving AI products in specific industries or sectors, but I think 2024 is the year we started to get a sense of how significant the societal effects of this period of rapid technological advancement will be.

Firstly, the Woolworths warehouse strike became a flashpoint in the struggle between the claimed productivity gains of workplace automation and AI and the dignity and wellbeing of human workers, and I think there will be more to come.

Secondly, the murder of the UnitedHealthcare CEO might seem unrelated to AI, but UnitedHealth is subject to a lawsuit in Minnesota alleging that the algorithm it used to assess rehabilitation needs for elderly patients had a 90% failure rate. So while we don’t know the motive of the accused, the violent incident and the almost gleeful public response are a warning sign. There are real people at the other end of automated decisions, and they are becoming increasingly frustrated with unfair and unjust outcomes.

The last, and I think most worrying, is the two lawsuits against Character AI: one involving the suicide of a teenager and another involving an incident where the chatbot told a teenager to kill their parents for threatening to take away their devices.

By connecting these incidents and taking a wider view, we can see some of the ethical challenges that we will be grappling with for the next decade or more. We need to reflect often on what we’re trying to achieve when using AI and whether it is making our lives better or worse.
You’ve been working in and around data & AI for a while now. Many things have changed! But tell us about something that was true when you started out in this space and is still important today.
Data quality, data quality, data quality. Data quality has always been important, but it’s only become more so as AI tools and generative AI have become more widespread. Inaccuracies, bias and personal information in datasets are already problems for analytics, marketing and customer experience teams within businesses, but with AI tools being built from these datasets, or the datasets being used to fine-tune LLMs or ground them via retrieval-augmented generation (RAG), it’s a recipe for disaster, and you can’t put the genie back in the bottle.
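For the engineers reading along, here is one rough sense of what that hygiene can look like in practice: a minimal, hypothetical screening pass that drops near-empty records and redacts obvious personal information before documents reach a retrieval-augmented generation index. The `Document` type, regex patterns and length threshold below are illustrative assumptions only, not anything from CHOICE or Digital Rights Watch, and a real pipeline would rely on dedicated PII-detection tooling and human review rather than a couple of regexes.

```python
import re
from dataclasses import dataclass

# Hypothetical document record; field names are illustrative, not from any real pipeline.
@dataclass
class Document:
    doc_id: str
    text: str

# Very rough PII patterns (emails, AU-style phone numbers). A production pipeline would
# use a proper PII-detection library and human review, not a couple of regexes.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_RE = re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b")

def screen_for_rag(docs: list[Document], min_chars: int = 50) -> list[Document]:
    """Drop near-empty records and redact obvious PII before documents reach a RAG index."""
    cleaned: list[Document] = []
    for doc in docs:
        text = doc.text.strip()
        if len(text) < min_chars:  # too short to be useful retrieval context
            continue
        text = EMAIL_RE.sub("[REDACTED EMAIL]", text)
        text = PHONE_RE.sub("[REDACTED PHONE]", text)
        cleaned.append(Document(doc.doc_id, text))
    return cleaned

if __name__ == "__main__":
    sample = [
        Document("1", "Customer complaint: contact me at jane@example.com or 0412 345 678."),
        Document("2", "ok"),  # dropped as too short
    ]
    for doc in screen_for_rag(sample):
        print(doc.doc_id, doc.text)
```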
It’s been a heady couple of years, with 2024 almost as frothy as 2023. What’s one common misconception about AI that you wish would go away?
Again I’m cheating by picking two! The first is the misconception that LLMs can reason, think or have ideas; this is a fundamental misunderstanding of how the technology works, and it leads to automation bias, misplaced faith and using LLM chatbots for tasks for which they are unsuited. The second is the misconception that AI automatically improves productivity. Many of the claims about AI and productivity treat AI like some kind of magic sauce you can drizzle on everything, but the reality is that AI implementation is tricky and complex, and it should start with asking whether you need to use AI at all for the problem you’re trying to solve. I think both misconceptions lead to AI being misused and erode trust in the technology.
Who do you follow to stay up to date with what’s changing in the world of data & AI?
This is a hard question; there are many smart and insightful people writing and talking about AI. On LinkedIn I find that Kobi Leins, Eddie Major, Simon Kriss and Raymond Sun always have their finger on the pulse and share interesting AI content relevant to Australia. And I love keeping up with the work of my former colleagues at the UTS Human Technology Institute; they’ve been doing some excellent research on AI corporate governance and AI regulation that is worth following. I’m also a fan of the many researchers working on this topic, including those at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and the Centre for AI and Digital Ethics.
Leaning into your dystopian side for a moment, what’s your biggest fear for/with/from AI in 2025?
My dystopian side could likely talk for hours about my fears for or from AI, but I could sum them up by saying that my biggest fear is that we fail to act ethically in pursuit of profit, that we will continue to do things we know are wrong in the name of innovation. Currently, as a society we are wilfully ignoring the underpaid workers doing data labelling and content moderation, the extreme environmental costs of the data centres powering AI, and the copyright infringement and theft of creative works, amongst many other harms. The time to act is now, but instead I fear we are normalising these injustices by talking about ‘balancing the risks and opportunities of AI’. We need to be asking more critical questions about what this trade-off means for real people and who gets to decide what the appropriate level of risk is. And why is it always the people with the most to gain and not those with the most to lose?
And now channeling your inner optimist, what’s one thing you hope to see for/with/from AI in 2025?
I’m encouraged by the many people asking critical questions of AI and Big Tech and working on making AI safer, more ethical and more human-centred, and I’m delighted to see some of these conversations enter the public consciousness. More of this, please, in 2025 and beyond. I’m also hopeful that Australia will implement some form of AI regulation in 2025; we’re moving in the right direction, but we need to be quicker in our response and bolder in our approach.