Sarah Kaur on data & AI in 2024 and beyond
"Now, almost everyone I meet has an opinion about AI. This makes me optimistic because meaningful oversight and responsible AI development require exactly this kind of broad civic dialogue"
Sarah has a fascinating background, including a stint as a video artist working with dancemakers. Today, though, she is a researcher and practitioner of human-centred design with a specific focus on Responsible AI and on embedding human insight in machine learning and AI research and product development.
She has designed and led stakeholder engagements on projects delivering algorithmic decision-making models to users, including Australia’s Family Court’s first machine learning product, amica, which helps couples separate by automating a two-party workflow to gather data and suggest a fair division of assets for their agreement.
Every conversation with Sarah will leave you with much to think about. This one was no exception.
Sarah, 2024 was another busy year for data & AI. What’s one development / milestone / news story that really caught your eye?
One story is how the massive energy demands of generative AI are pushing big tech companies to invest in nuclear power. Microsoft plans to revive the Three Mile Island nuclear plant to power its AI operations, Google has contracted with Kairos Power to buy energy from small modular reactors for its data centres, and Amazon is investing in four new nuclear facilities with Energy Northwest. Why nuclear over renewables? For companies needing round-the-clock carbon-free energy by 2030, nuclear offers what solar and wind alone cannot: stable, continuous carbon-free power.
While this nuclear pivot could help launch new power generation technologies globally, there are ongoing concerns about safety, community acceptance, and how to safely store radioactive waste. But it could be transformative! Not just for powering AI development, but also for advancing climate solutions through AI-driven investment in nuclear power and grid management for a net-zero future.
(The International Energy Agency convened the Global Conference on Energy and AI in early December - some of my holiday content consumption will be found there!)
You’ve been working in and around data & AI for a while now. Many things have changed! But tell us about something that was true when you started out in this space and is still important today.
One constant throughout my journey in AI has been the fundamental importance of human-centred design and ethical considerations in AI development. When I started in this field, I was driven by AI's potential to improve human services - from access to justice to mental health support. In this context of how AI serves human needs, I knew that the most critical decisions in AI development weren't just technical ones about model accuracy or data fitting. Nope - AI development involves complex design decisions that go far beyond technical specifications. These include value judgments about which problems to prioritise solving, how to understand and compare the trade-offs between solutions, and, overall, how to ensure AI systems align with our needs and expectations.
Seven years ago, securing funding for ethics reviews and technical accuracy checks in AI systems required advocacy from people who cared enough to convince those holding the purse strings. This is still the case, even though the conversation around responsible AI has become mainstream today - perhaps even dominating professional discussions in 2024. It’s progress - but we can’t stop demanding it!
It’s been a heady couple of years with 2024 almost as frothy as 2023. What's one common misconception about AI that you wish would go away?
One long-held belief I want to challenge is the oversimplified notion that "bias in AI is always bad." While it's crucial to acknowledge and understand the biases present in foundation models and LLMs, as well as in classical ML, treating bias as universally negative means we may overlook the possibility of working with bias as a feature, not a bug, to achieve deliberately positive outcomes.
I believe AI systems' biases can be recognised, managed, and sometimes even leveraged constructively. Through my work with the Diversity and Inclusion in AI team at CSIRO, I've discovered that LLMs, trained on vast amounts of diverse data, can actually help explore perspectives different from our own, potentially broadening our understanding of various human experiences and needs.
So the opportunity space, for me, is working with bias.
To make this more concrete, consider a use case in recruitment. In recruitment for leadership roles, we might find that an AI system reflects historical biases toward male candidates. Instead of just trying to neutralise or “eliminate” this bias, we can deliberately engineer the system to recognise these patterns and implement corrective weighting - essentially using our understanding of the bias to create positive action. Or we might even work with an LLM when we’re writing a job ad, using an understanding of gendered language patterns to identify and rewrite job advertisements to be more inclusive. Through prompting, an LLM could recognise traditionally masculine-coded language and suggest more balanced alternatives that appeal to all candidates while maintaining the role's requirements. We could even use the system to review required criteria and highlight where unnecessarily rigid requirements might be deterring diverse candidates.
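To illustrate the kind of prompting described here, below is a minimal sketch of how a job-ad review could be wired up. This is an illustrative assumption, not Sarah's actual tooling: the `call_llm` wrapper, the prompt wording, and the sample ad are all hypothetical placeholders for whichever chat-completion client you happen to use.

```python
# A minimal sketch (illustrative only): prompting an LLM to flag
# masculine-coded wording in a job ad and suggest more inclusive rewrites.
# `call_llm` is a hypothetical placeholder for your chat-completion client.

SYSTEM_PROMPT = """You are reviewing a job advertisement for gendered language.
1. List any words or phrases commonly associated with masculine-coded
   language (e.g. "rockstar", "dominant", "relentless").
2. Suggest a neutral rewrite for each, preserving the genuine requirements.
3. Flag any 'required' criteria that look unnecessarily rigid."""


def review_job_ad(ad_text: str, call_llm) -> str:
    """Return the LLM's review of a job ad for gendered language.

    call_llm: a function taking (system_prompt, user_prompt) and returning
    the model's text response - a thin wrapper around whatever client you use.
    """
    return call_llm(SYSTEM_PROMPT, f"Job advertisement:\n\n{ad_text}")


if __name__ == "__main__":
    sample_ad = (
        "We need a rockstar engineer who dominates the competition. "
        "Must have 10+ years of Python and thrive under relentless pressure."
    )
    # Example wiring with a hypothetical wrapper around an LLM client:
    # print(review_job_ad(sample_ad, my_llm_wrapper))
```

The point of the sketch is the framing: rather than pretending the model has no knowledge of gendered language patterns, the prompt deliberately asks it to surface them and use them constructively.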
Seeing as we’re unlikely to eliminate bias in LLMs, this approach shifts us from blanket condemnation and avoidance to critical engagement with bias - using it as a lens to understand societal patterns and actively work toward more equitable outcomes.
Who do you follow to stay up to date with what’s changing in the world of data & AI?
For “fast updates” about the day-to-day developments in the AI ecosystem, I like The AI Daily Brief for big tech news and AI advancements, and I follow Australian thought leaders like consumer rights advocate Kate Bower and Sam Burrett for digestible posts on the flood of reports on AI and productivity.
I also enjoy “slower” reading of books like Co-Intelligence: Living and Working with AI by Ethan Mollick, or The Singularity Is Nearer by Ray Kurzweil for practical and speculative provocations for living better with AI.
And sometimes, I think we need to slow down more and read “older” books like Kate Crawford’s Atlas of AI to understand the costs of production and the concentration of power that determine what we experience as “AI”. Another one I love pre-dates our AI fascination: Scott Rosenberg’s Dreaming in Code, which chronicles the problems encountered by software developers on an open-source calendar project. It speaks to patterns we see in AI development today: the tendency to underestimate real-world complexity, the tension between grand visions and practical implementation, and the realisation that development involves not just technical decisions but fundamental questions about how people think and work.
Leaning into your dystopian side for a moment, what’s your biggest fear for/with/from AI in 2025?
My biggest concern for 2025 is how AI is becoming a double-edged sword in cybersecurity. While AI gives us powerful new tools to detect and prevent attacks - like spotting malware that traditional systems might miss - it's also giving hackers new capabilities. They can now use AI to create more convincing scams, develop smarter malware, and even hack into home IoT devices. It's a new arms race: as we develop better AI defences, attackers develop more sophisticated AI attacks.
In the Australian context, I fear that AI may render some of our trusted security systems obsolete, particularly voice biometric identification. We use a “voiceprint” for access to Services Australia, the ATO, and banking services. Recent investigations have shown that with short clips of audio from social media - which most Australians have publicly available - AI can create voice clones capable of beating three popular voice ID platforms. While banks and government agencies promote voice ID as "high-tech security," AI's rapid advancement in simulating voices is creating a serious gap - or at least the perception of one.
And now channeling your inner optimist, what’s one thing you hope to see for/with/from AI in 2025?
My biggest hope for AI in 2025 stems from a dramatic shift I've witnessed in public engagement with AI. Just a few years ago, when I'd mention AI at school pickups or social gatherings, I'd often get blank looks or vague concerns about AI taking over the world. People saw AI as something too technical to engage with, and I worried about how we could foster the kind of broad AI literacy needed for meaningful civic dialogue about its development.
Now, almost everyone I meet has an opinion about AI and its potential impacts on their lives. People raise specific concerns about job security, express worries about AI safety and unintended consequences, or voice concerns about artists' livelihoods and copyright issues. The conversations have become nuanced, informed, and personal.
This surge in public literacy and engagement makes me optimistic because meaningful oversight and responsible AI development require exactly this kind of broad civic dialogue. When the general public is informed and engaged enough to voice their concerns and expectations, it creates the accountability needed to shape AI's future in a way that serves society's best interests rather than just technical or commercial ones. The fact that AI is no longer seen as just a technical issue but as a societal one that everyone has a stake in - that's a powerful foundation for influencing how AI develops in alignment with human values and needs.
You can follow Sarah on LinkedIn or read more about her work at Data Wisely.