Helen Frazer on data & AI in 2024 and beyond
"Medicine needs AI. The needs of an aging population with chronic diseases are challenging the system."
I was lucky enough to meet Helen on an advisory board we both sat on and was then delighted to be in the audience when she won Innovator of the Year in the 2022 ANZ Women in AI awards!
An Associate Professor at the University of Melbourne, Helen is a radiologist, breast cancer clinician and AI researcher with over 25 years’ experience leading breast cancer screening services. She is State Clinical Director for BreastScreen Victoria and the Clinical Director for St Vincent’s BreastScreen.
Helen has been pursuing research on the use of AI in screening for a number of years and has authored publications on AI datasets, AI model development and integration in screening, AI breast cancer risk prediction, and the ethical, legal and social implications of AI in healthcare. She is currently leading one of the first randomised controlled trials of AI in medical imaging.
Helen, 2024 was another busy year for data & AI. What’s one development / milestone / news story that really caught your eye?
An area now getting a lot of focus in healthcare research is what we are calling Opportunistic AI. Eric Topol highlighted this in a presentation at the Radiological Society of North America conference in December 2024, now touted as the largest medical AI conference in the world! We are learning that the many screening and diagnostic tests we undertake contain information of value beyond the single function each test was designed for. For example, published papers indicate that retinal imaging may have diagnostic value for conditions including diabetes, Parkinson’s disease, stroke, and heart and liver disease. In my own field, we are developing studies to examine the opportunity to predict cardiovascular disease by using AI to analyse arterial calcifications in mammograms.
You’ve been working in and around data & AI for a while now. Many things have changed! But tell us about something that was true when you started out in this space and is still important today.
The importance of data and ground truths. The story of AI is often told through the lens of the algorithm and the computing power. I was involved in working with early computer-aided detection models for mammography in the early 2000s in the USA, but these failed to improve accuracy. One of the challenges was that the ground truths were derived from radiologist labels rather than from pathological cancer findings or interval cancer records (confirming no cancer in the interval after the test). I continue to see the future opportunities for AI, particularly for high-consequence decision making in healthcare, as being built on well-curated datasets and strong ground truths. Increasingly, the algorithms themselves are non-unique or converging, and the value and advantage come from unique datasets and their curation.
It’s been a heady couple of years with 2024 almost as frothy as 2023. What’s one common misconception about AI that you wish would go away?
Geoffrey Hinton set the “cat amongst the pigeons” in my field with his 2016 statement that we should stop training radiologists, implying radiology would be the first job to be replaced by AI. It is true that the role of the radiologist, and how we train them, needs to change, but what is also clear is that radiology, by the same logic, is at the forefront of the revolution of AI in medicine. Some of the largest human datasets in medicine are in medical imaging, and radiologists are becoming the data clinicians for this new future. It has actually never been a more exciting time to be a radiologist if you harness AI.
Who do you follow to stay up to date with what’s changing in the world of data & AI?
Among my regular sources for tracking developments in healthcare are:
- Eric Topol's Ground Truths newsletter
- Stanford HAI newsletter
Leaning into your dystopian side for a moment, what’s your biggest fear for/with/from AI in 2025?
My fears are twofold. The first is that poorly tested and poorly managed algorithms will be applied to high-consequence healthcare decisions. Current regulatory approvals are not necessarily adequate, and many algorithms are developed with insufficient data size and diversity, and poor ground truths. All need testing (and often further development) with local data, and ongoing local quality management. A major harmful event would set back our overall progress in healthcare.
The second is that we fail to develop our own sovereign capabilities for developing, managing and owning AI services. Just as we have seen the risks of overseas supply chains for medical supplies, relying on independently controlled international entities for the supply and support of algorithms carries risks. And we have some of the best health datasets in the world, which is where most of the value is created!
And now channeling your inner optimist, what’s one thing you hope to see for/with/from AI in 2025?
Medicine needs AI. The needs of an aging population with chronic diseases are challenging the system. I hope to see real exemplars for the transformation of the patient/clinician experience and health outcomes emerge. Such exemplars can showcase our sovereign capabilities to develop, adequately test and quality manage the datasets and algorithms in high consequence settings. I hope our randomised controlled trial of the use of AI in breast cancer screening, one of only a handful of RCTs using AI globally, can be such an exemplar!
You can follow Helen on LinkedIn and read more about her research here.