Eduard Hovy on data & AI in 2024 and beyond
"Like the shine of a new car or a new date diminishes over time, the excitement of Generative AI has steadily evolved into more serious attempts to actually make its capabilities useful in practice."
For those who live in the Melbourne area, I do hope you have visited the beautiful and inviting new facilities at Melbourne Connect, a “collaboration between leading organisations and interdisciplinary institutions aimed at leveraging research and emerging technologies to address global challenges”. Eduard is the Executive Director of Melbourne Connect, as well as a Professor in the University of Melbourne’s School of Computing and Information Systems and a faculty member at CMU’s Language Technologies Institute. So yes, rather a busy man. I was lucky to gather up a little bit of his time to get his take on how AI has evolved over 2024 and what he’s thinking about for 2025.
Ed, 2024 was another busy year for data & AI. What’s one development / milestone / news story that really caught your eye?
Like the shine of a new car or a new date diminishes over time, the excitement of Generative AI has steadily evolved into more serious attempts to actually make its capabilities useful in practice. First-draft software coding, information gathering and synthesis, text summarization, image production, simple data analytics, and other tasks seem to be reliably sped up with appropriate guidance. The key lies in this guidance. Probably the major development over the past year, for me, has been the shift from specialised task-oriented training to more-sophisticated prompt writing. While it may sound a bit mundane, I believe that some sort of ‘prompt programming’ is going to become a big and important part of the education of most people, and I would not be surprised if software programming in a ‘traditional’ coding language like Python will largely go the way of assembly-language programming, something a few specialists do. Many more of us will become ‘Chatbot programmers’, given the ease and convenience of the stylized English-like ‘programming language’ into which prompt programming will evolve, and we will increasingly rely on ever-present GenAI tools to help us through the day.
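To make the idea concrete, here is a minimal sketch of what such a ‘prompt program’ might look like: structured, English-like instructions assembled into a single prompt rather than traditional code. The task, rules, and output format below are hypothetical illustrations, and a real system would pass the assembled prompt to a GenAI model.

```python
# A hypothetical 'prompt program': the "code" is stylized English,
# organised into a task, constraints, and a required output format.

def build_prompt(task: str, rules: list[str], output_format: str) -> str:
    """Assemble a structured, English-like prompt from its parts."""
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        f"Task: {task}\n"
        f"Rules:\n{rule_lines}\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    task="Summarise the attached meeting minutes",
    rules=["Keep it under 150 words", "List action items separately"],
    output_format="A short paragraph followed by a bulleted list",
)
print(prompt)
```

The point of the sketch is the shape, not the helper function: the ‘program’ is readable English, and changing its behaviour means editing sentences rather than code.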
You’ve been working in and around data & AI for a while now. Many things have changed! But tell us about something that was true when you started out in this space and is still important today.
This is a great question! It points to the deeper goals and nature of AI. The early pioneers were not inspired by the desire to build systems that could be used commercially. This is just what AI has become over the years. They were inspired by a desire to understand what intelligence is — what we as humans are, essentially. Many AI researchers (as opposed to AI engineers) still cherish that desire. I do. And I think AI researchers are as a whole quite surprised by how much modern GenAI can actually do, given how simple it is at heart. I don’t think anyone would have predicted that a large neural-network phrasal memory with a ‘chat loop’ that stitches together phrases into sentences would be capable of so many functions, certainly not at such a level of success.
Non-engineer AI researchers now sit with this interesting question: if we simply make the machine’s memory larger and increase its training data, and add certain kinds of meta-pattern learning, will we actually get to human-like reasoning? Is all I really am, or all my 5-year-old child is, a vast pattern-learning machine, with simple factual patterns overlaid by more-abstracted reasoning patterns? If indeed such performance turns out to be achievable, but the machine is so complex that we simply cannot create human-understandable ‘models’ of the underlying reasoning, we would face the same dilemma that neuroscientists face: an explanatory model of the mind/brain so complex that it transcends the human mind’s ability to grasp it.
So the question of intelligence is still open. But for the first time we have an AI system that realistically can claim to have passed the Turing Test: something we can study and test and inspect. And until —or IF— GenAI does in fact achieve human performance, and someone can come up with an accurate explanation of how it does so, the original questions “What is intelligence? What are we?” that animated the AI pioneers —and many philosophers before them— remain tantalizingly out of our grasp.
It’s been a heady couple of years with 2024 almost as frothy as 2023. What’s one common misconception about AI that you wish would go away?
A common refrain is that we have to create new laws to control the construction and use of AI. But you know, we already have laws that govern the way society works, and developments over the past year have made very clear that AI, and even Generative AI, are not actually doing radically new things, they’re just doing old things in more efficient (and not always 100% trustworthy) ways. The existing laws still apply. If your doctor consults an AI before making their diagnosis, and the minutes of your last meeting were compiled by an AI, and the recommendation for your travel schedule or hotel were written by an AI, does any of that absolve the purveyors of this information —the doctor, the EA, the travel agent, or their employers or employees— from their responsibility to be accurate and truthful? Of course not. Even self-driving cars belong to someone. Just as taxi drivers are covered by regulations and insurances held in the names of their employers, so with self-driving cars and buses and trucks.
We should resist the temptation to legislate before we know what the actual issues are – the last thing we want is to stifle Australasian inventiveness with premature regulations, the way the European Union has done. Not only because of the economic hit this would incur for Australasia but also because it leaves the initiative squarely in the hands of large North American companies (some of whom are not as open as their names might suggest). It is in the best interests of society at large if there is an open space for researchers and developers to explore the vast potential of GenAI.
Who do you follow to stay up to date with what’s changing in the world of data & AI?
I would never follow any one source! I read a mixture of (1) technical papers from the GenAI and NLP community, interleaved with (2) standard newsfeed about recent developments, and occasionally enriched by (3) reports from government studies and even information compilers like consultant companies. Anything that catches my eye in one of the latter two prompts me to read a little more from the technical side, just to see if I believe it, and to weed out the simplifications and hype.
Leaning into your dystopian side for a moment, what’s your biggest fear for/with/from AI in 2025?
Talking about hype: one concern stands out way above the others… that people will take seriously the doomer hype about GenAI. The doomers love talking about AGI, “artificial general intelligence”, with all its scary dangers to humanity. But have you ever seen them actually define what they are talking about? Machines with capabilities that exceed ours? What about cars and planes (for transportation), calculators (for arithmetic), hydraulic presses (for force), notebooks and computers (for memory)? We have had superhuman machines for decades, and we have worked out ways to regulate and use them safely. Assembling multiple functions under a single locus of control is not in itself scary, because it does not by itself constitute anything more than a collection of functions. Agentive AI with autonomy is the topic of interest, the capability that points toward a real leap forward.
And now channeling your inner optimist, what’s one thing you hope to see for/with/from AI in 2025?
The ubiquitous presence of GenAI and the ease of ‘programming’ it lead to a development that is still very much in the making, but one that is increasingly foundational: Agentive AI. The creation and availability of dozens to hundreds of specialized GenAI systems, tailored for our personal uses much like our personalized library of apps on our cellphones, will lead to a family of Agentive systems, each one specialized to its task, working together to perform the tasks we need to have done. Their internal communications in English will allow us to check that they are doing the right things, and — if we grant them the power — they will increasingly be able to perform even sensitive tasks for us (like making payments, planning and reserving vacations, and scheduling our daily events). While families of semi-independent AI agents are an old idea in AI, the ability to ‘program’ Agentive LLMs in quasi-English by a vast collection of enthusiasts worldwide unlocks the development of tens of thousands of them. Before the end of this decade, most of us are likely to have our own little collection of Agentive AIs, hosted on the cloud and probably accessed from our phones, ready to perform a growing number of tasks at our request.
Before you object: when last did you use a map instead of ‘programming’ your car’s GPS to guide you somewhere? When last did you do long division? Of course you know how to do these things, but you save much time and enrich your life using such tools.
That’s not such a bad future, don’t you think?
You can follow Eduard on LinkedIn and find out more about his work here and here.