Mark Moloney on data & AI in 2024 and beyond
"A justification for AI spending is that it will help us with hard things. So let’s try to work on hard things."
Mark is the closest single-person approximation to a full-service technology company I have ever met: strategy, architecture, product, engineering, pricing & sales. He is also the other half of my AI consultancy, Europa Labs.
A technical polyglot, the only language I've ever seen him break a sweat learning was Octave, and that was only because he was teaching himself the finer points of linear algebra at the same time.
I’m especially excited to bring his perspective to this conversation because he is that rarest of breeds - a still-hands-on-keyboard engineer who can also be persuaded to write in English. Where others talk, Mark builds, which makes his take both grounded and pragmatic.
Mark, 2024 was another busy year for data & AI. What’s one development / milestone / news story that really caught your eye?
Ilya Sutskever said what many people were thinking: that LLM scaling is plateauing. In his acceptance talk for the Test of Time Award at NeurIPS 2024 (shared with Oriol Vinyals and Quoc Le), he argued that pre-training as we know it will end: “Data is the fossil fuel of AI.” Meanwhile, the vendor HiPPOs and VC fanboys loudly declare “don’t worry”. Spend big, because whatever issues you’re having today, such as confidently wrong answers, should be resolved within 12, maybe 18 months. And if you don’t fully buy into it, then you’re the problem because you lack vision.
I have been a fan of AI for a long time, and it concerns me that current industry practices will slow progress. On the one hand, there was a real need for Applied AI that wasn’t just focused on publishing the next paper, but on using engineering to integrate AI into practical workflows. On the other, I worry that the pendulum has swung too far. Where once there was an open environment for sharing ideas on AI research, with new model architectures seemingly published every few months, now we have closed environments trying to squeeze money out of the same ideas.
We’re replacing intellectual curiosity with hucksterism. For example, the “scaling hypothesis” that Ilya referenced in his talk (that adding more data and compute would keep boosting model capability) became the “scaling law”, and the industry is happy to let you believe it will just keep going. I tend to agree with Ilya: “Every exponential is a sigmoid when looking back”. When we landed on the moon, we thought we would set foot on Mars within 10 years.
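Ilya’s line about sigmoids can be made concrete with a small numeric sketch (purely illustrative, not from the interview): far below its inflection point, a logistic curve tracks an exponential almost exactly, which is why, mid-curve, the two are hard to tell apart.

```python
import math

def sigmoid(x: float) -> float:
    """Logistic curve: saturates at 1 as x grows."""
    return 1.0 / (1.0 + math.exp(-x))

def exponential(x: float) -> float:
    """e^x: the small-x approximation of the sigmoid, which never saturates."""
    return math.exp(x)

# Well below the inflection point, the two curves agree closely...
for x in (-6, -4, -2):
    s, e = sigmoid(x), exponential(x)
    print(f"x={x:+d}  sigmoid={s:.5f}  exp={e:.5f}  ratio={s / e:.3f}")

# ...but past it, the sigmoid flattens while the exponential keeps climbing.
for x in (2, 4, 6):
    print(f"x={x:+d}  sigmoid={sigmoid(x):.3f}  exp={exponential(x):.1f}")
```

Standing at x = -4, the data alone cannot tell you which curve you are on; only hindsight can.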
You’ve been working in and around data & AI for a while now. Many things have changed! But tell us about something that was true when you started out in this space and is still important today.
In 2008, I co-founded a company that created a chat-based question-answering system using natural language processing and the leading AI techniques of the day. The technology worked well enough. However, technology wasn’t the issue. The issue was changing habits.
For a start, users didn’t like to type in complete questions. They were used to playing “keyword bingo” on Google Search. Less context in the input meant the system reverted to being a search. I’m skeptical that chat interfaces will be the predominant AI interface. I think there is an opportunity for the human-computer interface to profoundly change, but this will take experimentation and time. I think UX for AI will be a key specialisation in 2025 and beyond.
Enterprise search was (and I expect still is) a tough market in which to sell. In theory, 80% of useful data in an organisation exists in documents, intranets, and websites. In practice, people are used to maintaining their own stash of documents on file shares and local hard drives. In 2008, it was a case of introducing a new tool instead of selling a better version of an existing one. The same challenge exists today, which is why we see AI features embedded into existing tools.
Changing habits will play a larger role in selling AI than pure capability.
It’s been a heady couple of years with 2024 almost as frothy as 2023. What's one common misconception about AI that you wish would go away?
There are a few misconceptions I would like to see go away, but the biggest is the idea that Generative AI == AI. Reinforcement Learning has been the unsung hero, and I think we’ll see more of this technique in the evolution of AI going forward.
Who do you follow to stay up to date with what’s changing in the world of data & AI?
I think everyone learns in different ways. I like to build. I like to pick a hard problem and build out the solution over an extended period of months or years. What I build in order to learn almost always then becomes useful in commercial projects. It also builds knowledge in the spaces between specialisations. To build a working application needs a user interface, data layer, etc. So, for an AI-based application, it’s not just the model but all the engineering required to make the model useful.
The downside of this learning style is that it’s time-consuming. I can honestly say that I’ve probably modelled my career around my learning style instead of the other way round. Everyone has to do what makes them jump out of bed in the morning. Being a lifetime student of all aspects of technology is what I enjoy.
Leaning into your dystopian side for a moment, what’s your biggest fear for/with/from AI in 2025?
That AI is used to justify a bunch of bullshit motives. For example, that we need to produce and consume power at odds with climate goals to justify some imagined race for AI superiority. It seems that underneath the current fascination with AI is a lack of confidence in humanity. As a collective, we don’t seem to be managing things as well as we might, but history shows us that to keep trying is a better strategy than putting faith in magical beings.
And now channeling your inner optimist, what’s one thing you hope to see for/with/from AI in 2025?
A gentle correction that sees the tech industry be less like Wall Street and more focused on creating solutions that matter. A justification for AI spending is that it will help us with hard things. So let’s try to work on hard things.
You can follow Mark on LinkedIn, but if you are technically inclined yourself, you’ll learn much more from following him on GitHub.