We need more focus on successfully augmenting humans with AI
Thirsty AI and why we should all learn more about the Luddites
About a month ago, I read this excellent preprint from Tim Miller and it’s been swooping around in the back of my head ever since. If you’ve been reading this newsletter for a while, you’ll have heard me muse before on the challenges of building systems where the humans and AI actually assist each other, rather than what too often happens: they end up co-existing in an uneasy and not obviously beneficial truce / standoff.
Tim has clearly been thinking about this too.
In early decision support systems, we assumed that we could give people recommendations and that they would consider them, and then follow them when required. However, research found that people often ignore recommendations because they do not trust them; or perhaps even worse, people follow them blindly, even when the recommendations are wrong.
In the preprint, he speaks about having a friend (‘Bluster’) you can go to for advice, one who has shown excellent judgement in the past, who will always give you a single answer, always with the same brash confidence that it is correct. Alternatively, you can consult another friend (‘Prudence’), with similarly excellent past judgement, who doesn’t provide answers but instead provides feedback: evidence for and against any solution you are considering. Oracle versus counsellor, in effect.
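If it helps to make the contrast concrete, here’s a toy sketch of the two interaction styles in Python. Everything in it is invented for illustration (the skin-lesion case, the 6mm rule, the evidence list); it’s my own rendering of the oracle vs counsellor shapes, not code from the paper.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    description: str
    supports: bool  # True = evidence for the hypothesis, False = against

def bluster(case: dict) -> str:
    """'Bluster': commits to a single answer, offered with flat confidence."""
    # Invented decision rule, purely for illustration.
    return "malignant" if case["lesion_size_mm"] > 6 else "benign"

def prudence(case: dict, hypothesis: str) -> list[Evidence]:
    """'Prudence': given the hypothesis the *human* is considering,
    lays out evidence for and against it rather than answering."""
    evidence = []
    if hypothesis == "malignant":
        if case["lesion_size_mm"] > 6:
            evidence.append(Evidence("lesion exceeds the 6mm size guideline", True))
        if case["regular_borders"]:
            evidence.append(Evidence("borders are regular, more typical of benign lesions", False))
    return evidence

case = {"lesion_size_mm": 7, "regular_borders": True}

print("Bluster says:", bluster(case))  # one confident answer, no justification
print("Prudence on 'malignant':")
for e in prudence(case, "malignant"):
    print(" ", "+" if e.supports else "-", e.description)
```

The telling design difference is in the signatures: Bluster never needs to know what you’re thinking, while Prudence can’t function without your hypothesis, which keeps the human doing the actual deciding.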
The paper is well worth the read, but beyond the proposed research avenues and the specific ideas for an alternative approach to designing decision support, the message that stands out for me, one often overlooked in both research and commercial systems built today, is that we need to be far more thoughtful about the functioning of the complete human + AI system if we’re going to consistently get tangible and sustainable value from augmentation.
Why you should stop using ‘Luddite’ as an insult and take a closer look
I’ve listened to the design podcast 99% Invisible for many years (highly recommend both the podcast and the archive if it’s new to you) and I’m frequently impressed by the way the show manages to find an angle on hot and trending topics that is both fresh and relevant.
This is definitely true of Episode 552, Blood in the Machine, where show host Roman Mars is in conversation with Brian Merchant, author of a new book that explores “how English textile workers in the 19th century rose up against the growing trend of automation and the machines that were threatening their livelihoods”. Ooo, intriguing and rather too relevant for comfort. Queue this one up for your next piece of quiet time.
Prior to listening to the episode, I definitely fell into the camp of ‘folk who know little about the Luddites but generally see them as people who rejected technology and wanted to cling to the old ways’.
Brian’s most disturbing and intriguing takeaway? That’s exactly how the industrial overlords of the day, those who stood to benefit most from the industrialisation of textiles, wanted the general public to think about the Luddites. They not only effectively obliterated the movement, they hijacked its legacy in our collective consciousness. There are a lot of uneasy parallels with the lack of transparency and the corporate storytelling we’re seeing around generative AI and its regulation today. I’ve got the book on preorder!
Agriculture is a big and often overlooked user of water; soon AI might be too
Finally, just in case you missed it, large language models are thirsty little things. Microsoft reported a 34% spike in water consumption from 2021 to 2022, which folks outside the company, at least, are tying to the training of generative AI models.
“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.
The University of California, Riverside team led by Shaolei Ren actually has a couple of papers about this work on arXiv: the widely cited one on the water footprint of generative AI, and a more recent paper on equity-aware geographical load balancing. The latter highlights how easy it is to push the environmental impacts of generative AI onto the parts of the world least able to cope with them (shades of how AI labellers now tend to be nameless, faceless folk with little political clout).
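To get a feel for what ‘equity-aware’ might mean here, a rough sketch with made-up regions, numbers, and a deliberately simplified cost function (the team’s actual formulation optimises over time, capacity, and fairness constraints; nothing below comes from the paper itself):

```python
# Toy equity-aware dispatcher: every figure below is invented for illustration.
datacenters = {
    # region: (relative energy cost, litres of water per request, local water stress 0..1)
    "us_east": (1.00, 2.0, 0.2),
    "arizona": (0.80, 4.5, 0.9),  # cheap power, but a water-stressed region
    "nordics": (1.20, 0.8, 0.1),
}

def dispatch(equity_weight: float) -> str:
    """Pick the region minimising cost plus equity-weighted water harm."""
    def score(region: str) -> float:
        energy, litres, stress = datacenters[region]
        return energy + equity_weight * litres * stress
    return min(datacenters, key=score)

print(dispatch(equity_weight=0.0))  # cost only -> 'arizona'
print(dispatch(equity_weight=1.0))  # equity-aware -> 'nordics'
```

With the weight at zero you get exactly the dynamic the paper warns about: the cheapest region soaks up the load and, with it, the water footprint.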
I also enjoyed this April 2023 Q&A with the research team, which touches on the embodied vs operational water footprint and on why following the sun for cheap power, and hence a low carbon footprint, might actually push up the water footprint.
A lot of food for thought on the unintended, and often deliberately obfuscated, consequences of augmenting ourselves with ever more ‘helpful’ mental labour-saving devices.