6 Comments
Ibrahim Hamza

Great topic! I worry when terms like “human in the loop” are used in ways that oversimplify some really complex issues.

My sense here is that a lot of the excitement around AI agents is still more dream than reality. Look at autonomous vehicles: it’s that last 2% that prevents true independence/autonomy, which leaves room for us “imperfect” humans. For the foreseeable future, it feels more like “humans in the lead” (sorry to create new jargon!), with AI automating parts of what we do, acting as a resource to make us more effective.

I loved your point/your client’s point about the 1% error cases. Asking humans to only oversee the edge cases is bound to fail. A better frame might be: how do we design AI to partner with us in meaningful ways?

One example: the idea of a “digital me.” Super exciting in theory, but I don’t see it replacing me just yet. Rather, it’s an assistant I can tap into to access my data and make the “actual me” more effective.

I think it was Sam Altman who said something like: AGI may not feel like the sudden shift we’re all waiting for. Adoption will be shaped by culture, trust, and governance, all of which take time. And that’s the real challenge with agents: not just the tech itself, but how we govern, embed, and build trust around them.

Celeste Garcia

I think the nomenclature in general is deeply concerning. The tech is moving so fast that there isn't time to standardize on names, and as you point out, there is a lack of consistency in how terms and phrases are used. I recently saw another usage of "human in the loop," in an article about gig workers cleaning training data for AI models. The data labeling and cleanup is apparently considered the "human in the loop" part of the process, while other steps are automated.

I lament that we haven't determined what the broadest term for the likes of ChatGPT, Claude, Co-pilot, Llama, etc. should be. There are so many terms thrown around, including LLM, Generative AI, Frontier Models, and Foundation Models.

Kendra Vant

Oh, that’s very interesting, Celeste. I’m going to try to track down that third usage you mention - quite a departure from either of the other two.

Celeste Garcia

I just found it! 60 Minutes did a segment on the abuse of Gig Workers in Kenya. Leslie Stahl introduced the story, saying, "There's a global army of millions toiling to make AI run smoothly, they are called 'humans in the loop,' people sorting, labeling and sifting reams of data."

Celeste Garcia

I think I saw it in a trade publication. I will see if I can find it. 😳

Katrina Watson

Great article. I have a similar feeling about Agentic AI: it feels less like human in the loop. I just spent some time researching and booking a holiday, and while I did use ChatGPT to help with the research, I found I got much better results when I worked back and forth with it, for example getting it to compare multiple hotels against my itinerary and personal preferences. I ended up doing probably 80% of the research myself and getting AI to help the other 20% of the time. If that ratio had been switched and I had just gone with its suggestions, the result would have been a boring, expensive holiday, full of bias based on whatever experiences and hotels are commonly talked about online.
