The end of 2023 wasn’t great for the self-driving car industry. Cruise had a crisis of competency quickly followed by a crisis of integrity. And then we all discovered that those autonomous cars just weren’t as autonomous as we had thought, with remote operators intervening every four to five miles driven.
Self-driving cars feel like they have been just around the corner for a decade or two. But we’re not there yet with the technology. Even the boldest companies are only game to operate in very specific conditions: no snow, and preset geographic areas with heavily documented roads and maps that are regularly and minutely updated. And then you start to look at the cost.
According to MIT Technology Review, a robotaxi ride can be “several orders of magnitude more expensive” than a traditional ride with a human driver. As the article outlines, robotaxis aren’t cheap. They’re fairly high-end cars, and they carry a lot of expensive and still pretty custom sensor equipment as well as software that has taken a lot of time and energy to develop. Worth it if you can scale the service, perhaps, but as yet that scale has proved elusive.
So far, a rather extraordinary amount of money has been spent chasing the self-driving dream but, while really compelling demos abound, a scalable service that can be cost competitive with existing offerings (i.e. humans driving factory-standard Toyota Corollas) still seems well out of reach.
Even if you can, it doesn’t mean you profitably can
If 2023 was the year we were all completely gobsmacked by how literate ChatGPT was, perhaps 2024 will be the year we pay more attention to the underlying economics.
At Davos last month, Satya Nadella and Sam Altman both avoided a direct answer to the question “Do you guys make money with this?”, choosing instead to focus on how close we might be to a ‘magical moment’ akin to the introduction of the personal computer. A good hint that the answer to the moderator’s very reasonable question is, at this stage, a factual ‘no’ or an optimistic ‘not yet’, despite the 66 percent increase in Microsoft’s share price since the release of ChatGPT in November 2022 (roughly three times the rise in the overall market).
And that’s arguably with an economic model that to date doesn’t reflect the true cost of building and operating these systems. I’ve lost count of how many copyright cases are making their way through various courts at the moment. Some are ending in licensing deals, and many more probably will. Hopefully at least a couple will actually nudge large economies like the US and the EU into clarifying what copyright means in the age of AI and what’s actually permissible.
What seems clear right now is that, just as Uber ran well ahead of legislation in the belief that if it became ubiquitous enough it would force a law change, OpenAI (and hence Microsoft) made a very conscious decision to ask for forgiveness rather than permission and built products on top of whatever data it could scrounge.
As Gary Marcus recently wrote, “… in reality, OpenAI painted a false dichotomy. The choice is not between them building AI or not, it is between them building AI for free or building AI by paying for their raw materials.”
Yes, a fair bit of machine learning research has in the past been carried out using data that was not entirely within the public domain. But research IS DIFFERENT from making money with a commercial product, however much folks with strong financial incentives might like us all to believe otherwise.
Does it have to be a monopoly to make business sense?
Late last year, I read an eye-opening article from Tim O’Reilly and collaborators at University College London on algorithmic attention rents in online platforms.
We explore the way that as platforms grow, they become increasingly capable of extracting rents from a variety of actors in their ecosystems – users, suppliers, and advertisers – through their algorithmic control over user attention. We focus our analysis on advertising business models, in which attention harvested from users is monetized by reselling the attention to suppliers or other advertisers, though we believe the theory has relevance to other online business models as well. We argue that regulations should mandate the disclosure of the operating metrics that platforms use to allocate user attention and shape the “free” side of their marketplace, as well as details on how that attention is monetized.
Algorithmic Attention Rents: A theory of digital platform market power
It’s an absolute cracker of a read, which I highly recommend. I walked away with a much greater appreciation of the power of compulsory reporting, and of how we might create a genuinely flatter playing field for players and investors of all sizes if we tighten up the disclosure of operating metrics on all sides of platform- and aggregator-style businesses.
The lack of disclosure of operating metrics for the free side of internet aggregators is a gaping hole in the regulatory apparatus. Costs, revenue, profit, and other financial metrics may be sufficient to understand a business based on tangible inputs and outputs, but are not fit for purpose for information businesses whose assets and activities are largely intangible and whose market power is exercised through delivery of services that are free to consumers.
Given the size of the major internet companies, these metrics need to be highly disaggregated, both on a geographical and product basis. Google parent Alphabet alone has more than nine free products with more than a billion users, yet it reports only one major business – “advertising.” The connection between its revenues and the underlying free products and services is completely opaque.
Meta too discloses little disaggregated information about products such as Facebook, Instagram, WhatsApp, and Messenger, each with billions of users. This pattern is repeated across the industry.
It’s also super interesting to see that both the Federal Trade Commission in the US and the European Commission are looking at investments in AI tech companies to assess whether some recent deals might be reviewable under existing merger regulation.
To conclude
But Kendra, does this matter if I’m not a world-bestriding mega cap? Well yes, I think it does. I’m still seeing a number of non-technical executives worryingly conflate Generative AI with ‘cost savings’, while at the same time all the operators I know who are working on Gen AI implementations are reporting an emerging morass of edge cases and technical molasses that require all the usual kit bag of data and software engineering skillsets to work around and patch over. And none of that comes cheap when it has to be bulletproof and scalable.
So experiment, learn, give it a go. But if you’re not doing that with a firm eye on the commercials, don’t say I didn’t holler at you from the sidelines.