Build today for the data you need tomorrow
With a bonus small rant on the fact that many people HAVE been pointing out the current societal harms of AI for quite a while!
Gen AI is eating the world right now, but I’m with Andrew Ng that small data and ‘traditional AI’ will remain critical.
So my 🔥 hot tip 🔥 for the week is to build today for the data you need tomorrow.
If you’re on the founding team of a startup that has made it to product-market fit and is in 📈 growth stage, congratulations! Now make sure you have a clear data coherence and label accumulation strategy. 🗺
Most SaaS businesses handle individual core workflows and their data well, particularly workflows that map to key customer journeys. So the coherence and integrity of the data within a single workflow are rock solid ⛰
But unless you're content to keep handling the basic workflows and to leave the 💲 value-add 💲 data and AI products to others, as you grow you’ll need to concentrate your energy on defining and building coherence and integrity in the data across workflows.
And on harnessing the work of your millions of users to generate the labels that represent correctly completed small tasks ✅
Cross-workflow coherence is the 🏗 fundamental building block 🏗 of both scale and data network effects, allowing the best SaaS businesses not only to collect a lot of data but to learn from that data across the usage behaviours of many millions of users.
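To make that a little more concrete, here’s a minimal sketch of what cross-workflow coherence can look like in practice. Everything in it is illustrative (the billing and support workflows, the event fields, and the customer_id key are hypothetical examples, not anyone’s actual schema): the point is simply that every workflow emits events keyed to the same canonical entity IDs, so their data can be joined and learned from together later.

```python
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

# Hypothetical events from two separate workflows. The design choice that
# matters: both carry the same canonical customer_id, so they can be joined.

@dataclass
class InvoiceCreated:          # billing workflow
    customer_id: str
    invoice_id: str
    amount: float
    created_at: datetime

@dataclass
class TicketResolved:          # support workflow
    customer_id: str
    ticket_id: str
    resolved_at: datetime

def cross_workflow_view(invoices, tickets):
    """Assemble a per-customer view that spans workflows.

    This join is only possible if every workflow agrees on the same
    canonical IDs and timestamps -- i.e. if the data is coherent.
    """
    view = defaultdict(lambda: {"invoices": [], "tickets": []})
    for ev in invoices:
        view[ev.customer_id]["invoices"].append(ev)
    for ev in tickets:
        view[ev.customer_id]["tickets"].append(ev)
    return dict(view)
```

The payoff only shows up later, when you want to learn from behaviour that cuts across workflows, and by then it is painful to retrofit shared keys.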
And hallucination-free, human-created labels representing ground truth are your most commercially valuable and defensible moat. 🏝 🏝 🏝 🏝
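And a similarly hedged sketch of label accumulation, again with illustrative names only: whenever a user completes (or corrects) a small task, record the outcome as a labelled example against those same canonical IDs, so the ordinary work of your users quietly accumulates into training data you own.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class GroundTruthLabel:
    """One human-verified outcome of a small task, kept for future training."""
    customer_id: str             # same canonical ID used across workflows
    task_type: str               # e.g. "invoice_categorisation" (illustrative)
    model_suggestion: Optional[str]  # what the system proposed, if anything
    human_outcome: str           # what the user actually did or accepted
    labelled_at: datetime

def record_task_completion(store, customer_id, task_type,
                           model_suggestion, human_outcome):
    """Turn an ordinary completed task into an accumulated training label."""
    label = GroundTruthLabel(
        customer_id=customer_id,
        task_type=task_type,
        model_suggestion=model_suggestion,
        human_outcome=human_outcome,
        labelled_at=datetime.now(timezone.utc),
    )
    store.append(label)  # in real life: an append-only table, not a list
    return label
```

The detail worth copying is not the schema but the habit: labels land in an append-only store keyed to the same IDs as everything else, so each one stays joinable to the cross-workflow context it was created in.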
And now for just one small rant. There was a great article recently in Nature decrying the AI doomerism that has been all the rage in certain circles. The editorial points out beautifully that:
The idea that AI could lead to human extinction has been discussed on the fringes of the technology community for years. The excitement about the tool ChatGPT and generative AI has now propelled it into the mainstream. But, like a magician’s sleight of hand, it draws attention away from the real issue: the societal harms that AI systems and tools are causing now, or risk causing in future. Governments and regulators in particular should not be distracted by this narrative and must act decisively to curb potential harms. And although their work should be informed by the tech industry, it should not be beholden to the tech agenda.
However, could we also acknowledge that a whole lot of researchers HAVE been pointing out these current societal harms?
For years! And have often been fired for doing so.