Six years ago I published “The New Moats: Why Systems of Intelligence are the next defensible business model.” In that blog, I postulated that startups would be able to build defensible moats using AI.
In light of all the developments in the past year, I want to revisit this framework and see what still holds true and what has changed.
I won’t rehash every technical breakthrough and change in AI over the past six years. That has been well documented (or over-documented) in countless blogs, Tweet storms, and an infinite number of AI-generated avatars in my Instagram feed.
But I will, of course, point out that we wouldn’t be here without transformer models (first described in the Google Brain paper in 2017), which changed the game for language models. Progress went from gradual to rapid fire in the intervening years, and today’s AI landscape is, to borrow from a recent interview with my Greylock colleague (and Inflection AI cofounder and CEO) Mustafa Suleyman, “surreal.”
Large language models like GPT-4, PaLM 2, and LLaMA are tangible examples of the way AI is becoming the enabling technology of this moment. I’ll point to essays written by my partners Saam Motamedi and Reid Hoffman, which dive into the current and potential impact of foundation models here and here.
Orienting oneself amid this Cambrian explosion of AI startups requires asking a basic question: Where is the enduring value in the market? Or, to riff on the question I asked myself six years ago: What are the new new moats?
To illustrate my thinking, I’ve taken a red pen to the original “Systems of Intelligence” framing, updating and amending predictions, posing new questions, and throwing out others. I hope this exercise helps to keep us grounded as we navigate the current AI hype cycle.
You can listen to a discussion of the key takeaways from this essay on the Greymatter podcast.