Throughout history, the greatest advances in technology have led to new waves of startups that went on to build billion-dollar companies. The incumbents often fell, unable to adapt their already-successful products to the new paradigm (The Innovator's Dilemma).
Interestingly, the incumbents seem to be doing pretty well keeping up with the rapid advancements in AI. Google has Bard, and has already incorporated AI into its search, mail, and productivity tools. Microsoft has done the same with Bing Search, Bing Chat, and Microsoft 365 (via its deal with OpenAI, in which it reportedly holds a 49% share of the capped-profit entity). Amazon is investing in hosting high-powered GPUs while experimenting with Amazon Q. Apple is focused on running powerful AI on our devices.
But perhaps most notable is that these companies have amassed a large share of the top AI talent!
So has Big Tech learned how to approach The Innovator’s Dilemma or is this an illusion, with the next 1000 successful companies quietly growing in front of our eyes?
Incumbents Are Winning on the Demand Side
AI is, for obvious reasons, raising consumer expectations. I already expect natural language instruction instead of having to learn a new filter or toolbox system every time I use a new product or website. Recommendations and similarity searches should be 10x better than before.
And from what I can tell, the incumbents are addressing the incremental demand for greater intelligence in existing products. Just about every day I'm impressed by a new AI feature in my everyday tools.
But consumers don’t know they want what they haven’t yet tried, and this is where we will likely see the biggest surprises.
So… What About the Supply Side?
The supply side is where the uncertainty lies. Novel products will create new demand that did not previously exist, and such a powerful technology is bound to lead to many such inventions!
The most obvious example here is ChatGPT.
ChatGPT: One Chat To Rule Them All
ChatGPT was (or is) a novelty. Most of the world was introduced to LLMs for the first time via an intuitive chat interface that seemed to be able to handle any kind of request users could dream up.
And for a while this seemed like great news for 'GPT Wrappers' - apps that provide niche-focused services with ChatGPT or other models as a backend.
But OpenAI pulled the rug out from under its third-party distributor community by launching plugins, the GPT Store, and its 'Create a GPT' product, providing customizable, in-chat services.
Does this still leave space for GPT wrapper products?
A Case for Wrappers
One of the most compelling 'wrappers' that I’ve come across is Consensus, an AI-enabled research engine.
As a personal example, I had heard many times that bilingual kids don’t learn language slower - but could this really be true?
With a quick search, I find this study, complete with a key takeaway, stats, and other standardized info.
But surely this “fact” is not just based on a sample size of 60, is it?
From a Google search, I would never have known to discount the conclusion of this 1993 study with its tiny sample size, but Consensus helped me evaluate it in seconds.
I can imagine similar niche products for news (with standardized objectivity scores, cited source scores, credibility and reputation), social media, marketplaces, and more.
Note: Consensus is doing more than just passing requests to ChatGPT. They have a custom model and ranking algorithm and use ChatGPT to generate the key takeaways.
I recently came across a product that aims to help anyone in the world dream up or customize designs, then facilitates production by quickly sourcing quotes from factories.
What about a market where you can not only design and customize, but also raise funds for new designs? Could we democratize fashion houses, which are traditionally built on networks, and surface the best designs TikTok-style?
In this world, fashion houses, Hollywood, record labels, and more will be disrupted and democratized.
Last year, I used DALL·E to illustrate a children’s book for a Christmas gift. Next year, I might be bringing the characters to life with a set of unique plushies!
This will likely drive an explosion not only in the custom-goods market, as everyone in the world suddenly becomes a designer of anything, but also in the “consumer-to-market” space, with quite a bit of experimentation on the social and financial side.
A Direct Line in the Supply Chain
Another way to look at the above example of custom orders is that the supply chain may look quite different as customization increases.
Marketplaces may not just be match-makers, but communication channels between producers and customers. Might customers be interested in submitting or voting on designs, and getting them first if their chosen designs are most popular? And even sharing some of the revenue?
I, of course, don’t know, but I'm excited to find out.
The above are just a few of many examples of how novel products are emerging from AI. But how many LLMs will there be? Will Microsoft and Google maintain the majority of the market, or will we see a world with millions of interconnected models?
My current take on this is that we will have a polytheist (multi-model) world.
We are already seeing the beginning of delegation models that intelligently route requests to the best and most cost-effective models.
For example, this model saves money by deciding whether to route requests to GPT-3 or GPT-4.
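To make the idea concrete, here is a minimal sketch of such a delegation layer. The model names, the complexity heuristic, and the threshold are all illustrative assumptions, not the actual routing model referenced above, which learns its routing decisions rather than using hand-written rules.

```python
# A toy request router: cheap requests go to a small model, hard ones to a
# frontier model. "cheap-model" and "frontier-model" are placeholder names.

def estimate_complexity(prompt: str) -> float:
    """Crude proxy for request difficulty: longer prompts and
    reasoning-heavy keywords score higher (0.0 to 1.0)."""
    keywords = ("prove", "debug", "refactor", "analyze", "step by step")
    score = min(len(prompt) / 500, 1.0)
    if any(k in prompt.lower() for k in keywords):
        score += 0.5
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.4) -> str:
    """Pick the cheapest model expected to handle the request well."""
    if estimate_complexity(prompt) >= threshold:
        return "frontier-model"
    return "cheap-model"

print(route("What is the capital of France?"))                  # cheap-model
print(route("Debug this concurrency issue step by step: ..."))  # frontier-model
```

A production router would replace the keyword heuristic with a learned classifier trained on which model's answers were actually accepted, but the cost-saving structure is the same.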
We are likely seeing the beginning of an architecture that looks something like this:
From my limited understanding, this will likely manifest as large base models with many fine-tuned variants. And we'll hopefully see much more creativity than this simple flowchart suggests.
One simple example that I think highlights the benefits of a multi-model world: an older model trained a few years ago may be the best choice for writing a block of code in an old version of a language, while newer models may exclude legacy code from their training data.
Who Will The Winners Be?
So will incumbent Big Tech retain a significant percentage of this market as it shifts toward polytheism?
I would currently bet against it. It is not hard to fine-tune a model to be better when its scope is extremely narrow, and if we assume that intelligent routing improves by an order of magnitude, my conclusion is that the best answers will be sourced from a network of models.
There may be a micropayment and distributed tech (and zkML?) thesis here as well, but I’ll leave that exploration for another day.