Terminally Onchain by YB

Gurley's Sun-Oracle analogy for Enterprise AI

Last call for IRL NYC event, open source governors, consumer competitor sets, & a curated X list for all of you to pin

YB

Welcome to the 27 new members of the TOC community!

I hope all 9,629 of you had a great week!


Alright, the weather is beautiful outside, so I'm going to keep this short. Let's all go touch grass this weekend; spring is here and the seasonal depression is finally over.

Before I get into the takeaway of the week, a few updates.

IRL Events

In last week's post, I gave my reasoning as to why attending and hosting IRL events is going to be increasingly important. And I challenged myself to organize an event in NYC before the end of the month.

So on Sunday, I tweeted this...and omg, having a bit of distribution is a game changer. In less than an hour, there were 50 registrations and the event was fully booked.

If you're in the city and want to attend, please sign up on the Luma and reply to this email so I can take you off the waitlist. Special privileges for people who actually read Terminally Onchain 🤝

Something unexpected that happened because of the tweet above is that I had other event organizers reach out and invite me to their events as well!

So this week, I went to my first two AI meetups. Not going to lie, I was a bit nervous and felt the same excitement I did when I first started going to crypto conferences a few years ago.

Both events ended up having demos and discussions related to MCP (Model Context Protocol). I got a ton of interesting insights but will wait to share them until next week, after I do some research and get answers to a few questions I have. The point, though, is that developers seem locked in on MCP server development, and we should be paying attention to what's happening there as well.

One thing I'm curious about is how "far behind" the AI discussion in NYC is compared to the meetups in SF. If any TOC readers are in the Bay Area and have context, I'd love to get more insight on this!

I also learned about ainyc.net in case you're interested in signing up for their list of IRL events.

More chats, more context

On Monday morning, I was a bit frustrated because of how many companies there are in the crypto x AI vertical.

I had the realization that I had learned enough of the basics on AI / DeAI and it was time to start having conversations with others in this vertical. If I stick to just reading, I'll have a limited perspective and miss some important insights.

So, I reached out to 5 friends to get started and ended up having some awesome coffee chats & phone calls. Got new alpha, resources, and directions for further research. The 2-3 hours of conversations helped save soooo much time filtering out the noise.

I was pretty quickly able to put together my "golden basket" list for DeAI with their help. And I already have a packed week of calls next week with some of the teams mentioned below.


Worth noting that Pluralis just announced their seed round led by USV and CoinFund this week.

Also, it became clear to me that I seriously need to understand RL (reinforcement learning) and why it's game-changing for DeAI. Multiple smart people all pointed to this as the next big shift. That's going to be the focus for next week.


Twitter list for open source & DeAI

I finally took some time this week to go through my recent follows and make a Twitter list of people I believe are worth keeping tabs on when it comes to open source, decentralized, and distributed AI.

There are 50 or so folks in it right now, but I'll be slowly adding more over time. Enjoy!


Enterprise AI will inevitably shift to Open Source

Many longtime TOC readers know that I'm a sucker for tech history.

While eating lunch on Wednesday, I was watching this Bill Gurley and Brad Gerstner discussion. Around the 14-minute mark, Gurley gives a beautiful analogy for why corporations ("smart buyers") will start migrating to open source solutions.


Basically, in the '90s, as the number of internet companies took off, high-traffic businesses were willing to pay for the Sun + Oracle server-and-database stack. It was the professional thing to do, and every company wanted the top-of-the-line server solution. This lasted throughout the dot-com bubble.

However, in parallel, the open source community was relentlessly building out Linux and MySQL. And by the early-to-mid 2000s, the "LAMP" stack (Linux, Apache, MySQL, PHP) had become robust enough that the new set of startups could bypass the Sun stronghold and just get started for free. Think MySpace, Facebook, etc.


For the majority of customers, open source's "meets expectations + trivial cost" beats first-class B2B solutions.

I also asked o1 for some numbers on Linux vs. Unix dominance over the '90s and 2000s. Look at how different the server market share looked after just one decade.


Based on that analogy, it's not crazy to think that corporations will have a lot more say in which set of models they use over the next 5-7 years. No different from how enterprises had multiple cloud solutions to pick from in the mid-to-late 2010s, after AWS dominated the first couple years of cloud computing.

Bill's main point is that if we really are moving toward 99% of compute being inference, and other layers of the AI stack take on more importance as models get commoditized, then the enterprise AI framework probably shifts to:

  • open source general models

  • Fireworks AI / OpenRouter-style model orchestration solutions for inference

  • company-specific tooling

  • fine-tuning with private data using cloud providers
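To make those layers a bit more concrete, here's a minimal sketch (my own, not from the talk) of what a request into that kind of stack could look like. The model names and the `models` fallback field are illustrative assumptions modeled on the OpenAI-compatible APIs that orchestrators like Fireworks AI and OpenRouter expose; the point is just that the enterprise picks from several open-source models rather than being locked to one vendor.

```python
# Hypothetical sketch: an OpenAI-style chat-completions payload sent to an
# inference orchestrator, with a fallback list of open-source models.
# Model identifiers here are illustrative, not an endorsement of any vendor.

def build_inference_request(prompt: str, models: list[str]) -> dict:
    """Build a chat-completions payload where the orchestrator tries the
    primary open-source model first, then the listed fallbacks in order."""
    primary, *fallbacks = models
    return {
        "model": primary,       # open source general model (layer 1)
        "models": fallbacks,    # orchestrator-side fallback routing (layer 2)
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_inference_request(
    "Summarize our Q3 support tickets.",
    ["meta-llama/llama-3-70b-instruct", "deepseek/deepseek-chat"],
)
```

Company-specific tooling and private-data fine-tuning (layers 3 and 4) would sit on either side of this call: tools shape the prompt going in, and the fine-tuned model checkpoint is what the `model` field would point at.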

This also lines up exactly with what Satya Nadella said on a recent Dwarkesh episode:

"Consumer markets sometimes can be winner-take-all, but anything where the buyer is a corporation, an enterprise, an IT department, they will want multiple suppliers. And so you got to be one of the multiple suppliers.

That, I think, is what will happen even on the model side. There will be open-source. There will be a governor. Just like on Windows, one of the big lessons learned for me was, if you have a closed-source operating system, there will be a complement to it, which will be open source.

And so to some degree that's a real check on what happens. I think in models there is one dimension of, maybe there will be a few closed source, but there will definitely be an open source alternative, and the open-source alternative will actually make sure that the closed-source, winner-take-all is mitigated."

And in terms of consumer AI, the analogy was more akin to the search wars. Meaning it is, in fact, possible that consumer AI is a winner-take-all game.


But! The key point Gurley made, which is worth taking note of, is the competitor set for each "war" in the image above. For search, it was pretty clear that Google's PageRank model was superior, and the rest were falling behind quickly from both a business and a technical perspective.

By contrast, in the current AI wars, the set of logos on the right all hold some advantage, to the point where it's simply not possible to eliminate anyone just yet.

Obviously, ChatGPT has impressive numbers and "normie mindshare".


But you also have Elon, Zuck, and Pichai quickly following up with their own breakthroughs and using their deep pockets to deepen the generative AI advertising moat.

As Ben Thompson explained last October in his bullish Meta AI post:


Not only that, but the last couple of months have shown us that Chinese competitors such as DeepSeek and Baidu's Ernie are not only on the racetrack but running side by side, just a few lanes away.


So what are the key takeaways? Well, for me:

  1. On the enterprise side, I need to seriously lock in on companies like Fireworks, Hugging Face, and Together AI. That means understanding their business models, how they're tailoring hosting and inference solutions for enterprise customers, their feedback loops with the open source community, etc.

  2. And on the consumer side, of course it's about keeping up with the numbers. But more importantly, it's about having an on-the-ground pulse of what different audience segments think about model performance.

    For example, at one of the AI meetups I went to this week, the developers all agreed that Claude 3.5 Sonnet was in fact better than 3.7. They all thought 3.7 was too "one-shot" oriented and would often double down on the wrong direction too confidently.

    Another example is talking to people outside of tech and getting their views on things like Google's AI answers or how they're using GPT. Random example, but I recently found out my mom uses the WhatsApp AI (powered by Llama) a couple of times per week! I was shocked to see her use it so casually.


That's all for today's post, I'll see you all next week!

Hope all of you have a great weekend :)

- YB
