
Waning Of The God-Models

The growth trajectory of AI training and innovation has produced groundbreaking results over the last decade, but it raises questions about the long-term sustainability of centralized AI development models. Historically, technological advancement has often followed an S-curve, in which rapid early-stage improvement slows as the technology matures. Moore’s Law is a poor template for the arc of model training and the expected falloff in exponential compute growth: the variables behind its transistor-density/efficiency curve are not analogous to the finite-resources/compute-performance curve that the capital-rich frontier-model titans face in the coming innovation cycles.

In a talk with Time magazine earlier this year, former Google CEO Eric Schmidt, discussing the “threat space,” observed:

“...the current models are growing at roughly four times per generation. Essentially this– they're called scaling laws– they haven't shown any degradation in performance. In other words, it just looks like if you do more and more and more, you get better and better and better, eventually you hit declining returns. We haven't seen that yet so most people that I've talked to in the teams that I manage believe we've got another couple of rounds before we really face these issues. Maybe it's two rounds, maybe it's three rounds, and a round is like 18 months, so that tells you you have three to five years collectively to get our act together.”
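Taken at face value, the arithmetic behind that window is straightforward. A minimal sketch, assuming a flat 4x jump per generation and an 18-month round exactly as Schmidt describes them:

```python
MONTHS_PER_ROUND = 18   # Schmidt: "a round is like 18 months"
GROWTH_PER_ROUND = 4    # "growing at roughly four times per generation"

for rounds in (2, 3):
    years = rounds * MONTHS_PER_ROUND / 12
    scale = GROWTH_PER_ROUND ** rounds
    print(f"{rounds} rounds -> {years:.1f} years, ~{scale}x today's training scale")

# 2 rounds -> 3.0 years, ~16x today's training scale
# 3 rounds -> 4.5 years, ~64x today's training scale
```

Two or three more rounds of 4x growth means frontier training runs one to two orders of magnitude larger than today’s before the declining returns he anticipates set in.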

While the context is Schmidt warning that the global industry needs to address AI threat issues around regulation and open source development, his “leap benchmark” cycle is also useful for understanding the market dominance of what Luke Saunders calls the “God-models” in his recent article published with Delphi Digital on X. Saunders argues that decentralized AI systems offer a solution to the growing ethical and efficiency challenges of the “God-models.” He traces the current trajectory of AI development, predicts the emergence of a “many-models” ecosystem in place of the centralized “God-models” approach, and posits that blockchain technology is the ideal foundation for this decentralized AI ecosystem.

Initial progress in AI has been swift, driven by access to vast amounts of data and computational power. Schmidt’s analysis suggests, however, that while we have not yet reached the plateau of the S-curve, the explosive growth in frontier performance will inevitably decelerate. As model training grows more complex, the returns on additional data and computational power will diminish. Over time, deploying capital to convert energy and infrastructure into data capture and computation becomes inefficient. As those resources grow scarcer and capital seeks out frictionless markets, a “many-models” system of specialized, decentralized AI clusters offers increasingly efficient deployment of compute and more fluid, liquid transactions. This turning point is critical for rethinking AI’s development paradigm as a whole.
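To make the S-curve intuition concrete, here is a toy model (not derived from any published scaling law; the logistic curve and its parameters are illustrative assumptions) in which compute grows 4x per generation while capability saturates, so the marginal gain per generation eventually collapses:

```python
def capability(compute: float, midpoint: float = 1e3, steepness: float = 1.5) -> float:
    """Toy logistic S-curve: capability saturates as compute grows."""
    return 1 / (1 + (midpoint / compute) ** steepness)

compute = 1.0
for generation in range(1, 8):
    compute *= 4  # 4x compute per generation, per Schmidt's figure
    gain = capability(compute) - capability(compute / 4)
    print(f"gen {generation}: compute {compute:>6.0f}x, "
          f"capability {capability(compute):.3f}, marginal gain {gain:+.3f}")
```

Early generations climb the steep middle of the curve; by the later ones, quadrupling compute buys almost no additional capability, which is exactly the regime where lean, specialized models stop being outgunned by scale.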

The Rise of Decentralized AI Systems

In Saunders’ analysis, he envisions a diverse ecosystem of smaller, specialized AI models. These models would be more flexible and cost-effective, designed for specific tasks and markets rather than trying to generalize across a wide array of applications like the current “God-models.” Decentralized AI could democratize innovation, lowering the barriers to entry for smaller players and providing tailored solutions more efficiently than their centralized counterparts.

The dominance of centralized AI models is largely due to the immense resources required to train and maintain these systems. Tech giants such as Google, Microsoft, and OpenAI have secured a near-monopoly on cutting-edge AI by leveraging their vast capital and infrastructure, effectively marginalizing smaller competitors. These centralized models benefit from economies of scale, controlling the majority of the funding, talent, and data necessary for building state-of-the-art AI systems.

However, Schmidt’s expectation that scaling will eventually hit declining returns suggests that the competitive edge of these massive models will erode as the advantages of scale fade, and decentralized models will be well-positioned to compete. These smaller, specialized models will not require the vast overhead of their centralized counterparts and will focus on optimizing performance for specific, narrow tasks. This approach could significantly reduce start-up and iteration costs while enhancing performance for particular applications, making decentralized models an attractive alternative. Moreover, the decentralized model economy has the potential to thrive on Web3/crypto infrastructure, enabling open, permissionless, and censorship-resistant systems. Such infrastructure would support more distributed coordination, data sharing, and access to computational resources, further democratizing the development and deployment of AI systems.

The Role Of Crypto In The “Many-Models” Ecosystem

A decentralized AI ecosystem would rely heavily on Decentralized Physical Infrastructure Networks (DePIN), with blockchain technologies offering a robust foundation for this future. Blockchain networks can provide a transparent, secure method for curating resources, facilitating collaboration across disparate entities, and ensuring equitable access to AI technology. Smart contracts can automate resource allocation and incentivize contributions to decentralized models, distributing rewards more broadly and equitably among the system’s innovators and participants. AI agents with embedded liquidity and algorithmic text-to-action prompting could learn to make, and act on, optimized transactional decisions in markets. With the development of tokenized real-world assets (RWAs), AI agents could even autonomously transact in onchain assets representing virtually any physical or digital commodity. Clusters of AI agents that hyper-specialize in particular intelligence sectors could work with clusters that analyze market conditions, collaborating to process real-time data and execute smart contracts at a speed and volume incomprehensible to a professional human working in the equivalent space and time.
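To ground the idea, here is a deliberately minimal Python sketch of one hyper-specialized agent in such a cluster. Everything in it is a hypothetical placeholder (the `OnchainExecutor` interface, the asset name, the one-rule trading policy); in a real deployment the executor would wrap an actual contract call through a library such as web3.py:

```python
from dataclasses import dataclass
from typing import Optional, Protocol

@dataclass
class MarketSignal:
    asset: str      # e.g. an identifier for a tokenized RWA
    price: float
    volume: float

class OnchainExecutor(Protocol):
    """Stand-in for whatever contract-call layer a real agent would use."""
    def execute_trade(self, asset: str, amount: float) -> str: ...

class SpecialistAgent:
    """One hyper-specialized agent: it watches a single asset and hands
    execution to the smart-contract layer when its rule fires."""

    def __init__(self, asset: str, buy_below: float, executor: OnchainExecutor):
        self.asset = asset
        self.buy_below = buy_below
        self.executor = executor

    def on_signal(self, signal: MarketSignal) -> Optional[str]:
        if signal.asset != self.asset:
            return None  # outside this agent's specialty; another cluster handles it
        if signal.price < self.buy_below:
            # Embedded liquidity: the agent transacts onchain autonomously
            return self.executor.execute_trade(signal.asset, amount=signal.volume * 0.01)
        return None

class MockExecutor:
    """Test double so the sketch runs without a node or a real contract."""
    def execute_trade(self, asset: str, amount: float) -> str:
        return f"tx: bought {amount:.0f} units of {asset}"

agent = SpecialistAgent("tokenized-t-bill", buy_below=99.5, executor=MockExecutor())
print(agent.on_signal(MarketSignal("tokenized-t-bill", price=99.2, volume=10_000)))
```

A “cluster” in Saunders’ sense would be many such agents, each with a narrow specialty, wired to shared data feeds and settling their actions through smart contracts rather than a central operator.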

The Implications of a Decentralized AI Future

The shift towards a decentralized AI ecosystem carries significant implications for the future of AI research, development, and application. First, it could lower the barrier to entry for smaller entities, fostering a more diverse and competitive market. Second, it may drive more tailored innovation, as smaller models are optimized for specific tasks, in contrast with the broad, generalized approach of centralized AI. Finally, a decentralized system could mitigate the risks of monopolistic control over AI technology, promoting a more democratic and transparent approach to AI development.

The current trajectory of AI innovation, characterized by centralized “God-models,” is nearing a point of diminishing returns. Saunders’ vision of a decentralized AI ecosystem, supported by blockchain-based infrastructure, offers a compelling alternative to the dominance of the tech giants. This transition could democratize AI innovation, fostering a more inclusive and competitive environment where diverse, specialized models work together to drive the next wave of AI progress.

Join the movement toward decentralized onchain AI in the AI Protocol Discord


#decentralized #ai #artificialintelligence #blockchain #agentic #depin #web3 #defi