Some Thoughts on DeAI

Some truth, some thoughts, and how I see the future

Disclaimer: Before You Start Reading This Blog Post

The initial motivation for this post was to coincide with the launch of the new AI Arena. However, I had been considering writing about AI x Web3 for some time. So, why not cover both topics in one article? Here we are.

Before you read this, here's some background about me to help you understand my perspective. I believe this is vital, as it will help you understand the points I make and why I make them. I did not study AI professionally (I did not major in AI). My relationship with AI began during my undergraduate studies, where I worked with MATLAB and computer vision. That was when I first learned about CNNs, RNNs, OCR, and neural networks. Later, I attended the University of Edinburgh for my master's degree in High-Performance Computing with Data Science. While I did not directly study machine learning or artificial intelligence, my studies were closely connected to it. HPC is all about computation: building centralized computation clusters that are more efficient and cost-effective. During my studies, I spent 80% of my time on Cirrus, one of the UK's national supercomputers (with 152 V100 GPUs). We wrote low-level C/CUDA code, and parallelism was a daily topic of study. By the way, you can also parallelize Python in four different ways (one possible set of four is sketched below). So, this blog is written from a computational perspective.
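
As a minimal illustration (the post doesn't name its four, so this is one plausible set): threads, processes, the `concurrent.futures` API, and MPI via `mpi4py`.

```python
# A minimal sketch of four common ways to parallelize Python.
import threading
from multiprocessing import Pool
from concurrent.futures import ProcessPoolExecutor

def work(x: int) -> int:
    return x * x

if __name__ == "__main__":
    data = list(range(8))

    # 1. threading -- concurrency for I/O-bound work (the GIL limits CPU gains)
    threads = [threading.Thread(target=work, args=(n,)) for n in data]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # 2. multiprocessing -- true CPU parallelism across processes
    with Pool(processes=4) as pool:
        print(pool.map(work, data))

    # 3. concurrent.futures -- a higher-level API over the same idea
    with ProcessPoolExecutor(max_workers=4) as executor:
        print(list(executor.map(work, data)))

    # 4. mpi4py -- distributed-memory parallelism across nodes
    #    (run with `mpirun -n 4 python this_script.py`; needs an MPI install)
    # from mpi4py import MPI
    # print(MPI.COMM_WORLD.Get_rank())
```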

Lastly, this article was written on May 25th, so I will only use information available before that date to support my points. Any announcements made after this date should not be considered while reading this blog.


Blog Structure

I will follow the structure below to express my views on AI x Web3:

  1. Model, computation, and data - the three key factors

  2. Breaking down machine learning steps - how we can decentralize machine learning

  3. What is Blockchain x AI?

  4. Conclusion

Model, Computation, Data - The Three Key Factors

We understand that artificial intelligence (AI) comprises three elements: the model, computation, and data.

  • Model: This represents the algorithm or architecture that defines how we process and learn from data.

  • Computation: This refers to the necessary hardware or computational resources, such as CPUs, GPUs, TPUs, or cloud clusters, required to run the models.

  • Data: This is the information used to train and test the performance of an AI model.

Dissecting these three factors, we observe numerous innovations and projects that take computation and data, add a decentralized character to the work, and incentivize workers based on proof of their contributions. This approach makes sense to me. However, it seems we're drawing a blueprint larger than reality. Statistically, few companies within the traditional AI sector have a strong revenue stream. Yet in the Web3/crypto realm, this critical viewpoint seems to have vanished; we're no longer focused on the revenue stream. I just want to highlight some of the issues I've noticed.

Secondly, I believe the openness of the token market plays a significant role in causing some of the issues I foresee. When we talk about Web3, we must consider both the technological impact and the financial impact of tokens within the entire ecosystem. In a traditional demand-supply relationship, companies usually need product-market fit (PMF), which leads to revenue streams and potentially an IPO later on. In crypto, however, everything is different: PMF is driven by user acquisition, focusing less on what users need and more on the expectation of incentivization.

We cannot definitively say whether this is right or wrong, but it strongly feeds the bias toward "AI meme coins." In my personal opinion, we are still very early in the field of AI x Web3. The whole AI industry is in its early stages with generative AI and interactive AI (kicked off by GPT-3.5). Many developments seem more like experiments than mature directions.

Lastly, questions on decentralization and entry. The current state of Web3 x AI is isolated from the AI sector. Some recent experiences really drew my attention to the whole user-experience flow. Honestly, if I were not a crypto user, I would have given up. I was testing our validation node deployment on the Akash network. I think Akash provides a great user experience and product; the action flow from selecting a template to deployment is great. What hampers the experience is getting the token. Due to exchange listings and my KYC region, I could not easily purchase $AKT. Instead, I first needed to purchase $ATOM, transfer it to my Keplr wallet, and then swap on Osmosis. And because Akash is built on the Cosmos ecosystem, it has its own chain, as does Osmosis. The $AKT I got on Osmosis only lived on the Osmosis chain; I needed to bridge it again to the Akash network. Ultimately, getting the token took me around 1-2 hours. It's very inconvenient, in my opinion. But this is the reality. I believe we need a better entry point to the whole system. Of course, Akash is not a typical example; most projects live in EVM-compatible ecosystems, but still.

Breaking Down Machine Learning Steps

We have talked about the three key factors of machine learning, but how do they link together? In this section, we will segment machine learning into steps and connect each step back to the factors.

In the previous section, we discussed the three factors. You may be confused about how they relate to each other, what AI exactly is, or how we create an AI. Hopefully, this section will resolve that confusion.

To create an AI model, we need to go through the machine learning/deep learning process. Both help us turn data into a model via algorithms. Academically, the machine learning process breaks down into four stages: data collection, data processing, training, and evaluation, with data, computation, and the base model as the key factors throughout.

Let's delve into the four stages of machine learning. The data collection stage often requires manual labour, such as web scraping and survey collection. This is followed by data processing, which involves data cleaning and corrections. The training phase then begins, relying heavily on two things: powerful hardware like GPUs and low network latency for efficient handling of large datasets. Lastly, the evaluation stage assesses the model's performance using different metrics. A minimal sketch of the full pipeline follows below.
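
To make the stages concrete, here is a minimal sketch in Python; the dataset and model choices (scikit-learn's bundled iris data and logistic regression) are my own illustrative picks, not something prescribed by the process itself.

```python
# A minimal sketch of the four machine learning stages with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Data collection -- a bundled dataset stands in for scraping/surveys
X, y = load_iris(return_X_y=True)

# 2. Data processing -- cleaning and corrections; here, feature scaling
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 3. Training -- the compute-heavy stage (GPUs matter at real scale)
model = LogisticRegression(max_iter=200).fit(X_train, y_train)

# 4. Evaluation -- assess the model on held-out data with a metric
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```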

Since we've broken down the stages of machine learning, we can see that some stages lend themselves to decentralization, while for others it may not be feasible. Each stage requires different resources and actions, but what makes a stage suitable for blockchain involvement, from my point of view, comes down to two things: incentivizing people based on their contribution, and validating their work or tracing its origin. By this standard, data collection, preprocessing, and model validation seem viable. Below, I explain why training is unsuitable for decentralization.

Returning to the point of decentralizing the stages: in general, decentralizing or distributing a process hurts performance and creates more non-ideal issues. However, with decentralization, we gain a permissionless environment that offers better sovereignty for individuals. For example, we see companies working on data labelling or data collection. These directions require people and incentives, and they can create a positive cycle between the user and the application. The same holds for data preprocessing and model validation, where I see the opportunity to decentralize and involve more parties; model validation, for instance, could operate similarly to proof of stake (a rough sketch follows below).
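
As a thought experiment, here is a rough sketch of what stake-weighted model validation might look like; the mechanism, names, and numbers are hypothetical illustrations, not any live protocol.

```python
# Hypothetical sketch: stake-weighted voting on a claimed model score,
# loosely analogous to proof of stake. Not any real protocol.
from dataclasses import dataclass

@dataclass
class Validator:
    name: str
    stake: float           # tokens staked; the weight of the vote
    reported_score: float  # accuracy this validator measured independently

def accept_model(validators: list[Validator], claimed_score: float,
                 tolerance: float = 0.02) -> bool:
    """Accept the model if validators holding a majority of stake
    reproduce the claimed score within `tolerance`."""
    total = sum(v.stake for v in validators)
    agreeing = sum(v.stake for v in validators
                   if abs(v.reported_score - claimed_score) <= tolerance)
    return agreeing / total > 0.5

validators = [Validator("a", 100, 0.91), Validator("b", 60, 0.90),
              Validator("c", 40, 0.75)]  # "c" disagrees
print(accept_model(validators, claimed_score=0.90))  # True: 160/200 stake agrees
```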

From my perspective, the role of decentralization in model training is minimal. The primary question is how decentralization can be effectively applied to training. While some might argue for decentralized computation, I question the necessity of transitioning away from a centralized framework; I don't see the need. Training depends on frequent gradient synchronization, which demands the low-latency, high-bandwidth interconnects found inside tightly coupled clusters. GPU clusters exemplify extreme centralization, so why decentralize the computation part? What the industry calls decentralized computation is, I believe, really an open GPU matchmaking market that competes on price with traditional cloud service providers. In a way, these markets are no different from cloud services. The back-of-the-envelope numbers below show why geographically distributed training struggles.
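
A quick calculation makes the point; the model size and link speeds here are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope: per-step gradient synchronization cost.
# Model size and link speeds are illustrative assumptions.
params = 7e9        # a 7B-parameter model
bytes_per_grad = 2  # fp16 gradients
payload_gb = params * bytes_per_grad / 1e9  # ~14 GB exchanged per step

for link, gbps in [("datacenter-class interconnect", 1600),
                   ("10 Gbps LAN", 10),
                   ("1 Gbps home broadband", 1)]:
    seconds = payload_gb * 8 / gbps
    print(f"{link:>30}: ~{seconds:8.2f} s per sync step")
# Over commodity links, synchronization alone takes on the order of
# minutes per step, which is why serious training stays inside clusters.
```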

So, What Is Blockchain x AI in My Mind?

I believe decentralization in AI plays a strong role in changing production relationships; this part is more of a self-reflection on my understanding and curiosity.

I often ask myself: do we really need blockchain for AI? This question brings a lot of conflict and doubt to my mind. Making things decentralized adds complexity. Sometimes I have a strong bias due to my background in computation with supercomputers, and I question why we would add a decentralized layer to a centralized product. However, as my understanding of decentralization deepens, I realize that combining blockchain and AI may not be about technical innovation at all. (After all, there's a reason why DGX exists.) Traditionally, centralized authorities holding models can lead to black boxes or permissioned-access issues. We certainly want a free and accessible environment for models. But there's also the matter of working relationships: because of the black-box process, we often don't know how a model was trained or produced. Could we use community support to address this?

Let's take a step back. To me, blockchain is a way to renovate production relationships. The instant reward mechanism and transactions have changed how traditional working relationships function. Traditionally, rewards are delayed, which keeps your mind focused on future happiness. With blockchain, rewards are an ongoing allocation process: by participating in the network, providing liquidity, and interacting with protocols, users are rewarded for their actions directly (a toy sketch follows below).
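
A toy sketch of the difference; the action names and reward rates are made up for illustration.

```python
# Toy sketch: immediate per-action reward crediting, instead of a
# delayed payout. Action names and reward rates are invented.
from collections import defaultdict

REWARD_PER_ACTION = {"provide_liquidity": 5.0,
                     "validate_block": 2.0,
                     "interact_with_protocol": 0.5}

balances: dict[str, float] = defaultdict(float)

def perform(user: str, action: str) -> None:
    """Credit the user the moment the action lands, not at month's end."""
    balances[user] += REWARD_PER_ACTION[action]

perform("alice", "provide_liquidity")
perform("alice", "interact_with_protocol")
print(balances["alice"])  # 5.5 -- rewards allocated as an ongoing process
```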

My thesis is that using blockchain to renovate production relationships could help us create an open, permissionless, and trustless environment for AI. This idea may seem utopian, but I strongly believe in the value blockchain brings to traditional AI in terms of incentivization and validation. The idea of a more open and trustless system, where peers can validate each other, appeals to me. However, I have yet to see its practical applications and product-market fit. I currently use AI for two main purposes: correcting my grammar and code, and drafting templates for various tasks. Its usage is quite limited. I'm also a fan of Fireflies, a tool for meeting notes, although I seldom review them.

Conclusion

This blog contains many of my personal biases. I work on an AI x Web3 project and have evaluated many others. While I believe there's potential in the AI x Web3 space, I'm skeptical of companies and projects that claim it's the only way forward. We're still determining its viability: in my observation, around 80% of these projects fail and only 20% succeed, yet many present the space as a surefire path to the future. The field remains experimental.

Lastly, I am pretty upset with the name DeAI; we cannot decentralize a product, guys, only the process. Can we call it DeML instead?
