
On-site Interview with Huang Renxun: 20 Pointed Questions on GPU Pricing, China Exports, and Pinning Down the AGI Timeline

| The first great thing that artificial intelligence has achieved is narrowing the technological gap.

On March 19, reporting live from San Jose: NVIDIA's GTC conference, the most closely watched AI event in the US tech industry, is in full swing. Today, NVIDIA founder and CEO Huang Renxun sat down with global media to field 20 pointed questions, covering the impact of US-China friction on NVIDIA, plans for exporting GPU products to China, Blackwell's pricing and sales strategy, and the supply-and-demand picture for TSMC's CoWoS packaging.

▲Huang Renxun smiles as he fields a reporter's question.

NVIDIA's latest flagship AI chip, the Blackwell GPU, adopts a dual-chip design, whereas previous generations like the H100 and H200 were single-chip designs, making direct pricing comparisons challenging. Huang Renxun emphasized that different systems would have price differences, and NVIDIA is focusing on the entire data center business, not just chip sales.

Additionally, according to Huang Renxun, Samsung, which had fallen well behind in the HBM3E race, has now caught up. NVIDIA is currently testing Samsung's HBM and has said it plans to use it.

These days the GTC conference looks exceptionally lively. NVIDIA GTC banners flutter around the San Jose Convention Center, and the streets are crowded with attendees wearing the iconic green NVIDIA badges. Some of NVIDIA's partners are adding creative flourishes: Unitree Robotics (Yushu Technology) deployed a pack of robot dogs for playful interactions with attendees, and WEKA parked several eye-catching purple cars nearby, boldly labeled "Now certified with NVIDIA DGX SuperPOD".

▲Near the GTC venue, there are eye-catching purple cars parked along the roadside, adorned with advertising slogans.


Apart from NVIDIA's new products, Huang Renxun also shared his views on a range of topics, including OpenAI's video generation model Sora, OpenAI CEO Sam Altman's chip expansion plan, how to pin down an AGI timetable, whether AI will make coders obsolete, and how to deal with hype. He also responded to a public jab from the AI chip startup Groq.

Especially noteworthy is the back-and-forth with Groq, which is playing out like a drama full of plot twists. Shortly after NVIDIA's GTC keynote ended, Groq, a prominent maker of large-model inference chips, published a post taking direct aim at NVIDIA: it is "still faster." Today, Groq added, "...and it still consumes less power."

Asked for his thoughts at the media session, Huang Renxun responded, "I really don't know much about it, and I can't make an informed assessment... Chips exist to implement software. Our job is to facilitate the invention of the next ChatGPT. If it were just Llama-7B, I would be very surprised and shocked."

The matter didn't end there: Groq founder and CEO Jonathan Ross promptly posted a photo with Huang Renxun on social media: "I've met Jensen before, and his team updated GTC this week specifically in response to Groq, so it seems unlikely that he doesn't know much about Groq. In other words, Groq can run a 70-billion-parameter model faster than NVIDIA can run a 7-billion-parameter model. Experience it: groq.com"

Cutting-edge American AI chip companies clearly attach great importance to GTC and are watching it closely.

Cerebras, which has just released its third-generation wafer-scale chip, held its own Cerebras AI Day today less than a ten-minute walk from the GTC exhibition area. There it announced "the world's fastest AI chip, CS-3, with 4 trillion transistors," a partnership under which Qualcomm was selected to deliver unprecedented AI inference performance, and the groundbreaking of an 8-EFLOPS AI supercomputer built with G42. It also shared its thinking on wafer-scale architecture, the AI capability gap, the challenges facing GPUs, and why large models are best trained on large chips, and released a new multimodal large model.

▲Passing by the Cerebras AI Day venue


In its post, Cerebras didn't pass up the chance to take a swipe at GPUs: "On CS-3, we were able to train at scale with an order-of-magnitude performance advantage over GPUs. And even our largest clusters operate as a single device... Now 👏🏻 applause!"


01.

How much does the friction between China and the United States affect NVIDIA?


1. How do tensions between China and the United States affect manufacturing and systems?

Huang Renxun responded, "Yes, there are two things we must do: one is to ensure that we understand and comply with policies, and the other is to enhance the resilience of our supply chain as much as possible."

The global supply chain is very complex, he said, offering an example: an HGX has 35,000 parts, of which 8 come from TSMC and a large portion come from China, just as in the automotive and defense industries.

He believes the goals of different countries are not necessarily in conflict: "A doomsday scenario is unlikely to happen, and I hope it won't. What we can do is work on resilience and compliance."


2. How has the relationship between NVIDIA and TSMC developed over the past two years, including chips, packaging, and the Blackwell dual-chip design?

Huang Renxun called the collaboration between NVIDIA and TSMC "one of the closest in the industry." NVIDIA's asks are challenging, and TSMC excels at its role. NVIDIA juggles compute dies, CPU dies, GPU dies, CoWoS substrates, and memory from Micron, SK Hynix, and Samsung, with assembly in Taiwan. The supply chain is anything but simple, and managing NVIDIA's workload requires coordination among large companies.

"They are also aware of the need for more CoWoS. We will address all of it," he said. Cross-company collaboration is beneficial, with one company assembling them, another testing, and yet another building systems. Testing supercomputers requires a massive data center at the manufacturing level.

"Blackwell is a miracle, but we must achieve it at the system level. People ask me if we manufacture GPUs like SoCs, but what I see are racks, cables, and switches—this is my mental model of GPUs. TSMC is crucial to us," Huang Renxun said.

3. Regarding TSMC, companies always want more. Could you discuss NVIDIA's supply-and-demand situation this year and next? For example, is NVIDIA's CoWoS demand three times higher this year than last?

"You want exact numbers, that's interesting," Huang Renxun said. NVIDIA's demand for CoWoS is very high this year and will be higher next year, as it is in the beginning stages of AI transformation—only $100 billion has been invested in this journey, and there is still a long way to go. Huang Renxun is very confident in TSMC's growth, saying they are excellent partners and should be where they are now. He believes people are working extremely hard, and the technology is in a perfect position. Generative AI is in an incredible position.

4. How much of its new networking technology does NVIDIA plan to sell into China, and can you share any specifics on integrating other technologies into compute chips for the Chinese market?

"I haven't announced much this year, I'm a bit greedy," Huang Renxun said. This is what we are going to announce. Whenever and wherever we sell to China, there are export controls, so we will consider this issue. For China, we have L20 and H20. We are making every effort to optimize it for certain Chinese customers.

5. As cloud computing companies increasingly develop their own chips, NVIDIA is moving into cloud services. How do you view this? Will their in-house chips affect prices? What is NVIDIA's cloud computing strategy and offering in China?

Huang Renxun replied that NVIDIA produces HGX and sells it to Dell, which puts it into computers and sells those. NVIDIA develops software that runs on Dell machines, creating market demand that helps sell those computers. "We collaborate with cloud service providers to place NVIDIA's cloud inside their clouds," he emphasized. "We are not a cloud computing company; our cloud is called DGX Cloud, but in fact we are part of their clouds. Our goal is to bring customers into the cloud and have them do business on these machines."

"We will cultivate developers, and we will create demand for cloud services," he said. "This has nothing to do with anyone's chips—NVIDIA is a computing platform company and must develop our own developers—this is the reason for GTC's existence."

"If we were an x86 company, why would we still hold developer conferences?" Huang Renxun sharply asked. "What are developer conferences for? Because the architecture is still being accepted, its usage is complex, and we haven't overcome it, so DRAM doesn't need developer conferences, the Internet doesn't need developer conferences, but computing platforms like ours need them because we need developers, and these developers will appreciate that NVIDIA is everywhere in every cloud."

02.

Explaining Blackwell pricing:

Not out to sell GPUs; the data center is the goal


Raymond James analysts estimate NVIDIA's manufacturing cost at approximately $3,320 per H100 and around $6,000 per B200. The cost of the GB200 solution is significantly higher than that of an 80GB single-die GH100. An H100 is priced between $25,000 and $30,000, and the new GPUs are expected to cost 50% to 60% more than the H100.

However, NVIDIA has not publicly disclosed its pricing. This shows even on NVIDIA's official website, which unusually lacks a detailed page for the B200, offering only introductory information for the DGX B200 and DGX B200 SuperPOD; the Blackwell architecture page has not gone live yet.

▲NVIDIA Official Website Directory Screenshot Collage (Green Sections Denote New Products Released at This Year's GTC Conference)


This week, during an interview with CNBC, Huang Renxun revealed that the research and development budget for the new GPU architecture is approximately $10 billion, and the price of the Blackwell GPU is around $30,000 to $40,000. Regarding this matter, Huang Renxun provided additional explanations during today's media briefing:

6. What is the pricing range for Blackwell? You previously said each Blackwell GPU costs $30,000 to $40,000. Also, what proportion of the $250 billion TAM do you aim to capture?

Huang Renxun replied, "I just wanted everyone to have a rough understanding of the pricing of our products and did not intend to quote prices—we are not selling chips, but systems."

He explained that Blackwell is priced differently in different systems; a system is not just Blackwell but also NVLink, in different configurations. NVIDIA will price each product, and pricing will continue to be grounded in TCO as usual. "NVIDIA does not manufacture chips; NVIDIA builds data centers," Huang Renxun emphasized.

NVIDIA builds full-stack systems and all the software, fine-tuning them for high performance, to build data centers. NVIDIA breaks the data center down into many modules, letting customers configure them to their needs and decide how much to buy and how.

One reason is that your network, storage, control plane, security, and management may all be different, so NVIDIA breaks everything down together with you, helps you work out how to integrate it all into your system, and provides dedicated teams to assist.

So this is not about buying chips, and it is not how chips used to be sold; it is about designing and integrating data centers, and NVIDIA's business model reflects that.

As for how much of the $250 billion TAM NVIDIA aims to capture: Huang Renxun said NVIDIA's opportunity is neither the GPU market nor the chip market. The GPU market is fundamentally different from the market NVIDIA is pursuing; NVIDIA is going after data centers. NVIDIA's opportunity is a share of the global data center market, which was roughly $250 billion last year and is growing at 20-25% now that AI has proven so successful. The long-term opportunity is $1 trillion to $2 trillion, depending on the timeline.

7. When building platforms like Blackwell, how do you estimate customers' computing demand? The goal is basically ever more compute; how do you weigh power, efficiency, and sustainability?

"We must figure out the physical limits, reach them, and surpass them," said Huang Renxun. How to surpass them is to make things more energy-efficient, for example, you can train GPT with 1/4 of the power.

A task that requires 8,000 Hopper GPUs needs only 2,000 Blackwell GPUs, consuming less energy in the same amount of time. Because it is more energy-efficient, it can push against the limits. Energy efficiency and cost efficiency are top priorities. NVIDIA speeds up token generation for large language models by 30 times, saving a great deal of energy: the energy required to produce the same tokens drops to 1/30 of what it was.
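A quick back-of-envelope check of those figures; a minimal sketch in Python, where the per-GPU power draw is an illustrative assumption, not an NVIDIA number:

```python
# Rough arithmetic behind the claims above. The 8,000-vs-2,000 GPU counts
# and the 30x token speedup come from the passage; per-GPU power is assumed.
HOPPER_GPUS, BLACKWELL_GPUS = 8_000, 2_000   # same training job, same runtime
GPU_POWER_KW = 1.0                           # assumed average draw per GPU

hopper_kw = HOPPER_GPUS * GPU_POWER_KW
blackwell_kw = BLACKWELL_GPUS * GPU_POWER_KW
print(f"training energy ratio: {hopper_kw / blackwell_kw:.0f}x")   # 4x

# Inference: a 30x token-generation speedup at similar power implies
# roughly 1/30 the energy per token.
SPEEDUP = 30
print(f"energy per token: ~1/{SPEEDUP} of the previous generation")
```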

8. Apart from HBM, how do you view Samsung's and SK Hynix's production?

Huang Renxun joked, "That's like asking TSMC: apart from foundry work, apart from GPUs, do you still like NVIDIA?"

He shared that HBM is complex, with high added value. NVIDIA has spent a lot of money on HBM!

"We are testing Samsung's HBM, and we will use it," Huang Renxun revealed. "Samsung is a good partner. South Korea has the highest production volume of advanced memory in the world. HBM is very complex; it's not like DDR5. It's a technological miracle. That's why it's so fast. HBM is like logic, and it's becoming more and more complex and semi-customized."

He praised HBM as a miracle, and due to generative AI, DDR across the entire data center has become a thing of the past, with the future belonging to HBM.

"The upgrade cycle of Samsung and SK Hynix is incredible. Our partners will grow with us. We will replace DDR with HBM in data centers. Energy efficiency has improved a lot," Huang Renxun said. This is Nvidia's way of making the world more sustainable—more advanced memory, lower power consumption.

9. What is the overall strategy and long-term goal of NVIDIA's AI foundry collaboration with enterprises?

Huang Renxun said the goal of the foundry is to manufacture software, and not merely tools; remember, NVIDIA has always been a software company. Long ago NVIDIA created two important pieces of software: one called OptiX, which later became RTX, and the other called cuDNN, an AI library. And there are many more libraries besides.

The libraries of the future are a form of microservice, described not only by mathematics but also by AI. Today NVIDIA calls these libraries cuFFT, cuBLAS, cuLitho; in the future they will be NIMs. These NIMs are highly complex software that NVIDIA packages so you can access them through a website or download them and run them in the cloud, on a computer, or on a workstation. NVIDIA will keep making NIM performance better.

When enterprises run these libraries, they license what amounts to an AI operating system, with a license fee of $4,500 per GPU per year, and they can run as many models as they want on it.
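As a rough illustration of the microservice idea, the sketch below calls a NIM-style endpoint over HTTP. The URL, model name, and OpenAI-style request schema here are illustrative assumptions, not NVIDIA's documented API:

```python
import requests

# Hypothetical client for a locally hosted NIM-style microservice.
# Endpoint path, model id, and payload shape are assumptions for illustration.
NIM_URL = "http://localhost:8000/v1/chat/completions"

resp = requests.post(NIM_URL, json={
    "model": "example-llm",   # placeholder model id
    "messages": [{"role": "user", "content": "Summarize CUDA in one line."}],
    "max_tokens": 64,
})
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```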

03.

An AI chip rival issues an open challenge; Huang Renxun fires back:
"I really don't know much about it"

10. What are your comments on chip startups like Groq? Groq tweeted yesterday saying they're still faster than yours?

"I really don't understand much about it, so I can't make a wise judgment," Huang Renxun believes token generation is difficult, depending on the model you want, each model requires its special partitioning.

He argued that the Transformer is not the final form of every model. Transformer variants are all related because they all have attention, but they all differ: some are feedforward, some are MoE (Mixture of Experts), some MoE models have 2 experts and some have 4, and the work is divided differently, so each model requires very specific optimization.

If a computer is too brittle, built to do one very specific thing, it becomes a configurable computer rather than a programmable one, and it cannot benefit from the pace of software innovation.

Huang Renxun believes the miracle of the CPU should not be underestimated: thanks to programmability, CPUs over time absorbed the configurable devices that once sat on motherboards and PCs. The genius of software engineers is realized through the CPU; if you freeze functionality into a chip, you cut yourself off from that talent. What you really want is to benefit from both.

He said NVIDIA has found a special form of computation, a parallel, stream-based computing model that is fault-tolerant, performs excellently, and is programmable. A single architecture has run every model since AlexNet, and eventually Transformers appeared, along with a raft of variations. These models keep evolving in state space, memory, and architecture.

"It's important that we can make a model with a level," Huang Renxun said, "The existence of chips is to implement software. Our job is to facilitate the invention

04.

How do you view the OpenAI CEO's chip fab network plan?



11. Sam Altman has been in extensive discussions with people across the chip industry about expanding its scope and scale. Have you talked to him? What do you think he wants to do? How does this affect you and NVIDIA?

"I don't know his intentions unless he believes generative AI is a huge market opportunity, which I agree with," said Huang Renxun.

He started from the basics: today, computers produce pixels by retrieving, decompressing, and displaying. People assume the whole process takes very little energy, but in fact it is quite the opposite. Every prompt, every interaction on your phone, sends data to a data center somewhere, which works out a sensible response, recommendation-system style, and sends it back to you.

For example, if he had to run to his office to look something up every time you asked him a question, instead of answering directly, it would waste time and energy. He believes the way forward is to scale up AI generation: more and more computation will be generative rather than retrieval-based, and every generation must be smart and contextually relevant.

"I believe, and I think Sam believes, that almost every pixel on every computer, every time you interact with a computer, is generated by a generative chip," he hopes Blackwell and subsequent iterations will continue to make significant contributions in this area.

"I wouldn't be surprised if everyone's computer experience is generative, but it's not like that today. This is a huge opportunity, and I think I would agree with that," Huang Renxun said.


05.

Will AI writing code mean humans no longer need to learn programming?



12. You previously mentioned that no one needs to learn programming anymore. Does that imply that people shouldn't learn programming skills?

Huang Renxun believes that people are learning many skills, such as playing piano or violin, which are genuinely challenging. He also thinks that subjects like mathematics, algebra, calculus, and differential equations are skills people should learn as much as possible. However, for success, programming skills are not necessarily indispensable.

"There was a time when many big names around the world advocated that everyone must learn programming, and therefore you're inefficient," he shared. "But I think that's wrong. Learning C++ is not a person's job; it's the computer's job to make C++ work."

In his view, AI has already made its biggest contribution to society here: you don't have to be a C++ engineer to succeed; you just need to be a prompt engineer. Humans communicate through dialogue, and we need to learn how to prompt AI, just as we give cues to teammates in sports to get the result we want. It depends on the work you want done and the quality you're after, on whether you want more creativity or a more specific outcome. Different answers and different people call for different prompts.

"I believe the first great thing AI has done is bridging the technological gap. Look at all the videos on YouTube; they're created by people using AI, not by writing any code, so I find that fascinating," Huang Renxun said. "But if someone wants to learn programming—please do so. We are hiring programmers!"



06.

Setting a timeline for AGI, are you afraid of AGI?



13. You previously mentioned that AGI would be achieved within 5 years. Is that timetable still valid? Are you afraid of AGI?

Huang Renxun pushed back slightly: "First, let's define AGI." He paused, then continued, "I hesitated because, as I said earlier, I'm sure it's hard to get everyone to agree on this. I want you to define AGI specifically, so that each of us knows when we will reach it."

He expressed dissatisfaction with the practice of misrepresenting his statements in previous news reports: "Every time I answer this question, I specify the AGI standard. But every time it's reported, nobody specifies it. So it depends on what your goal is. My goal is to communicate with you. Your goal is to figure out what story you want to tell."

"OK, so I believe in AGI, as I mentioned, maybe within 5 years. AGI, which is general intelligence, I don't know how we define each other, which is why we have so many different terms to describe each other's intelligence," he said.

According to Huang Renxun, predicting when we will see a universal AGI depends on how we define AGI and requires clarifying AGI's specific meaning in the question.

He gave two examples: defining where Santa Clara is, which is very specific; and defining New Year's, which, even though it arrives in different time zones, everyone knows when it comes.

But AGI is different. Huang Renxun said: if we pin AGI down to something specific, say a software program that scores excellently (above 80%) on a battery of tests, better than most people or even everyone, do you think a computer can achieve that within 5 years? The answer might well be yes.

These tests could include mathematics, reading, logic, academic, economic tests, as well as qualifications for lawyers, pre-medical exams, and so on.
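Huang Renxun's operational definition can be written down almost literally. A minimal sketch, where the test names and scores are hypothetical placeholders:

```python
# Sketch of the definition above: call a system "AGI" only if it scores
# above 80% on every test in an agreed battery. All values are dummies.
TESTS = ["math", "reading", "logic", "economics", "bar_exam", "pre_med"]

def is_agi(scores: dict[str, float], threshold: float = 0.80) -> bool:
    """True if the system clears the threshold on every agreed test."""
    return all(scores.get(t, 0.0) > threshold for t in TESTS)

print(is_agi({t: 0.85 for t in TESTS}))                      # True
print(is_agi({**{t: 0.85 for t in TESTS}, "logic": 0.70}))   # False
```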

14. How will large language models and foundation models change our lives in the future?

Huang Renxun believes the question lies in how we acquire our own large language models.

"There are several methods to achieve this. Initially, we thought you kept fine-tuning, but fine-tuning is time-consuming. Then we discovered prompting fine-tuning, we discovered long-context windows, working memory. I think the answer is the combination of all these factors," he said.

In his view, fine-tuning in the future will mean adjusting only a thin slice of the weights rather than all of them, LoRA-style. Low-cost fine-tuning, prompt engineering, context, and memory storage together constitute your custom large language model, which can live in a cloud service or on your own computer.
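The LoRA idea Huang Renxun references is easy to see in code. A minimal PyTorch sketch, with illustrative dimensions and hyperparameters: the pretrained weight stays frozen, and only a small low-rank adapter is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Freeze the pretrained weight and learn a low-rank update B @ A,
    so fine-tuning touches only a thin adapter instead of every weight."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # pretrained weights stay frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable:,}")   # 65,536 vs ~16.8M frozen
```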

15. Where is the biggest growth opportunity for software? Is it microservices?

Huang Renxun said that NVIDIA's near-term opportunities lie in two kinds of data center computing: one is modernizing the computing inside data centers, and the other is generating tokens inside data centers.

NVIDIA's aim is to help customers manufacture AI. Llama, Mixtral, Grok... Many teams create AI, but these AIs are difficult to use. The base models are primitive and not user-friendly.

NVIDIA will create some of these, then select some mainstream open-source partners, and turn these models into usable models of product quality. It also needs to provide services, such as NeMo.

"We won't just invent AI, we'll also manufacture AI software, so that everyone can use them. Our software is about a $1 billion running rate. I think manufacturing AI can definitely accomplish a lot," Huang Renxun said.


16. Can the problem of AI hallucinations be solved, given that some critical tasks require 100% correctness?

Huang Renxun believes that hallucinations can be resolved as long as the answers are thoroughly researched.

He explained: add a rule that for every answer, the system must first look the answer up. That is RAG, retrieval-augmented generation. Given a query, it should first run a search rather than fabricate an answer, give priority to answering accurately from the retrieved content, and then respond to the user. If the AI's task matters, it shouldn't just answer; it should do the research first, determine which answer is best, and then summarize. That is not hallucination; that is a research assistant. How far to go also depends on how critical the task is: add more guardrails or prompt engineering.

For critical task answers, such as health advice or similar questions, Huang Renxun believes that checking and cross-referencing multiple resources and known factual sources may be the way forward.
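The retrieve-first rule Huang Renxun describes fits in a few lines. A minimal sketch, where search and ask_model are stand-ins for any retriever and any LLM API:

```python
# Minimal RAG loop: research first, then answer grounded in what was found.
def answer_with_rag(question: str, search, ask_model, k: int = 3) -> str:
    docs = search(question, top_k=k)                  # 1. do the research
    context = "\n\n".join(d["text"] for d in docs)    # 2. collect sources
    prompt = (
        "Answer using ONLY the sources below. If they don't contain the "
        "answer, say so rather than guessing.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_model(prompt)                          # 3. grounded answer
```

For the mission-critical cases he mentions, the same pattern extends naturally to querying several independent sources and cross-checking them.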

17. You mentioned using generative AI and simulation to train robots at scale, but many things are hard to simulate, especially once robots leave the built environment. What limitations do you think simulation will run into, and how should we deal with them when we hit them?

Huang Renxun said there are several ways to think about the problem. The first is to build your intuition from large language models. Remember, a large language model operates in an unconstrained, unstructured world. That may be a problem, but it also learns a great deal from it. The generalization ability of large language models is amazing, and you then supply the context window through iteration or prompting.

For example, if you want a robot to make an omelette in the kitchen, you only need to specify the problem concretely: give the background, the tools available, and a description of the robot's environment, and the robot should be able to generalize effectively.

This is robotics' ChatGPT moment. Some issues remain to be solved, but the reasoning is already visible. All of this can be expressed as tokens, and robots already generate such tokens. It makes sense for robots to learn the way software learns: to the software it is all just tokens; it doesn't know the difference. So you organize all the poses, standardize all the outputs, generalize the environment, feed in context, apply reinforcement learning from human feedback, and give it a pile of well-chosen question-and-answer examples, with proper answers in philosophy, chemistry, mathematics.

Some of this has already been written up; you may need over 10,000 examples of this kind to make a ChatGPT. Our brains can tell text apart from robot actions, but a computer only sees numbers and doesn't know the difference between these things.

18. Regarding computer games: last year you said that every pixel would be generated and rendered at real-time frame rates. How far are we from that world? What is your vision for games and beyond?

Huang Renxun believes that for almost any technology, once it becomes practical and better, as ChatGPT did, the S-curve is not that long; he doesn't think it will take more than 10 years. In 10 years you are a different kind of expert; after 5 years things are changing in real time. So you just have to judge how far along we are, and it has been about 2 years. Within the next 5 to 10 years, that will basically be the situation.

19. You mentioned that many industries will experience a ChatGPT moment. Could you talk about ones that excite you, whether for technical reasons, first-contact reasons, or reasons of impact?

Huang Renxun said that some that excite him are for technical reasons, some are because of first contact, and some are because of impact.

"I'm very excited about Sora, OpenAI has done a great job, we saw the same situation last year with the autonomous driving company Wayve, and you also saw some examples of what we did, almost two years ago, about generating videos from works." he said.

To generate video, the model must understand physics: when you set down a cup, the cup sits on the table, not halfway through it. It has to be sensible. It doesn't have to simulate the laws of physics exactly, but it must behave wisely, as if it understood them.

Second, Huang Renxun believes CorrDiff, the generative AI model on NVIDIA's Earth-2 climate digital-twin cloud platform, has a huge impact on forecasting weather at 2-3 km resolution. NVIDIA has made it 3,000 times more energy-efficient and 1,000 times faster. It can predict flight routes in extreme weather and sample chaotic weather far more densely, up to 10,000 times. The ability to land on the most likely answer is greatly enhanced in this example.

Third, the work being done in molecular generation and drug discovery involves identifying drug molecules with highly desirable properties against target proteins. As with AlphaGo, this can be put into a reinforcement learning loop, generating candidate bindings between molecules and proteins and then exploring the breadth of the space. This is very exciting.

20. Please delve deeper into your views on drug discovery, protein structure prediction, and molecular design, and their impact on other fields?

Huang Renxun said, "We may be the largest quantum computing company that doesn't manufacture quantum computers. We're here because we believe in it, we want to be here, we just don't think it's necessary to build another one." QPU is an accelerator, much like GPU, for some very specific things.

NVIDIA has built cuQuantum for quantum simulation; it can simulate circuits of roughly 34-36 qubits, and people use it to simulate quantum circuits on classical computers. It can also be applied to post-quantum cryptography, preparing for the quantum era, because when quantum computing arrives, all data will need to be properly encoded and encrypted. NVIDIA can contribute to everyone and collaborates with most of the world's quantum computing companies. Huang Renxun believes breakthroughs will take time.
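The 34-36 qubit figure tracks the memory wall of full state-vector simulation, where the state doubles with every added qubit. A quick check, assuming 8 bytes per complex amplitude (simulators also support other precisions):

```python
# Memory for a full quantum state vector: 2**n amplitudes, here 8 bytes each.
def statevector_bytes(n_qubits: int, bytes_per_amp: int = 8) -> int:
    return (2 ** n_qubits) * bytes_per_amp

for n in (30, 34, 36, 40):
    print(f"{n} qubits -> {statevector_bytes(n) / 2**30:,.0f} GiB")
# 30 -> 8 GiB, 34 -> 128 GiB, 36 -> 512 GiB, 40 -> 8,192 GiB
```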

As for digital biology, much of NIM's pull comes from the promise of digital biology. BioNeMo is NVIDIA's first NIM. These models are very complex, so NVIDIA packages them in a special way so that all researchers can use them, and they are used in many places. Input a chemical and a protein, and it will tell you whether the binding works; or input a chemical substance and ask it to produce other chemical substances.
