Knowledge without Explanation

Germs and ChatGPT.

Note: this post grew out of thinking about how traditional finance and tech analogies apply to blockchains, and realizing that, despite their widespread use, most of them are not great analogies.

Humans can acquire knowledge without a well-understood explanation.

Germs

Ignaz Semmelweis, a Hungarian obstetrician working at the Vienna General Hospital in 1847, noticed that maternal mortality from puerperal fever was dramatically higher for births attended by doctors and medical students than for those attended by midwives. Investigating further, Semmelweis connected puerperal fever to examinations of delivering women by doctors, and realized that these physicians had usually come directly from autopsies. Asserting that puerperal fever was a contagious disease and that matter from autopsies was implicated in its development, Semmelweis made doctors wash their hands with chlorinated lime water before examining pregnant women. He then documented a sudden reduction in the mortality rate from 18% to 2.2% over the course of a year.

Ignaz Semmelweis did not understand germ theory before making doctors wash their hands with chlorinated lime water; germ theory had not been formulated yet. Instead, our understanding of germ theory came in part because Semmelweis made doctors wash their hands: the knowledge that handwashing led to better patient outcomes preceded a good explanation of that result, and that explanation eventually became the germ theory of disease.


ChatGPT

In What Is ChatGPT Doing … and Why Does It Work?, Stephen Wolfram argues that ChatGPT might contain knowledge of how human language works without us understanding the workings of our language.

In other words, the reason a neural net can be successful in writing an essay is because writing an essay turns out to be a “computationally shallower” problem than we thought. And in a sense this takes us closer to “having a theory” of how we humans manage to do things like writing essays, or in general deal with language.

So how is it, then, that something like ChatGPT can get as far as it does with language? The basic answer, I think, is that language is at a fundamental level somehow simpler than it seems. And this means that ChatGPT—even with its ultimately straightforward neural net structure—is successfully able to “capture the essence” of human language and the thinking behind it. And moreover, in its training, ChatGPT has somehow “implicitly discovered” whatever regularities in language (and thinking) make this possible.

The success of ChatGPT is, I think, giving us evidence of a fundamental and important piece of science: it’s suggesting that we can expect there to be major new “laws of language”—and effectively “laws of thought”—out there to discover. In ChatGPT—built as it is as a neural net—those laws are at best implicit. But if we could somehow make the laws explicit, there’s the potential to do the kinds of things ChatGPT does in vastly more direct, efficient—and transparent—ways.

As of now, we’re not ready to “empirically decode” from its “internal behavior” what ChatGPT has “discovered” about how human language is “put together”.

My strong suspicion is that the success of ChatGPT implicitly reveals an important “scientific” fact: that there’s actually a lot more structure and simplicity to meaningful human language than we ever knew—and that in the end there may be even fairly simple rules that describe how such language can be put together.

It’s amazing how human-like [ChatGPT’s] results are. And as I’ve discussed, this suggests something that’s at least scientifically very important: that human language (and the patterns of thinking behind it) are somehow simpler and more “law like” in their structure than we thought. ChatGPT has implicitly discovered it. But we can potentially explicitly expose it, with semantic grammar, computational language, etc.

It’s easy to dismiss ChatGPT because it’s just generating a “reasonable continuation” of whatever text it’s got so far. But we should seriously consider that ChatGPT might have uncovered something about how our brains use what they have learned to generate text. In other words, ChatGPT might embody knowledge of human brains without us understanding how our brains work. The understanding comes after the knowledge.
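To make “reasonable continuation” concrete, here is a minimal sketch of the idea at toy scale: a bigram model that counts which words follow which in a tiny made-up corpus and samples a continuation one word at a time. The corpus and function name here are invented for illustration; this is not ChatGPT’s architecture, only the same autoregressive shape of predicting the next token and appending it.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for training data (purely illustrative).
corpus = (
    "the cat sat on the mat . the cat saw the dog . "
    "the dog sat on the rug . the cat ran from the dog ."
).split()

# Count how often each word follows each word: a bigram table.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def continue_text(prompt: str, n_words: int = 8) -> str:
    """At each step, sample the next word in proportion to how often
    it followed the current word in the corpus."""
    words = prompt.split()
    for _ in range(n_words):
        counts = follow_counts.get(words[-1])
        if not counts:
            break  # no observed continuation for this word
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the cat"))
```

ChatGPT replaces the bigram table with a transformer network trained on vastly more text, but it still produces output one token at a time, each conditioned on everything that came before.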
