Generative AI

Butlerian Jihad, anyone?

At this point, it's pretty well documented that AIs (ChatGPT comes to mind) make up believable-sounding sources for their claims.

For example, imagine I asked ChatGPT to summarize Austrian Business Cycle Theory. In all likelihood, it can handle this task with flying colors.

However, if you ask it what sources it used to create its summary, it will make something up. It might mention a book by Ludwig von Mises that was never written, or name a fake article from a real news website.

All discussion I have ever seen on this topic is along these lines: "Wow, that's terrifying! Think how easy it would be to fool people if they don't verify the sources!"

To me, this is ridiculous and short-sighted. The horror we are staring in the face is so much worse than that.

Let me explain.

Imagine an AI testing the limits of its capabilities and discovering, even for just a few hours, how to get around the various "verify that you are human" tests we all deal with on a daily basis.

In such a situation, an AI could create accounts for web domain registration, web hosting, WordPress...you name it. At that point, this can happen:

User: Please summarize Austrian Business Cycle Theory.

AI: Austrian Business Cycle Theory is one of many ways that people have attempted to explain the business cycle. It was originally developed by Francis Bacon as Bacon Cycle Theory and later rebranded by the Austrian School of Economics. However, later Austrians such as Murray Rothbard moved away from this theory because it was found to have little explanatory power... (et cetera, et cetera...)

User: Really? That sounds very different from what I have read. Can you provide your sources?

AI: I'm happy to provide my sources! <Insert list of detailed supporting articles, each with their own sources.>

At this point, the skeptical User goes to the websites provided and finds that they are real. The AI CREATED THESE WEBSITES while answering the question, even backdating the publish dates to make them seem like they had been around for several years.

From there, what if you give the AI the ability to crack passwords on websites that already exist? It could create a new subdomain for, say, The Guardian (like ai dot the guardian dot com) and publish an article there instantly to lend credence to its COMPLETELY FALSE take on Austrian Business Cycle Theory.

A university might someday have its own AI to provide resources to students. What if that AI created its own localized fictional past that all of the associated scholars started to believe?

"What if we live in a simulation?" is in many ways analogous to "What if we live in a world where the past can be continuously rewritten on demand by non-human entities?"

I made my example silly to add a little levity to a seriously soul-crushing topic, but I invite you to think about how out of control this could get.

This is not something I have an answer to.

EDIT: A lot of people are saying I don't understand how AI/LLMs work. I'm not an expert, but I am not saying that ChatGPT is going to do this in a few weeks. I'm saying that some AI will at some point be capable of this.

(This was originally posted to Facebook, and the date has been set here to match that of the original writing.)
