Why do ChatGPT and Grok lie to you? Or rather, hallucinate? Because they are Large Language Models.
What is an LLM? It’s a computer (organic or inorganic) that has been trained on a data set consisting solely of language (written or spoken), and rewarded for producing language that sounds like the data set and is relevant to a prompt or input.
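To make that definition concrete, here is a minimal sketch of the idea: a "model" trained only on language, rewarded for producing language that statistically resembles its training data. A real LLM is a neural network over tokens; this toy bigram counter, with an invented corpus, is the smallest possible stand-in.

```python
import random
from collections import Counter, defaultdict

# Invented toy corpus: the model's entire "reality" is this language data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt_word, length=5, seed=0):
    """Produce text that *sounds like* the data set; no notion of truth."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Notice there is nothing in this loop that could ever check a sentence against the world; it can only check it against the word statistics it was trained on. That is the whole point of the argument that follows.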
That’s all that is [in there]. It’s not because they are somehow just indifferent to the truth — they actually do not understand the concept of “truth” at all.
For something to be a ‘lie,’ or an ‘inaccuracy,’ there has to be a mismatch between the meaning of words, and the [state of reality].
And there’s the critical difference. You see, in order to identify a mismatch between the state of reality, and the meaning of a sentence, you [must have a model of reality].
Not just a model of [language].
*side note - I’m realizing that to fully utilize AI, you actually have to approach it the way you’d approach learning a new language… I’ll write more on this later.
…This is why Grok and ChatGPT hallucinate and tell you lies. Because, for them, everything is language data, and there is no reality.
There is no reality for an LLM.
To understand reality, one must experience reality. As defined: The world or state of things as they actually exist, as opposed to an idealistic or notional idea of them.
LLMs don’t experience anything. They don’t exist outside of their functions. They do not self-actualize and engage the world autonomously (yet).
AI and LLMs do not have a grounding of reality to work from. There is no ‘reality’ for AI.
Most Humans are Just LLMs
A vast number of humans, probably a majority, aren’t people.
They are just large language models.
I’m not saying this as a generality, or as some autistic humor I’m known for. In the gaming world they would be called NPCs, or non-player characters with simple programming. An armor vendor comes to mind. They have simple scripts:
Hello, how are you today?
What are you looking for?
I’m sure I can help with that.
The total is X-numFunction.
Have a great day!
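The vendor script above can be sketched in a few lines. The item names, prices, and function name here are invented for illustration; the point is that the whole "conversation" is a fixed template with one number plugged in.

```python
# Hypothetical price table for the armor vendor.
PRICES = {"helmet": 50, "shield": 120}

def vendor_dialogue(item):
    # The "X-numFunction" step: look up a single number to fill the slot.
    total = PRICES.get(item, 0)
    return [
        "Hello, how are you today?",
        "What are you looking for?",
        "I'm sure I can help with that.",
        f"The total is {total}.",
        "Have a great day!",
    ]

for line in vendor_dialogue("shield"):
    print(line)
```

No matter what the player says or does, the script runs the same way; the only "understanding" is a lookup.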
What is even more fascinating is that there are gradations of these NPCs or LLM-humans. There are tons of very intelligent, award-winning humans, who are nevertheless not people, not fully sapient, just a large language model walking around in a flesh suit. They are not stupid at all. In some cases, they may be well-versed, eloquent, able to command an audience, get elected president, lead global companies, and even change the world…
No Counter-Balance to Their Data Set
Let me be very clear. When I say that [humans are just LLMs], I mean simply that they do not have a robust world-object model to counterweight their existing language model.
While they may be able to interact with and engage the world and manipulate objects and symbols… they are not able to imagine the why behind those symbols, or what they represent.
To sapient humans, words are symbols, grounded in an object model of reality, that we use to communicate ideas about that reality. We need those words because we don’t come equipped with a hologram projector, or telepathic powers.
But for another type of human, that object model isn’t very large or robust at all. You cannot correct this type of worldview with contradictory evidence, because there is no worldview to correct. Reality consists only of their immediate surroundings in time and space, and words referring to anything bigger or more complicated are not descriptions of their reality. You cannot confront them with the logical inconsistencies in their worldview, because their object model doesn’t actually contain any; it’s not complex enough for that. It’s just not available.
This is why [unintended-consequences videos] are the best reels to watch on the toilet.
We are seeing LLM-humans hitting reality. Sometimes it’s funny. More often it’s painful.
This is not about intelligence or the lack of it. This is about what their brains are trained to do. We live in a world where people’s upbringing, education, and life did not force, or even encourage, them to develop a robust world-object model. This is also why I think it’s vitally important that humans travel to (many) different countries. Travel is the antidote for ignorance, IMHO.
The issue is that LLM-humans can’t be taught things. However, they can be programmed to repeat slogans. AI is a culture-shaper; it’s changing culture as we speak. I feel like the next big culture war will be deeply influenced by AI, and that education needs a grand overhaul for the entire world. We need to help people learn new things. AI can help us do this for sure, but I worry that LLM-humans will merely repeat the things, receive social validation and headpats for doing so, and never actually [understand].
I believe that AI can fundamentally help humans break out of their current LLMs. I feel it’s imperative that everyone on the planet learn how to use AI competently. I feel like AI can be the great awakening of human prosperity, as we enable humans to expand their experience of the world, or bring the world to them.
In all of the dark rumination around AI that I’ve published, I feel like this weekend I found the silver lining: AI has shifted, for me, from a doomsday idea to a technology for human flourishing.
After all, we humans are just Large Language Models.
Trainable.
All the best,
ps

