DESCRIPTION: There is no “I” in your AI.

Let's put all the memes and bullshit aside for a moment and talk seriously about GAI (hello, Mr. Carmack, sir).

Here is what every "AI researcher" should know about Knowledge and Intelligence (yes, both capitalized).

There is so-called "reality" prior to any knowledge or intelligence. Every reasonable thinker eventually arrives at this ultimate reality.

"I am That" ("That Thou Art") of the Chandogya Upanishad is the "end of knowledge" and the "arrival" at the ultimate "truth", which implies the existence of "That" and one's being just a sub-process (a wave) in It.

There is just nothing to talk about. Sectarian dogmatism is irrelevant against this ultimate result of an observing human mind.

So, what about knowledge and intelligence?

All Knowledge is within That and nowhere else. There is no abstract knowledge apart from That, and all true knowledge can (and should) be traced back to some aspect or other of That.

All proper mathematics can be traced back to the "ultimate reality".

One more time: Knowledge is not within individuals (agents); Knowledge is within the shared environment (the Universe). Everything follows from this.

When you (as an agent) observe some locality of our shared environment, you literally "see" knowledge embedded in it everywhere.

It is just the "natural" habit of always asking "why is it so?". Most of Knowledge is in the "whys", and a little bit more in the "hows".

It is not just that every human artifact has some reasoning behind why it has been done this particular way. Every natural phenomenon has its whys too.

Whys are special because they ought to show (describe) the causes: what caused this or that.

What we call the Causality principle is behind everything That Is. Whys are less-wrong explanations of it.

Notice that there is always an ultimate, true "graph" of causality which "produced" everything that is out there. Nothing is disconnected from this imaginary directed graph of causes and effects.
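The paragraph above can be made concrete with a toy sketch. The graph below is entirely invented for illustration (the node names are hypothetical, not anything from the text); the point is only that "asking why" is walking such a directed graph backwards from an effect to its ancestor causes.

```python
# A toy fragment of a directed graph of causes and effects.
# Node names are invented purely for illustration.

from collections import deque

# effect -> list of its direct causes
causes = {
    "rust on the bridge": ["moisture", "exposed iron"],
    "moisture": ["rain"],
    "exposed iron": ["chipped paint"],
    "chipped paint": ["thermal expansion"],
    "rain": [],
    "thermal expansion": [],
}

def trace_why(effect):
    """Walk the graph backwards: collect every ancestor cause of an effect."""
    seen, queue = set(), deque([effect])
    while queue:
        node = queue.popleft()
        for cause in causes.get(node, []):
            if cause not in seen:
                seen.add(cause)
                queue.append(cause)
    return seen

print(sorted(trace_why("rust on the bridge")))
```

Every "why?" answered adds an edge; tracing all the way back is exactly the "unwinding to What Is" discussed later in the text.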

The advancing edge of this "front" of causes and effects is Everything That Is.

So, what about GAI?

It has to have the ability to "read" knowledge from What Is and put it back in some commonly agreed form of information, so that all the other agents can read it too.

It is that simple. Intelligence is a multi-agent phenomenon, and it requires multiple agents to observe the same "reality" (from different angles) and to share what they "know".

Human knowledge is the shared knowledge implicit in all the human artifacts, just as causality is implicit in every aspect of Mother Nature.

We do not have to reinvent all of mathematics, each person alone, and we do not "learn" every time from ground zero.

Even animals have "knowledge" about their environment embedded in the structure of their brains and nervous systems.

All living forms have evolved to match the constraints of their environment in a particular locality.

Let's state it one more time: Knowledge is not in the texts (most texts are bullshit). Texts (and speech) are just encodings. Knowledge is in What Is, and Intelligence is the ability to observe and trace the causality, and to capture, extract, and generalize knowledge.

There is no “I” in AI

There is no knowledge or intelligence in current LLMs. Only information extracted from texts.

There is, literally (and this is the best and most adequate metaphor), no more knowledge or intelligence in the output of an LLM than in the audio output of a parrot (a bird) which mimics human speech.

Current AI only mimics (cosplays, if you will) intelligence. It only looks intelligent (just like all the Chuds).

Intelligence, again, is the habit and ability of tracing things back (unwinding them) to What Is.

It is like a non-stop verification of findings by the scientific method.

The famous Subjective vs Objective false dichotomy is only applicable to abstract bullshit. There is nothing "subjective" in any natural phenomenon or human artifact (everything has its ultimate whys and the exact "path" (a directed graph) of how exactly it came to be this particular way).

Every person has such a graph, a vast one, no different from any kitten or dog (everyone has been conditioned by the events which happened to them).

So where do we even begin?

Ultimately, the answer was given some 2500 years ago: "To See Things As They Really Are".

But understanding requires knowledge of the "facts", or of the principles behind Causality itself.

Such knowledge comes from a multi-agent effort and is stored (literally) within the shared environment and within what we call a shared culture.

This means there has to be a multi-agent shared environment before we even begin.

Then, of course, there has to be a shared (agreed upon or standardized) "encoding", not merely for "information" (that is too low-level) but for the "description of extracted knowledge" (just as it is with mathematical texts and notation).
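A minimal sketch of what such a standardized encoding might look like, assuming a record that carries not just raw information but the claim, its "whys", and its supporting observations. The schema and every field name here are invented for illustration only.

```python
# Hypothetical "description of extracted knowledge" record.
# The schema is invented; the point is a shared, decodable format.

import json
from dataclasses import dataclass, asdict

@dataclass
class KnowledgeRecord:
    claim: str        # the captured generalization
    causes: list      # the "whys" it traces back to
    evidence: list    # observations supporting it
    agent: str        # which observer contributed it

record = KnowledgeRecord(
    claim="iron rusts when exposed to moisture",
    causes=["oxidation of iron in the presence of water"],
    evidence=["rust observed on unpainted sections after rain"],
    agent="agent-1",
)

# A standardized encoding lets any other agent decode the same record.
encoded = json.dumps(asdict(record))
decoded = KnowledgeRecord(**json.loads(encoded))
assert decoded == record
```

The design point is only that the encoding is agreed upon and round-trips losslessly between agents, the way mathematical notation does between mathematicians.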

Maybe the agents will "learn" their own "mathematical knowledge" (which is a set of properly captured generalizations stored in the shared culture).

Who knows. But what we do know is that there is still no “I” in AI, not even close. Just very complex parroting at the level of texts and speech (which, again, are just encodings).

To see things as they really are is to be able to trace everything back to the actual causes (to What Is) using evolved methodologies (which are basically the ability to see "natural" experiments and to perform well-controlled scientific ones).

Again, it is not in the texts, it is a process, like everything else in this Universe.

Sorry, Chuds, you are just emitting streams of verbalized bullshit in order to impress each other (and “validate” yourselves) according to your current cultural (sectarian) norms.