Finally, I have got it Right (sorry – Less Wrong).

What modern LLMs (and the underlying algorithms) have captured is not “intelligence”, but “[environmental/social] conditioning” – the ability to build an abstract structure (a “map” of a territory) and then use it as a representation of “reality”.

What is remarkable and unprecedented is that the training process is a valid mathematical model (an approximation) of the very algorithm that mother Evolution came up with – to “update” (reinforce) the neural connections marked with “stress-related chemicals” at the moment of actual experience (which the mathematical model abstracts as “weights”) while we are asleep (meaning: by an external process).
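
A minimal sketch of what “updating weights” means in the mathematical model – a single stochastic-gradient-descent step on a one-neuron toy. Everything here (the function name, the numbers) is illustrative, not any lab’s actual pipeline:

```python
# Toy sketch: one stochastic gradient-descent step on a single linear "neuron".
# Each "experience" (x, y) nudges the weights toward a better prediction,
# loosely analogous to reinforcing connections marked at the moment of experience.

def sgd_step(w, b, x, y, lr=0.1):
    """One update of weight w and bias b for the model y_hat = w*x + b."""
    y_hat = w * x + b
    error = y_hat - y           # how wrong the current "map" is
    grad_w = 2 * error * x      # derivative of error**2 w.r.t. w
    grad_b = 2 * error          # derivative of error**2 w.r.t. b
    return w - lr * grad_w, b - lr * grad_b

w, b = 0.0, 0.0
for _ in range(100):            # "replaying" the same experience repeatedly
    w, b = sgd_step(w, b, x=1.0, y=2.0)
print(round(w + b, 3))          # the prediction for x=1.0 approaches the target 2.0
```

The point of the toy: nothing “understands” anything here; repetition plus a correction signal is enough to carve the map into the weights.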

My description is, of course, inaccurate and crude, but the main principle is exactly right.

If you are, like me, a well-read Humbert, you have probably already seen the various quotes from some of the “smartest people on Earth” that their realizations [suddenly] came to them after they ceased to think hard about the topic (on a walk, in a shower and what not).

The old (Freudian bullshit) explanation is that “unconscious processes” keep working in the background (which is not even wrong, in principle), but the “Right Understanding” is that a constantly updated neural structure (an inner representation) is used for “inference”, and eventually it matches the actual structure of What Is.

The whole AI arms race is about building this representation (an abstract structure) inside a data-center and selling the resulting cognitive illusion of a superior intelligence on a per-token basis to the normies and the normie businesses.

Why are they so confident, not even looking back, spending as if there is no tomorrow? Well, because, in principle, this simple set of algorithms – reinforcement learning based on intermediate states (before the final goal has been reached, as demonstrated by the AlphaGo model, where the goal state is way too distant and there are lots of them) – can, again in principle, capture anything. The credit goes not to some “academics” or meme-AI-researchers, but to mother Evolution.
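
The “intermediate states” principle can be shown on a toy chain of states with a single distant reward – a standard temporal-difference update, which is one textbook form of what AlphaGo-style systems build on (this is a generic sketch, not that system’s actual code):

```python
# Toy sketch: temporal-difference (TD(0)) learning on a chain of states 0..N,
# where the only reward sits at the far end (the "too distant" goal).
# Intermediate states acquire value estimates by bootstrapping from their
# successors, so the distant goal "leaks back" into every earlier state.

N = 5                       # reaching state N pays reward 1
V = [0.0] * (N + 1)         # value estimates, all zero initially
alpha, gamma = 0.5, 0.9     # learning rate and discount factor

for _ in range(200):        # many episodes of walking the chain
    s = 0
    while s < N:
        s_next = s + 1
        r = 1.0 if s_next == N else 0.0
        # TD(0) update: nudge V[s] toward r + gamma * V[s_next]
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V[:-1]])  # early states now "value" the distant goal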

What could possibly go wrong, then? Exactly what went wrong with all the ancient religions, superstitions, cults, tantric teachings and what not – just saying (or writing in a book) that something is “true” is not enough.

What they have achieved there is not “Intelligence”, but automated, industry-scale “Conditioning”. It is that simple.

Intelligence is exactly the ability to come up with a methodology for distinguishing spoken bullshit (the socially “known”) from what has been correctly captured as an accurate and valid concept of the mind. Intelligence is to come up with a scientific method to dispel the socially constructed bullshit, and to realize that almost every single utterance you hear or read is such socially constructed bullshit.

Intelligence is the very ability to trace a mathematical concept all the way back to What Is – to see what has been captured, generalized, abstracted out, conceptualized and named, and then rebuild it on your own, validating every step and seeing how each validated step leads to the next one, which has to be validated too (against everything that has already been validated so far).

No model does anything like this, just like no animal or cult follower does. They just “trust” their conditioning (which, for animals, is the best thing they can possibly do).

Again, building an oversimplified inner representation of the vastly complex but “stable-enough” shared environment, and then using this inner representation for selecting actions to take, is by far the greatest Evolutionary adaptation ever, and this, and only this, is what the LLMs are actually mimicking.

The clowns in the Valley are cock-sure that, eventually, when they feed all the generated slop (“supervised” or “validated” by the users’ feedback) back into the model, they will have a perfect “map” which will always be a correct representation of the territory. They won’t, and it is not so difficult to see exactly why – the proper validation process, as in mathematics or experimental science, is lacking in principle, and without it, all the spoken or written bullshit remains just socially constructed bullshit, by definition.
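
A toy simulation of that feedback loop (purely illustrative: a Gaussian stands in for the “model”, and all the numbers are arbitrary). When each generation is fitted only on samples from the previous generation, with no external check against the territory, estimation error compounds instead of cancelling:

```python
import random
random.seed(42)

# Toy sketch: fit a distribution, then retrain only on its own samples.
# The "territory" is a standard normal; each "generation" of the model
# sees only data generated by the previous generation. With finite samples
# and no external validation, the fitted distribution drifts and narrows.

def fit(samples):
    m = sum(samples) / len(samples)
    v = sum((x - m) ** 2 for x in samples) / len(samples)  # biased MLE variance
    return m, v

mu, var = 0.0, 1.0                  # generation 0 matches the territory exactly
for gen in range(200):
    data = [random.gauss(mu, var ** 0.5) for _ in range(20)]
    mu, var = fit(data)             # the next "model" sees only generated data

print(round(var, 4))                # variance has collapsed far below the true 1.0
```

The bias is tiny per step, but without re-grounding in the territory there is nothing to push the map back toward reality – which is the whole argument in miniature.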

And yet, the code it spews out without any “Intelligence” or “Understanding” involved is simply amazing, and it compiles and runs. The explanations of complex and convoluted cross-disciplinary topics are convincing and “smart”, and unless you see the whole cognitive illusion at work (for that you have, just like in math, to trace everything back to the training set and the algorithms, and rebuild the whole pipeline in your mind), you can easily mistake the rope for a snake. And this is what all the billions have been bet on.

It is not a coincidence that the ancient metaphors from the Eastern Philosophy of Mind are a perfect match – this is exactly what is “going on”: a socially constructed illusory world made out of words, which veils What Is. And “To See Things As They Are” (the most profound maxim humanity ever captured in words, if you ask me), including the very processes (actual social dynamics) which produce the oversimplified, comfortable illusions, is as relevant in the “age of AI” as it was two millennia ago. The universal principles do not change.

So, not Intelligence, but an appearance of it. Environmental Conditioning done right, with a simple but working mathematical model (one which correctly captured the “essence” of constantly updating the representation).

What is the bottom line? Well, everyone who blindly trusts a model will eventually be fucked up, or at least face a painful disillusionment, just like someone who used prayers instead of antibiotics.

By the way, I owe all the “insights” to the ancient Theravada Buddhist tradition, the early Upanishads, and J. Krishnamurti – the great fighter against social conditioning.