We are living through a paradigm shift, of the kind Thomas Kuhn described in “The Structure of Scientific Revolutions”. As I have mentioned many times, texts and even crappy code have become very, very cheap, just like processed junk food or low-effort street-food slop. This is the “shift”, and the end of so-called “knowledge work” as we know it. At least it is the end of the pretentious “knowledge work”, where one merely pretends to be an expert in social settings, using very straightforward verbal and non-verbal cues to signal one's “knowledge” and “expertise”, just as a priest would have done in the not so distant past.

Lots of people have already realized that good writing is hard, labour-intensive work, which is now being replaced by nearly zero-cost AI-generated slop. Just like pretentious writing by an impostor, AI-generated slop is often too abstract and too general, the proverbial “hand-waving” and “word salad”, which, again, goes back to doctrinal and dogmatic texts and speeches that very few people could recognize as utter make-believe bullshit.

Good writing is as hard as writing a quality mathematical proof, meaningful poetry (which actually captures something worthy in a beautiful form) or even prose – something after reading which other people would overwhelmingly say “wow”. We all know some good writers from the golden age of English fiction, or at least some very good parts of some good books.

With source code the criteria are even simpler – it has to be correct, just like any written mathematics. Correctness is exactly where these probability-based LLMs are, in principle (and by definition of the underlying algorithms), inherently bad. And this very correctness is exactly what we expect and want from them; being correct (without errors) is precisely what non-bullshit “knowledge work” is all about.
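To make the point concrete, here is a minimal, purely illustrative C sketch (my own, not taken from any model's output): a textbook binary search that compiles cleanly and passes a casual review, yet carries the classic midpoint bug that only a careful reader will notice.

```c
#include <stddef.h>

/* Returns the index of `key` in the sorted array `a` of length `n`, or -1.
   Compiles without warnings and works on every "normal" test case. */
int binary_search(const int *a, int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;  /* looks obviously right, but lo + hi
                                     overflows (undefined behaviour) once the
                                     array grows past ~2^30 elements; the
                                     correct form is lo + (hi - lo) / 2 */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}
```

The text of this function is statistically very plausible; its correctness is a separate, much harder property, and no amount of “looking right” establishes it.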

Nowadays, however, what passes for “knowledge” is just exuberant verbiage about the subject in general, almost always with some references to the “limitations of rigorous sciences” and to the difficulties (and costs) of conducting properly designed, reproducible experiments, which, just like proper mathematical proofs, are necessary, never optional.

All these modern media formats, especially pop-sci podcasts (which make millions on product placements, narrative building and plain ads) and crappy fast-print tech “books”, are just manifestations of the “knowledge work bubble”, which is now bursting, just like the dot-com bubble did in the early 2000s. This kind of “knowledge work” is not “work” at all; it is just pretentious and often meaningless verbal and non-verbal signalling, which is now being replaced by AI-generated slop.

This is the paradigm shift that is happening now, and it is not going to be reversed.

The question is, of course, not the classic “What Is To Be Done?” but rather “Now What?”

Unlike mathematics with errors, generated code with errors is “almost right” and “acceptable”, due to its near-zero cost. It even has a reasonable (and the only) use case – generating the verbose boilerplate code of crappy, badly designed OO APIs, whether archaic and legacy (like Win32) or just stupid (as in webshit frameworks). Auto-completing such verbose crap (as per “glorified autocomplete”), even with occasional minor errors, is the only reasonable use case for these LLMs, and it is not “knowledge”, let alone “thinking” or “reasoning”, but just, again, a glorified autocomplete.
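For a taste of what this verbose boilerplate looks like, here is a perfectly ordinary Win32 skeleton in C (generic textbook code, nothing model-specific): every field must be spelled out, and none of it requires a single thought.

```c
#include <windows.h>

/* The mandatory window procedure: mechanical dispatch, no insight required. */
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrev, LPSTR cmdLine, int nCmdShow)
{
    /* Register a window class: a dozen fields, all of them boilerplate. */
    WNDCLASSEX wc = {0};
    wc.cbSize        = sizeof(WNDCLASSEX);
    wc.style         = CS_HREDRAW | CS_VREDRAW;
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = TEXT("MainWindowClass");
    RegisterClassEx(&wc);

    HWND hwnd = CreateWindowEx(0, TEXT("MainWindowClass"), TEXT("Hello"),
                               WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                               640, 480, NULL, NULL, hInstance, NULL);
    ShowWindow(hwnd, nCmdShow);

    /* The canonical message loop, typed out millions of times before. */
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}
```

Auto-completing this kind of text is a perfectly legitimate job for a statistical model of text, and nothing more than that.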

Generating boilerplate code and “skeletons” of modules (or verbose, over-abstracted class hierarchies) is a very real productivity boost, measurable by all the common metrics – we just have to admit that this (and only this) is what keeps the gigantic, unprecedented AI bubble from bursting, at least for now. Using several LLMs “in parallel” to generate slop for the same prompt is even better, because it lets one see alternatives one could not have come up with on one's own, which is a very real productivity boost, too.

Go and watch the @karpathy videos on YouTube, where he explains how to use LLMs to generate code, and you will see that he is not even trying to hide the fact that this is just a glorified autocomplete, which merely creates a very convincing illusion of “thinking” and “reasoning”.

BTW, the infamous “Turing Test”, which boils down to “cannot tell whether it is a human or a machine”, turned out to be naive bullshit, suitable only for Liberal Arts majors and people who use the word “creative” way too often. It is not a test of “intelligence” at all, but just a test of a good-enough, convincing illusion (the ancient “fundamental” Maya, if you will).

The problem, however, is that the slop will be accepted and chosen over actual life-long, painfully attained expertise, built on extensive reading (which takes a lot of time), intelligent analysis and actual experience. No one needs any of this anymore, when the slop is cheap and almost useful.

I could go on about the “crappy books” industry and these narcissistic “Cal Newports” who, after coming up with a few good generalizations, keep producing streams of subpar printed verbiage, not unlike these LLMs. I can see webshit becoming even more bloated, even more verbose with its salad of unnecessary and redundant abstractions, now generated by these LLMs and accepted as “knowledge work” and “expertise”. The problem is that something far worse than webshit or “calnewportism” is already the “New Normal”.

Again, while programming syntax errors can be routinely caught by the tools, and simple semantic errors can be caught by some low-paid H1Bs or even outsourced (with kickbacks) to somewhere in the third world, the actual correctness of the code is not something that can be easily checked. Understanding cannot be outsourced, in principle. When it is, all we get is some form of disaster, be it the obesity epidemic (a chronic metabolic disorder at the cellular level) or something similar.
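As a small, generic illustration (again, my own sketch, not any particular model's output): the following compiles without complaint and sails past a hurried reviewer, yet it is wrong in a way only someone who actually understands the API will catch.

```c
#include <string.h>

/* Copy a user-supplied name into a fixed-size buffer. Looks defensive,
   passes the compiler and a superficial review. */
void copy_name(char *dst, size_t dst_size, const char *src)
{
    /* strncpy does NOT null-terminate dst when strlen(src) >= dst_size;
       every later strlen()/printf("%s") on dst then reads past the buffer.
       No tool complains here; the fix (dst[dst_size - 1] = '\0';) requires
       understanding, not pattern matching. */
    strncpy(dst, src, dst_size);
}
```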

The real problem is in the low-effort “plain texts” which people nowadays generate and send to each other without any underlying understanding. There is no way to “mechanically” catch the subtle errors and “hallucinations” which only an actual expert can spot. This ongoing accumulation of errors and hallucinations is what we call the “enshittification of knowledge itself”, which is as bad as actual “information loss”.

And yes, all these “IMO gold-medal level” announcements are just manifestations of a very sophisticated and computationally intensive text-based cognitive illusion, no more, no less.