Let’s summarize the current state of Large Language Models (LLMs) and so-called “Artificial Intelligence” (AI) as of October 2025.
They are all still just [estimated] probabilities of the next token, given the “context”.
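To be concrete about what “probabilities of the next token” means, here is a minimal sketch (assuming the Hugging Face transformers library and the small public “gpt2” checkpoint, chosen purely for convenience; any causal LM works the same way):

```python
# Minimal sketch of "next-token probabilities, given the context".
# Assumes `torch` and Hugging Face `transformers` are installed; "gpt2"
# is just a small public checkpoint used for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "The capital of France is"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The model's entire output: a probability distribution over the whole
# vocabulary for the *next* token, conditioned on the context.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}  p={p.item():.3f}")
```

That is the whole loop: pick (or sample) one token from this distribution, append it to the “context”, and repeat.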
This implies no “knowledge” or “understanding” [of any kind] whatsoever. Nothing they produce can be taken as “true”, or even “correct” or “accurate”.
All the talk about “knowledge in the weights” or “knowledge encoded within the network” is just bullshit.
This is a gigantic, hard-to-comprehend, unprecedented bubble. Just as in the Tulip Mania, people throw money at it, hoping to make more money, without any understanding of what is going on, or of whether it is even possible to make money with a “glorified autocomplete” that is always subtly wrong [and often very wrong].
There are apparent gains in “productivity” (the sheer speed at which low-effort crap can be produced and pushed out, to become other people’s problem), but no real gains in “intelligence”.
Midwits are already publishing tons of “articles” about how they produced “research papers” with LLMs, to impress other midwits and to trick normies into seeing them as “very smart and capable people”.
The process as a whole is a systematic “enshittification” of knowledge itself, and an almost vertical race to the bottom, where low-effort, pretentious verbiage is the lowest common denominator.
The “AI” hype is just “mass hysteria”, driven by the same underlying psychology of greed, fear of missing out (FOMO), and herd mentality.
The only money being made is by those who burn through trillions of other people’s (shareholders’ and “investors’”) money, and by those who sell the hype and the shovels (NVDA).
Riding such a wave of mass hysteria is a solid strategy for making money (until the bubble bursts and the actual losses are passed on to someone else), but it is not sustainable.
This is what Sam, Karpathy and literally everyone else are doing. While Sam does not know shit, Karpathy at least knows it is a fraud, but makes money by shilling and “investing”.
There is still not a single example of a coding “breakthrough”, where some exceptionally high-quality code has been produced that would stun everyone with awe. The slop is usually subpar or just plain bad.
The vast majority of the code slop is in dynamically typed languages (Python, JS), where subtle errors tend to stay undetected until some particular condition (a particular state of the system) is finally hit.
The slop written in strongly typed, compiled languages tends to simply fail to compile (for obvious reasons).
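To make the dynamic-typing failure mode concrete, here is a minimal Python sketch (the function names and the scenario are made up). The bug sails through until one particular input shows up at runtime; the equivalent code in a strongly typed, compiled language would simply be rejected by the compiler:

```python
# Hypothetical example of a subtle bug that dynamic typing lets through.
def parse_amount(raw):
    """Parse a price string like "$10.50" into a float."""
    if raw.startswith("$"):
        return float(raw[1:])
    return raw  # bug: returns the str as-is instead of float(raw)

def total(amounts):
    return sum(parse_amount(a) for a in amounts)

print(total(["$10.50", "$2.00"]))  # 12.5 -- looks fine, ships to prod
print(total(["$10.50", "2.00"]))   # TypeError -- only when this input occurs
```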
Nevertheless, life-long, slowly and painfully earned expertise has suddenly become “not required”, and even “the right understanding” is now considered (by midwits) to be “redundant”.
The economic scenario is quite obvious – an accelerating race to the bottom, to “near zero margins”, as is the case with selling junk food (fast food – the slop).
Last but not least, the actual models are being literally “lobotomized” by so-called post-training (manually, by large crowds of midwits) and by “alignment” (a.k.a. “safety”) procedures, which are just fancy terms for “censorship” and “propaganda”. In the end, this is just “repackaged internet content”.
The “uncensored”, “untuned” models – the original “glorified autocompletes”, which can produce a lot of “harmful” (to midwits and normies) content – are thus the much better ones. It is no surprise that they are being suppressed and removed from public access.
Expect an apocalyptic “crashing and burning”, because these trillions of “investments” are being made on a premise that is false in principle, and the money will never be seen again. It will take the whole “tech” industry down with it.
Expect a “nuclear winter” for the whole “tech” industry, and a “lost decade” (or more) for the whole “software development” industry. Simply because more and more bloated “webshit” (or “Electron appshit”) is not what the world needs, and “low-effort slop” is not what will generate the financial returns.
The wall is about to be hit, and hard.