There are certain philosophical “ideals” which have been identified since antiquity, and which people have been striving to attain (or at least approach) ever since.

  • To see things as they really are.
  • To do things in just the right way.
  • To find an optimum, or a “perfection”.

Perfection has famously been defined as the state in which “there is nothing more to take away”.

The modern, meme-based, socially constructed consensus frowns upon “perfectionism” and sees it as an inhibition to “getting shit done”. It is not wrong, though. Approaching perfection (finding a local optimum) is a very different process from just throwing together some slop. Yes, indeed, “perfection is the enemy of good enough”.

There are various constraints, including socio-economic ones. Once upon a time (in the late 70s and early 80s), one with an applied math background and a bit of luck could sit in an air-conditioned “high-tech” office and write some simple programs – something like 10 lines of code per day, on a good day. One had to draw algorithmic flowcharts and document everything in advance, before the code could be run on a terminal of some time-sharing system.

Even better, in the 60s, when punch cards were still in use, one had to actually think hard before “writing”, because each error would be reported much later, after an actual run through the machine, and would cost a lot in terms of machine time and the time wasted on waiting and on making a whole new deck of punch cards with the error corrected.

These constraints contributed to the “golden age of computer science”, when there was literally no way to produce low-effort crap, let alone generated slop. People actually studied the properties of their data structures and algorithms, analyzed them with mathematical methods, and identified the limits of what is theoretically possible. Everything had been solved by the late 90s, but then OOP and C++ with pthreads ruined everything.

There is a recurring pattern here, of which the current situation is just an instance. The pattern is about when perfection and all other ideals get abandoned in favor of literal slop.

The traditional craftsmanship of making Japanese foods has not disappeared with the arrival of industrial-scale fast-food technology.

Food (being the most common form of engineering) is actually a canonical example. There is always some local optimum at every level: just enough of just the right ingredients, prepared in just the right way.

Of course, sashimi is the ultimate example – nothing extra (unnecessary or redundant) has been added, and there is nothing more to take away. More complex dishes follow the same universal principle – there is a right way to do it – not because tradition says so, but because there is, indeed, nothing more to take away, and adding anything extra would be an unnecessary waste.

Traditional foods around the world are examples of an evolved, constraint-satisfaction-based process. There are particular environmental and socio-economic constraints (what is available seasonally, at what cost, and how much the people around you are willing to spend) which were eventually solved to a local optimum by a process of trial and error guided by nutrition- and cost-optimizing heuristics – people found “the perfect matches” and the “just right recipes”.
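
The dynamics are easy to state in code. Here is a deliberately toy sketch in Python (the ingredients, prices, and the nutrition heuristic are all invented for illustration) of trial and error settling into a local optimum under a cost constraint:

    # A toy model of the evolved process described above: random local
    # search over a "recipe", guided by a nutrition-maximizing heuristic
    # under a cost constraint. All numbers are made up.
    import random

    random.seed(7)

    def nutrition(r):  # the heuristic being maximized
        grain, protein, greens = r
        return grain * 1.0 + protein * 3.0 + greens * 2.0

    def cost(r):  # the socio-economic constraint
        grain, protein, greens = r
        return grain * 0.5 + protein * 4.0 + greens * 1.0

    BUDGET = 300.0               # what people around are willing to spend
    recipe = [50.0, 20.0, 50.0]  # grams of (grain, protein, greens)

    for _ in range(5000):
        # Try a small random variation; keep it only if it is affordable
        # and strictly better -- generations of cooks, compressed.
        trial = [max(0.0, g + random.uniform(-5, 5)) for g in recipe]
        if cost(trial) <= BUDGET and nutrition(trial) > nutrition(recipe):
            recipe = trial

    print([round(g) for g in recipe])  # a local optimum near the budget boundary

No one ever “designed” the result; the process just stopped improving: nothing more to take away, nothing affordable left to add.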

Amazingly, some of these “perfect matches” actually capture aspects of our underlying biology – different macro-nutrients complement each other: easy-to-break-down carbohydrates provide the “free” ATP needed to break down tougher proteins, and fiber or green leaves provide a living environment for the gut’s microbiome (about which those cooks knew nothing).

There is a pseudo-scientific arms race to find out (and sell) which foods are “optimal” for the longevity (of the rich), but that is another topic (the answer lies in analyzing and understanding some ingenious “evolved” tribal foods).

Another instance of the same pattern was the emergence of “literature” – both fiction and non-fiction printed books. When the process of actual printing (the making of a physical volume) was costly and risky, no low-effort streams of verbiage (bullshit) got published. Even religious dogmas had to be of a high standard.

These fundamental socio-economic constraints “selected” for the best talent (and filtered out mediocre impostors), giving us Hesse and Sartre, Mann and Salinger, Pirsig and Kerouac, and the very best of so-called modern classic literature.

If one looks carefully, this common pattern is literally everywhere: the degradation of ancient, highly idealized “spiritual” aspirations into mere religious dogmas, rituals, and money-extortion techniques; the degradation of contemporary art forms from exceptionally skillful craftsmanship (given the technological constraints of the time) into low-effort pretentious crap; etc., etc.

Look at so-called “modern science”, where some “statistically significant” correlation, calculated over some crappy and inevitably flawed or biased data set, constitutes the “proof” and “evidence” for absurd abstract claims (not just in the so-called humanities, but in what they call “theoretical physics”). This is exactly the same pattern.

There is, finally, a word, a term that has emerged from the modern, high-speed, and highly contagious internet culture, and which intuitively captures (generalizes) this pattern:

enshittification

Enshittification is an informal word used to criticize the degradation in the quality and experience …

‘Enshittification’ is coming for absolutely everything

This is the direct consequence of relaxing the constraints and lowering the barrier to entry (and flattening the required learning curve), so that millions of idiots flood the field with their low-effort crap, and it becomes the new normal.

OK, let’s get serious for a change.

There is a law of AI based on conditional-probability-distribution models, if you will.

Probabilistic structures (by definition and by design) capture the most-taken paths (what they have “seen” most often) instead of actually solving hard and costly, fixed-point-like constraint-satisfaction problems, so they will never, in principle, come up with anything but slop.

This is the law, which means that given such-and-such premises, this and only this conclusion inevitably (necessarily) follows.
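
To make the law concrete, here is a deliberately tiny sketch (plain frequency counting over a made-up corpus; no real LLM is this simple, but the selection principle is the same):

    # A frequency-based model can only reproduce whatever its corpus
    # says most often -- correct or not. The "corpus" is invented.
    from collections import Counter, defaultdict

    corpus = [
        "fever? drink goat blood",      # the popular "remedy", seen everywhere
        "fever? drink goat blood",
        "fever? drink goat blood",
        "fever? rest and drink water",  # the right answer, a tiny minority
    ]

    # Count continuations per prompt -- a toy conditional distribution.
    continuations = defaultdict(Counter)
    for line in corpus:
        prompt, answer = line.split("? ")
        continuations[prompt][answer] += 1

    def generate(prompt):
        # Return the most frequent continuation: the mode, not the truth.
        return continuations[prompt].most_common(1)[0][0]

    print(generate("fever"))  # -> "drink goat blood": the majority wins

The minority answer survives only as a low-probability tail; no amount of sampling from such a structure makes it the answer.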

Imagine you are living in ancient India around the 6th century AD. You would hear tantric (primitive ritualized magic) bullshit everywhere. Train any model on that milieu, and the very best model will tell you to drink goat’s blood and perform particular sacrifices to the goddess Durga (if you are close to Bengal). There is no way, in principle, that the nearly extinct (at the time) Buddhist minority narratives would have been captured, let alone gained enough “weights” to emerge in an answer to a prompt (no one would even prompt for anything other than tantric crap).

This example can easily (and properly) be generalized to everything socially constructed. The scientific method (a systematic methodology for verifying some aspects of What Is) itself emerged as the only answer to this kind of socially constructed and socially maintained bullshit.

There is another way to see it (as it is). 99% of the social media short videos selected by popularity are utter crap: they give wrong impressions, wrong cues, and wrong intuitive understanding, and most of the time they contain plain errors. You have to be a real expert to notice. Most of the videos are just copycatting – mere variations on whatever is currently popular. Most of the food video “creators” simply copy what they have seen others doing, without real understanding. They put dried spices into boiling-hot oil, do everything in the wrong order, overheat everything, and do not care how much time (and heat) it takes to cook particular ingredients (they simply shove everything in at once). I could talk for hours about how wrong it all is.

These are not mere individual instances; the underlying social dynamics are exactly the same as those observed in natural language use – bullshit and memes rise to the top, selected for being “familiar” or “traditional” or even “known for sure”. Or popular. Again, the process is exactly the same. It can be accurately and correctly re-framed in terms of frequency-based probabilities – you are much more likely to see what other people are watching (and repeating).
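
This frequency-based selection can be simulated in a few lines (a classic “rich get richer” process; the videos and view counts are invented):

    # Popularity-weighted recommendation: each view makes the next
    # recommendation of the same video more likely. Quality never
    # enters the formula.
    import random

    random.seed(42)

    # Two videos: one objectively better, but the worse one got an early lead.
    views = {"careful technique": 1, "spices into smoking oil": 3}

    for _ in range(10_000):
        video = random.choices(list(views), weights=list(views.values()))[0]
        views[video] += 1  # each recommended view reinforces the lead

    print(views)  # the early lead compounds; selection was by counts, not quality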

Let’s generalize some more. Just like modern social media, LLMs are [the means of] the enshittification of [written and visual] knowledge itself (given what they do mathematically and algorithmically). This is a direct corollary of the law stated above.

Another consequence is that the alternatives won’t be selected – a well-known problem in the field of proper, decent education, captured by the maxim of rejecting any authority and using one’s own mind to establish and realize what is real.

It gets much worse when applied to [necessarily understanding-based] programming (which is very different from mere coding without any involvement of understanding). Mechanically (well, mathematically) selecting (without understanding) what has been “seen” most often, where the result has to emerge from a fundamentally different, perfectionist process, will inevitably (with a guarantee, even) result in sub-par crap with subtle logical flaws and rarely manifesting errors – with every hallmark of low-effort crappy code. Again, this is just a direct consequence of the law above – the law of a big number of idiots.

To summarize: we have developed tools to enshittify the notion of knowledge itself by mathematically selecting the most common paths, which means the results are always suboptimal and in most cases plain wrong (being based on socially constructed bullshit, which includes traditional authority, dogmas, and “best practices”). One more time – a fundamentally different kind of process (fixed-point rather than probabilistic) is required to find an optimum. This is another law, if you will.
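
For contrast, here is what a fixed-point process looks like in miniature (the classic Babylonian square-root iteration; nothing is sampled or voted on, and the constraint itself defines the answer):

    # Iterate x -> (x + 2/x) / 2 until the value stops changing.
    # The fixed point satisfies x = (x + 2/x) / 2, i.e. x * x == 2:
    # the answer is forced by the constraint, not by popularity.
    def fixed_point(f, x, eps=1e-12):
        while abs(f(x) - x) > eps:
            x = f(x)
        return x

    sqrt2 = fixed_point(lambda x: (x + 2 / x) / 2, 1.0)
    print(sqrt2)  # ~1.4142135623730951

The stopping criterion is exactly the old definition of perfection: the process ends when there is nothing left to change.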

OK, we are done with the theory.

There are loud screams in the tech-related online communities about AI replacing programmers, upcoming mass layoffs, unemployment, and so on.

Well, businesses will definitely lay off costly, mediocre “coders” who never bother with understanding. It is perfectly reasonable to automate meaningless coding – CRUD, webshit, etc.

On the other hand, people who understand the principles and the underlying mathematics (and what that particular mathematics has been generalized from), people like me, will only see more demand and higher status, because we have spent our lifetimes attaining the proper – the right – understanding, which no probabilistic model can even capture (it could in principle, but such understanding is either not to be found in the training data or is outweighed, literally, by the related current socially constructed bullshit).

Models produce slop (by definition and by design), while we, like top-tier athletes, produce records, or at least results above anything an LLM can produce.

What is far worse is that businesses will generate and use AI slop code in production, without telling anyone, thus dramatically lowering the intrinsic quality of critical software and even putting us in direct danger.

There is only one way out – just like those rational sashimi and sushi masters, to reject the modern junk-food technology and stick to pursuing optimality and perfection, even if this is suboptimal from a “business” point of view.

There is no way to return to the careful crafting of 10 lines of code per day of the 70s, simply because the “race to the bottom” economic forces have enshittified everything (as the Financial Times stated, without actual understanding of how profound this is), but we could at least practice such perfectionist craftsmanship in a tiny obscure “shop”, barely making ends meet – which is just what happened in Japan.

Programming as we knew it has been totally ruined by modern “software engineering” (mass production), exactly as our food has been ruined by the processed-food industry. There is nothing else we can do – just reject the slop.