So-called “knowledge workers” (a meme of Cal Newport’s, who has many such memes) are going to be paid at the level of third-world construction or fast-food workers, which is a good thing, especially for so-called humanities “professors”.

Here is how.

A state-of-the-art LLM, even a miserable “30B” one (like Qwen3.5-30B), which could even be run locally, “compresses” (with loss of information, of course) all the written verbiage it has “seen” (been trained on) into a single abstract structure – which is no surprise to me, but shall be an “aha” for (You) – a directed acyclic graph, with a lot of redundancy in it.

Arguably, such a DAG is the kind of structure which neurons form, and this structure can, in principle, encode (or represent) any kind of information about the Outside World (the Universe) governed by the Causality Principle, since the Whole Universe Itself is such a DAG (here you should say “wow”). No, really, absolutely everything you see around is an edge of such a DAG of Causality.

Imagine looking at a large Banyan tree right from above. What do you see? Half of a “sphere”, or a whole “front” of leaves. But each leaf has a “connection” to the roots, and each leaf “came to be” as a continuation of a process started by the seed, and what it is “right now” is, indeed, an edge of a DAG of Causality (at all levels). Wow.

Everything else, including (You), is such a “tree” (the ancients were intuitively right with some of their symbolism).

Anyway, the evolved universal structure to represent “The Inner [Semantic] Map of The Territory [shaped by Causality itself]” is a weighted DAG, because Evolution converged on it first (by endless trial and error), and mathematicians discovered (arrived at) it, by observation and generalization, somewhere in the 90s.

So, the representation is “just right” and, arguably, even Universal (a global optimum, since it corresponds to (captures) the “actual” structure of Reality itself).

All the variations of the Curry–Howard isomorphism will, eventually, converge to this very same Universal DAG (the one and the same Elephant which all the blind philosophers are touching), which underlies everything That Is. This is why the abstract structures behind maths, logic, functional programming, electric schematics, vector algebras, and what not, look so similar. Everything is a Tree [of Life]. And a Hierarchy of Layers “under the hood”.

There is, however, a problem. One could build an arbitrary representation, depending on one’s Social Conditioning (which is, of course, what the process of “training” a model is and corresponds to). The best philosophers of the Mind, since the Upanishads, knew (captured) this process intuitively.

So, if one were to train an LLM on the corpus of Indian Tantric texts (circa 11th century AD), it would advise you to drink the blood of a freshly sacrificed goat as the best thing you could possibly do.

This is exactly what all the so-called humanities are – a current “scientific”^W sectarian consensus on socially constructed abstract bullshit, different within each particular sect.

Literally everything which is not an experimental science or rigorously verified mathematics is such a sectarian consensus in principle. The whole “Philosophy of Science” arose just to distinguish the [approximations to] “Truth” from socially constructed abstract bullshit – scientifically known versus socially “known” (yes, yes, J. Krishnamurti, the great).

Whatever these “researchers” in the humanities do in years of “research”, being paid to support an upper-middle-class lifestyle, including the fucking sabbaticals, can now be done in a day or two, provided you know what to ask for (prompt).

The life-long process of creating a less wrong “semantic map” of a chosen sub-field of the “territory” is now “compressed” into any of these top-10 LLMs – except when you are doing experimental science, maths or engineering, which are applied maths and the applied scientific method – and can be prompted out in seconds. And the price of such “knowledge” (not of the actual experience) is a couple of rice meals per day.

What I am saying here is this: every single kind of abstract pseudo-scientific bullshit (not rigorously verified, experimentally confirmed or tested by real-world conditions) can be prompted out in a few seconds and researched in a couple of days. Every single Cal Newport “book” can be prompted out in a couple of hours. Every piece of “research”.

Whatever has been stated without evidence can be dismissed without explanation. Finally. And the price of such pseudo-intellectual slop is, finally, approaching that of a greasy rice-based meal, which is the way it should be.

The actual Intellectual strives, as much as is humanly possible, To See (Understand) Things As They [Really] Are, if you ask me. No LLM is capable of such understanding in principle. The text above explains the whys.

Okay, now what? Well, who tf knows. The only comparable social dynamic was the gradual dismissal of Catholic dogmatism by so-called modern science (based on a reproducible experiment according to a falsifiable theory), but instead of a couple of hundred years this “paradigm shift” will take less than a decade.

What we do know is that so-called “knowledge work”, at least as the narcissistic Cal Newports and the millions like him are doing it, is dead for good. One can prompt out any abstract bullshit one wants in just a few seconds, and it will be on par with their “books” and their “research”.

By the way, the slop an LLM produces on vastly complex and inherently poorly understood scientific topics, such as the actual biology behind this or that observed phenomenon, is as convoluted and polluted with tightly entangled ill-defined abstractions as those ancient Tantric texts. No surprise here – the goal is not to be “Right” or correct, the goal is to have a decent living out of it.

Amazingly, the same principle applies to low-effort amateur imperative “coding” – if some imperative/OO crap compiles or runs, it now (finally) costs as much as that humanities “research”, but has way more substantial intrinsic value – Windows 11 more or less works somehow.

Since writing large, complex software projects is, indeed, an engineering discipline – applied maths and the scientific method (in the testing and validation aspects) – it cannot be automated, in principle, even by swarms of slop generators (“agents” they call them nowadays), simply because correctness and quality do not reside at the level of tokens.

Yes, a “better” code structure, better language idioms, better design patterns and even better modularity can be “captured” by training on better codebases which already have them within, but the slop as a whole will always be less than the sum of its parts in very subtle ways (it does not live up to its promises), and it is not that complicated to see why.

Again, code has its “reality checks” (the compiler, the unit and validation tests, the users’ feedback, etc.), while humanities (sectarian) babbling has none. But the slop will always be a mere cognitive illusion (by definition), since rigorous reasoning of any kind has never been applied in the process.

UPDATE:

I often get carried away, so my writing becomes sloppy for the sake of evoking the right emotion and inducing the right intuitions, and I use metaphors and hand-waving a bit too much. So, let me clarify whatever I can.

Let us try to rebuild everything quickly, right from the ground up.

What is an intuitive understanding? It is a particular gut feeling (a “signal” of the non-verbal, ancient “animal” mind) that our partial understanding, a tiny fragment of the enormous jigsaw puzzle we are trying to complete, is “just right”. Animals have such feelings about almost every aspect of their shared environment (because the brain evolved to capture the crucial environmental constraints). They do not know, but feel that something is “right”, and so do humans (sometimes).

As “external observers” we never see the whole jigsaw puzzle – our mental capacity is just extremely insufficient even to grasp it (the famous 7 plus/minus 2 chunks of information meme is our cognitive limit).

Even our visual system is incapable of processing every detail of the visual field, so the brain creates a grossly oversimplified “view” of actual reality, and “the rest of the brain” observes this oversimplified view, never Reality itself in its fullness. This notion, by the way, was realized and understood even before the Upanishads, and it is the most fundamental one. It applies to almost everything – from our very naive but good-enough-most-of-the-time (as quick-and-dirty fight-or-flight heuristics) “mental models” to the only way we could possibly program – the notion of an abstraction, which is exactly such an oversimplified view (or a model) that discards most of the seemingly irrelevant details and, hopefully, captures the “essence” (“what matters”).

Okay, we never observe the whole “jigsaw puzzle”, and even if we try our best, what we can perceive, in principle, is just a few fragments here and there, among huge gaps of emptiness (or a blur, if you will).

This is more or less what our minds do with oversimplified metaphors – we associate some intuitively understood or realized “fragments” with similar but more familiar mental constructs, and “jump” between these metaphors back and forth. Each metaphor hides (encapsulates) vast depths of irrelevant but very real complexity. Again, this is how to program (the only way).

So, the simplest single-layer (or multilayer) Perceptron (its mathematical representation) is not a “tree”, but a Directed Acyclic Graph (a DAG). A “tree” is just a metaphor that binds these abstract structures and the associated abstract concepts together.

The whole abstract structure is a single mathematical expression and thus an AST, which is a particular kind of a DAG. It is not a random coincidence that [nested] mathematical expressions form a DAG. This has something to do with the underlying Causality of the Universe, from which mathematical notions have been captured and properly generalized, abstracted out, conceptualized and named.
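For the concrete-minded, Python’s own `ast` module makes this point directly. A minimal illustrative sketch (the expression is made up; any nested arithmetic will do):

```python
import ast

# Parse a nested expression; the result is an AST, a particular kind
# of DAG: edges point from each operation down to its sub-expressions,
# and there are no cycles.
tree = ast.parse("(a + b) * (c + d)", mode="eval")

# Walking the structure visits every node; count the operation nodes
# and the leaf names.
kinds = [type(n).__name__ for n in ast.walk(tree)]
print(kinds.count("BinOp"), kinds.count("Name"))  # 3 operations, 4 leaves
```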

Yes, addition and multiplication (scaling, in our case) are both commutative, so we can compute the whole expression (which, again, is an AST, and thus a particular kind of a DAG) in any “order”, but the notion of directedness is not in how it can be computed – reduced (or evaluated) – but in how the information “flows” from inputs to outputs and never in the other direction.

The same weighted sum can be computed in different orders due to commutativity of all the operations involved.
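A minimal sketch of that claim (my toy numbers, picked so the floating-point partial sums stay exact and equality holds bit-for-bit):

```python
import random

# A single neuron's weighted sum: the node just accumulates w_i * x_i
# contributions. Addition is commutative and associative, so any
# summation order yields the same value.
weights = [0.5, -1.0, 2.0, 0.25]
inputs  = [1.0,  2.0, 0.5, 4.0]

pairs = list(zip(weights, inputs))
forward = sum(w * x for w, x in pairs)

random.shuffle(pairs)                     # a different "order"
shuffled = sum(w * x for w, x in pairs)

print(forward, shuffled)  # the same value regardless of order
```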

By the way, when we “embed” our DAG within a matrix, just like drawing it on a sheet of squared paper (zeroing out the “unused” elements), and then use vectorized operations on whole matrices, nothing changes structurally. The properties of “directedness” and of the lack of loops are preserved.

Acyclic means that there are no loops in this structure. Yes, there are joins, so a particular fragment may look like or appear to be a cycle (or a potential loop), but it is neither. There is no “going backwards or in the opposite direction”, and there is no notion of “looping or iteration” in principle. Think of a river delta (again, just another metaphor) – the water never flows backwards or loops.
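This “joins are fine, loops are not” property is mechanically checkable. A minimal sketch using Kahn’s algorithm on a made-up river-delta-shaped graph:

```python
from collections import deque

def is_dag(edges, nodes):
    """Kahn's algorithm: repeatedly remove nodes with no incoming
    edges. If every node can be removed, there are no loops."""
    indeg = {n: 0 for n in nodes}
    for _, dst in edges:
        indeg[dst] += 1
    ready = deque(n for n in nodes if indeg[n] == 0)
    removed = 0
    while ready:
        n = ready.popleft()
        removed += 1
        for src, dst in edges:
            if src == n:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    ready.append(dst)
    return removed == len(nodes)

# Two streams split at 'a' and rejoin at 'd' -- a join, not a cycle.
delta = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(is_dag(delta, "abcd"))                  # True: joins are fine
print(is_dag(delta + [("d", "a")], "abcd"))   # False: a real loop
```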

The notion of a feedback loop, which is absolutely fundamental and necessary in all biology, is Out There, but it is always at a higher level of abstraction, “implemented” by some process external with respect to this particular structure.

This is so fundamental that I have to repeat it – any kind of a feedback loop is always at another level, run by an external process, in biology and elsewhere (a process to which the structure is the “data”, if you will).

Back Propagation implements such a feedback loop – it literally modifies (updates) the weights on the so-called “backward pass”. The “forward pass” and the “inference algorithm” have no idea that it ever existed or has been performed (this is, of course, loose coupling and separation of concerns).

So, even if it is named “back[ward] propagation”, it does not imply that something “flows” backward through the structure itself, and this is how the structure remains “directed” (and “acyclic”).

Yes, the gradients (partial derivatives) are computed and the weights are systematically updated (by an external process) “backwards”, but this does not change the actual structure or “reverse the arrows”.
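To make the separation of concerns concrete, a deliberately trivial sketch (one weight, one input, plain gradient descent on a squared error – my toy, not anyone’s production training loop):

```python
def forward(w, x):
    # the pure forward pass: y = w * x; knows nothing about training
    return w * x

def train_step(w, x, target, lr=0.1):
    # the external feedback loop: compute the gradient of the squared
    # error and hand back an *updated copy* of the weight; the forward
    # pass itself is never touched, nothing flows backwards through it
    y = forward(w, x)
    grad = 2 * (y - target) * x
    return w - lr * grad

w = 0.0
for _ in range(50):
    w = train_step(w, x=2.0, target=6.0)  # learn y = 3 * x

print(round(forward(w, 2.0), 3))
```

Note that `forward` is the same function before and after training; only the “data” (the weight) was rewritten by the external process.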

There is another “view”. A multilayer Perceptron is a “fully connected” structure, which means that each node has a single connection to every node in the “next” layer (and here the “directedness” manifests itself).

Most of these connections turn out to be redundant, just as in the development (maturation) of a brain, and evolution came up with the [external] process of pruning, which removes seemingly redundant (unused) connections (literally).
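A sketch of pruning as such an external process (made-up weights and a made-up threshold): weights below the threshold are zeroed, i.e. the connections vanish, while the directed, acyclic structure stays untouched.

```python
# A tiny "fully connected" layer: 2 nodes, each wired to 3 next-layer
# nodes. Pruning is an external process operating on this as data.
threshold = 0.05
layer = [
    [0.80, -0.01, 0.30],
    [0.02,  0.60, -0.04],
]

# Zero out (remove) every connection whose magnitude is negligible.
pruned = [[w if abs(w) >= threshold else 0.0 for w in row]
          for row in layer]
kept = sum(w != 0.0 for row in pruned for w in row)
print(f"{kept} of 6 connections survive")
```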

This is the jumping back and forth between metaphors I told you about above. I cannot think in a different way; maybe you can. There is more, of course.

In any brain, electrical signals flow only in a particular direction and never “backwards”. To signal something “back” to the brain, a distinct “wire” is required. This is how (and why) one arrives at the realization that brain structures (made out of neuron cells) are also various instances of a DAG.

“No loops” is a trivial property. Mother Nature does not iterate or count (it uses something that we call thresholds instead).

A “tree” has a single trunk and lots of branches (but wait! there are lots of branches under the ground too), but again, a biological tree is a process, and the shape of such a process is a DAG. This is what the metaphor is for – to show that processes may have particular shapes, and that all the processes in the universe are directed. This is difficult to “prove”, so we will designate this fact as an “intuitive understanding”.

By the way, the “directedness” is not in Time (time is an abstraction of the mind), but in Causality (which is real). Think of an explosion (as a process) – there is no notion of Time in it, only of Causality (and of locality).

Just a little bit more jumping back and forth. Imagine, if you will, a small bug which crawls from the trunk of a tree to the tip of one of its branches (and then flies away). While the tree is, indeed, a tree, the actual path the bug takes is a linear sequence (of steps).

Yes, it could “potentially” choose a different branch at each “fork”, but only in imagination (in the mind of an external observer). In reality it will always “choose” a single one, and the other branches will become irrelevant (unless it performs a systematic backtracking search, which it doesn’t).

Yes, an external observer could be aware of the whole tree and of the bug on it (and this could be the bug itself, if it possessed introspection and abstract thinking!), but the path will be a sequence of steps nevertheless.

The [bullshit] abstract notion that there is (or would be) a whole different Universe if the bug chooses a different branch is a hallmark of sophisticated abstract bullshitting. These notions do not exist outside of the mind of an external observer and exist only as a play of imagination (yes, if you had married that one girl instead… everything would be different… but you didn’t, and that other “path” is only imaginary and hypothetical).

There is another one. The road map (and the actual tarmac roads) do exist, and are not merely potential, but you can take only one at a time (at each fork), so the other roads (on the map or in the territory) are “out there”, but once you (the bug) have taken one, some parts of the graph become irrelevant (again, assuming the bug does not backtrack, since it just cannot – not in Time but in Causality).

Why all this crap? The shape of a process defined by code in a proper programming language is a DAG. At each “conditional” (or pattern-matching expression) only one branch can be taken (notice the intentional lack of “at a time” – there is no notion of time in proper programming languages). The other branches do exist, but are hypothetical and irrelevant, as if they did not even exist. The only one who “actually observes that the other branches do exist out there” is an external observer at a [different] level of abstraction, above the evaluation process.

This is as universal and fundamental as the feedback loops above – some potential paths through the code do, indeed, exist (as a tree or a road network), but as the process evolves, only one branch can be taken (in principle), and the other branches are as irrelevant as if they were non-existent.
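A deliberately trivial sketch (in Python, for illustration only): the conditional below *contains* two branches, but a single evaluation traces one linear path through it, and the untaken branch leaves no trace at all.

```python
trace = []

def step(x):
    # two branches exist in the code (the "road map") ...
    if x > 0:
        trace.append("positive branch")
        return x
    else:
        trace.append("negative branch")
        return -x

# ... but one evaluation takes exactly one of them
step(5)
print(trace)  # only one branch appears in the trace
```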

In a fully connected NN, each connection (weight) contributes some amount (input) to each node of the next layer, and it appears that all the paths have been taken at once. Well, the water flows through all the streams of a river’s delta, but each individual water molecule takes just one fork (the lack of “at a time” again), and “the bug” cannot see the “other water” – only an external observer (from a higher level of abstraction) can.

So, “the same water flows through a NN at once in one direction (some streams are narrow, some wide)”, if you will, and there are no loops. This analogy is the best I can do. And if you want to visualize Causality itself – the shape of this process – this is the best metaphor (and a tree outside your window is the second best).

A human mind is always being confused like this: unless it has been trained in a very rigorous and systematic process of thinking (by years and years of actual practice), it will easily confuse one for the other and/or miss the underlying universal structural isomorphism. Do not worry, it took humanity god knows how long to arrive at the Curry–Howard Isomorphism, which is just the tip of the Elephant.

There is a catch. These abstracted-out structures and abstracted-out shapes of processes appear to be different, but the properties and the building blocks of the abstract structures (forms) are the same. This (and only this) is the one and the same Elephant mentioned above.

There is an intuitive understanding, or an Upanishadic seer’s realization, that the underlying structure of “every process” is one and the same (at all levels), and it is what we have captured and abstracted out as the formal notion of a DAG in the branch of mathematics called Graph Theory. Reality comes prior to maths (Kant).

At least it has been shown that any mathematical or logical expression (which can be represented as an AST) is a DAG, and thus any computation (a process of evaluating or reducing such an expression) is (has the shape and the properties of) a DAG. The 50+ years of non-bullshit FP research have demonstrated this fact.

At another level of abstraction, any function composition is (realized as) nesting (of expressions), and currying (of functions) is nesting (of function calls), which is isomorphic to composition (is structurally the same), and all morphisms (“individual arrows between dots”) are “directed” (yes, yes, we’re jumping back and forth through the gaps).
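The nesting/composition/currying correspondence in a few lines of illustrative Python (my toy functions; a proper FP language would make this even more direct):

```python
def f(x): return x + 1
def g(x): return x * 2

def compose(g_, f_):
    # "g after f": a single directed arrow built from two arrows
    return lambda x: g_(f_(x))

def curried_compose(g_):
    # currying: the same composition, one argument at a time
    return lambda f_: lambda x: g_(f_(x))

nested   = g(f(3))                   # nesting of calls
composed = compose(g, f)(3)          # composition
curried  = curried_compose(g)(f)(3)  # curried composition

print(nested, composed, curried)  # the same value, three "shapes"
```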

The “aha” here is that this is not accidental (not a random coincidence) and that it reflects the underlying universal notion of [multiple] Causality, which [alone] runs the Universe, and once you have captured something correctly, you have to have (or rather already have) correctly captured the underlying structure [too].

Now you are fully enlightened. Go ahead and post something on your favourite social media.

By the way, I wrote a whole dialect of Haskell, with a much clearer and more uniform syntax, which is, in turn, based on just (only) these universal structural building blocks – a “fork”, a “join” and a “step” (for lack of even more general terms), or Disjunction, Conjunction and Implication, if you will, or a Sum type, a Product type and an “Arrow/Step” type at a higher level of abstraction, and so on. It is on Github.