Once upon a time I found it difficult to read the lecture notes at https://www.cl.cam.ac.uk/teaching/1718/L28/materials.html because of ESL and math “fear”. Nowadays I could, probably, explain the fundamentals more clearly in just a few pages.

The “End Of Knowledge” (of seeking to Understand) can be achieved by just a few simple realizations, which means attaining one’s own “Right Understanding” through direct experience and one’s own “a-ha moments”.

Arguably, the “Upanishads” of Programming began with this book: Abstraction and Specification in Program Development by Barbara Liskov and John V. Guttag. No, a few years earlier Michael A. Jackson wrote Principles of Program Design, which emphasized the fundamental principle that one’s program structure should reflect (actually, be in one-to-one correspondence with) the problem structure (which can be defined as the inherent layered structure of complexity of the domain). But Liskov and Guttag took it to a whole new level.

What the heck is the inherent structure of [a domain’s] complexity? In 1962 Herbert A. Simon wrote a seminal paper, The Architecture of Complexity, which shared the observation that almost all of Nature’s complexity (at least in biology) has a hierarchical and layered structure (which is, indeed, “natural”) and introduced the idea of “nearly decomposable systems”.

In 1981, in his book The Sciences of the Artificial, Simon further elaborated on this idea, emphasizing that complex systems are often composed of interrelated subsystems that can, to a significant extent, be understood independently. Again, this “loose coupling” of subsystems is a fundamental principle of complexity in Nature. Tight coupling does not evolve well.

The most fundamental principle, however – the one necessary for good programming and the major underlying principle of non-bullshit software design – is that the code has to be at the same, appropriately high level as the concepts of the problem domain. That means no low-level types, no generics (they are implementation details of low-level libraries), no machine types, no low-level imperative looping and indexing constructs at the highest level of the domain’s model – only user-defined Algebraic Data Types and the corresponding mostly-functional DSLs (of associated operations), which have to be used by the “surrounding” level of domain (“business”) logic.
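To make that concrete, here is a tiny Haskell sketch (the “orders” domain and every name in it are hypothetical): the domain layer speaks only in domain concepts and their associated operations, with the machine types wrapped away underneath.

```haskell
-- A minimal, hypothetical sketch: domain concepts, not machine types,
-- at the top level of the model.
newtype Sku      = Sku String   deriving (Eq, Show)
newtype Quantity = Quantity Int deriving (Eq, Show)

data OrderLine = OrderLine Sku Quantity deriving (Show)

data Order
  = EmptyOrder
  | PlacedOrder [OrderLine]
  deriving (Show)

-- The "DSL" of associated operations: no indexing, no mutation,
-- just functions over the domain's algebraic types.
addLine :: OrderLine -> Order -> Order
addLine l EmptyOrder       = PlacedOrder [l]
addLine l (PlacedOrder ls) = PlacedOrder (l : ls)

totalItems :: Order -> Int
totalItems EmptyOrder       = 0
totalItems (PlacedOrder ls) = sum [ n | OrderLine _ (Quantity n) <- ls ]
```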

The so-called Clean Architecture (or that other, Hexagonal one) arises “naturally” from the discipline of properly partitioning the code into layers of DSLs (vertically) and modules – one per domain concept – (horizontally), separating them by clear, non-leaking abstraction barriers, which is exactly what a cell membrane with all its “pumps” and “portals” (or, at a higher level, an organ’s outer layer) is. The concept of a “port” in Scheme, SML or Erlang has been loosely modeled on biology by the MIT guys, who have lots of top-tier biologists around.
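A minimal sketch of such an abstraction barrier in Haskell (the Inventory module and its operations are made up for illustration): only the abstract type and its “ports” are exported; the representation stays behind the membrane.

```haskell
-- Only the abstract type and its "ports" cross the membrane;
-- the constructor and the representation stay private.
module Inventory
  ( Inventory     -- abstract: the constructor is NOT exported
  , empty
  , addItem
  , countOf
  ) where

import qualified Data.Map.Strict as Map

newtype Inventory = Inventory (Map.Map String Int)

empty :: Inventory
empty = Inventory Map.empty

addItem :: String -> Int -> Inventory -> Inventory
addItem sku n (Inventory m) = Inventory (Map.insertWith (+) sku n m)

countOf :: String -> Inventory -> Int
countOf sku (Inventory m) = Map.findWithDefault 0 sku m
```

A caller can only go through empty, addItem and countOf, so the representation can be swapped without anything leaking.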

By knowing just this much you are already ahead of 90% of “programmers” on YouTube, because these are fundamental principles, while they are bragging about knowing particulars (of some fucking C++ lmao).

One more required paper (I could not easily find Howard’s original paper, so I link this instead – The Curry-Howard correspondence today by Xavier Leroy).

The Curry-Howard correspondence (also known as the Curry-Howard isomorphism) shows that the logical connectives and quantifiers of logic correspond to type constructors in classic FP languages, and that logical proofs correspond to computer programs.

But the real “a-ha moment” is that they have the same [algebraic] structure, and this is not a random coincidence – it is a direct consequence of the fact that there is a universal underlying structure Out There.

When one “chops” a directed acyclic graph (DAG) into its most basic building blocks, one finds “forks, joins and steps”, and these building blocks have the same algebraic structure as logical connectives (AND, OR, IMPLIES, NOT, FOR ALL, THERE EXISTS, etc.) and type constructors (product types, sum types, function types, etc.).
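Here is that correspondence spelled out as a small Haskell sketch – each connective becomes a type constructor, and each (constructive) proof becomes a program inhabiting the corresponding type.

```haskell
-- The correspondence in miniature:
--   AND     ~ product type   (a, b)
--   OR      ~ sum type       Either a b
--   IMPLIES ~ function type  a -> b

-- Proof of (A AND B) -> A  (projection)
fstProof :: (a, b) -> a
fstProof (a, _) = a

-- Proof of A -> (A OR B)   (injection)
inl :: a -> Either a b
inl = Left

-- Proof of ((A -> B) AND (B -> C)) -> (A -> C)  (transitivity / composition)
compose :: (a -> b, b -> c) -> (a -> c)
compose (f, g) = g . f
```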

Here you have to say “a-ha” again.

There is some more. The classic algorithmic charting techniques of the 70s, which can express every possible algorithm (as proven by the famous Structured Program Theorem), have the same algebraic structure as well. So do the EE schematics of digital circuits.

Okay, now what? Well, it seems that The Way to Program, which is just another name for the Way of Managing Complexity, is to use mathematical models, which consist of a highly syntactically sugared Simply Typed Lambda Calculus, augmented with Algebraic Types and extended with some primitive (machine) types.

And indeed, back in the late 90s, programming had already been “solved” by the ML family of languages (Standard ML, OCaml, F#, etc.), by Miranda and then by Haskell. And yes, Lists, Trees and Tables were enough (as Abstract Data Types) to express any common structure of complexity in almost any domain.
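For illustration, a sketch of those three workhorses as algebraic types in Haskell (the Table is naively an association list here, just to show the shape):

```haskell
-- The three workhorse structures as algebraic types.
data List a = Nil  | Cons a (List a)
data Tree a = Leaf | Node (Tree a) a (Tree a)

type Table k v = [(k, v)]   -- lookup by key; a real table would hide this

sizeT :: Tree a -> Int
sizeT Leaf         = 0
sizeT (Node l _ r) = 1 + sizeT l + sizeT r
```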

Have you noticed that your neurons also have a layered structure and that axons and dendrites form DAGs? The computational graphs of modern Deep Learning frameworks (TensorFlow, PyTorch, JAX, etc.) also have the same algebraic structure (of a DAG), and the composable “pieces” are structurally the same as logical connectives and algebraic type-constructors.

Now you should say “uhhh”.

So, it seems that if I package the fundamental [nested and parameterized] Algebraic Data Types inside classic [non-leaking] Abstract Data Types (ADTs), as per Barbara Liskov, and the resulting abstract structure of the sum-, product- and function-types (and the type-classes they “naturally” form) has a one-to-one correspondence with the inherent structure of complexity of the domain (as per Michael Jackson or, nowadays, Scott Wlaschin), then I can express any algorithm (as nested functions over these ADTs) that solves any problem in that domain.
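A tiny Haskell sketch of that packaging (the EmailAddress example is hypothetical): the algebraic representation stays private, and the exported interface is the specification.

```haskell
-- The algebraic representation is hidden behind an abstract interface;
-- only operations that preserve the domain's invariants are exported.
module EmailAddress (EmailAddress, mkEmail, render) where

newtype EmailAddress = EmailAddress String   -- private representation

-- "Abstraction by specification": the only way in checks the invariant.
mkEmail :: String -> Maybe EmailAddress
mkEmail s
  | '@' `elem` s = Just (EmailAddress s)
  | otherwise    = Nothing

render :: EmailAddress -> String
render (EmailAddress s) = s
```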

If I achieved a “perfect” structural match, I could say not just that I have achieved “Right Understanding” of that domain and that my program is a “Right Program” for it, but that I have approached a local optimum, or a structural “fixed point”.

This shows, among other things, that such “fixed points” do exist and are actually (not just theoretically) attainable, with only such means as function composition, algebraic data types, nesting and layering (the hierarchy, if properly captured, will emerge naturally), by applying the principles of abstraction by parameterization and abstraction by specification (to get proper ADTs).

Have you ever noticed that one cannot tell whether a function has n arguments or a single n-tuple as its sole argument? A curried function is the “ultimate join”, and it captures Modus Ponens (Causality itself) as the universal notion of “Necessary And Sufficient”. A pair (or an n-tuple) is the product-type. All the “slots” of an enzyme have to be filled, just as all the necessary and sufficient conditions have to be met.

It is not a random coincidence that the type of a curried function and the type of its tupled counterpart are isomorphic to each other – both are “a set of arrows coming in” (the ordering is irrelevant).
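A small Haskell illustration: curry and uncurry witness that isomorphism, so the two shapes carry exactly the same information.

```haskell
-- One function, two equivalent shapes.
plusT :: (Int, Int) -> Int
plusT (x, y) = x + y

plusC :: Int -> Int -> Int
plusC = curry plusT          -- (Int, Int) -> Int   ~   Int -> Int -> Int

backAgain :: (Int, Int) -> Int
backAgain = uncurry plusC    -- round trip: behaves exactly like plusT
```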

The short-circuiting if (or a cond) is, of course, the “ultimate fork”, and it captures the notion of Potentiality itself, and the “proper” notion of an exclusive OR (“or both” happens only in pure mathematics). Only one “path” (or outgoing arrow) can be taken (notice the absence of “at a time”).
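In Haskell the same fork shows up as a case expression over a sum type (the Payment type here is made up): exactly one branch is ever taken and evaluated.

```haskell
-- The "ultimate fork": one tag, one branch, nothing else evaluated.
data Payment = Cash Double | Card String Double

describe :: Payment -> String
describe p = case p of
  Cash amount     -> "cash: " ++ show amount
  Card number amt -> "card " ++ number ++ ": " ++ show amt
```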

What about loops? There aren’t any. Recursion is a “spiral”, not a “cycle” (there are even two of them – nested “calls” (chained “arrows”) all the way down to the base case, and then the actual unfolding of “returns”). Look, ma, no loops. If your language has imperative stateful mutating loops instead…
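The classic sketch of that spiral, in Haskell – calls wind down to the base case, then the returns unwind:

```haskell
-- No mutable loop counter anywhere: wind down, then unwind.
factorial :: Integer -> Integer
factorial 0 = 1                       -- base case
factorial n = n * factorial (n - 1)   -- the "spiral"

-- factorial 3
--   = 3 * factorial 2
--   = 3 * (2 * factorial 1)
--   = 3 * (2 * (1 * factorial 0))
--   = 3 * (2 * (1 * 1)) = 6
```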

Have I told you that graph-reduction with the same term-rewriting rules (based on substitution of an equal for an equal), just like human calculators did with a pen and paper, is the ultimate evaluation strategy of “lazy” pure-functional languages? The Lambda Calculus again (math and logic are also “by need”). SPJ wrote a whole book (The Implementation of Functional Programming Languages) about this discovery.
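A back-of-the-envelope sketch of that evaluation-as-rewriting (the idea only, not the actual graph-reduction machinery):

```haskell
-- Each step substitutes an equal for an equal; under call-by-need the
-- shared argument is evaluated at most once, when first demanded.
double :: Int -> Int
double x = x + x

-- double (3 + 1)
--   = let t = 3 + 1 in t + t   -- the argument is shared, not copied
--   = let t = 4     in t + t   -- demanded once, evaluated once
--   = 8
```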

Maybe this is the right time to say “Om” or something.

This is the understanding the classic FP guys probably had at the peak of the Golden Age of Programming, which culminated in Haskell 2010. This is what that “advanced” Cambridge course is about, if you skip all the very cool but mostly irrelevant (compared to this) derivations.

Now you have some realizations, which lift you above some 95% of the population.

Now what?

Well, programming can suddenly be fun and simple again. You just model your domain with sets, logic and relations (as functions) and then translate it into abstract data types which encapsulate the algebraic data types that match the structure of the problem domain.
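For example (a toy, hypothetical domain): a finite set becomes a sum type, a predicate becomes a Bool-valued function, and a partial relation becomes a Maybe-valued function.

```haskell
-- Sets, logic and relations translated into types and functions.
data Colour = Red | Green | Blue   -- a finite set becomes a sum type

-- a predicate (logic) becomes a Bool-valued function
isWarm :: Colour -> Bool
isWarm Red = True
isWarm _   = False

-- a partial relation becomes a Maybe-valued function
opposite :: Colour -> Maybe Colour
opposite Red   = Just Green
opposite Green = Just Red
opposite Blue  = Nothing
```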

Then you can write it down in any decent language – Haskell, OCaml, Scala 3 or even Rust (which is not even mostly-functional but mimics some aspects of FP and can, in principle, have ad-hoc referential transparency by avoiding mut and &mut, which is what your domain layer absolutely should do anyway).

One more time: the particular programming (implementation) languages are almost irrelevant, although some are way better designed (by mathematicians, and you can tell!) than others. As long as you have enough support for the classic algebraic structures in your language (which Rust is struggling to get), everything will “naturally” fall into place.

Again, this is not just an opinion, man, there is a “path” all the way from the universal notions of Causality and Potentiality (which can be, in principle, captured by a DAG) to your petty domain, be it yet another shopping site or a crapto trading bot.

There is no other structure Out There. It is all that one might ever find, just as when you see all possible permutations of something much simpler.

To be fair, there is some more structure. There is nesting (as in cells), and there are abstraction barriers with “pumps” and “ports” (which are just “lambdas” of an exported (public) Abstract Interface associated with an ADT). There are the notions of a “physical locality” and of a “shared environment, with nested frames” (which is what is required even to evaluate some expressions of the lambda calculus by pure substitution on a piece of paper).

All this, however, has been captured and intuitively understood by the MIT guys who designed the universal machine – the Metacircular Evaluator for Scheme, which back then was a clean implementation of the strongly typed Lambda Calculus.

Here I probably should bootstrap all the fundamental algebraic structures from the Lambda Calculus (application, abstraction, binding) all the way up to Monads, which just capture a Monoid (a particular way to compose) of Endofunctors, but that is rather tedious and has been done by much better qualified people.
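Just to show the flavour of the first steps of that bootstrap, a Haskell sketch of booleans and pairs built out of nothing but lambdas (Church encodings):

```haskell
-- Booleans and pairs from lambdas alone.
true, false :: a -> a -> a
true  t _ = t
false _ f = f

-- "if" is just application of the boolean to the two branches
ifC :: (a -> a -> a) -> a -> a -> a
ifC b t e = b t e

-- a pair is a function awaiting a selector; "fst" is application of const
pairC :: a -> b -> ((a -> b -> c) -> c)
pairC x y k = k x y

fstC :: ((a -> b -> a) -> r) -> r
fstC p = p (\x _ -> x)
```

ifC true "yes" "no" evaluates to "yes", and fstC (pairC 1 "one") to 1 – application is all there is.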

The meme I will give you is this:

, | -> = ()

This is all you need.

For convenience it is good to have Currying (which is just nesting, which, in turn, is how composition is implemented) – and it has been discovered, not invented – and type-parameters and thus type-constructors, as well as data-constructors as “type-tags” for structural pattern-matching.

It is also good to have partial functions as a “sum” of individual clauses (arrows), as SML and Haskell have. Everything has already been discovered more than once, and there is a reason why.
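One last small Haskell sketch (the Shape type is hypothetical): data constructors as “type-tags”, and a function defined as a “sum” of clauses, one arrow per tag.

```haskell
-- Constructors tag the alternatives; the function is a sum of clauses.
data Shape
  = Circle Double
  | Rect   Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```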