Once one has been exposed to wrong concepts or just bullshit, it is very difficult to unlearn what is “already known” and to see things as they really are.

Especially when a necessary abstraction barrier is neglected or not even well understood, and implementation aspects are mixed arbitrarily with the definition of the abstraction itself, and with parts of a supposedly abstract interface, which has to form (establish) the abstraction barrier.

To have a proper understanding we have to clearly separate the abstraction, its implementation, and the abstract interface from each other. This is the required (necessary) way of thinking.

Abstraction is (has to be defined in) just simple mathematics – sets and relations (equivalence, equality, functions). An interface, which is an abstraction barrier, is just names (type-signatures) and corresponding formal specifications (sets of rules). An implementation is a Georges’s problem (just kidding).

So, let’s begin with abstractions.

“Let’s just compose values based on traits (type-classes) so that all the operators (combinators) form Monoids.” Each such set of operators is an embedded DSL. Let’s have layers of these.

Let’s have streams (of “packets”) everywhere and compose them the way electrical engineering schematics do, but using arrows-between-dots diagrams from Category theory.

Our stream processing will be just like electronic devices, except that ours are “discrete”, not “analog” under the hood. Whew, half of programming solved.

Here is how. There are some “universal patterns” out there. The most fundamental one is Something | Nothing (look, ma, a sum-type!), where “Nothing” can be precisely defined as the identity of addition: \(\forall x,\ x + (-x) = 0\).

This abstract notion generalizes to a “collection of at most one element”. When it is empty, it has Nothing in it. The empty set, considered only as a unique singleton marker, usually denotes Nothing in a more general (the most abstract) formalism (no infinite nesting nonsense, please). The cardinality of an empty set is 0, not 1.

Similarly, the size of an empty collection (with Nothing in it) is zero. So is the length of an empty list. This is, of course, an identity of addition and an identity element for any combinator.

The “size” of a non-empty “collection” is \(1\). Just as it is with sets, it does not matter what kind of element it is. It matters only for programmers who deal with implementations and representations.
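A minimal Haskell sketch of the “collection of at most one element”; the names Opt/None/One are hypothetical, standing in for the standard Maybe/Nothing/Just:

```haskell
-- Hypothetical names; the standard Haskell type is Maybe, with Nothing/Just.
data Opt a = None | One a

-- The "size" is 0 or 1, regardless of what kind of element it holds.
size :: Opt a -> Int
size None    = 0
size (One _) = 1
```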

So far so good. Here is another universal pattern – Completed | Failed, or Success | Failure, which is NOT just an “error”, but a pair of classes of possible outcomes, one of which requires backtracking and a restart (with or without adjustment).

When we succeed we usually have something, so we parameterize it (yes, the Abstraction by Parameterization principle).

There is even Left | Right – human-centric notions, also “naturally” parameterized.
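Both patterns, parameterized, can be sketched in Haskell; Outcome/Failure/Success are hypothetical names for what the standard library calls Either e a = Left e | Right a:

```haskell
-- Hypothetical names; the standard Haskell type is Either e a = Left e | Right a.
data Outcome e a = Failure e | Success a

-- Only one of the two classes of outcomes can occur at a time.
describe :: Outcome String Int -> String
describe (Failure msg) = "failed: " ++ msg  -- requires backtracking and a restart
describe (Success n)   = "got " ++ show n
```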

The most crucial thing about sum-types (disjoint unions) is that the shape (“length”) of a sum-type defines the shape of the corresponding matching expression – it defines the number of distinct clauses, each of which is a separate “path”, or most abstractly – a distinct “outgoing arrow”.

The second most crucial realization is that values of the same kind (not just of the same shape) can be composed using combinators which conceptually preserve the shape.

Notice that only one of the two possible pathways can be taken at a time – “can only fall into a single hole” (again, a Universal Notion), so the resulting path (sub-graph) will always be a sequence (*->*->*).

At the level of a programming language (formal semantics) a whole multi-clause matching expression will be substituted with a single clause (one of many potential outcomes).

One more realization – when these operations are nested (composed) the “path” (a sequence of arrows) or an outcome (a “tag” on a value) has already been caused (happened), so it literally cannot be changed, only propagated (moved or passed along).

This is how to do applied mathematical modeling properly – always to trace everything back to What Is.

So, an “introduction” (a sum-type) is isomorphic to an elimination (a matching expression), the number of possible clauses is the number of possible branches, but only one of them will be actually taken (substituted and evaluated), so the resulting “path” will always be a sequence of steps (no forks). Yes, a Universal Notion.
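A sketch of that introduction/elimination correspondence in Haskell: two constructors, hence exactly two clauses, of which only one is ever substituted and evaluated:

```haskell
-- Two constructors (introductions), hence exactly two clauses (eliminations).
step :: Either String Int -> Int
step e = case e of
  Left _  -> 0      -- only one of these clauses is ever taken,
  Right n -> n + 1  -- so the evaluated path is a plain sequence

-- The Prelude packages this eliminator as `either`:
step' :: Either String Int -> Int
step' = either (const 0) (+ 1)
```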

One more time – there is no “railway” out there. There are only potentialities, possible outcomes. The railway metaphor is wrong (not two tracks, not of the same length, and “parallel” only in the sense of having “nothing in common”).

Nesting is how a composition of functions is defined (and implemented). Functions under composition form a Monoid. So should all the combinators of a particular sum-type.
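The function-composition Monoid is available in the Haskell standard library as Endo (a sketch; the particular functions are arbitrary):

```haskell
import Data.Monoid (Endo (..))

-- Functions a -> a under (.) form a Monoid, with `id` as the identity element;
-- the standard library packages this as Endo.
pipeline :: Int -> Int
pipeline = appEndo (Endo (+ 1) <> Endo (* 2))  -- i.e. (+ 1) . (* 2)
```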

These are the underlying, properly generalized abstract notions behind any kind of stream-processing DSL – the best we can do in principle (this is an old result). The actual state-machines are implementation details and are below this level of abstraction.

Notice that any implementation of a Monoid or a Monad (as corresponding type-classes or traits) is not even there (at this level). We are using the abstract mathematical notions “themselves”, not their realizations or implementations in terms of some programming language.

Mentioning them (or any aspect of them) is just an error – a violation of an abstraction. A conceptual segfault.

Let’s talk about operators (still mathematics). They can be of two distinct forms (shapes).

  • \(\cdot \rightarrow \cdot\) (a transformation, a single “step”)
  • \(\cdot \rightarrow \cdot \rightarrow \cdot\) (a binary operator – combines two into one)

where the second form is a curried function (which can be partially applied).
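A minimal sketch of currying and partial application (the names add/increment are arbitrary):

```haskell
-- A binary operator is a curried function: applying it to a single
-- argument yields a unary "step".
add :: Int -> Int -> Int
add x y = x + y

increment :: Int -> Int
increment = add 1  -- partial application
```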

Everything composes perfectly, as in arithmetic, as long as there are Monoids.

Binary operators themselves, however, do not compose with (.) (cannot be nested).

So we need to invent “Kleisli arrows” (\(a \rightarrow m\,a\)) and then compose these using flatten or join (a dirty hack). The proper way is (<=<).
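A sketch of Kleisli composition with (<=<) for m = Maybe (halve is an arbitrary toy step):

```haskell
import Control.Monad ((<=<))

-- A Kleisli arrow a -> m a, here with m = Maybe:
halve :: Int -> Maybe Int
halve n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- Kleisli composition: the whole pipeline yields Nothing
-- as soon as any single step fails.
quarter :: Int -> Maybe Int
quarter = halve <=< halve
```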

You can think of it as “arrows upwards”, so the whole thing is as artificial as complex numbers. The same general technique is being used.

This is because a Kleisli arrow itself is a generalization of crossing an abstraction barrier, so to speak. Making a complex number from a real number is crossing a mathematical abstraction barrier, literally adding an imaginary Y-axis (changing the structure).

Behind (above) that abstraction barrier there are some combinators too, but we cannot use them directly from below. First we have to lift our values up there – pass them through an abstract cell-membrane.

Another metaphor for lifting is to attach some abstract “molecule” to a value (another “tag”), so it becomes (a part of) something else and ceases to be what it was before. Wrapping is the right word, because the value itself is “unchanged” (remains intact).
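Wrapping and lifting can be sketched with Maybe (the names wrapped/bumped are arbitrary):

```haskell
-- Wrapping leaves the value intact; it now lives behind the barrier.
wrapped :: Maybe Int
wrapped = pure 42  -- the same as Just 42

-- Ordinary functions must themselves be lifted to reach it:
bumped :: Maybe Int
bumped = fmap (+ 1) wrapped
```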

This is an important distinction – becoming a part of something else (of a compound) versus being transformed or broken apart – these are very different classes of outcomes (and they require incomparable amounts of energy).

No such notions remain in pure mathematics, of course. The lifted value is a member of a different set, so the equality relation is no longer defined, and it does not necessarily remain the same within an aggregate (a tuple or a set).

Notice how “physical abstraction barriers” are very real and impenetrable. So should be ours (which are merely proper generalizations from What Is).

So, this is how we do branching (on a flow chart) properly. We do sequencing with nesting of function calls (function composition), and we use recursion for looping.

Recursion is much more general than merely imperative loops; it is a generalization of a spiral-shaped process with “unfolding till a base-case”. Mutual recursion is the way to do state-machines.
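A toy sketch of a state-machine via mutual recursion: each state is a function, each transition a call. This hypothetical machine accepts strings of alternating 'a's and 'b's that start with 'a':

```haskell
-- Each state is a function; each transition is a (tail) call.
stateA, stateB :: String -> Bool
stateA []         = True
stateA ('a' : xs) = stateB xs
stateA _          = False

stateB []         = True
stateB ('b' : xs) = stateA xs
stateB _          = False
```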

We will talk about Monads next.