Have you ever wondered why Set Theory and Predicate Logic look “the same” when visualized using Venn and Euler diagrams?

Are these partitions the most fundamental abstract building blocks?

Most of the examples used to explain logic are drawn from “natural categories” of biological species: mammals, reptiles, humans.

These are indeed distinct partitions, but how did they come to be as they are?

It is because somewhere in the past a literal “fork”, a mutation (or a whole set of them), occurred (and the resulting population survived).

Schematically, it was a “split” event, denoted as \(<\). Or a “fork”.

After such a “split” or “fork”, two distinct “partitions” emerge: one that has the new distinct property, and another that does not.
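Such a “split” can be sketched in a few lines of TypeScript. This is a toy illustration, not anything from the text itself: a hypothetical population of numbers and an arbitrary predicate standing in for the “new distinct property”.

```typescript
// A "split" event: one predicate partitions a population into two
// distinct, non-overlapping partitions.
const population = [1, 2, 3, 4, 5];
const hasProperty = (n: number): boolean => n % 2 === 0; // the "new property"

const withProperty = population.filter(hasProperty);
const withoutProperty = population.filter((n) => !hasProperty(n));
// withProperty and withoutProperty are disjoint and together cover everything
```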

At all levels, from DNA to whole species, the same reasoning (a universal law) applies.

Notice that this process has a direction, and a notion of something being prior to something else: an implicit ordering.

When we reduce everything to a bunch of dots and arrows (connected into directed graphs), we are able to see Causality Itself at work.

This is why we see all the dots and arrows in the most fundamental mathematical and logical formalisms.

So, “logic works” because (and only because) Causality is out there; it has its “patterns” and “directionality” (which one may mistake for the abstract notion of Time).

Function composition, which is one of the most fundamental building blocks of the Lambda Calculus (an implicit Environment being another), has been “implemented as nesting” to capture the notion of directionality, and is depicted as two joined arrows: the fundamental building block.
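This “implemented as nesting” claim is easy to make concrete. A minimal TypeScript sketch (the names `compose`, `inc` and `double` are illustrative):

```typescript
// Composition "implemented as nesting": applying g first and f after
// is literally one application nested inside the other.
const compose =
  <A, B, C>(f: (b: B) => C, g: (a: A) => B) =>
  (x: A): C =>
    f(g(x)); // the nesting encodes the direction: g happens before f

const inc = (n: number): number => n + 1;
const double = (n: number): number => n * 2;

// Two joined arrows: A -> B -> C
const incThenDouble = compose(double, inc);
incThenDouble(3); // → 8: inc yields 4, then double yields 8
```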

This also signifies the fact that no “fork” occurred there.

All of logic and mathematics could be unfolded back into these “universal patterns of Causality” – “steps”, “forks” and “joins”.

Joins are the captured notion of “necessary and sufficient”, and currying is a perfectly captured “implementation”: all slots have to be filled in, and the intermediate state, a “partially applied lambda”, is out there.
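A tiny curried sketch of a “join”, assuming nothing beyond the text (the name `add` is illustrative): the result requires every input, and the partially applied lambda is a real, first-class intermediate state.

```typescript
// A "join": the result needs all of its inputs.
const add = (a: number) => (b: number): number => a + b;

const partiallyApplied = add(2); // one slot still open: an intermediate state
const result = partiallyApplied(3); // all slots filled: the join completes
```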

Forks are all imaginary and hypothetical (it is always either one way or another, as if multiple ways did not actually exist), but the results of the events are real – the resulting “partitions” are distinct and observable.

This is an important notion. Every water molecule in a stream seems to have an innumerable branching factor in an imaginary tree of potential “forks”, yet it follows its unique “linear” path without a single fork.

The principle is this: in order to be valid and sound, logic, math and machine-learning algorithms have to be applied to actual patterns – properly captured and generalized data.

Just as addition is properly abstracted away from what is being added (put together), some rules of logic are properly captured abstractions (formed by the same process within the Mind of an External Observer).

It is not arbitrary that the notion of a Natural Number can be defined as the cardinality of a Set. This is a proper capture (of the unary number system of scratches on a wall).
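The “scratches on a wall” picture can be made literal, as a toy sketch: the number is nothing but how many distinct marks the set holds.

```typescript
// A natural number as the cardinality of a set, in the spirit of a
// unary number system: the number *is* the count of distinct marks.
const scratches = new Set(["|", "||", "|||"]); // three distinct marks
const three = scratches.size; // cardinality
```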

Sets capture the universal notion of partitioning (as the results of “forks”) which the Mind observes (and which is out there prior to it).

The whole Tree of Life has its structure due to what we call the causality principle, which, in turn, is due to the laws of the Universe (at this particular locality).

Logic, math and now “deep learning” are “results” (proper generalizations of some aspects) of these processes – observing the same “mountain” (the Universe) from different distances and points of view.

No fundamental “correspondences” or “isomorphisms” (at the level of dots and arrows) are arbitrary.

The notions of the premises, of the shared environment, of \(\Gamma\), of a global namespace (and its nested frames) capture the fact of “prior results”: stable intermediate results, necessary for new transitions (steps).
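A closure is the most ordinary place to see this. A minimal sketch (the name `makeCounter` is illustrative): the environment holds a stable prior result that every new step depends on.

```typescript
// A closure's environment playing the role of the shared context:
// a store of stable "prior results" that new steps (transitions) build on.
const makeCounter = () => {
  let count = 0; // a prior result living in the enclosing environment
  return (): number => {
    count += 1; // each step depends on what came before
    return count;
  };
};

const step = makeCounter();
```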

The point is to realize why these correspondences exist in the first place, and then to use these fundamental building blocks “just right” in our programs.

All the fundamental notions have already been captured (but not yet properly systematized).

  • Dots and arrows at the level of closures
  • Sets of values as the simplest types (partitions)
  • Sets of interfaces (which are individual type signatures)
  • Algebraic types (\(|\), \(,\), \(\rightarrow\))
  • Set Union at the type level
  • Set Intersection at the type level
  • A Subset relation (as sub-classing)
  • Concatenation as a composition of Traits
  • Haskell type-classes as a required subset at the interface level
  • And the Environments in which everything happens
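Several items from this list happen to be directly expressible in TypeScript’s type system. A minimal sketch, with all names (`Shape`, `area`, etc.) purely illustrative:

```typescript
// Sets of values as the simplest types (partitions)
type Answer = "yes" | "no";

// Algebraic types: sum (|), product (,) and function (->)
type Shape =
  | { kind: "circle"; radius: number } // one branch of a past "fork"...
  | { kind: "square"; side: number };  // ...or the other

// Set Union and Set Intersection at the type level
type A = { a: number };
type B = { b: number };
type Either = A | B; // a member of at least one partition
type Both = A & B;   // a member of both partitions at once

// A Subset relation as sub-typing: every Person is also Named
interface Named { name: string }
interface Person extends Named { age: number }

// An arrow that works by inspecting which partition its argument is in
const area = (s: Shape): number =>
  s.kind === "circle" ? Math.PI * s.radius ** 2 : s.side ** 2;
```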

The same building blocks necessarily underlie any form of logic, and this is why the Curry-Howard correspondence exists.

Last but not least, all of it can be imagined as a directed graph with retrospective “forks” and “joins”, which are not “loops”.

Iteration is a spiral, not a loop, just as recursive processes are.
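The spiral image can be sketched with an accumulating recursion (the name `sumTo` is illustrative): each step carries new state forward, so the process never returns to the same point, it only looks like a loop.

```typescript
// Iteration as a "spiral": the accumulator changes on every turn,
// so no state is ever revisited; the recursion only resembles a loop.
const sumTo = (n: number, acc: number = 0): number =>
  n === 0 ? acc : sumTo(n - 1, acc + n);

sumTo(4); // → 10: the spiral passes through 4, 7, 9, 10
```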