Just as Max Cohen stated in “Pi”, there are patterns everywhere. Not, however, because of mathematics (which he called “the language of the Universe”), but – and this is crucial – prior to it.

Placing mathematics “after” Reality (What Is), not “prior” to it (as the “esoterics”, or rather plain idiots, would have it), is the only proper philosophy – that of the ancient East – and it puts everything into its right place.

What is mathematics, really? It is a “set” (a bunch) of observed (by the Mind), captured and properly generalized abstract notions – artificial abstractions of the mind (of an external observer) – along with derived and/or discovered properties of these abstractions, including the notion of a Set itself, which is (a proper generalization of) how the Mind (of an observer) categorizes the “stuff” it observes.

The question of the “existence” of mathematical “objects” can be answered in exactly this way: they are properly generalized abstract notions – products of the mind of an external observer. The observed phenomena from which this or that abstraction, including Natural Numbers and Sets, has been generalized do, of course, “objectively” exist – a different kind of existence than that of the derived (properly captured and then generalized) abstractions.

It is different from the “existence” of gods, which “exist” as ephemeral abstractions within shared cultures based on a spoken language (the orally transmitted traditions of the ancients) and on a writing system (the early written-text-based traditions and cultures), just as mathematics itself (as a subject) exists within a shared language-based culture, mostly in written-down form.

Mathematical abstractions are by no means ephemeral, which is what I mean by “properly captured and generalized” ones. The underlying phenomena actually do exist.

So, let's observe some common, recurring patterns within these underlying phenomena (existence) around us and try, relying only on intuitions, just as the ancient Upanishadic and Buddhist thinkers did, to establish some principles that presumably underlie them.

The common structural patterns, of course, are:

  • sequences
  • tree-like structures
  • lookup tables
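These three patterns can be sketched directly in code (a minimal illustration in Python; the example data and names are mine):

```python
# Three recurring structural patterns, as plain Python data.

# A sequence: an ordered chain of elements.
sequence = [1, 2, 3, 4]

# A lookup table: keys mapped to values.
lookup = {"gene": "protein", "codon": "amino acid"}

# A tree-like structure: nested (label, children) nodes.
tree = ("root", [("left", []), ("right", [("leaf", [])])])

def depth(node):
    """Depth of a nested (label, children) tree."""
    label, children = node
    return 1 + max((depth(c) for c in children), default=0)

print(depth(tree))  # the example tree is 3 levels deep
```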

All of these actually exist in what we call molecular and cell biology. Not only that, but the said biology relies on (actually uses) these fundamental structural patterns.

Tree-like structures (a common structural pattern), including actual botanical trees, are indeed everywhere. It seems that “the Causality Itself” has this particular form (or shape).

Here is how.

We have tree-like structures (which we may generalize and formalize as directed acyclic graphs) at all levels:

  • a stream or a river
  • a curried (multi-argument) function
  • a composition of curried functions (which is just another function)
  • a single “neuron” (both natural and mathematical)
  • “connected” areas (networks) within a brain
  • a “neural system” (as a whole) within any biological organism

In the context of math, rivers were the source of the fundamental notion of a Derivative and of the Chain Rule – the question was “how much does each stream contribute” (assuming an over-simplified model, where there is no drain or evaporation) and how we calculate it.
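That intuition can be checked numerically – a sketch in Python, with the simplifying assumption (as above) that contributions along a path simply compose:

```python
# Chain rule: how much a change upstream propagates downstream.
# For f(g(x)), the rate of change is f'(g(x)) * g'(x) -- contributions
# multiply along the path, like streams feeding a river.

def g(x):
    return 3 * x        # upstream: g'(x) = 3

def f(y):
    return y * y        # downstream: f'(y) = 2 * y

def numeric_derivative(fn, x, h=1e-6):
    """Central finite difference as a numeric check."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

x = 2.0
chain_rule = 2 * g(x) * 3                        # f'(g(x)) * g'(x) = 36
numeric = numeric_derivative(lambda t: f(g(t)), x)
print(chain_rule, round(numeric, 3))             # both give 36.0
```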

Structurally, when we reduce everything to “arrows between dots”, this is related to the most fundamental principle of “abstraction by parameterization”, which has been generalized from observing individual (multi-argument) functions.

One more time – every multi-argument (curried) function is a tree, and every unary function is an arrow. Both define a single step within a computational graph. Moreover, these are all the possible forms.
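A sketch of this in Python (the function names are mine): currying turns a multi-argument function into nested unary arrows, and each application is one step down the tree.

```python
# A two-argument function viewed as a tree of nested unary "arrows".

def add(x, y):
    return x + y

# Curried form: each application is a single arrow (one step in the graph).
def curried_add(x):
    def inner(y):        # the remaining branch of the tree
        return x + y
    return inner

step1 = curried_add(2)   # one arrow taken: a unary function remains
result = step1(3)        # the second arrow reaches the leaf value
print(result)            # 5, same as add(2, 3)
```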

The compositions of these have been studied within Category Theory, and the main result is that there are some common, more general possible shapes of “arrows between dots” – just a very few – and, most importantly, this is all there is.

The “potential (possible, existing) paths” through any code are such directed graphs (there are no loops, only general recursion, which is spiral-shaped). The actual (taken) paths are thus always just sequences of steps (chained arrows between dots), and their order is established by the nesting and unfolding of a recursion from its base case.
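The actual path taken through a recursive definition can be made visible as a plain sequence of steps – a small sketch in Python, with a trace accumulated as the recursion unfolds from its base case:

```python
# A recursive definition unfolds into a linear sequence of steps
# (chained arrows), ordered from the base case outward.

def factorial(n, trace=None):
    if trace is None:
        trace = []
    if n == 0:
        trace.append("base: 0! = 1")
        return 1, trace
    value, trace = factorial(n - 1, trace)
    trace.append(f"step: {n}! = {n} * {value} = {n * value}")
    return n * value, trace

value, steps = factorial(3)
print(value)        # 6
for s in steps:
    print(s)        # base case first, then each unfolding step in order
```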

And this is enough for everything – everything (the Causality Itself) can be reduced to “trees (directed acyclic graphs) of arrows between dots” (again, no cycles, only spirals).

It is not a coincidence that the neurons in a brain and artificial neural (and computer) networks have the same kind of shape, and thus can be adequately modeled as a bunch of “arrows between dots” augmented with some “weights”.

The notion of a “weighted sum” is as fundamental as addition and scaling themselves (which is what it is), but this is another (closely related) story – a weighted sum is itself a properly captured Universal common pattern.
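A weighted sum is literally scaling followed by addition – the single step computed by one “neuron”, natural or artificial. A minimal sketch in Python (example values are mine):

```python
# A weighted sum: scale each input, then add -- nothing more.
def weighted_sum(inputs, weights):
    return sum(x * w for x, w in zip(inputs, weights))

# The same operation is one "neuron" step: arrows (inputs) into a dot,
# each arrow augmented with a weight.
inputs = [1.0, 2.0, 3.0]
weights = [0.5, -1.0, 2.0]
print(weighted_sum(inputs, weights))  # 0.5 - 2.0 + 6.0 = 4.5
```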

ASTs

ASTs and neural nets share this shape – sometimes fully connected (with redundancy), but always directed.

They do not have to be fully connected, however – evolution discovers a “better” connectivity.

There are other “not-a-coincidences”:

  • Precedence of mathematical sub-expressions
  • ASTs and Lisps (with their evaluation rules)
  • lazy evaluation (a pure graph reduction)

MIT Scheme tradition

Within the classic MIT Scheme tradition (Sussman, mostly) there were lots of intuitive attempts to capture the patterns from biology – the “Lambda the Ultimate” papers, for example.

The shape of expressions and the strict rule of how they are reduced (with the exception of just a few special forms) was in itself an insight. People at MIT were smart enough to notice the common patterns.

Another insight was how sequences, lookup tables and trees were all made out of conses – quite similar to how biological molecular structures are made out of sequences (folded proteins).
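How all three structures fall out of a single pairing primitive can be sketched even in Python, using two-element tuples as conses (a toy illustration, not the Scheme originals):

```python
# Everything out of conses: a sequence, a tree, and a lookup table,
# all built from the same two-slot pair.

def cons(a, d):
    return (a, d)

def car(p):
    return p[0]

def cdr(p):
    return p[1]

# A sequence: conses chained through the cdr.
seq = cons(1, cons(2, cons(3, None)))

# A tree: conses nested in both slots.
tree = cons(cons(1, 2), cons(3, 4))

# A lookup table (association list): a sequence of key/value conses.
table = cons(cons("a", 1), cons(cons("b", 2), None))

def assoc(key, alist):
    """Walk the sequence of pairs until the key matches."""
    while alist is not None:
        if car(car(alist)) == key:
            return cdr(car(alist))
        alist = cdr(alist)
    return None

print(car(seq), car(car(tree)), assoc("b", table))  # 1 1 2
```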

The Meta-Circular Evaluator (a Universal Interpreter) itself was just “lambdas and conses” (so to speak).
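The flavor of that evaluator – expressions as trees, environments as lookup tables, procedures as lambdas – can be sketched in a few lines of Python (a drastically reduced toy of my own, not the MIT code):

```python
# A toy evaluator in the meta-circular spirit: expressions are nested
# tuples (trees), environments are lookup tables (dicts).

def evaluate(expr, env):
    if isinstance(expr, (int, float)):      # self-evaluating
        return expr
    if isinstance(expr, str):               # variable lookup
        return env[expr]
    op, *args = expr
    if op == "lambda":                      # ("lambda", param, body)
        param, body = args
        return lambda v: evaluate(body, {**env, param: v})
    fn = evaluate(op, env)                  # application: eval, then apply
    return fn(evaluate(args[0], env))

# ((lambda (x) (add1 x)) 41), with add1 supplied via the environment
inc = ("lambda", "x", ("add1", "x"))
print(evaluate((inc, 41), {"add1": lambda v: v + 1}))  # 42
```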

What was missed is that the abstraction by parameterization (and by specification) principles of B. Liskov are applicable at all levels – from lambdas to whole modules (which were notably neglected back then).

Another “neglected” notion was lazy evaluation, which turned out to be (to correspond to) a pure graph reduction – lazy evaluation produces a very different kind of intermediate structure out of thunks than the strict one, which runs in a “constant space”.
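The difference in intermediate structures can be seen directly – a sketch in Python, using zero-argument lambdas as thunks (the `delay`/`force` names follow the Scheme convention):

```python
# Lazy evaluation delays each step behind a thunk (a zero-argument
# lambda); forcing the outer thunk reduces the graph step by step.

def delay(fn):
    return fn          # a thunk: the computation held as a value, unrun

def force(thunk):
    return thunk()     # run one node of the graph on demand

# Strict: the whole chain is computed immediately, no residue left.
strict = (1 + 2) * 3

# Lazy: the same expression as a graph of thunks, reduced on demand.
inner = delay(lambda: 1 + 2)
outer = delay(lambda: force(inner) * 3)

print(strict, force(outer))  # 9 9
```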

The notion that “non-strict evaluation” is “how everything works in biology” has been captured by some other very smart people. But still, there are just trees and sequences (of reductions).

The Curry-Howard isomorphism

The isomorphism itself implies the same kind of underlying structural patterns, and these are “trees” (ASTs) and lookup tables.

Ok, I will clarify this later.