The thing I hate the most is when some of these fucking YouTube content “creators”, who decided to monetize AI coding clickbait with low-effort, subpar videos, say “aaand boom!” as yet another chunk of slop is spewed out by an AI.
This “boom!” is an insult to the last 60 years of programming language research (including the math-based theory) and to the “old sages” who crafted their languages and standard libraries to be “just right” – perfect in the sense of “nothing more to take away” – by surveying all the available literature and non-bullshit papers, and by spending months in anguish and self-doubt.
Barbara Liskov (of CLU), Robin Milner (of ML), Steele and Sussman (of Scheme), David Turner (of Miranda), Paul Hudak (and the other members of the Haskell committee), Xavier Leroy (of OCaml), Martin Odersky (of Scala), to name a few, as well as all the good people who had to actually struggle before the just-right words could come out.
I am by no means in the same league, but I also struggle to find the right words every time I try to speak (ESL helps a lot, by the way). No, it is not like that character from “The Plague” who struggled to write an opening sentence for years, but still.
And, of course, everyone on YouTube is a fucking expert nowadays, showing us how uber-smart and clever they are at using this amazing new technology; they have a quick answer to everything, just like a totally ignorant but repulsively over-confident working-class ghetto dweller, or a teenager from a rich family who has been sheltered from anything real.
So, what is behind that “aaand boom!”? It is a bit challenging to write a concise summary, so I will just throw in some universal notions, hoping the underlying structure will become clearly visible.
We should probably begin with an understanding of the very process by which the fundamental and universal abstract notions of what we call Mathematics have been discovered and developed by the mind of an “External Observer” (which also produced all the beautiful intuitive knowledge of the Upanishads and Traditional Buddhism).
The mind observed, generalized, captured the commonality, abstracted out and named Numbers, Sets and some Algebraic Structures, among other things, as well as Modus Ponens and the universal notions which we call logical connectives and quantifiers.
What has been captured (the different views of the same “Magic Mountain”) are, arguably, different aspects of the simulation: “everything has its structure”, and there are “structural” properties in everything around us, at all levels. The fact that almost all the atoms in your body right now are older than the Earth and even the Sun tells one something about the universality of the “Immutable Data” principle, while the Process of Evolution at the cellular level tells us something about the importance of “Immutable Bindings”, which occasionally can be “broken for good” (so we have set-car! and set-cdr! in Scheme, but we do not have set-number!).
The simulation, of course, is of “shaking energy fields under different conditions, such that in some localities they become stable enough to be the basic building blocks not just of what we call matter, but of Life Itself”. Okay, I am getting carried away.
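Before moving on: the “Immutable Bindings, occasionally broken for good” principle shows up in modern language design too. A minimal Rust sketch (the function and its name are mine, purely for illustration) – bindings are immutable by default, and mutation is a loud, explicit opt-in:

```rust
// Immutable bindings are the default; mutation is an explicit opt-in,
// the set-car!-style exception rather than the rule.
fn demo() -> (i32, i32) {
    let x = 1; // immutable binding: cannot be reassigned
    // x = 2;  // the compiler rejects this line outright
    let mut y = 1; // `mut` marks the deliberate, visible "broken for good" binding
    y += 1;
    (x, y)
}
```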
The important realization is that the good parts of Math are not arbitrary and not “invented”, but indeed “discovered”. What was “invented” were the applications of Math to real-world problems, which include precise mathematical modeling – using mathematics as the language for formally capturing and describing some aspects of What Is (numbers, lines including axes, motion, permutations and what not).
The same principle applies to good programming languages, which are not “invented” either, but also “discovered”. Here is how.
We have to ask a question which none of the YouTube bullshitters would ever imagine asking, but which the actually smart people listed above have asked: “of what kind of things were the properly captured (non-bullshit) mathematical theories built?”
It turns out that they were built of the very same universal logical connectives, numbers, sets and discovered (observed, generalized and captured) algebraic structures. Not just that: mathematicians have been using “Abstract Data Types” for ages without formalizing or even naming them.
Group theory is just a bunch of informally (from the CS point of view) defined ADTs (Barbara Liskov was the first to notice this, and Phil Wadler was the first to properly capture and formalize the notion).
It has also been shown that Set Theory is good enough to formalize all of the “good” mathematics, and that the (untyped) Lambda Calculus is good enough to define and “mechanically” calculate any computable function (and category theory is good enough to formalize all of the “good” set theory).
The discovery of the so-called Curry–Howard Isomorphism showed us that the underlying structure of “everything” is a Directed Acyclic Graph, and that the basic distinct building blocks of such a Graph have been independently captured (and named differently) by both logicians and applied mathematicians (the “Propositions as Types” meme is way less important).
These discoveries are, in a sense, “fixed points of What Is”, the closest one can come to the “Abstraction Barrier of the Simulation”. Okay, okay.
A Set is an unordered “mental construct” of distinct elements, without repetitions. When one has to list the elements of a set (articulate or spell them out), one has to use some particular ordering, but that ordering is not part of the definition of a set. A Set is defined by its elements only.
There are abstract operations on sets, and the rules of manipulating sets (union, intersection, difference, power set, Cartesian product, etc) are well-defined and have been discovered (not invented).
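These discovered operations translate directly into code. A small Rust sketch using `BTreeSet` from the standard library (the helper names are mine, for illustration):

```rust
use std::collections::BTreeSet;

// A set is defined by its elements only: duplicates collapse,
// and the ordering of the listing is not part of the definition.
fn set_of(xs: &[i32]) -> BTreeSet<i32> {
    xs.iter().copied().collect()
}

fn union(a: &BTreeSet<i32>, b: &BTreeSet<i32>) -> BTreeSet<i32> {
    a.union(b).copied().collect()
}

fn intersection(a: &BTreeSet<i32>, b: &BTreeSet<i32>) -> BTreeSet<i32> {
    a.intersection(b).copied().collect()
}

fn difference(a: &BTreeSet<i32>, b: &BTreeSet<i32>) -> BTreeSet<i32> {
    a.difference(b).copied().collect()
}

// Cartesian product: the set of all pairs (x, y) with x from a and y from b.
fn product(a: &BTreeSet<i32>, b: &BTreeSet<i32>) -> BTreeSet<(i32, i32)> {
    a.iter()
        .flat_map(|&x| b.iter().map(move |&y| (x, y)))
        .collect()
}
```

Nothing here was invented by the library authors; they merely transcribed what was already well-defined.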
A Function is a particular kind of a relation between two sets (or the same set), such that each element of the first set (the domain) is associated with exactly one element of the second set (the codomain). The rules of manipulating functions (composition, identity function, inverse function, etc) are also well-defined and have been discovered (not invented).
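Composition and identity, and the laws they obey, are likewise already there waiting to be transcribed. A minimal Rust sketch (the `compose` helper and the example functions are mine):

```rust
// Composition obeys discovered, not invented, laws:
//   compose(f, id) == f, compose(id, f) == f,
//   and composition is associative.
fn compose<A, B, C>(f: impl Fn(B) -> C, g: impl Fn(A) -> B) -> impl Fn(A) -> C {
    move |x| f(g(x)) // apply g first, then f
}

fn inc(x: i32) -> i32 {
    x + 1
}

fn double(x: i32) -> i32 {
    x * 2
}
```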
A type is a set of values, together with a set of operations that can be performed on those values. The rules of manipulating the values, which define and constrain the particular operations on them (such as addition, multiplication, etc) and the “laws” (associativity, commutativity, distributivity), are also well-defined and have been discovered (not invented).
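A toy Rust illustration (the `Hour` type is hypothetical): twelve values, one operation, and the discovered laws hold for it by construction:

```rust
// Hours on a clock: a type as a SET of twelve values together with an
// addition operation that is associative and commutative (mod-12 arithmetic).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Hour(u32);

impl Hour {
    fn new(n: u32) -> Hour {
        Hour(n % 12) // the constructor enforces membership in the set {0..11}
    }

    fn add(self, other: Hour) -> Hour {
        Hour::new(self.0 + other.0)
    }
}
```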
I am repeating myself, but this is important. Sets, and the universal notion of “such that” – which uses captured “properties” (which determine subsets and define the corresponding predicates) and relations (which define operations) to set the constraints and bounds – are what is behind any “boom”.
Just as the original Lambda Calculus was “too unrestricted” (allowing application of anything to anything, which led to paradoxes and bullshit), types as mere sets of values are “too unrestricted”, and so are unconstrained families of functions, which is what, say, an int (int) (in fucking C++ or Java) actually stands for.
So we need to capture “structural properties” of functions, types, interfaces (which is exactly what the type-classes or traits are for), modules and protocols as well.
Notice that these all are just named subsets, produced by a corresponding “such that” set of constraints. This is Vedanta of good programming languages, if you will.
- A function is a set of “arrows between dots”.
- A type is a set of values.
- A scope is a set of bindings (of names to values).
- An abstract interface is a set of type-signatures (which include names).
- A trait is a set of interfaces, with “bounds”.
- A trait “bound” is a set of constraints on the interfaces.
- A module is a set of exported abstract interfaces.
- A protocol is a set of informally stated rules.
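Most of these named subsets map one-to-one onto constructs of a language like Rust. A minimal sketch (all the names here are mine, for illustration only):

```rust
// A module is a set of exported items; `pub` names the exported subset.
mod shapes {
    // A scope is a set of bindings; this one stays internal to the module.
    const PI: f64 = 3.141592653589793;

    pub fn circle_area(r: f64) -> f64 {
        PI * r * r
    }
}

// A bound is a "such that" constraint: any T SUCH THAT its values
// can be ordered and copied.
fn smallest<T: Ord + Copy>(xs: &[T]) -> Option<T> {
    xs.iter().copied().min()
}
```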
The last major fundamental discovery was Philip Wadler's Type-Classes, which formally captured exactly these universal mathematical notions at the level of type-signatures (and abstract interfaces):
- “such that” at the level of abstract interfaces (sets of type-signatures)
- “is a” (“and is also an”)
- a set union of “operations” (less strict than the set-subset relation)
- associated but informally stated “laws” – “such that” constraints on relations between functions
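Rust's traits are the closest mainstream descendant of Wadler's type-classes. A sketch of “is a” plus informally stated laws (the trait names below are mine, mirroring the mathematical ones, not anything in std):

```rust
// "Is a": every Monoid IS A Semigroup, extended with an identity element,
// SUCH THAT the stated laws hold. The compiler checks the type-signatures;
// the laws stay informally stated, exactly as the post says.
trait Semigroup {
    fn combine(&self, other: &Self) -> Self;
    // law (informal): a.combine(b).combine(c) == a.combine(b.combine(c))
}

trait Monoid: Semigroup {
    fn unit() -> Self;
    // laws (informal): unit.combine(a) == a, and a.combine(unit) == a
}

impl Semigroup for Vec<i32> {
    fn combine(&self, other: &Self) -> Self {
        let mut out = self.clone();
        out.extend_from_slice(other);
        out
    }
}

impl Monoid for Vec<i32> {
    fn unit() -> Self {
        Vec::new()
    }
}
```

Note how the supertrait bound `Monoid: Semigroup` is literally a “such that” constraint on a set of type-signatures.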
At the [abstract] structural level, there are just a few universal distinct shapes (common patterns, pieces of a Universal DAG), which Algebraic Data Types capture (the notion of the Same Locality, and even the notion of Potentiality – a possible outcome – and Modus Ponens itself).
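Those few shapes, spelled out in Rust (the names are mine; `OneOf` is just `Either` by another name):

```rust
// The handful of universal shapes:
//   product ("same locality"): both components present at once
//   sum ("potentiality"):      exactly one possible outcome out of several
//   arrow + application:       Modus Ponens (from A -> B and A, conclude B)
#[derive(Debug, PartialEq)]
struct Pair<A, B>(A, B); // product type

#[derive(Debug, PartialEq)]
enum OneOf<A, B> {
    // sum type, the same shape as Either
    This(A),
    That(B),
}

// Function application itself is Modus Ponens at the type level.
fn modus_ponens<A, B>(f: impl Fn(A) -> B, a: A) -> B {
    f(a)
}
```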
Category theory shows all the possible compositions of such abstract structures and which properties they have.
There is something to realize: these are the “fixed points” for everything, which can be traced back to the “very structure of” What Is.
Not just “proper, just right” semantics, but even syntactic forms. The old sages have tried their best to capture the best part of mathematical notation in their languages, which were just executable math.
The notions of an Abstract Data Type, of an Algebraic Data Type and of a Type-Class with corresponding sets of constraints (or a trait) are central to Programming as a discipline, and thus shall be given “syntactic precedence” in a language, just like ML, Miranda and Haskell intuitively tried to accomplish.
It took at least 60 years to formulate some of these notions, and god knows how many millennia since numbers were generalized and abstracted out and arithmetic was discovered.
This is, of course, just an emotional and impulsive sketch written on a whim, but nothing in it is wrong. There are (a statement of fact, not just an opinion) right ways to model and program; there are “fixed points” to almost any properly captured and properly generalized abstraction, and there are fixed points to the syntactic forms with which these can be captured, defined and expressed in code.
But no, this has nothing to do with “aaand boom!” and the slop it produces.