I finally found a well-written no-bullshit book about CS. It says, among other things:
“There is no need to define a representation of the values False and True in terms of values of some other type. Conceptually, values of type Bool are simply (denoted by) the expressions False and True. Of course, the computer’s internal representation of Bool and all other data is in terms of bits, but we don’t need to know any of the details of how that works to write programs.”
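The quoted point can be made concrete in a few lines of Haskell: Bool really is just an algebraic data type with two nullary constructors, and its operations are defined by pattern matching alone, with no mention of any underlying representation. (The primed names Bool', False', True', not' are used here only to avoid clashing with the Prelude.)

```haskell
-- Bool as a plain algebraic data type: two constructors, no representation.
data Bool' = False' | True'
  deriving (Show, Eq)

-- Operations are defined by pattern matching on the constructors alone.
not' :: Bool' -> Bool'
not' False' = True'
not' True'  = False'
```

Nothing here says a word about bits; that detail stays below the level of abstraction, exactly as the book claims.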
Yeah, this is the main principle behind “high-level” languages - we don’t have to think about any actual machine representations (we do not even have to think about the representations and implementations of proper ADTs and DSLs). They are “below the level of abstraction” (this is already a meme).
It “naturally follows”, then, that good high-level languages are just systems of mathematics and logic, subject to the constraints of machine representation, just like SML, Miranda, Haskell or OCaml.
The first such language was, of course, LISP. Paul Graham wrote his modern classic “On Lisp” which tells you all the whys. R5RS Scheme made it almost perfect. Ironically, modern Python became what Common Lisp was supposed to be, while still being inferior to it, but this is another story. The main principle is to stay high-level, or, paraphrasing Graham – “to grow your own embedded DSL to match the conceptual level of a problem”. The LISP tradition was sort of an Upanishadic one – they grasped the reality.
Just like one uses Lists (with built-in syntactic sugar in Haskell) without thinking beyond the concept and its abstract properties (of being a sequence), one should do the same with all other ADTs, especially one’s own. They have to be proper, non-leaking abstractions.
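What a “non-leaking” ADT looks like in practice is a module whose export list exposes the type abstractly but hides its constructor. A minimal sketch (a hypothetical Stack module; the names are mine, not from any particular library):

```haskell
-- The export list names the type Stack but NOT its constructor,
-- so client code can only use the exported operations.
module Stack (Stack, empty, push, pop) where

-- Representation: a plain list. This is an internal detail,
-- invisible from outside the module.
newtype Stack a = Stack [a]

empty :: Stack a
empty = Stack []

push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

-- pop returns Nothing on an empty stack instead of crashing.
pop :: Stack a -> Maybe (a, Stack a)
pop (Stack [])       = Nothing
pop (Stack (x : xs)) = Just (x, Stack xs)
```

Because the constructor is hidden, the representation can later be swapped (say, for a strict or bounded variant) without breaking a single client - the abstraction does not leak.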
One can tell that a language is (really) bad when one has to stumble upon low-level (representation-level) constructs and syntax (which ought to be clearly and completely separated out behind abstract data types).
Even references could be made as a proper ADT.
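Haskell’s standard library actually does this: a mutable reference (IORef, from Data.IORef) is itself an abstract data type - you get newIORef, readIORef, writeIORef, modifyIORef and nothing about how the cell is represented. A small sketch:

```haskell
import Data.IORef

-- A reference used purely through its ADT interface:
-- allocate, write, modify, read. No representation in sight.
demo :: IO Int
demo = do
  r <- newIORef (0 :: Int)  -- allocate a fresh reference
  writeIORef r 41           -- overwrite its contents
  modifyIORef r (+ 1)       -- apply a function to the contents
  readIORef r               -- read the final value back

main :: IO ()
main = demo >>= print       -- prints 42
```

ML’s `ref` type works the same way: the reference is a value with a fixed set of operations, not a raw machine pointer.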
This is why Java and C++ are so bad - we always have to deal with “machine types” and aspects of imperative machine code at a higher level of abstraction. Add to this unnecessary, stupid verbosity and the need to write a lot of boilerplate (which is a mix of low-level constructs and user-defined types), and voilà.
There are even worse languages, which break universal mathematical and logical assumptions (PHP and JavaScript), because the “designers” of these languages were not just unqualified, but lacked basic education. We do not talk about these languages here, just like fine people do not talk about anal porn.
(Just google “The Fractal of Bad Design” and “FTW, Javascript”).
Yes, it is possible to write proper high-level ADTs and classes in C++, even redefining (overloading) the common operators, but very few people do this because so much understanding and discipline is required.
Good languages, like Scala 3, raise the level of abstraction. The classic languages of the ML family, which were designed as the basis for theorem provers, are implementations of mathematics and systems of logic.
Thousands of years of rigorous thinking, and the corresponding traditions and even culture, have literally been properly “codified” and “implemented” in these classic languages.
This is how one actually climbs onto the shoulders of the Titans.
The “lesson” is, as usual, that All We Need is Lambda (augmented with Algebraic Types, Type-Classes or Traits and Extension Methods).
And that types are properly defined as Sets of operations on values (together with the relations, invariants and “laws” among them), just as Sets themselves have been defined by mathematicians.
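A type-class is exactly this “set of operations plus laws” made explicit. A sketch (primed names Semigroup', Monoid', (&lt;+&gt;), unit are mine, chosen to avoid clashing with the Prelude; the laws live in comments because the compiler cannot check them, yet they are as much a part of the type as the signatures):

```haskell
-- A type belongs to Semigroup' if it supports one operation...
class Semigroup' a where
  (<+>) :: a -> a -> a
  -- law (associativity): (x <+> y) <+> z == x <+> (y <+> z)

-- ...and to Monoid' if it additionally has a distinguished unit value.
class Semigroup' a => Monoid' a where
  unit :: a
  -- laws (identity): unit <+> x == x  and  x <+> unit == x

-- Lists satisfy both: (++) is associative, [] is its identity.
instance Semigroup' [b] where
  (<+>) = (++)

instance Monoid' [b] where
  unit = []
```

The signatures name the operations; the laws pin down their meaning. Together they define the type, with no reference to any representation - which is the whole point.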