Trying to understand complex social systems was my favorite pastime. I went through Eastern philosophy and religions, algorithmic trading, Informix system administration and, of course, functional programming. I am a FreeBSD and Gentoo addict too: I almost always compile my stuff from source (so I know all the dependencies by heart).

In particular, Eastern Philosophy (the Early Upanishads and Early Buddhism) helped me to sort out what the current AI tools can achieve and what they cannot do in principle (and I showed why). This is my major achievement so far.

Just like with food, we had better know everything: the sourcing, the processing, the packaging, the nuances of cooking. With software this is much harder to do, and the social system around it is even more fraudulent, similar to blockchain and shitcoins, where there is nothing but self-impostors, low-effort crap and plain scams.

Lots of guys are simply uneducated and unqualified (hello, vitaliks), yet they are way over-confident and produce low-effort (without much thinking), lowest-quality imperative spaghetti (and even books), without bothering with the underlying theories (developed and refined for decades by far brighter people). Even those with a proper education just pile up (commit and push) more and more crap.

So, ideally, let's say, I absolutely want trait-based libraries with minimal dependencies, each of which does just one thing and does it well (just right). This, by the way, is how one should organize one's own projects – as a bunch of small composable libraries which, ideally, are used to build hierarchies (layers) of embedded DSLs. A properly designed DSL, like that of Octave, has Monoids everywhere (because the domain does).
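To make the “Monoids everywhere” point concrete, here is a minimal Rust sketch (the trait and types are invented for illustration, not taken from any real crate): the trait is the library boundary, and any layer whose values combine associatively with an identity gets generic combinators for free.

```rust
// A tiny "type class" for things that compose: an identity plus an associative operation.
trait Monoid {
    fn empty() -> Self;
    fn combine(self, other: Self) -> Self;
}

// One domain instance: element-wise vector addition, the bread and butter of an
// Octave-like DSL (assumes equal lengths, apart from the empty identity).
#[derive(Clone, Debug, PartialEq)]
struct VecSum(Vec<f64>);

impl Monoid for VecSum {
    fn empty() -> Self {
        VecSum(Vec::new())
    }
    fn combine(self, other: Self) -> Self {
        if self.0.is_empty() { return other; }
        if other.0.is_empty() { return self; }
        VecSum(self.0.iter().zip(&other.0).map(|(a, b)| a + b).collect())
    }
}

// A combinator written once, reusable by every layer of the DSL that has a Monoid.
fn mconcat<M: Monoid>(items: impl IntoIterator<Item = M>) -> M {
    items.into_iter().fold(M::empty(), M::combine)
}

fn main() {
    let total = mconcat(vec![VecSum(vec![1.0, 2.0]), VecSum(vec![3.0, 4.0])]);
    assert_eq!(total, VecSum(vec![4.0, 6.0]));
}
```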

“Just right” has lots of meanings at each level. At the highest level I want traits (Wadler's type-classes), proper Algebraic Data Types and, above all, “no-unsafe” (#![forbid(unsafe_code)]), which “proves” that the authors actually did their homework. Or just write Haskell (no one pays for it).
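A minimal Rust sketch of that wishlist (the domain and names are made up for illustration): the whole crate refuses unsafe code, the domain is modeled as an algebraic data type, and behaviour is exposed through a trait, i.e. a type class.

```rust
// Crate-level attribute: the compiler rejects any unsafe code in this crate.
#![forbid(unsafe_code)]

// A sum type: the compiler knows every shape this value can take.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

// The trait is the abstract, stable interface (a type class in Wadler's sense).
trait Area {
    fn area(&self) -> f64;
}

impl Area for Shape {
    fn area(&self) -> f64 {
        match self {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { width, height } => width * height,
        }
    }
}

fn main() {
    let shapes = [
        Shape::Circle { radius: 1.0 },
        Shape::Rect { width: 2.0, height: 3.0 },
    ];
    let total: f64 = shapes.iter().map(Area::area).sum();
    println!("total area: {total}");
}
```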

At the same time, I absolutely want to adhere to Barbara Liskov's principles: proper non-leaking abstractions, proper ADTs and therefore stable abstract interfaces with multiple implementations, which allows gradual improvement and even “mature” (without the “pre”) optimization. Very few programmers understand that this is the essence of serious programming. The Substitution Principle is also important, of course.
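A hedged sketch of what a stable abstract interface with multiple implementations buys you (the trait and store types are invented for the example): callers are written against the trait alone, so any implementation can be substituted or optimized later without touching them – the Substitution Principle in working clothes.

```rust
use std::collections::HashMap;

// The stable, abstract interface: clients depend on this and nothing else.
trait KeyValueStore {
    fn get(&self, key: &str) -> Option<String>;
    fn put(&mut self, key: &str, value: &str);
}

// Implementation #1: a naive in-memory store, good enough to start with.
#[derive(Default)]
struct InMemoryStore {
    map: HashMap<String, String>,
}

impl KeyValueStore for InMemoryStore {
    fn get(&self, key: &str) -> Option<String> {
        self.map.get(key).cloned()
    }
    fn put(&mut self, key: &str, value: &str) {
        self.map.insert(key.to_owned(), value.to_owned());
    }
}

// Implementation #2 would arrive later as a "mature" optimization: the same trait,
// different internals (on-disk, mmap-backed, whatever), and no client changes.

// Client code never names a concrete type, so substituting one implementation
// for another cannot break it as long as the contract is honored.
fn remember<S: KeyValueStore>(store: &mut S, key: &str, value: &str) -> Option<String> {
    let previous = store.get(key);
    store.put(key, value);
    previous
}

fn main() {
    let mut store = InMemoryStore::default();
    assert_eq!(remember(&mut store, "lang", "Rust"), None);
    assert_eq!(remember(&mut store, "lang", "OCaml"), Some("Rust".to_owned()));
}
```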

All the actually working, vastly complex software (like Google Chrome) has been built around stable “abstract” interfaces and protocols, as a composition of stable (vertical and horizontal) partitions – penetrable cell membranes. It is that fundamental, and this is precisely why such software actually works (despite being imperative crap coded in C++, riddled with undefined behavior).

So, I want all the developers of the dependencies I use to be as good (well-educated, math-literate, principle-guided, cultured, with fine aesthetics and refined tastes) as possible, which is, of course, just a dream (that's me in the corner…).

What should I do? Well, Google (as the world's most advanced C++ software shop) figured it out long ago – you carefully choose and then self-host (vendor) your dependencies, performing regular code reviews of the critical parts and even of every update (carefully examining each diff). They have the resources for that, while I myself don't.

Another approach is to have a principle-guided, math-based, done-right and well-polished standard library, like those of OCaml, Haskell, F# and perhaps Scala3 and Rust (which is at least trait-based and consistently uses algebraic types), and just stick to it (all the great compilers – SML/NJ, GHC, ocamlc, scalac – have been built this way). But if you have seen things like ScalaTest or Serde, you probably want at least a few external dependencies that have been done right (ScalaTest is just an amazing set of DSLs).

There is another “heuristic”: all serious classic languages develop their FFI facilities so that they can call the code they don't want to, or can't, re-implement. This is the only way – if you can't rewrite it, just load it and call it.
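As a small illustration in Rust, here is the FFI shape at its most basic: declare the foreign symbol and call it through a thin safe wrapper. The example binds strlen from the C library (so it actually runs) and confines the single unsafe block to one tiny function – exactly the kind of thing to isolate.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Declare the foreign symbol; the C library already implements it.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// A thin safe wrapper: the unsafe block lives here and nowhere else,
// and the rest of the program never sees a raw pointer.
fn c_string_length(text: &str) -> usize {
    let c_text = CString::new(text).expect("no interior NUL bytes");
    unsafe { strlen(c_text.as_ptr()) }
}

fn main() {
    assert_eq!(c_string_length("hello"), 5);
}
```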

The practice must focus on clearly identifying what I don't even want to look at (due to its immense complexity) and just using the specification of its APIs (while hoping for a miracle), wrapping the calls in thin, completely isolated modules behind a minimal, mostly-functional interface, or even a Monad (so that only that internal module/library/crate has the dependency).

I by no means want to re-implement stuff like Mesa, the Vulkan stack or, god forbid, OpenSSL. The way is to build thin wrappers, just like people do for the highest-quality numeric Fortran libraries – the canonical use-case for all FFIs (yet another cue).
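Continuing that idea, here is a hedged sketch of the thin, completely isolated module from the previous paragraph: only this one module knows the foreign library exists, and it exposes a minimal, mostly-functional interface (plain values in, plain values out, errors as data). The foreign routine here is hypothetical – its name and signature are illustrative, not taken from any real BLAS header – so linking the sketch would require an actual library behind it.

```rust
// ffi_numeric -- the only module in the project allowed to know the dependency exists.
mod ffi_numeric {
    // Hypothetical foreign routine from some numeric library (illustrative only).
    extern "C" {
        fn dot_product(n: i32, x: *const f64, y: *const f64) -> f64;
    }

    #[derive(Debug)]
    pub enum DotError {
        LengthMismatch,
    }

    // The minimal, mostly-functional boundary: no raw pointers or foreign types escape.
    pub fn dot(x: &[f64], y: &[f64]) -> Result<f64, DotError> {
        if x.len() != y.len() {
            return Err(DotError::LengthMismatch);
        }
        Ok(unsafe { dot_product(x.len() as i32, x.as_ptr(), y.as_ptr()) })
    }
}

fn main() {
    // Everything outside the module sees only this safe, data-in/data-out function.
    match ffi_numeric::dot(&[1.0, 2.0], &[3.0, 4.0]) {
        Ok(d) => println!("dot = {d}"),
        Err(e) => eprintln!("bad input: {e:?}"),
    }
}
```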

An even better approach would be to extract, and even partially rewrite (simplifying and cutting off the crap), only the code I need and repackage it in my own libraries (the way I ripped off libnginx.so long ago). That is fine for the slow-running hobby project of a perfectionist, or even of a “religious practitioner” (FP is a religion, you know), but it won't work in the mundane world outside your window.

Over-abstraction (unnecessary, redundant abstractions) is the root of all evil. Another evil is writing general frameworks. This is why I do not want tokio (which is actually a runtime), frameworks and especially middlewares; but again, extracting and self-hosting the code I really use is a very long and unpleasant task. Above all, I do not want amateur async crap as dependencies.

The principle is that things like async (that fucking metastasizing cancer) are appropriate, let alone necessary, only for very specific workloads (use-cases), and presumably should be minimalistically designed for each use separately.

What I mean is that an over-generalized async framework is bullshit. Sometimes one needs only futures and the simplest means to run them (as in Scala3), sometimes it is lightweight cooperative multitasking (as in Erlang), other times it is just a wrapper over epoll or kqueue. Erlang did it right (with processes); Java did it wrong with Tasks.
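As a hedged illustration of the “only futures and the simplest means to run them” end of that spectrum, here is a minimal single-future executor in plain Rust, using nothing outside the standard library. It is a sketch of the idea, not a replacement for a real runtime: everything tokio piles on top (a work-stealing scheduler, timers, an epoll/kqueue reactor, IO traits) is exactly the machinery that should be chosen per workload.

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread;

// A trivial waker: when the future signals progress, unpark the executor thread.
struct ThreadWaker(thread::Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Drive a single future to completion on the current thread.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(), // sleep until the waker fires
        }
    }
}

fn main() {
    let answer = block_on(async { 40 + 2 });
    println!("{answer}");
}
```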

Extracting the relevant code from a huge mess like tokio or the Twitter Util library is almost impossible, but using the right, minimal abstractions is another of the most fundamental principles. One will appreciate it after actually seeing what a fucking ball of mud all these tokio-based frameworks are.

By the way, just as real (actual) understanding comes from the ability to trace all the abstract concepts back to the aspects of reality from which they have been generalized (and to recreate them all the way back), so it is with programming – one has to write from first principles, from one's own understanding, just as I write this file (yes, I can trace every concept I use).

It is not even that difficult to draw the sets and relations on paper as a bunch of lists (enumerations) and arrows between dots. Just like any good model, it has to be minimal and good enough for a specific aspect of reality, not something “too general”.

There is another heuristic – all the shapes your code could possibly take are studied by the so-called Category Theory. All your async crap has to be reduced to introductions and the corresponding eliminations, to a “fork” and, eventually, a “join”. The principle that concurrent code shall be purely functional and properly structured (as Functors and Monoids) is not “just an opinion” – it is the best we can do in principle. It looks like the cats-effect guys already got this right (but bloated it with an implementation of every unnecessary and redundant abstraction out there). F# seems the sanest of them all.
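Here is a hedged sketch of that reduction, deliberately using plain scoped threads rather than any async machinery: the only effects are the fork (spawning) and the join (waiting on handles); everything after that is a pure, associative, monoidal combination, so the structure carries the meaning, not the scheduler.

```rust
use std::thread;

// The monoid for this example: (u64, +, 0). "combine" is associative and
// "empty" is its identity, which is exactly what makes the join order irrelevant.
fn empty() -> u64 { 0 }
fn combine(a: u64, b: u64) -> u64 { a + b }

// Fork: map a pure function over independent chunks on separate threads.
// Join: eliminate the handles and fold the results through the monoid.
fn parallel_sum(chunks: &[Vec<u64>]) -> u64 {
    thread::scope(|s| {
        let handles: Vec<_> = chunks
            .iter()
            .map(|c| s.spawn(move || c.iter().copied().sum::<u64>())) // fork (introduction)
            .collect();
        handles
            .into_iter()
            .map(|h| h.join().unwrap()) // join (elimination)
            .fold(empty(), combine)     // pure monoidal reduction
    })
}

fn main() {
    let chunks = vec![vec![1, 2, 3], vec![4, 5], vec![6]];
    assert_eq!(parallel_sum(&chunks), 21);
}
```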

The worst part is that this way of writing software is possible only in some academic settings and is generally uncompetitive in the “real world”. So, again, we occasionally have to eat the over-processed junk food of programming (crappy code pushed upon us by the uneducated and unqualified, just like that PHP thing).

But good taste, necessary elegance, solid principles and even perfectionism are a must – these are all hallmarks of high art, of classic literature and of the highest Japanese craftsmanship. Art is a required part, not an optional one.

So: Rust, Scala3, OCaml, F# and Haskell (each of which has been made by mathematicians and perfectionists), with a few (very few) carefully chosen dependencies, in a slow (100 lines per week), principle-guided, understanding-based, “spiral-shaped” (improve till good-enough) bottom-up process. Fortunately, the medium we work with (expressions and plain text) is the most malleable one there is, unlike paints, clay or any other material.

Ideally, we have to rewrite (refactor) it again and again, just right (with algebraic types, proper mathematical type-classes and traits), until nothing can be removed or simplified further. The problem? No one pays for this kind of artistic programming anymore. So I just write articles (in the first person) based on wishful thinking.

Notice that using proper algebraic data types allows one to gradually add new elements to both sum-types and product-types without silently breaking anything, because we process them by pattern-matching or by selection by a symbol (a name), so the compiler points at every place that has to change. This simplifies modifications (inevitable and necessary changes) and maintenance. A modern static type-system would help even more.
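A small Rust illustration of that point (the types are invented for the example): the product type grows by adding a named field, which existing selections by name never notice; and when the sum type later grows a new variant, every exhaustive match becomes a compile error pointing at exactly the places that must be updated, so nothing slips through silently.

```rust
// Product type: fields are selected by name, so adding `retries` later does not
// disturb code that only ever reads `timeout_ms`.
struct Config {
    timeout_ms: u64,
    retries: u32, // added later; existing readers of timeout_ms keep compiling
}

// Sum type: every possible case is written down and pattern-matched.
enum Event {
    Connected,
    Disconnected { reason: String },
    // Adding a `Timeout` variant here later makes every exhaustive `match` below
    // fail to compile until the new case is handled.
}

fn describe(event: &Event, config: &Config) -> String {
    match event {
        Event::Connected => format!("up (timeout {} ms)", config.timeout_ms),
        Event::Disconnected { reason } => format!("down: {reason}"),
    }
}

fn main() {
    let config = Config { timeout_ms: 500, retries: 3 };
    let _ = config.retries; // read the new field so this tiny example stays warning-free
    let event = Event::Disconnected { reason: "peer reset".into() };
    println!("{}", describe(&event, &config));
}
```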

Oh, I forgot to add – writing everything in Haskell (with only the same dependencies as GHC and cabal) would approach perfection, since one produces a declarative, executable, mathematical (well, technically it is pure logic – System F-omega) model of the domain of choice.

But Rust, OCaml or Scala3 are also OK.