Recently I poisoned myself by watching some talking heads on YouTube. The topic was “something something structural system design”.
There is a “correspondence” (not a true isomorphism, but still) between the structural patterns of molecular and cell biology and the patterns of pure functional code – an augmented Lambda Calculus.
Both “systems” are heavily constrained by the execution environment (the molecular structures of cell biology are the code and the data, and the Universe (in a particular locality) is the runtime).
Lots of the “ancient tradition” programmers (back in the 70s) had similar, obvious intuitions. Here is how to see this “correspondence” more clearly.
The async cancer
The reason I mention it here is that it led to a profound insight (or rather a late discovery).
The problem with async is that idiots put it literally everywhere, while the actual use-cases are very limited and always specialized. Yes, this generalizes to the statement that async, in principle, is not general enough and should never be thought of as “the right way to program”.
Let’s “unpack” (it is so funny to mock some narcissistic degens). There is only one “kind” of “activity” which is suitable for async – when you wait for something completely decoupled (independent) from what you do.
The best example is ordering a food delivery. Once the order has been placed, one could (continue to) do whatever one wants (except, perhaps, going out for a long time). This is the principal point – there is no coupling between the (expected arrival of) food and what one could do.
An example of the other kind of activity is when one is about to hit some nails and is waiting for a hammer to be delivered. Contrary to what “popular bullshitters (writers)” would say, there is nothing much to do, so one is essentially running an idle loop, waiting for an async task to be completed, which semantically is no different from just “blocking” on it.
This is a crucial distinction – when a function cannot do anything meaningful before an async operation is completed, async should not be used.
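The two cases can be sketched with plain GHC threads from the base library (a hedged sketch; the names “delivery” and “pizza” are made up for illustration). The decoupled “food order” is a forked task whose result lands in an MVar while the caller does unrelated work; in the “hammer” case the takeMVar would come immediately after the fork, which is just blocking with extra steps.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  delivery <- newEmptyMVar
  _ <- forkIO $ do
    threadDelay 100000          -- the courier takes some time
    putMVar delivery "pizza"
  -- No coupling: this work does not depend on the delivery at all.
  putStrLn "doing completely unrelated work"
  food <- takeMVar delivery     -- only now do we (honestly) block
  putStrLn ("eating " ++ food)
```

If the `putStrLn` of unrelated work were removed, the fork-then-take pair would be semantically just a blocking call – the “hammer” case.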
This rule is similar to the rule about exceptions (which are a weak form of async) – if you would catch it in the same block of code, do not throw it at all, just use an option type.
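In Haskell terms (a sketch; `safeDiv` is a made-up example), the rule reads: if the catch would sit right next to the throw, encode the failure as plain data in an option type instead.

```haskell
-- If the caller would catch the exception in the same block anyway,
-- do not throw at all: encode the failure in an option type.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing           -- the "exceptional" case, as plain data
safeDiv x y = Just (x `div` y)

main :: IO ()
main = print (safeDiv 10 2, safeDiv 1 0)
```

The caller pattern-matches on `Just`/`Nothing` right where it would otherwise have written a `catch` block.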
These are not some “opinions” – these are rules to reduce (to not introduce) unnecessary and redundant complexity, which is the main principle of non-bullshit programming.
One more observation. The classic use case for async has been outlined in the “classic” Chess Grandmaster architectural pattern, popularized back then by the nginx authors.
Yes, when you have “actually nothing to do” before a given opponent “completes”, you may “play with the other opponents”. This implies some queue (of opponents) and just processing the next “item” in it, unless it is empty. If all the “opponents” are “not completed”, you have to “do nothing” anyway.
Unlike human chess players, a procedure (or a block of code) usually has nothing to do while the data it would use is “incomplete”. This means a very narrow scope and “granularity” at the level of code. There is no “grandmaster” in here, except for a rare exception, when one explicitly implements processing of “scheduled” (expected) events or tasks.
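The grandmaster loop above can be sketched as one pure round over a queue of “opponents” (the `Move` type and `playRound` are hypothetical names, not any real scheduler API): each pass handles only the entries whose data is ready and requeues the rest.

```haskell
-- Each "opponent" either has a completed move (data ready) or not yet.
data Move a = Ready a | NotYet
  deriving (Eq, Show)

-- One pass of the loop: play the ready opponents, requeue the pending ones.
playRound :: [Move a] -> ([a], [Move a])
playRound queue = (completed, pending)
  where
    completed = [x | Ready x <- queue]
    pending   = [m | m@NotYet <- queue]

main :: IO ()
main = print (playRound [Ready 1, NotYet, Ready 2 :: Move Int])
```

If every entry is `NotYet`, `completed` is empty and the loop has nothing to do – exactly the “do nothing anyway” case.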
One more time: you almost never actually need any asynchrony, because the actual nature of most computations is synchronous and blocking.
This reflects the generalized notion that a process busy-waits on (if not blocks on) an empty channel (queue, whatever).
The specialization of the fundamental notion of “empty” as “not being available yet” is still fundamental – one cannot avoid it or pretend that it isn’t Out There. Busy-waiting (a do-nothing loop) instead of “honest” blocking is much worse, due to all the added unnecessary complexity.
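“Honest” blocking on an empty channel looks like this in base-library Haskell (a sketch): `readChan` parks the thread until a value arrives, so no do-nothing loop burns the CPU.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)

main :: IO ()
main = do
  ch <- newChan
  _ <- forkIO (writeChan ch (42 :: Int))
  -- "Honest" blocking: the runtime parks this thread on the empty channel,
  -- instead of a busy-wait (do-nothing) loop polling it.
  x <- readChan ch
  print x
```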
The async monad
The guys at Microsoft Research (I guess) discovered (not invented) the way to put a necessary (required) proper abstraction barrier (a Monad type) between two fundamentally different kinds of code – pure functions and async “tasks”.
There is a subtle but crucial point about the pure-functional code and of using Monads with it.
Any pure functional code is by definition (in principle) declarative – it only defines (declares, literally describes) what has to (ought to) be done (eventually), without the ability (again, in principle) to “actually see” any value. All one could “program” is what has to be done when the value (or, more generally, a pattern) is this, and what has to be done (eventually) when the pattern is that. Very few people realize this.
This is how the functional programming paradigm is different from an imperative one, including OOP. Again, this is not just some “popular books” bullshit, this is an actual fact.
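A tiny illustration (the function is made up): pure code never inspects “the value” imperatively at definition time; it only declares what is to be done per pattern.

```haskell
-- Pure code only declares what happens when the pattern is this or that;
-- it never "actually sees" a concrete value at definition time.
describe :: Maybe Int -> String
describe Nothing = "no value yet"
describe (Just n)
  | n < 0     = "negative"
  | otherwise = "non-negative"

main :: IO ()
main = putStrLn (describe (Just 42))
```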
So, we could define async tasks using the same language, but actually run them (eventually) only behind an impenetrable, “one-way” (in principle) abstraction barrier, which is actually necessary and required due to the “mechanics” of how the code will actually be interpreted (executed).
This is just an explicit abstraction barrier in place of an implicit one – exactly as between ordinary imperative procedural code and interrupt handlers, which are, technically, the same assembly (machine code) procedures. There the actual abstraction barrier is implemented (and enforced) at the machine level.
The point is that it is necessary and required, and better to be explicit than implicit.
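In Haskell this is literal (a sketch with made-up names): an `IO` value is an inert description, composing it with `>>=` runs nothing, and only the runtime, behind the barrier, ever executes it.

```haskell
-- Composing IO values is purely declarative: nothing runs here.
task :: IO Int
task = pure 20

program :: IO Int
program = task >>= \x -> pure (x + 22)   -- still just a description

-- Only the runtime executes the description bound to main.
main :: IO ()
main = program >>= print
```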
This principle of necessary separation (partition) between different kinds of code, in turn, leads to the discovery of the “architectural pattern” of “putting impure code at the outer edges”, which, of course, corresponds (is analogous) to the cell membranes of biological systems. The “pumps” and “gates” on a membrane are, indeed, exported public interfaces.
This is not some abstract bullshit, this is a “correspondence”, similar in its nature to the Curry-Howard one. It captures the universal notions of nesting, “abstraction by parameterization” (cell membranes, individual enzymes) and “partial application”.
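The membrane picture as code (a sketch with hypothetical names): the pure “core” cannot perform IO at all, and the IO “shell” (the pumps and gates) sits only at the outer edge.

```haskell
import Data.Char (toUpper)

-- Inside the membrane: a pure core, physically unable to do IO.
coreLogic :: String -> String
coreLogic = map toUpper

-- On the membrane: the "gates" (the exported IO interface) at the outer edge.
main :: IO ()
main = do
  let input = "hello"         -- imagine this arrived through a gate (getLine)
  putStrLn (coreLogic input)  -- and leaves through another one
```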
The right way
Again, because this is hard to grasp but absolutely crucial to realize – there are always different (specialized) kinds of code, and they must be clearly separated (partitioned). This is a universal principle and “architectural pattern”.
The Monad type-class, with its “lifting” (or Functor) aspect, establishes a “one-way” abstraction barrier, each concrete instance (of the type-class) being itself an ADT.
This is the proper way to separate async code, because it is, by definition (and implementation), code of a different kind.
No “frameworks” are required, just a type discipline. In an actually pure language like Haskell there is simply no other way, due to the function composition (via nesting) aspect of monadic values.
There is just declarative composition (nesting) of “maps” over values “lifted” behind the particular abstraction barrier.
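Concretely (a sketch): `fmap` composes a pure map over a value lifted behind the barrier, and the barrier is one-way because no general function of type `IO a -> a` exists to pull the value back out.

```haskell
-- A value lifted behind the IO barrier.
lifted :: IO Int
lifted = pure 21

-- Declarative composition of maps over the lifted value; nothing runs yet,
-- and there is no function of type IO a -> a to extract the value.
mapped :: IO Int
mapped = fmap (* 2) lifted

main :: IO ()
main = mapped >>= print
```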
And this is a discovered Universal pattern, which corresponds (not a true isomorphism, but still) to the “structural patterns” discovered and used by cell biology.
When do we actually need all this?
When code is necessarily of a different kind (IO, async, at least), an explicit and actually impenetrable abstraction barrier (a partition) is necessary. It is that simple – the same language but different kinds of code – an abstraction barrier must be established.
It is way better to let the compiler enforce this abstraction barrier, instead of trying to always keep it in mind.
This approach solves the problem of how you access the async code – you don’t. You just write declaratively what has to be done (eventually) behind this abstraction barrier (at runtime), and thus naturally “push it to the outer edges”. This is the right way.
Why is this so difficult to grasp? Because there are two confusing views – from inside of a cell membrane and from outside of a cell membrane – and one has to clearly understand which is which.
The underlying abstractions, such as Functor, do not have such a distinction (it is created only by the mind of an external observer).
Again, we just have to systematically compose declarative maps (via implicit nesting) to be eventually evaluated behind an abstraction barrier (which will never actually be “seen” from the level of the code).
To realize this is to analytically solve most of the hardest problems in so-called “structural system design” – the things they babble about on YouTube.