Let's try to solve this too, at least in principle, because why tf not.

This is a much easier problem, but a conceptually messier one, over-burdened with implementation details. The partially-understood solutions are all over the place, ranging from Molecular and Systems Biology to abstract Signals and Systems theory. The Telecom, Packet Switching and Computer Hardware guys have their own working solutions.

We will look at general universal principles and see what Evolutionary Biology came up with. This will be the best we can hope for (because Evolution has already done all the trial and error for us).

So, let's try to build it incrementally and bottom-up.

The most general concept of imperative programming is an anonymous block of code (between some start and end delimiters), even more general than a proc of assembly code.

Syntactically (and semantically) blocks are “embedded” within imperative looping and conditional statements (FFFUUUUUUU!); this is how general and “universal” they seem to be. So this is where we start.

But not all blocks of code are created equal. Some have very different sets of constraints imposed on them, so they must be clearly semantically distinguished (typed), syntactically marked and semantically (again) separated by clear abstraction barriers. The Haskell guys showed the way.

So, the obvious solution (if one has studied the findings of the last 60 years of PL theory development) is “more typing”, more “partitioning” and “more abstraction barriers”, which is, basically, “more Monads at the Compiler’s Intermediate Representation level”. (Some Zig clowns – “we have no closures”, “our async is not concurrency” – have a lot to learn, lmao.)

On the implementation side we do the well-understood and universal “more parameterization”, just like the Haskell guys did by parameterizing a closure with a “dict of methods”, or by parameterizing a closure with types to implement dependent types, or what Scala 2’s general implicits were (generalized implicit parameters). All of these can be captured by an abstract notion of implicit or explicit contexts (which goes back as far as the universal foundational principle of any human language as an abstract communication system).
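To make the “dict of methods” idea concrete, here is a minimal Haskell sketch of what a typeclass dictionary amounts to after desugaring: the dictionary is just an explicit parameter of an ordinary closure. The names (EqDict, eqBy, member) are made up for illustration.

```haskell
-- A record of "methods" passed around explicitly, the way GHC passes
-- typeclass dictionaries under the hood.
data EqDict a = EqDict { eqBy :: a -> a -> Bool }

-- The closure is parameterized with the dict; nothing is global or implicit.
member :: EqDict a -> a -> [a] -> Bool
member dict x = any (eqBy dict x)

intEq :: EqDict Int
intEq = EqDict (==)

main :: IO ()
main = print (member intEq 3 [1, 2, 3])  -- True
```

Scala’s implicits (and Scala 3’s given/using) are the same explicit parameter, just filled in by the compiler.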

We have already “evolved” an appropriate meme for this – “What color is your function?”. This is an intuitive step in the right direction, no matter what uneducated degens on low-IQ tech boards are saying.

The “right” memes are crucial. As some French “humanists” of the past century have noticed, human societies (and humanity as a whole) run on successive waves and tides of mass hysterias and related memes. So does what we call “software engineering and methodology”.

As I have written before, we have C as the “PHP of the 70s and 80s”, and PHP itself (the Fractal of Bad Design, you know) in the 90s, with the related mass hysterias and all their unfortunate but massive outcomes, as obvious examples.

So this is the right meme. It is related to the human notion of “color coding”, which goes back to “a green tomato (not yet ripe) is not the same as a red tomato” (Mother Nature has no clue that we observe and use colors as a fundamental cue).

This is a human-universal cognitive notion, and it is everywhere from “maps” to literally everything else. Color-coding of “types” (different kinds of things) is a human-universal technique.

On a more esoteric and even mysterious side – this is related to the 4-colour theorem, which is about non-overlapping regions (on a 2D plane).

Concurrency is also about non-overlapping “regions” of code (blocks), and “parallel” means not having a single point in common (no overlap) by definition. The principal conceptual difference is that in “concurrency” the “parallel lines” (of code) are chopped into chunks, and the runtime tries to keep them from overlapping.

By the way, enforced consistent coloring (color-coding) of different kinds of blocks of code (a red-ish tint of the background for imperative crap and IO-doing blocks) is a good idea.

There are some more facts about this particular Universe. The “waiting” cannot be avoided. Partially filled enzymes are literally “waiting” for all the necessary and sufficient conditions to be met. So are ordinary people when cooking or building a house. Blocking is okay and even “natural”.

The vast majority of the processes in this Universe are what we call “sequential”, which means they cannot continue until particular necessary and sufficient conditions have been met.

The notion of “parallel”, in the sense of literally “having nothing in common”, is very tricky and even relativistic – essentially, what one does not observe has never happened “to it”. More generally, it is a locality-based notion, related to the inverse-square falloff of the basic forces – “distant events” have no effect and can be assumed to have “never happened” or simply ignored.

Good mathematicians and Functional Programming Chads understand the necessity of proper defaults, which enable some crucial properties (which in the case of programming amounts to preserving the fundamental Referential Transparency property). The obvious defaults are those of Haskell code, where imperative constructs are very explicit and explicitly, verbosely composed (using the do syntactic sugar or the underlying composition operators). Placing them within a particular Monad is also not optional (and not accidental – partitioning of the code with abstraction barriers is as necessary as cell membranes).
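A tiny sketch of what those defaults look like in practice (the function names are made up): pure code is the default and stays referentially transparent, while anything imperative is marked by its type and written inside the explicit do sugar.

```haskell
-- Pure by default: referentially transparent, the order of evaluation is irrelevant.
total :: [Int] -> Int
total = sum

-- Imperative code is explicit: the IO in the type marks it, and the do
-- block (sugar over the underlying composition operators) sequences it.
report :: [Int] -> IO ()
report xs = do
  putStrLn "computing..."
  print (total xs)
```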

Truly “parallel”, and even inherently “concurrent”, code is rare and is probably badly modeled or badly designed (total process isolation, as in UNIXes before the pthread abomination, which broke all proper abstractions, will be good enough), so a specialized DSL for writing small chunks of highly specialized code blocks will suffice.

Again, proper isolation (partitioning) is required, and this is the essence of the problem, which can be solved by teaching the compiler (and the type system) how not to overlap different strands of different kinds of code.

Trying to make it general is, in principle, the same mistake as with the original untyped Lambda Calculus, where the naive wish that “everything will be applicable to everything else” just didn’t survive confronting Reality.

The naive wish of keeping everything as general as possible will be crushed by Reality, especially in the context of crappy imperative languages, where each imperative statement and built-in procedure has its own subtle implementation details, closely coupled with the underlying hardware.

“Non-blocking I/O” is a conceptual bullshit meme. Just making I/O “other people’s problem” does not imply that ordinary sequential code can somehow be meme’d into a parallel one. It will just block in a different place, and if the required data isn’t available it will just wait.

The best we can do is classic “software interrupts”, used as an implicit notification mechanism. Again, these are just a different kind of code, with many subtle constraints applied, and such code simply cannot contain just any construct you want.
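A hedged sketch of the “signal and return” discipline, using the unix package’s System.Posix.Signals (POSIX only): the handler does the bare minimum, pushing a notification into a channel and returning, while the ordinary sequential code simply blocks on that channel.

```haskell
import Control.Concurrent.Chan (newChan, writeChan, readChan)
import System.Posix.Signals (installHandler, sigINT, Handler (Catch))

main :: IO ()
main = do
  events <- newChan
  -- The handler only "signals and returns": it moves a value into the
  -- queue and does nothing else.
  _ <- installHandler sigINT (Catch (writeChan events "got SIGINT")) Nothing
  -- The ordinary sequential code just waits (blocks) for the notification.
  readChan events >>= putStrLn
```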

And this is not about “granularity”; it is, indeed, a typing (classification and non-overlapping) problem, very similar to what Rust does with typing references, by classifying (partitioning) them and by adding the concept of a lifetime to each.

Ok, let’s slow down, and build it up.

I am old enough to remember DOS and “resident programs”, primitive DOS “viruses” in Assembly and their basic building blocks, which were just blocks of assembly code of different kinds.

An “interrupt handler” would be just a block of assembly code, with some implicit, informally stated restrictions (rules) placed on it as constraints. You have to be aware of what you can and cannot call from such a “handler”, and in general you want to do as little as possible – just “signal and return”.

The fact that at the CPU level interrupts and their handlers use different signaling mechanisms, instead of just stack-based procedure calls and returns, corresponds to the more general principle of “interruption by a signal”, which is biologically universal.

With the async/await syntax, which, by the way, is a huge meme, we cleverly swept under the rug the fact that the actual signalling is very different, and that the “awaited functions” are not the same as the ones which use the ordinary stack-based call-return mechanism.

The problem is, of course, that not just any kind of code can go into such functions. Traditionally, everyone tries to sweep this fact under the rug too, except mathematicians, who see the necessity of clear partitioning, and that this is the only “natural” way.

So, instead of “What color is your function” we always had “what color is your assembly block of code”.

In imperative settings all we need is type annotations and compiler support (checking, enforcement and type inference) for blocks of code of different kinds.

The list of “kinds” can vary (it will take time to “stabilize”), but basically we all know what they are – “blocking vs. non-blocking (in principle)”, “waiting for completion (pending)” (which is a universal notion and cannot be ignored), “sending and receiving” (as in Erlang’s message-passing idioms), and so on.
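As a sketch of what such annotations could look like (all names here – Blk, BlockKind, runBlocking – are invented; this is not an existing library), one can tag blocks with a phantom type and let the compiler refuse to mix the kinds:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- The "kinds" of blocks, as a closed set of tags.
data BlockKind = Blocking | NonBlocking

-- A block of imperative code tagged with its kind; the tag is phantom.
newtype Blk (k :: BlockKind) a = Blk { runBlk :: IO a }

readRequest :: Blk 'Blocking String   -- may wait for input
readRequest = Blk getLine

bumpCounter :: Blk 'NonBlocking ()    -- must never wait
bumpCounter = Blk (pure ())

-- Only blocks tagged as Blocking are accepted here; passing a
-- NonBlocking block (or the reverse elsewhere) is a type error.
runBlocking :: Blk 'Blocking a -> IO a
runBlocking = runBlk
```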

Waiting is universal (I am getting pedantic and boring!). Some processes cannot proceed until certain conditions (or thresholds) have been met. There is a whole class of such processes, which we traditionally call “sequential”.

We absolutely do not want “busy-waiting” (and biology has none of those, just as it does not have “counters” or “clocks”), and we want to be “signaled” (notified, or “interrupted”), which is what biology actually does (learned by doing).

If we, however, want to be systematic, mathematically rigorous and precise, and provably correct, we have to correctly classify (“color”) the proper lexical closures, based on what they could possibly “do” (what kind of “side-effects” they could in principle have).

Just as by having pure expressions (as proper closures) we can abstract away from the actual order of evaluation (as in lazy pure functional languages), we must abstract away from how computations overlap at runtime, by ensuring that they cannot overlap in principle (similar to what Rust does with non-overlapping lifetimes), and by just tossing them into an appropriate “pool” or “reactor”. We need to specialize them too, at least IO vs. never-IO.
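For the never-IO side this is already routine in Haskell: pure closures cannot observably overlap, so they can be tossed into a pool and evaluated in whatever order the runtime likes. A minimal sketch using Control.Parallel.Strategies from the parallel package (the function heavy is made up):

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Pure, never-IO: safe to evaluate in any order, on any core.
heavy :: Int -> Int
heavy n = sum [1 .. n]

main :: IO ()
main = print (sum (parMap rdeepseq heavy [100000, 200000, 300000]))
```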

Closures will “naturally” capture all the implementation details related to “channels”, callbacks (when necessary), etc. This is also well understood.
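A minimal sketch of that capture, using Control.Concurrent.Chan: the closure notify carries the channel with it, so the caller sees only a plain callback (the names are illustrative).

```haskell
import Control.Concurrent.Chan (newChan, writeChan, readChan)

main :: IO ()
main = do
  events <- newChan
  -- The channel is captured by the closure; no plumbing leaks outward.
  let notify msg = writeChan events ("event: " ++ msg)
  notify "first"
  notify "second"
  readChan events >>= putStrLn
  readChan events >>= putStrLn
```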

This is, probably, what Odersky is doing nowadays with his effect systems, on a nice new shiny grant somewhere in the Swiss Alps.

Haskell has all the necessary notions, some of which turned out to be way too general (yes, again, ever since the very first untyped Lambda Calculus!); we just need to add “more typing” and “more partitioning”, just enough of it.

Haskell has “thunks”, which are the universal building block of “laziness”. They are uniform, and this is the point. We have to classify, partition and parameterize them with the required “contexts”.
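One well-worn way to “parameterize with a context” is the Reader monad from mtl; the computation itself stays uniform, and the context is just another parameter (Ctx and logLine are made-up names):

```haskell
import Control.Monad.Reader (Reader, ask, runReader)

data Ctx = Ctx { appName :: String, verbosity :: Int }

-- The required context is an explicit parameter of the whole computation.
logLine :: String -> Reader Ctx String
logLine msg = do
  ctx <- ask
  pure ("[" ++ appName ctx ++ "] " ++ msg)

main :: IO ()
main = putStrLn (runReader (logLine "hello") (Ctx "demo" 1))
```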

There are whole well-understood classes of computations which can be “structured” and “packaged into a Monad” – behind a distinct, impenetrable abstraction barrier which provides implicit serialization (via nesting of “closures” at the implementation level).

Nesting is the only way of establishing a particular order of evaluation in a lazy abstract language, such as math or logic, which is not a random coincidence.
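This is literally visible when the do sugar is peeled off: each next step is a closure nested inside the previous one, and that nesting is the ordering (copyLine is a made-up example).

```haskell
-- The same action written with the do sugar and with the underlying
-- operator; the nested lambdas are what fixes the order of the effects.
copyLine :: IO ()
copyLine = do
  line <- getLine
  putStrLn line
  putStrLn "done"

copyLine' :: IO ()
copyLine' =
  getLine       >>= \line ->
  putStrLn line >>= \_ ->
  putStrLn "done"
```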

Having such “monadic interfaces” for different kinds of “thunks” or “proper lexical closures” within the compiler’s IR is the theoretical and practical solution. We “color code” the closures and put them behind a monadic abstraction barrier.

At the level of abstract arrows, “lifting” and “composition” are enough: a -> m a for lifting, and Kleisli composition (a -> m b) -> (b -> m c) -> (a -> m c) (or, equivalently, bind: m a -> (a -> m b) -> m b) for composition.
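Spelled out as Haskell signatures (the primed names are just placeholders), the whole required interface is:

```haskell
-- Lifting: put a plain value behind the abstraction barrier.
lift' :: Monad m => a -> m a
lift' = pure

-- Monoidal (Kleisli) composition of effectful steps, i.e. (>=>).
compose' :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
compose' f g = \a -> f a >>= g
```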

Now, what do we do with signals (“interrupts”) and “interrupt handlers”? Well, we “lift” them into a “pool” of “compiler-guaranteed (verified) non-overlapping computations” and “forget”. There is nothing we can do.

Each such “specialized closure” has to be parameterized with a “dict” of the special “callbacks” it could possibly call (each of which has to return immediately; everything has to return immediately in a signalling implementation context).

Implementation details are much less interesting. There will probably be a queue, and “callbacks” just know where it is and place (“move”) values in it and return. Hardware guys got it all right long ago.
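Putting the last two paragraphs together, a hedged sketch (Callbacks, onData, onError and handler are all invented names): the specialized closure is parameterized with a record of callbacks, and each callback only moves a value into the queue and returns.

```haskell
import Control.Concurrent.Chan (newChan, writeChan, readChan)

-- The "dict" of callbacks the specialized closure is allowed to call.
data Callbacks = Callbacks
  { onData  :: String -> IO ()   -- must enqueue and return immediately
  , onError :: String -> IO ()
  }

-- The specialized closure: it can only do what the dict permits.
handler :: Callbacks -> String -> IO ()
handler cbs input
  | null input = onError cbs "empty input"
  | otherwise  = onData cbs input

main :: IO ()
main = do
  queue <- newChan
  let cbs = Callbacks { onData = writeChan queue, onError = writeChan queue }
  handler cbs "hello"
  readChan queue >>= putStrLn
```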

The theoretical and practical solution is in proper specialization of the classic, general, well-understood, even “universal” notions: a non-leaking abstraction; a proper, fully self-contained lexical closure; tagging (as a way of “typing”), as in “tagged unions”; parameterization as the universal way of building abstractions (as per Barbara Liskov); and, yes, Monads, as “lifting + proper Monoidal composition”.

This is how “more partitioning” and “more abstraction barriers” can be achieved at the level of a compiler’s IR.

Just as GHC has a representation of System F Omega as its IR, serious compilers should have one, augmented with systematically partitioned code blocks of different kinds. Which particular kinds is less interesting; they have been roughly outlined above.

The important part is that there have always been blocks of code of different kinds, heavily constrained by different sets of informal rules.

The proper way is to define the semantics of such blocks with mathematical rigor, and the Haskell community has already developed all the necessary building blocks, at both the conceptual and implementation levels.

At the level of syntax, the best possible solution is to have distinct specialized DSLs (syntactic sugar supported by the language): traditional math-like notation for pure code, and some imperative ugliness derived from the C/PHP/C++/Java legacy, intentionally kept traditionally ugly with all its curly braces as imperative block delimiters.

And this is, basically, it. Good enough to hit the top page of HN or whatever it is (they are all the same nowadays). Way better than all these overhyped “zig” posts LOL.

By the way, what do we know, in principle, about the “implementation of signalling” in Biology? It is either “electric”, which requires a “wire” (which at least conceptually is a channel) to go from here to there, or it is “pure, stateless, asynchronous message passing”, when a particular molecular structure (which we may call a hormone) arrives and binds to a particular “receptor”, triggering a particular “cascade” within a cell.

These “cascades” or “pathways” could best be visualized as those “falling pieces” setups (a sort of machinery humans create for fun).

Signals trigger localized processes (usually the release of some particular molecular compound), while “message passing” is concentration-based, “slow” but “global” signaling; broadcasting, if you will.

As a side note, while biological “communications” and “protocols” are, indeed, stateless and asynchronous, the higher-level brain activities are essentially based on current synaptic states, which means that what you are is just a set of states within a set of synaptic gaps, plus myelination, which is a sort of state too.

Naive oversimplified assumptions usually do not work.