Everything is broken and idiots are everywhere. There is a clown who is attention-whoring, sorry, publicly arguing (and gaining a lot of unwarranted attention) that one shall vapecode in C.

https://news.ycombinator.com/item?id=46207505

Basically, making such a claim is idiotic on so many levels that it is hard to know where to start. Almost the whole of classic non-bullshit programming-language-theory research is about how to correctly address C’s shortcomings and semantic issues, and how to avoid the problems inherent in the design of the language (and of its ABI).

The real, non-bullshit solution, inspired, of course, by decades of research in the classic FP realm, is the modern Scala 3 collection library, which allows one to program without imperative looping constructs and imperative array indexing, with just proper method chaining, which is, of course, type-safe and efficient (due to inlining).
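To make the claim concrete: here is a minimal sketch of that "no imperative loops, no array indexing" style, rendered with Rust iterator adapters rather than the Scala 3 collections the paragraph names (the function and its name are mine, for illustration only).

```rust
/// Sum of the squares of the even numbers in `xs`, as one type-checked
/// method chain -- no index variable, no mutable accumulator.
fn sum_of_even_squares(xs: &[i64]) -> i64 {
    xs.iter()
        .filter(|&&x| x % 2 == 0) // keep the even elements
        .map(|&x| x * x)          // square each one
        .sum()                    // fold the chain into a single value
}

fn main() {
    // 2*2 + 4*4 = 20
    println!("{}", sum_of_even_squares(&[1, 2, 3, 4, 5]));
}
```

The compiler inlines such adapter chains into a single loop, so the abstraction costs nothing at runtime, which is exactly the point being made above.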

In general, one has to program at the highest, most general and most abstract level possible (ideally, in pure Set notation with some minimal classic mathematical notation to denote relations; ML/Haskell-style function syntax is a good approximation to that ideal), and leave the compiler and the optimizer to do the rest.

Some way, way smarter and intellectually refined people, with a proper mathematical background (as if of a different species from these vocal idiots), intuitively realized that the minimal and most expressive syntax, based on the traditional refined and minimalist mathematical notation, is the right way (for subtle cognitive reasons: it captures no more than what is necessary and sufficient, in the most minimal way), and that the focus should be on Algebraic Data Types, not on imperative constructs.

This is why and how the "," the "|" and the "->" (and "()", since nesting is an implementation of composition) are enough for everything, why code shall be this abstract and this high-level, and why the syntax must directly capture and reflect these notations. The ML family, unsurprisingly, got it just right.
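The same four notations, transcribed into Rust's ADTs (the type and function here are illustrative, not from any real codebase): "," builds products, "|" builds sums, "->" is the arrow out of them, and the match is checked for exhaustiveness.

```rust
// A sum of products: each variant is a product (","), the variants
// together are the sum ("|").
enum Shape {
    Circle(f64),    // one field: radius
    Rect(f64, f64), // two fields: width, height
}

// One arrow ("->") out of the sum; the compiler rejects a non-exhaustive
// match, which is the whole point of keeping the notation this minimal.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect(w, h) => w * h,
    }
}

fn main() {
    println!("{}", area(&Shape::Rect(3.0, 4.0))); // 12
}
```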

Exposing “low-level” types, including generics, at the higher level of domain-specific types and their related DSLs (a set of associated operations) is a major design flaw. Any less ignorant person who has seen SML or Erlang or Haskell will immediately understand why.

One more time: just as Set Theory is enough to express everything in Mathematics, the minimalistic functional programming notation (with ADTs) is enough to express everything in Programming. This is the most important insight, period.

There is something to learn from Group Theory (how to build proper abstract data types and properly generalize over them) and from Category Theory (how to properly compose functions and data transformations, knowing all the “arrows” between the dots which can possibly be Out There).
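The Category Theory point, reduced to code: composition of "arrows" is itself a first-class, fully typed operation. A generic `compose` (my name, a standard textbook construction) glues an `A -> B` and a `B -> C` into an `A -> C`, and the type system guarantees the dots line up.

```rust
// Compose two arrows: (A -> B) then (B -> C) gives (A -> C).
// The middle type B must agree, or the program does not compile.
fn compose<A, B, C>(f: impl Fn(A) -> B, g: impl Fn(B) -> C) -> impl Fn(A) -> C {
    move |a| g(f(a))
}

fn main() {
    let inc = |x: i32| x + 1;
    let double = |x: i32| x * 2;
    let h = compose(inc, double); // (x + 1) * 2
    println!("{}", h(3)); // 8
}
```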

And then, when one actually needs to go low-level, have a specialized embedded low-level DSL, properly embedded into the high-level language, and properly type-checked and optimized by the compiler. You can have all your C-like fucking syntax with the fucking curly braces without polluting the math-like high-level code with it.

Yes, the curly braces have to be reserved only for the traditional C/PHP/Java syntax of an embedded DSL, as a stark reminder, and yes, the traditional imperative block of statements is just {;}. You are doing exactly this already by FFI-ing low-level libraries anyway. Just embed the stuff properly and typecheck it better.

One more time – C was a bunch of clever hacks, not the proper way to program (only idiots want to program in terms of the implementation and representation details of everything, at a level slightly higher than assembly, in an inherently imperative way, destructively mutating memory locations). This is like writing mathematics that describes the “materials” being used.

The proper way to program is to use the highest-level abstractions possible (think of MATLAB/Octave), and leave the rest to the compiler and the optimizer. This has been realized in the fucking 70s, with CLU and ML, and the whole of FP research is about this fact.

Here I feel obliged to remind you of something: the original HN code (back then; I still remember when I untarred the stuff) was some few hundred kilobytes for the Arc language and another few hundred KB for news.arc. This is what high-level programming, unpolluted by low-level types, could accomplish “in practice” (fucking idiots!). Yes, it relied on the high-quality mzscheme stdlib and runtime, and used plain files (no DBs or ORMs), but UNIX-like systems were excellent at optimizing and caching file access. And it was good enough to run the site and to be actually continuously improved.

The real “secret” was that they used the proper abstractions at the appropriate level, and a language optimized for that very level: s-expressions and files. Not too low, not too abstract, just right, and yes, a mature Scheme implementation did all the “heavy lifting”. rtm did this. No one programs like this anymore. No one even knows the hows and whys.

Yes, Rust (and even OCaml) could be way better designed if they just added a couple of specialized embedded DSLs: one for types, as in Haskell (the type-signature language is a proper DSL of its own, at its own level), and, for low-level programming, a C-like embedded DSL in which one can express low-level memory manipulations. Just “unsafe” is too general; one has to have “colored closures”, of which the type system is aware, so that it can properly optimize and check them.

And look, the “elite hackers” have just begun to suspect something:

https://news.ycombinator.com/item?id=46152838

It turns out that “the more constraints on the semantics of a language, the better” (a principle I formulated half a year before this HN post) is just an instance of the same universal principle that forced the restriction of the way-too-general [original] Lambda Calculus formalism with the necessary Simple Types: not everything can be applied to everything else.

Similarly, a raw pointer is way too general; even a reference is way too general: not everything can be referenced without additional (proper) restrictions. The FP ideal, where everything is an immutable binding (both at the level of code (semantics) and at the level of data), is too inefficient (for specialized cases), so there has to be a proper classification (typing) of references (to imperative memory locations). References have to be “colored” too.
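A crude form of this "reference coloring" already exists in Rust: `&T` (shared, read-only), `&mut T` (unique, writable), and raw `*const T` / `*mut T` (unchecked, usable only inside `unsafe`). Each color permits a different set of operations, which is exactly the classification the paragraph asks for. A minimal sketch (function names are mine):

```rust
// Color 1: a shared reference -- may read, may never write.
fn read_only(x: &i64) -> i64 {
    *x + 1
}

// Color 2: a unique mutable reference -- mutation is sound precisely
// because the type system guarantees no aliasing.
fn writable(x: &mut i64) {
    *x += 1;
}

fn main() {
    let mut n = 41;
    writable(&mut n);              // the mutable color
    println!("{}", read_only(&n)); // the shared color: 43
}
```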

And then there are the copy and move semantics. If one cannot have immutability, one has to move (by default); it is that simple. Rust got a lot of the intuitive insights right (but, of course, no one bothered to research and make this kind of explicit, [properly] theoretical argument).
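The "move by default" principle in two lines of Rust: `String` is not `Copy`, so an assignment transfers ownership instead of creating a second alias to mutable state (the snippet is a minimal illustration, not anyone's production code).

```rust
// Takes ownership of `s`; the caller's binding is unusable afterwards.
fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let a = String::from("hello");
    let b = a;          // move: `a` can no longer be used from here on
    let n = consume(b); // another move, into the callee
    println!("{}", n);  // 5
    // println!("{}", a); // would not compile: value moved out of `a`
}
```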

So, yes, if one has to vapecode, one shall do so in a language with well-researched and maximally strict semantics (like Haskell, OCaml, Scala 3 or, there being no better alternative, Rust). The problem is that there is not enough high-quality code in these niche languages, and the amateur crap on github is as bad as the stupid verbiage on 4chan. (The Haskell ecosystem has been plagued by amateur code, whose authors try to write the very same JavaEE, just in a different syntax with more “annoying” restrictions. The same complaint applies to Rust: “the borrow checker is so annoying and keeps rejecting my brilliant imperative crap”.)

To summarize: not in C, obviously. Semantically, C is not that far from PHP, which is the “golden standard”. Not in C++, because of what it is. In OCaml or Rust. Not in Haskell (unfortunately), because almost all the code is infested with unnecessary, redundant abstractions, and all the dependencies (except the ones used by GHC) are fucking over-abstracted crap (yes, one has to actually learn math before touching Haskell).

And last, but not least, please kindly fuck off, you fucking narcissistic POS HN impostors.